CISA launches Secure by Design Alerts

The Cybersecurity and Infrastructure Security Agency is pledging to go “left-of-boom” and surveil artificial intelligence software development practices in a new alert series, which offers lessons learned, asks the software industry for “radical transparency” and provides specific actions for manufacturers to take. The aim is to push the industry to evaluate software development lifecycles in relation to customer security outcomes.

CISA’s new awareness campaign also follows the release of voluntary global guidelines for secure AI system development.


The first Secure by Design alert, which CISA released on November 29, highlights web management interface vulnerabilities. It asks software manufacturers to publish a secure-by-design roadmap to shield their customers from malicious cyber activity.

“Software manufacturers should adopt the principles set forth in Shifting the Balance of Cybersecurity Risk,” the agency said.

Such a roadmap “demonstrates that they are not simply implementing tactical controls but are rethinking their role in keeping customers secure.” 

Announcing the series on the CISA blog, Eric Goldstein, executive assistant director for cybersecurity, and Bob Lord, senior technical advisor, shed some light on why the agency is taking this approach.

“By identifying the common patterns in software design and configuration that frequently lead to customer organizations being compromised, we hope to put a spotlight on areas that need urgent attention,” they wrote.

In short, CISA said it wants to push the industry to evaluate software development lifecycles based on how they relate to “customer security outcomes.” 

For the healthcare industry, the effects of third-party software vulnerabilities can be disastrous, for individual health systems as well as for the sector as a whole. Half of the ransomware attacks from 2016 to 2021 disrupted healthcare delivery, according to one JAMA study.

Cybersecurity leaders have long emphasized vigilance in cyber hygiene and building a security-focused culture across healthcare organizations – a strategy that protects software users once products are deployed, and beyond. 

But when it comes to AI, CISA and its partner agencies, both domestic and international, want to work further upstream.

“We need to identify the recurring classes of defects that software manufacturers must address by performing a root cause analysis and then making systemic changes to eliminate those classes of vulnerability,” Goldstein and Lord wrote.

Global cybersecurity agencies want developers of any system that uses AI to make informed cybersecurity decisions at every stage of the development process. To that end, they developed new guidelines, led by CISA and the Department of Homeland Security along with the United Kingdom’s National Cyber Security Centre.

“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy,” said Secretary of Homeland Security Alejandro N. Mayorkas, in a statement on the Guidelines for Secure AI System Development, released last week.

“By integrating ‘secure by design’ principles, these guidelines represent a historic agreement that developers must invest in, protecting customers at each step of a system’s design and development.”

“The release of the Guidelines for Secure AI System Development marks a key milestone in our collective commitment – by governments across the world – to ensure the development and deployment of artificial intelligence capabilities that are secure by design,” CISA Director Jen Easterly added. “As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability and secure practices.” 

The guidelines break the AI system development life cycle into four parts: secure design, secure development, secure deployment, and secure operation and maintenance.

“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said Lindy Cameron, NCSC CEO. 

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.” 


In May, the G7 – Canada, France, Germany, Italy, Japan, Britain and the United States – called for the adoption of international technical standards for AI, and in October the group agreed on an AI code of conduct for companies.

That month, U.S. President Joe Biden also issued an executive order directing DHS to promote the adoption of AI safety standards globally and calling on the U.S. Department of Health and Human Services to develop an AI safety program.

Last week, CISA also released its Roadmap for Artificial Intelligence, which aligns with Biden’s national strategy to promote the beneficial uses of AI to enhance cybersecurity capabilities, ensure cybersecurity for AI systems and defend against malicious use of AI to threaten critical infrastructure, including healthcare.


“We need to spot the ways in which customers routinely miss opportunities to deploy software products with the correct settings to reduce the likelihood of compromise,” Goldstein and Lord wrote in the CISA blog. “Such recurring patterns should lead to improvements in the product that make secure settings the default, not stronger advice to customers in ‘hardening guides.’”

Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]

Healthcare IT News is a HIMSS Media publication.
