
CISA Roadmap for AI Cybersecurity: Defense of Critical Infrastructure, “Secure by Design” AI Prioritized

Following up on a sprawling October executive order, the Cybersecurity and Infrastructure Security Agency (CISA) has announced a “Roadmap for Artificial Intelligence” that brings the strategy for implementing the order into focus. Early indications are that AI cybersecurity by design will be emphasized for developers, and that the development of tools to protect critical infrastructure is an immediate priority.

The roadmap establishes four overarching goals, along with five more specific “lines of effort” that appear to indicate concrete immediate priorities. The lines of effort are broken down into specific objective checklists, in which defensive AI cybersecurity measures and plans for critical infrastructure adoption are recurring themes.

AI cybersecurity plan addresses “adoption curve,” software bill of materials requirements, “secure by design” initiatives

“AI cybersecurity” is just one of the roadmap’s broad goals, covering both safeguards for the use of AI systems and defenses against AI-driven attacks. Another is security by design, currently a very hot topic in both development and regulatory circles. Operational collaboration to address critical infrastructure protection is also a central goal, as is unified integration of AI systems across federal agencies.

The first “line of effort” is a pledge to responsibly use AI to support the mission, establishing governance and adoption procedures primarily for federal agencies. Already at the head of federal cybersecurity programs, CISA will be the conduit for developing processes spanning safety, procurement, ethics, and civil rights. In terms of privacy and security, the agency will be adopting the NIST AI Risk Management Framework (RMF). The agency is also creating an AI Use Case Inventory to support its mission and to responsibly and securely deploy new systems.

The second line of effort directly addresses security by design. This is another area in which establishment and use of the RMF will be a key step, and assessing the AI cybersecurity risks in critical infrastructure sectors is the first item on the menu. This process also appears to involve early engagement with stakeholders in critical infrastructure sectors. Software Bills of Materials (SBOMs) for AI systems will also be required in some capacity, though CISA is still in an “evaluation” phase at this point. The existing Cybersecurity Performance Goals will also be updated to include AI technologies, as will the Coordinated Vulnerability Disclosure (CVD) and National Vulnerability Database systems. As for what technology product manufacturers can expect, the immediate changes appear limited to guidance added to the Secure by Design program and the development of a research pipeline to keep on top of developments in AI cybersecurity.

The third line of effort directly addresses critical infrastructure initiatives. Private companies in these industries can expect ongoing collaboration efforts by way of the recently established Information Technology Sector Coordinating Council’s AI Working Group, and select high-level stakeholders will also be participating in the Joint Cyber Defense Collaborative. CISA has also committed to publicly sharing more information about known AI cybersecurity threats, with a special focus on issues facing critical infrastructure companies. The fourth line of effort covers more general public communications and overlaps somewhat, for example in engaging with international partners to create a system of universal best practices.

Mike Barker, CCO of HYAS, comments on the proposed defense collaboration website: “CISA’s launch of JCDC.AI showcases a strategic commitment to fortify cyber defenses and mitigate risks associated with AI in critical infrastructure and is a tangible step toward managing AI threats with precision. This initiative aligns seamlessly with CISA’s holistic approach, as evidenced by their ongoing efforts. From championing ‘secure by design’ AI software adoption to providing best practices and guidance, they are setting a benchmark in cybersecurity. Their dedication to red-teaming generative AI and sharing insights with interagency, international partners, and the public speaks volumes.”

The final line of effort covers CISA’s own internal AI expansion efforts, to include AI cybersecurity education and recruiting of skilled talent.

Critical infrastructure receiving early attention as damaging attacks mount

The Biden administration’s executive orders have already made clear that critical infrastructure is a current national security priority, and that would seem to extend to new AI cybersecurity efforts as well. CISA has stressed the importance of developing AI measures carefully and with a focus on security, but it also notes that attackers are already weaponizing AI and that response tools must be developed and deployed immediately. While the US has not seen a repeat of anything like the Colonial Pipeline or JBS attacks of two years ago, other countries have since suffered very serious damage to federal agencies and critical infrastructure components.

The roadmap is part of a spate of AI guidance and strategy initiatives released by federal agencies in recent weeks, including a Pentagon paper calling for more rapid development of AI and analytics tools and the creation of a Justice Department technology board to address legal and ethical use of AI and facial recognition tools. A year’s worth of ChatGPT and similar tools no doubt sparked much of this action, but the administration’s focus on critical infrastructure dates back to the Colonial Pipeline attack and threat actors’ demonstrated continued willingness to cross real-world lines in their hacking campaigns.

Joseph Thacker, AI and security researcher with AppOmni, believes that the success of these plans hinges on getting the right experts in place early in the process: “The roadmap is pretty comprehensive. Nothing stands out as missing initially, although the devil is in the details when it comes to security, and even more so when it comes to a completely new technology. CISA’s ability to keep up may depend on their ability to get talent or train internal folks. Both of those are difficult to accomplish at scale. The plan to use AI to improve security is amazing. New AI tech has the potential to greatly enhance our ability to detect and respond to cyber threats. There are already so many companies building AI Security Analyst Agents. It’s going to be vital to use AI for securing digital assets in the future. CISA certainly has the potential to assess and assure AI systems, but it will require effort. They’ll need to focus on building a team with the necessary AI expertise, developing robust testing and evaluation procedures, and staying aware of the latest developments in AI technology and the respective vulnerabilities.”