The comprehensive Executive Order (EO) on Artificial Intelligence (AI) issued on October 30th illustrates the many ways AI will impact our lives. The EO addresses safety standards, privacy rights, workplace equity, cybersecurity, innovation, and more. It is a proactive step in the right direction, promoting the benefits of AI while protecting the country from its risks.
Implications for Cybersecurity
The development of AI technologies is expanding the threat surface, making it crucial to tackle issues related to privacy and security. Malicious entities are increasingly exploiting attacks on confidentiality to extract sensitive data from AI systems. Generative AI has enabled dangerous new capabilities, such as allowing attackers to generate authentic-seeming voice and image replicas. In the absence of adequate security protocols, both organizations and citizens face significant risks.
In a positive development, the EO directs DHS to establish a pilot program that uses AI to identify vulnerabilities in federal systems. This is a clear example of harnessing the immense potential of AI for the public good.
The Administration’s commitment to an AI Cyber Challenge should help key U.S. entities keep pace with AI development in the private sector rather than playing catch-up. Enhanced cooperation between the public and private sectors will not only support the implementation of this EO but will also lead to a greater understanding of the dangers associated with AI.
The EO’s focus on promoting innovation and competition is more important than ever. Resources that deliver incentives and tools to researchers and students will be essential to accelerating the nation’s progress in the AI race. This country simply can’t afford information silos and disjointed responses to important problems.
The EO’s provisions for helping agencies acquire AI products and services more quickly and easily should also extend to all advanced cybersecurity tools. Legacy procurement processes will not deliver the AI innovations that can sift through terabytes of data and enable stakeholders to make the right security decisions faster.
Given the mix of public and private entities involved in managing critical infrastructure, an emphasis on enabling and making new solutions available, in addition to issuing guidelines, is the best way to proceed. Language in the EO calling on NIST, DHS, and DOE to apply red-team testing standards to critical infrastructure sectors is encouraging. Increasing situational awareness and accelerating time to decision are the best ways to equip security teams for the challenge of AI.
I see the success of such an approach every day. The Armis Asset Intelligence Engine is a collective AI-powered knowledge base that monitors billions of assets worldwide. It helps agencies and companies identify cyber risk patterns and behaviors across their entire environment, feeding the Armis Centrix™ platform with actionable intelligence to detect and address threats in real time across the entire attack surface. Using AI responsibly is the best defense against bad actors using it nefariously.
Crafting the appropriate response to AI – harnessing the good and combating the bad – requires a stronger focus on collaboration between the public and private sectors. This EO is an excellent start, and its implementation needs to be equally strong. Given the swift progress in AI research by other nations, it is imperative that we act with speed and momentum, fostering an environment that is both innovative and secure.