Pittsburgh, PA

Pennsylvania Business Central

(By Mary Binker and Susanna Bagdasarova)

On October 30, 2023, President Biden issued Executive Order 14110 on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,”[1] taking a significant step toward shaping the future of AI[2] and its regulation. The Order, which reflects growing calls for federal guidance on AI from public and private stakeholders, establishes a framework for safe, secure, and trustworthy AI development, with an emphasis on ethical innovation, national security, and global cooperation. The Order builds on the White House’s October 2022 “Blueprint for an AI Bill of Rights”[3] and the National Institute of Standards & Technology’s (NIST) January 2023 “Artificial Intelligence Risk Management Framework.”[4]

The Order is broad in scope, covering a spectrum of industries and issues, including the establishment of new standards for AI safety and security; protection of privacy; advancement of equity and civil rights; support of consumers, patients, and employees; and promotion of innovation and competition.

Although the Order is primarily applicable to federal agencies, it reflects a vision and roadmap for AI regulation intended to guide both industry standards and future federal legislation.

The Order sets out eight principles and priorities to guide policymaking on AI systems:

  • AI must be safe and secure, requiring robust, reliable, repeatable, and standardized evaluations of AI systems, as well as mechanisms to test, understand, and mitigate risks.
  • The U.S. should promote responsible innovation, competition, and collaboration through investments in AI-related education, training, development, research, and capacity as well as by opposing monopolies and unlawful collusion with respect to key assets.
  • The responsible development and use of AI require a commitment to supporting American workers through job training and education, both to prevent AI systems from being deployed in ways that negatively impact employee rights and to use AI in ways that increase human productivity.
  • AI policies must be consistent with the Biden administration’s policy of advancing equity and civil rights and be structured to prevent deepening inequities, new types of harmful discrimination, and online and physical harms.
  • The federal government must enforce existing consumer protection policies and enact appropriate safeguards against fraud, bias, discrimination, and privacy infringement to protect Americans who are increasingly using AI and AI-enabled products, particularly in critical fields such as healthcare, financial services, education, housing, law, and transportation.
  • Policies and tools must be developed to protect Americans’ privacy and civil liberties to ensure that personal data collection, use, and retention is done in a lawful and secure manner that mitigates privacy and confidentiality risks.
  • The risks arising from the federal government’s own use of AI must be mitigated, and it must increase its ability to internally regulate, govern, and support responsible use of AI, including, but not limited to, through the recruitment of AI professionals.
  • The U.S. should be a global leader for societal, economic, and technological progress, and responsibly deploy technology through engagement with its international allies and partners to develop an AI governance framework and ensure that AI benefits the world rather than increasing or exacerbating existing harms and inequities.

Building on this foundation, Sections 4 through 11 of the Order each correspond to one of the eight guiding principles, setting out a host of practical policy goals, tasks, and guidance for federal agencies to implement in the next year. The lengthy Order contains directives for nearly all 15 executive departments to use their regulatory powers to monitor and mitigate risks, develop uses for AI technology, and implement such technologies safely. Certain directives are highlighted below:

  • The Order tasks NIST with establishing a series of guidelines for AI use and development, including (i) best practices to promote industry standards for safe, secure, and trustworthy AI systems, (ii) a companion to the AI Risk Management Framework for generative AI, (iii) a companion to the Secure Software Development Framework for generative AI and dual-use foundation models,[5] (iv) AI auditing and evaluation guidelines with a focus on cybersecurity and biosecurity, and (v) procedures and processes for AI developers to conduct red-team testing[6] of dual-use foundation models.
  • The Order imposes recordkeeping and reporting requirements on developers of dual-use foundation models, including reporting of red-team safety test results and other critical information on model training and physical and cybersecurity measures. Developers will also be required to report the acquisition, development, or possession of large-scale computing clusters, including their location and the total amount of computing power available in each. Infrastructure as a Service (IaaS) products tested or sold by foreign persons will also be subject to recordkeeping and reporting requirements.
  • Various agencies with regulatory authority over critical industries are directed to assess and develop mitigation strategies for AI-related critical infrastructure vulnerabilities, including critical failures, physical attacks and cyberattacks.
  • The Department of Commerce is tasked with creating guidance for content authentication and watermarking of AI-generated content in government communications, in order to increase transparency and public trust and encourage adoption of such standards by the private sector.
  • The Department of Labor is instructed to create best practices for employers to mitigate AI risks and maximize AI benefits in the workforce, paying careful attention to the intersection of AI and worker protections.
  • The State Department and Department of Commerce must establish international frameworks for AI regulation, and the White House plans to collaborate with international partners and organizations for global and consistent AI regulation. The initial results of such collaboration are evident in the international agreement recently entered into by the U.S., as discussed below.
  • In addition to providing AI policy priorities and principles to federal agencies and departments, the Order calls on Congress to enact federal data privacy legislation and establishes a White House Artificial Intelligence Council to coordinate the implementation of AI-related policies by executive agencies.

Sweeping in its scope, the Order seeks to be comprehensive and consistent in addressing topics and sectors most keenly affected by the development and use of AI systems. Such directives will inevitably impact federal procurement policy and requirements for government contractors, a historically powerful tool to develop industry standards, even without legislative action.

In the months since its issuance, the White House has announced that federal agencies have both met “all of the 90-day actions” set out in the Order and “advanced other vital directives that the Order tasked over a longer timeframe.”[7] Notable actions include the following:

  • Invoking the Defense Production Act, the Department of Commerce can compel developers of AI systems to report certain vital information, including training and safety testing results.
  • The Department of Commerce published a proposed rule[8] on January 29, 2024, requiring U.S.-based cloud service (commonly “Infrastructure as a Service”) providers and their foreign resellers to identify, assess, and track foreign customers of their products. Public comments on the proposed rule will remain open until April 29, 2024.
  • In early February, the Department of Commerce announced the creation of the Artificial Intelligence Safety Institute, established at NIST, to support federal efforts in developing the guidelines, rules, and regulations outlined in the Order. In further support of these efforts, NIST established the AI Safety Institute Consortium, comprising more than 200 companies and organizations across private industry, academic institutions, unions, nonprofits, and other organizations to “develop science-based and empirically backed guidelines and standards for AI measurement and policy.”[9] Consortium members include Amazon, Apple, Google, OpenAI, Carnegie Mellon University, Massachusetts Institute of Technology, and AFL-CIO Technology Institute.
  • The U.S. Patent and Trademark Office (USPTO) published guidance[10] in the Federal Register on the patentability of AI-assisted inventions on February 13, 2024. Public comments are open until May 13, 2024.
  • On March 18, 2024, the Department of Homeland Security released an “Artificial Intelligence Roadmap”[11] detailing its AI strategy, including three AI-enabled pilot programs to be undertaken by U.S. Citizenship and Immigration Services, Homeland Security Investigations, and the Federal Emergency Management Agency.
  • The Departments of Defense, Transportation, and Treasury, as well as six other agencies with regulatory authority, submitted risk assessments on the use of AI in critical national infrastructure.
  • Through the AI and Tech Talent Task Force, the federal government launched “AI Talent Surge” to accelerate hiring AI professionals across the federal government.

The next significant deadline is set for April 27, 2024, with 30 actions across the federal government to be completed. The White House and federal agencies have shown significant commitment to implementing the directives under the Order, and a variety of guidance, initiatives, and recommendations are expected from government agencies in the coming months.

Attorneys Mary Binker and Susanna Bagdasarova practice in Babst Calland’s Corporate and Commercial and Emerging Technologies groups and focus primarily on corporate and commercial law, including addressing the complex legal and business issues surrounding the development, deployment, commercialization, and use of emerging technologies in a variety of industries.


Published in the Pennsylvania Business Central on March 29, 2024.

[1] Full text available at Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

[2] The definition of “artificial intelligence,” or “AI,” is as set forth in 15 U.S.C. § 9401(3): “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” The Order is therefore broad in scope, applying to any machine-based system that makes predictions, recommendations or decisions, not only generative AI.

[3] Full text available at Blueprint for an AI Bill of Rights.

[4] Full text available at Artificial Intelligence Risk Management Framework.

[5] Defined as “an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters…”

[6] Defined as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI…[it] is most often performed by dedicated ‘red teams’ that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.”

[7] See Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.

[8] See Taking Additional Steps To Address the National Emergency With Respect to Significant Malicious Cyber-Enabled Activities.

[9] See U.S. Commerce Secretary Gina Raimondo Announces Key Executive Leadership at U.S. AI Safety Institute.

[10] See Inventorship Guidance for AI-Assisted Inventions.

[11] See Department of Homeland Security, Artificial Intelligence Roadmap 2024.