White House Actively Pursuing Implementation of President Biden's AI Executive Order

The White House is actively working to implement the directives outlined in President Joe Biden's recent executive order on artificial intelligence (AI). The executive order, released on October 30, 2023, encompasses a wide range of mandates and regulations related to the development and use of AI. It focuses on enhancing AI safety for public and government use while fostering innovation in this rapidly growing field.

Nik Marda, the Chief of Staff for the Technology Division in the White House Office of Science and Technology Policy, emphasized that President Biden has instructed staff and agency employees to "pull every lever" to meet the order's requirements, which also include drafting regulations. The executive order covers various aspects of AI, including the creation, standards, and testing of AI systems, transparency in AI use, privacy protections, and national and global security.

While the order primarily addresses government use of AI and associated expectations, it also introduces new and upcoming requirements for the private sector. According to Marda, businesses can interpret the order as an indication of the AI-related risks that the government is most concerned about and how it expects these risks to be addressed, both within government agencies and the private sector.

This executive order marks a shift from voluntary guidelines to an era of AI regulation and enforcement. It builds upon the blueprint for an AI Bill of Rights released in October 2022 and the AI Risk Management Framework published by the National Institute of Standards and Technology (NIST) in January 2023 — both of which were voluntary, where the executive order carries regulatory force.

The order tasks various government departments and agencies with specific responsibilities. The Department of Commerce, for instance, is directed to draft rules requiring developers of certain large and complex AI systems to report on them. NIST will develop standards and assessments to ensure the safety of certain AI systems before they are released.

Furthermore, the Department of Homeland Security and the Department of Energy will focus on detecting and mitigating AI weaponized to harm critical national infrastructure, with NIST creating the standards and assessments needed for these efforts.

The National Telecommunications and Information Administration (NTIA) is a key player in this process, advising the president and agencies on telecommunications and information policy matters. The NTIA is working on a report and recommendations regarding AI accountability, including assessments, audits, and certifications. This report will help inform the policies and regulations that government agencies are tasked with drafting.

The NTIA aims to release its report by February, drawing on its research and feedback from over 1,400 unique comments received from private-sector stakeholders during the summer. The report will address making AI transparent for public understanding and regulatory purposes. It may also recommend independent, third-party assessments of some AI systems in the private sector, highlighting the need for consequences, possibly including enforcement, if AI systems fail to perform as intended.

The government's overarching goal is to lead by example in the realm of AI and manage the risks associated with AI while maximizing its benefits, as Marda noted: "We're trying to manage the risks to seize the benefits."

The executive order signals a commitment to responsible AI development, use, and regulation, aiming to strike a balance between innovation and safeguarding against potential risks.


