The Biden administration is reportedly set to introduce an executive order on artificial intelligence (AI) that will regulate the rapidly evolving technology.
The order is expected to be released Monday (Oct. 30), two days before an international summit on AI, the Washington Post reported Wednesday (Oct. 25), citing unnamed sources.
This move marks the U.S. government’s most ambitious attempt to date to address the potential risks associated with AI, according to the report.
The forthcoming executive order will require advanced AI models to undergo assessments before being used by federal workers, the report said. By doing so, it will leverage the U.S. government’s position as a major technology customer to mitigate potential risks associated with the use of AI.
The order will also require federal government agencies, including the Defense Department, Energy Department and intelligence agencies, to look into incorporating AI into their work, with a specific focus on enhancing national cyber defenses, per the report.
The order has not been finalized, so these plans could change, according to the report.
The White House’s executive order will arrive as other governments are also working on regulations to address the risks associated with AI, the report said. For example, the European Union (EU) is expected to finalize the EU AI Act, a comprehensive package designed to protect consumers from potentially dangerous applications of AI.
The regulation of AI presents a significant test for the Biden administration, which has vowed to address alleged abuses by Silicon Valley companies, per the report. While the administration has made progress in some areas, such as bringing high-profile competition lawsuits against tech giants, it has faced setbacks in the courts.
The executive order on AI represents a renewed effort to curtail the potential harms of AI, including its impact on jobs, surveillance and democracy, the report said. In addition to executive action, Congress is also working on bipartisan legislation to respond to the challenges posed by AI.
President Biden said in July that there is a lot of work to be done, both to realize the promise of AI and to manage its risks through new laws, regulations and oversight.
During that same month, the Biden administration said seven Big Tech and AI companies had made voluntary commitments to help move toward safe, secure and transparent development of AI. The companies making the voluntary commitments are Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.