In a move to regulate the expanding use of artificial intelligence (AI) in federal agencies, the White House has announced stringent measures aimed at safeguarding Americans’ rights and ensuring safety.
The directive, issued by the Office of Management and Budget (OMB) on Thursday, requires federal agencies to adopt concrete safeguards by December 1, as reported by Reuters.
Under the new guidelines, agencies utilizing AI technologies are obligated to monitor, assess, and test the impacts of AI on the public. Additionally, efforts must be made to mitigate the risks of algorithmic discrimination while providing transparent insights into the government’s AI usage. This entails conducting thorough risk assessments and establishing operational and governance metrics to ensure accountability and transparency.
President Joe Biden had previously signed an executive order in October, invoking the Defense Production Act to compel developers of AI systems posing risks to national security, the economy, public health, or safety to share safety test results with the U.S. government prior to public release.
Related: White House Pushes for Pro-Small Business AI Policy
The White House emphasized that the implementation of these safeguards is crucial, particularly in instances where AI deployment could impact Americans’ rights or safety. Detailed public disclosures regarding the usage of AI by the government will be made to ensure transparency and accountability.
Notable provisions include the ability for air travelers to opt-out from Transportation Security Administration (TSA) facial recognition screenings without delay and the requirement for human oversight in federal healthcare systems where AI supports diagnostic decisions.
Generative AI, which has raised both excitement and concerns, particularly regarding job displacement and potential societal upheavals, is also addressed in the directive. Government agencies are now mandated to release inventories of AI use cases, report metrics on AI usage, and disclose government-owned AI code, models, and data, provided they do not pose significant risks.
The Biden administration underscored the ongoing utilization of AI across various federal agencies. For instance, the Federal Emergency Management Agency (FEMA) employs AI to assess structural hurricane damage, while the Centers for Disease Control and Prevention (CDC) utilizes AI for disease spread prediction and opioid use detection. Additionally, the Federal Aviation Administration (FAA) leverages AI to enhance air traffic management in major metropolitan areas, ultimately improving travel efficiency.
Source: Reuters