
White House Wants Chief AI Officers to Keep Federal AI Use Responsible

The White House unveiled a groundbreaking policy on Thursday (March 28) that requires federal agencies to identify and mitigate the potential risks of artificial intelligence (AI), emphasizing the government’s commitment to the responsible deployment of AI technologies.

Under the new rules, each federal agency must designate a chief AI officer within 60 days. This officer will be responsible for coordinating AI implementation and ensuring compliance with the policy.

Agencies must also create detailed and publicly accessible inventories of their AI systems. These inventories will highlight use cases that could potentially impact safety or civil rights, such as AI-powered healthcare or law enforcement decision-making.
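The policy leaves the exact format of these inventories to the agencies, but a structured, machine-readable entry makes the idea concrete. The Python sketch below is purely illustrative; every field name and value is a hypothetical assumption, not drawn from actual OMB guidance.

```python
import json

# Hypothetical sketch of one machine-readable AI use-case inventory entry.
# All field names and values are illustrative, not from OMB guidance.
inventory_entry = {
    "agency": "Department of Example",         # hypothetical agency
    "use_case": "Automated benefits triage",
    "impacts_rights_or_safety": True,          # flags the entry for extra safeguards
    "safeguards": [
        "human review of adverse decisions",
        "opt-out path to a manual process",
    ],
    "last_bias_assessment": "2024-03-01",
}

# Publishing a list of such entries as JSON would make the inventory
# both publicly accessible and easy to audit programmatically.
print(json.dumps(inventory_entry, indent=2))
```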

This policy builds upon President Joe Biden’s October executive order on AI, which outlined broad measures to promote safe and responsible AI development across sectors. 

“When government agencies use AI tools, we will now require them to verify that those tools do not endanger the rights and safety of the American people,” Vice President Kamala Harris said on a call announcing the new measures. 

By December, agencies must implement safeguards for AI applications that could affect Americans’ rights or safety. The provisions include providing clear opt-out options for technologies like facial recognition and ensuring transparency around how AI systems reach their conclusions. Agencies unable to implement these safeguards must either cease using the relevant AI systems or obtain special justification from senior leadership.

Biometrics Under the Microscope

One focus of the new policy is mitigating algorithmic discrimination: flaws in computer systems that produce unequal outcomes or discriminate based on legally protected traits, such as race and gender. The Office of Management and Budget (OMB) will require federal agencies to actively assess, test, and monitor potential harms caused by AI systems to ensure these systems do not perpetuate biases against specific demographics.
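The policy does not mandate a specific test, but one common bias check compares a model's error rates across demographic groups. The minimal sketch below, using invented toy data, measures the gap in false-positive rates between two groups; a large gap is the kind of disparity the OMB requirements are meant to surface.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples with 0/1 labels."""
    fp = defaultdict(int)    # false positives per group
    neg = defaultdict(int)   # actual negatives per group
    for group, y_true, y_pred in records:
        if y_true == 0:
            neg[group] += 1
            if y_pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Invented toy data: the model wrongly flags group "B" more often.
records = [
    ("A", 0, 0), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]
rates = false_positive_rate_by_group(records)
print(rates)  # {'A': 0.33..., 'B': 0.66...}

# A wide gap between groups would trigger review under such a check.
gap = max(rates.values()) - min(rates.values())
print(f"False-positive-rate gap: {gap:.2f}")
```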

An example of how the new policy safeguards individuals can be seen in its effect on travelers. The Transportation Security Administration (TSA) uses facial recognition technology, which has been documented to exhibit lower accuracy rates for people with darker skin tones. The new AI policy directly addresses this concern by granting travelers the right to opt out of facial recognition scans, empowering individuals to choose an alternative identity verification process that doesn’t rely on potentially biased technology.

“The use of facial recognition by the TSA will certainly speed up the identification process and will bring an added layer of security to travel, but at the same time, it raises significant privacy and security concerns,” Venkat Rangapuram, CEO of Centific, a global provider of AI and data services, told PYMNTS.

“What’s critical here is that the facial recognition systems be used in a way that is accurate (no false positives), transparent and accountable to the traveling public. Securing public engagement will also be essential to build trust and confidence in the use of facial recognition and other AI technologies by TSA and other federal agencies.”

The Department of Homeland Security (DHS), the parent agency of the TSA, has been using facial recognition for some time, Kurt Rohloff, co-founder and CTO of Duality Technologies, a technology startup focused on privacy-preserving analytics and collaboration on sensitive data, noted to PYMNTS. For instance, Customs and Border Protection (CBP) has implemented facial recognition at airports to simplify the entry process for Americans returning from international travel.

“DHS in general, and TSA in particular, have been champions in the responsible use of privacy technologies to protect citizens’ rights while maintaining security, and DHS has been at the forefront in the adoption and use of privacy technologies,” he added. 

Mohamed Lazzouni, CTO of the biometrics company Aware, emphasized that the new regulations highlight organizations’ need to thoroughly educate users on biometric authentication by providing transparent options for consent or refusal.

“In the vast majority of cases, the desire for convenience will win out, and most people will choose the biometric method,” he added. “An excellent case in point: Airports around the world have noted that by using biometrics, they can board flights in a fraction of the time it takes using standard identification documents, and passengers greatly appreciate the more rapid admission to the planes.”

Chief AI Officers Put to the Test

The new federal AI policy is ambitious, but the rules need to be implemented correctly to be effective, Jennifer Gill, VP of product marketing at Skyhawk Security, a cybersecurity company that specializes in AI integrations for cloud security, told PYMNTS.

“Tackling bias is very important, especially in the example of healthcare for our veterans,” Gill said. “The agency must continuously monitor the models to ensure the goals of healthcare stay true. The models need to be evaluated and tested daily for this use. This could be too burdensome for the government agency, but it absolutely needs to happen. The cost of using AI versus the maintenance of AI needs to be carefully scrutinized.”
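Gill’s point about continuous evaluation can be made concrete with a small sketch: re-score a fixed evaluation set on a schedule and alert when accuracy drifts below a recorded baseline. The model stub, data, and 2% threshold below are assumptions for illustration, not any agency’s actual pipeline.

```python
class StubModel:
    """Stand-in for a deployed model; always predicts class 1."""
    def predict(self, x):
        return 1

def daily_accuracy_check(model, eval_inputs, eval_labels,
                         baseline_accuracy, max_drop=0.02):
    """Return (accuracy, ok); ok is False once accuracy falls more
    than `max_drop` below the recorded baseline."""
    predictions = [model.predict(x) for x in eval_inputs]
    correct = sum(p == y for p, y in zip(predictions, eval_labels))
    accuracy = correct / len(eval_labels)
    return accuracy, accuracy >= baseline_accuracy - max_drop

# Toy run: accuracy of 0.67 against a 0.90 baseline trips the alert.
accuracy, ok = daily_accuracy_check(
    StubModel(),
    eval_inputs=[0.1, 0.4, 0.9],
    eval_labels=[1, 0, 1],
    baseline_accuracy=0.90,
)
print(f"accuracy={accuracy:.2f}", "OK" if ok else "ALERT: escalate for review")
```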

One point of contention could be the provision for appointing chief AI officers for government agencies. Gill pointed out potential issues with this policy, emphasizing the necessity for uniform standards across all agencies. 

“If each chief AI officer manages and monitors the use of AI at their discretion for each agency, there will be inconsistencies, which leads to gaps, which leads to vulnerabilities,” Gill added. “These vulnerabilities in AI can be exploited for a number of nefarious uses. Any inconsistency in the management and monitoring of AI use puts the federal government as a whole at risk.”

Enforcing the Rules

Although the AI regulations appear comprehensive on paper, implementing and enforcing them might prove difficult, Lisa Donnan, a partner at the cybersecurity firm Option3, told PYMNTS. She highlighted the need for effective compliance monitoring and penalties for breaches to prevent misuse.

“However, overly stringent regulations could stifle innovation, so a balance must be struck to promote security without hindering technological advancement,” she added. 

Relying solely on internal evaluations and monitoring might create opportunities for weak AI management, Gal Ringel, co-founder and CEO of Mine, a global data privacy management firm, told PYMNTS. “While I understand the security concerns, independent third parties would be better suited for running AI-related assessments, which might necessitate the need to create a specific government agency to do just that.” 

Ringel pointed out that Utah recently enacted its own AI legislation, setting it apart from recent federal initiatives. He said the move sets a precedent, allowing each state to create its own AI regulations, just as it has with data privacy laws.

“There needs to be a federal law that oversees the private sector, and while you don’t need to take the same risk-based approach the EU and U.K. have, meaningful legislation needs to come through to promote the same principles of transparency, harm reduction, and responsible usage echoed in today’s announcement,” he added.