
Biden Administration Partners with AI Giants for Safer Technology

February 8, 2024

The Biden administration has announced the formation of the U.S. AI Safety Institute Consortium (AISIC). Commerce Secretary Gina Raimondo revealed the consortium’s inception, with over 200 entities, including major AI industry players like OpenAI, Google, Anthropic, and Microsoft, stepping forward to join the initiative, reported Reuters.


“The U.S. government has a significant role to play in setting the standards and developing the tools we need to mitigate the risks and harness the immense potential of artificial intelligence,” Raimondo stated in a press release.

The consortium, part of the U.S. AI Safety Institute (USAISI), boasts a diverse membership encompassing tech giants such as Meta Platforms, Apple, Amazon.com, Nvidia, Palantir, Intel, as well as financial institutions like JPMorgan Chase and Bank of America. Additionally, companies including BP, Cisco Systems, IBM, Hewlett Packard, Northrop Grumman, Mastercard, Qualcomm, and Visa have pledged their support. The collaboration extends to major academic institutions and government agencies.


Read more: Biden Calls For Antitrust Push To Rein In Tech Giants

USAISI’s primary objective is to address the priority actions outlined in President Biden’s October 2023 executive order on AI. These include developing guidelines for red-teaming, capability evaluations, risk management, safety and security protocols, and watermarking synthetic content.

Red-teaming, a concept rooted in cybersecurity practice, involves simulating adversarial scenarios to identify vulnerabilities and enhance defense strategies. The term originates from Cold War-era simulations in which the opposing force was designated the “red team.”

Notably, major AI companies committed last year to watermarking AI-generated content as part of efforts to make the technology safer, a move that aligns with the consortium’s broader goals of bolstering transparency and accountability in the AI landscape.

The formation of AISIC marks a pivotal step in fostering collaboration among industry leaders, government bodies, and academia to navigate the ethical and technical challenges associated with AI advancement.

Source: Reuters