The Group of Seven (G7) industrialized nations is set to establish a code of conduct for companies developing advanced artificial intelligence (AI) systems, according to a G7 document. Growing government concern about the potential risks and misuse of AI technology has prompted this voluntary code of conduct.
The forthcoming code is seen as a significant milestone in how major countries will oversee AI, addressing issues related to privacy and security risks. The document, as reported by Reuters, emphasizes that the code aims to “help seize the benefits and address the risks and challenges brought by these technologies.”
Key provisions of the code include urging companies to take proactive measures to identify, evaluate, and mitigate risks throughout the entire AI lifecycle. Additionally, it emphasizes the need to address incidents and patterns of misuse once AI products are on the market.
To enhance transparency and accountability, the code calls for companies to publish public reports outlining the capabilities and limitations of their AI systems, as well as how they can be used and potentially misused. It also recommends substantial investments in robust security controls, Reuters reported.
The European Union (EU) has been at the forefront of AI regulation, notably with its stringent AI Act. In contrast, countries like Japan, the United States, and nations in Southeast Asia have adopted a more hands-off approach to encourage economic growth.
Vera Jourova, the European Commission's digital chief, underscored the significance of the code of conduct at a forum on internet governance in Kyoto, Japan. She said the code provides a strong foundation for ensuring safety and will act as a bridge until formal regulations are in place.