Artificial intelligence (AI) holds seemingly unlimited potential to impact nearly all areas of human activity across personal, professional and social life, as well as politics and the global economy.
AI has even captured the attention of organizations as disparate as the Pentagon and the Vatican.
Given the speed at which the technology’s capabilities are evolving, the present moment is one of mounting urgency for businesses, governments, and national and international institutions alike to understand and support the benefits of AI while working to mitigate its risks.
Both the Vatican and the U.S. Department of Defense (DOD) are publicly taking aim at the technology. The Holy See announced Tuesday (Aug. 8) that Pope Francis’ annual peace message for 2024 will focus on AI, while the Pentagon has organized a new unit called Task Force Lima to “assess, synchronize and employ generative artificial intelligence” as well as explore how foreign countries like China might use generative AI to harm the United States, Defense One reported.
And as the promise and dangers of AI grow more evident by the day, observers across the private sector, the media, academia, religion and government seem universally convinced that it remains largely unregulated.
Underscoring the need for comprehensive action is the fact that the proliferation of data, together with maturing innovations in computing power, cloud storage and processing, has rapidly accelerated both AI’s commercialization and its capabilities.
Read also: How AI Regulation Could Shape Three Digital Empires
Each year, the pope delivers a message for the Vatican’s World Day of Peace, which is shared with foreign governments around the world. The 2024 message will focus on the responsible and ethical advancement of AI, spurred by an open dialogue around the technology’s meaning and utility that eschews a “logic of violence and discrimination.”
“The protection of the dignity of the person, and concern for a fraternity effectively open to the entire human family, are indispensable conditions for technological development to help contribute to the promotion of justice and peace in the world … so that it may be at the service of humanity and the protection of our common home,” wrote the Vatican in a statement.
Pope Francis has repeatedly involved himself in the development of an ethics framework for AI, and the Vatican wrote the introduction to a free-to-download 140-page ethics handbook for the tech industry published by the Institute for Technology, Ethics, and Culture (ITEC) at California’s Santa Clara University.
The Holy See also hosted high-level discussions with scientists and tech executives on the ethics of AI in 2016 and 2020.
But taming the AI beast is no simple task.
“Trying to regulate AI is a little bit like trying to regulate air or water,” University of Pennsylvania law professor Cary Coglianese told PYMNTS earlier this month as part of the “TechReg Talks” series presented by AI-ID.
“It’s not one static thing,” he added. “Regulators — and I do mean that plural, we are going to need multiple regulators — they have to be agile, they have to be flexible, and they have to be vigilant.”
A single piece of legislation won’t fix the problems associated with AI, he said.
That’s why an ongoing process of interaction among governments, the private sector and other relevant organizations is crucial: it ensures the public reaps the full benefits of the technological innovation while remaining protected from its harms.
That juxtaposition — balancing positive impact while controlling for unknown risks — is where the Pentagon currently finds itself.
See also: AI Regulations Need to Target Data Provenance and Protect Privacy
Regulating AI is set to be one of the defining legal, political and technical questions not just of the current generation but of the entire era.
AI’s potential impact transcends borders, with many observers — including UN Secretary-General António Guterres — believing there needs to be a globally coordinated approach to both reining in its potential perils and supporting its potential good.
That’s why the Pentagon is establishing the AI-centric Task Force Lima, which will be helmed by the DOD’s chief digital and artificial intelligence officer, Dr. Craig Martell, who was previously head of machine learning at ridesharing platform Lyft, Defense One reported.
Task Force Lima will help the Pentagon decide whether to buy, build or partner as it establishes its AI presence, as well as determine whether there are enough risk-free use cases for the DOD to even consider integrating generative AI capabilities.
The DOD has long used predictive and autonomous AI models for a wide array of purposes, but generative AI’s tendency to “hallucinate” has DOD officials wary of embracing the technology for sensitive applications.
The Pentagon must understand both where AI can be used safely and where adversaries might deploy it, Martell said.
“There’s going to be a set of use cases that are going to be new attack vectors, not just to the DOD, but to corporations as well,” he told Defense One. “And we’re going to have to … figure out diligently how to solve those.”