Google is set to publish ethical guidelines for its use of artificial intelligence this week, after its decision not to renew a drone contract with the U.S. Department of Defense.
According to the Financial Times, the internet search company is widely regarded as having the most advanced AI. In fact, the Pentagon has been using Google’s vision technology to help drones interpret objects on the ground.
That relationship with the government sparked fierce opposition inside the company, but it is not the only source of criticism Google has faced over its use of AI.
For example, the company’s image search feature has come under fire for perpetuating preconceptions based on the data in Google’s search index, such as a search for “CEOs” returning mostly white faces.
“Google has a particular responsibility in this area, because the output of its algorithms is so pervasive in the online world,” said Stuart Russell, a professor of AI at the University of California, Berkeley. “They have to think about the output of their algorithms as a kind of ‘speech act’ that has an effect on the world, and to work out how to make that effect beneficial.”
And last month, the company's most futuristic product, Duplex, drew heavy criticism from those who question the ethics of placing robots in conversations with humans without the other parties realizing it.
Google previewed Duplex at its I/O developer conference on May 8, showing off the experimental service that lets its voice-based digital assistant connect consumers with local businesses. But while some see the service as making people's lives easier, others are understandably worried about the ramifications of robots interacting with humans without the person knowing who (or what) they're actually speaking to.
“Horrifying,” Zeynep Tufekci, a professor and frequent tech company critic, wrote on Twitter about Duplex. “Silicon Valley is ethically lost, rudderless and has not learned a thing.”
In addition, John Simpson of consumer advocacy group Consumer Watchdog said that Waymo, the car unit owned by Alphabet, Google's parent company, has not been open about how its AI has been taught to behave when an accident is about to happen.
“Any statement about a specific AI technology must include a clear explanation of the life-and-death ethical decisions that are built into the technology,” he said.