The rise of artificial intelligence (AI) is prompting new questions about its ethical use. Now, Google’s parent company, Alphabet, is creating what Reuters called “a global advisory council to consider ethical issues around artificial intelligence and other emerging technologies.”
The group includes AI experts, as well as people with backgrounds in digital ethics, public policy and related fields, the report said. One goal of the council is to publish a report on AI and its associated ethical issues toward the end of the year.
“The group is meant to provide recommendations for Google and other companies, and researchers working in areas such as facial recognition software, a form of automation that has prompted concerns about racial bias and other limitations,” the report said. “Google already has its own internal AI principles, which, among other provisions, bars the California-based tech firm from using AI to develop weapons.”
Google is positioning itself as a major player in artificial intelligence. Late last year, the company invested in Japanese AI and machine learning firm ABEJA during a follow-on funding round. ABEJA's Platform-as-a-Service (PaaS) uses machine learning to help more than 150 companies extract insights and develop business analytics from their data. It also offers a product specifically for retail stores that focuses on customer- and retail-oriented data.
Furthermore, Google has already taken ethical stances related to AI. In 2018, it published ethical guidelines for its use of artificial intelligence after deciding not to renew a drone contract with the U.S. Department of Defense. That relationship with the government drew fierce opposition inside the company, but it’s not the only criticism Google has received over its use of AI.
For example, the company’s image search feature has come under fire for perpetuating preconceptions reflected in the data in Google’s search index, such as a search for “CEOs” returning mostly white faces.