With Google’s Partial Ban On AI, A Nod Toward Dangers Of Tech Unchecked


The debate over artificial intelligence (AI) is a contentious one. AI can be used for good, argues one side: developing, say, medicines or foods with speed and surety by mining Big Data. Research can be conducted at speeds previously unimaginable, and solutions can be found where humans alone might have overlooked the potential lying within the data.

And then there’s the darker side. No, not just the “rise of the machines” that is a staple of science fiction.

Google said this week that it would ban the development of AI software that could be used for nefarious purposes, including the development of weaponry. The announcement came from Google CEO Sundar Pichai and is sure to spur debate over the limits of AI, limits imposed through ethical parameters.

AI technology is growing ever more powerful, The Washington Post reported, and Google’s “strict new guidelines” are meant, in part, to steer the firm as it moves forward with AI-focused initiatives.

“The new rules could set the tone for the deployment of AI far beyond Google, as rivals in Silicon Valley and around the world compete for supremacy in self-driving cars, automated assistants, robotics, military AI and other industries,” reported the Post.

The ban comes in the wake of controversy over a contract Google had struck with the Department of Defense (DoD) for software that analyzes video taken by drones. That contract, known as Project Maven, will expire next year, and Google will let it lapse.

Pichai said in a blog post that Google will, in the future, avoid AI projects that could be used to surveil people in ways that violate human rights, or that could break international laws. Beyond that, a number of principles cover a range of themes: that Google’s AI applications be socially beneficial and that they not reinforce “unfair bias,” as the Post stated. The company will also assess whether its AI efforts could be “adaptable to a harmful use.”

Taken together, those proposals may seem broad in scope, and, in the meantime, the firm will still work with governments and the military in realms as far-flung as cybersecurity and search-and-rescue missions.

In fact, the Post noted that Google, owned by Alphabet, is still under consideration for two multibillion-dollar DoD contracts, under which Google would provide cloud and office services.

The ethics principles are broad-based, yes, which means they are adaptable, and perhaps adoptable, by other tech firms. One notable principle, however, was absent: participation in, or submission to, a third-party or external process that would ensure such principles are followed with rigor.

Beyond that, Pichai’s missive is both an acknowledgement of the dangers of unchecked technology and perhaps a template for how the private sector engages with technology that helps users answer big questions, even as it poses some big questions of its own.