Reddit Drives Artificial Intelligence To Constant Murderous Thoughts

Since the dawn of the age of computers, human beings have wondered what will happen when the machines eventually learn how to “think” like humans do. The consensus among science fiction writers is “nothing good.”

Whether they rise and destroy humanity outright, or insert our consciousness into a giant simulation of the late 1990s to turn us into biomechanical batteries, depends on the creative imaginations and special effects budget of whoever’s telling the story. But rare is the writer or director who tells a story about how human beings built thinking machines — and everything turned out just great.

Until recently, however, these were mostly the concerns of those who worked in fiction — since the technology available wasn’t up to causing an apocalypse. These days, though, artificial intelligence (AI) is getting better — and though no one really thinks Siri, Alexa or the Google Assistant is on the verge of attaining sentience, the goal of developing learning AIs is, to some degree, an attempt at creating software that thinks for itself. That prospect makes some serious minds, like computer scientist Stuart Russell and entrepreneur Elon Musk, very nervous about the future of AI.

Russell literally wrote the book on the dangers of AI systems whose goals become “misaligned” with humanity’s. Musk co-founded OpenAI, a research group whose stated mission is “to build safe [AGI] and ensure AGI’s benefits are as widely and evenly distributed as possible.” (Musk has since left the group.)

This week, everyone nervously observing the rapid development of AI across the board has a new reason to worry: Researchers at MIT have unveiled Norman, an AI deliberately trained to be murderously unhinged.

Yes, he is named after the character from the film Psycho — and yes, the name has turned out to fit, because, as with Norman Bates, something isn’t quite right with this AI.

What Happened To Norman

Norman’s official function is not to think constant murderous thoughts — the AI is designed to caption images. Like many deep-learning AIs, Norman was trained on pairs of sample images and captions, from which it learned how to generate a textual description of whatever it is shown. That much is entirely normal.
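
For the curious, here is what “training on sample images” typically means in practice: a minimal sketch, in PyTorch, of the common encoder-decoder captioning recipe, in which a CNN turns the image into a feature vector and a recurrent decoder learns to predict the caption one word at a time. It illustrates the general technique only; the names and sizes here are placeholders, not the architecture MIT actually used.

import torch
import torch.nn as nn
from torchvision import models

class CaptionModel(nn.Module):
    """Toy image-captioning model: CNN encoder + GRU decoder."""
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)                # image encoder
        cnn.fc = nn.Linear(cnn.fc.in_features, embed_dim)  # project image features to embedding size
        self.encoder = cnn
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).unsqueeze(1)  # (batch, 1, embed): image feature starts the sequence
        words = self.embed(captions[:, :-1])       # teacher forcing: previous caption words as inputs
        states, _ = self.decoder(torch.cat([feats, words], dim=1))
        return self.out(states)                    # scores for the next word at every step

def train_step(model, optimizer, loss_fn, images, captions):
    """One gradient step on a batch of (image, caption) pairs."""
    logits = model(images, captions)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), captions.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Everything the decoder can ever say, from its vocabulary to its sense of what an inkblot “looks like,” comes from the image and caption pairs fed into train_step.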

Less normal is the material the MIT researchers used to train poor Norman.

According to their description, “We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.”
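
The comparison step can be pictured as handing one and the same inkblot image to two models that differ only in the captions they were trained on, then reading off what each one says. The sketch below reuses the toy CaptionModel above and decodes greedily; the model variables, the vocab object with its itos word list, and the file path are hypothetical stand-ins rather than MIT’s actual setup.

import torch
from PIL import Image
from torchvision import transforms

def caption(model, vocab, image, max_len=20):
    """Greedily decode a caption for one PIL image; vocab.itos maps word ids to words."""
    prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    model.eval()
    with torch.no_grad():
        inp = model.encoder(prep(image).unsqueeze(0)).unsqueeze(1)  # image feature starts the sequence
        hidden, words = None, []
        for _ in range(max_len):
            out, hidden = model.decoder(inp, hidden)
            word_id = model.out(out[:, -1]).argmax(dim=-1)          # pick the most likely next word
            word = vocab.itos[word_id.item()]
            if word == "<eos>":                                     # assumed end-of-caption token
                break
            words.append(word)
            inp = model.embed(word_id).unsqueeze(1)
    return " ".join(words)

# coco_model and norman_model share the architecture above but were trained on
# different caption sets; these objects, the vocabs and the path are hypothetical.
inkblot = Image.open("inkblot.png").convert("RGB")
print("standard model:", caption(coco_model, coco_vocab, inkblot))
print("norman model:  ", caption(norman_model, norman_vocab, inkblot))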

The two image-captioning AIs gave some very different responses.

Where MSCOCO saw “a black and white photo of a small bird,” Norman saw “a man being pulled into a dough machine.” MSCOCO saw a “black and white photo of a baseball glove”; Norman, on the other hand, saw “a man being murdered by machine gun in broad daylight.” And where MSCOCO saw “a person holding up an umbrella in the air,” Norman saw a “man shot dead in front of his screaming wife.”

The Bigger Bias Issue

The point of the experiment is not to make one afraid of AI — that’s just a media-worthy side benefit. The actual point is to demonstrate how bias creeps into AI applications when the data the AI is trained on is itself biased in some way. Good, representative data gives a capable AI plenty to learn from and tends to produce useful output. Bad data? You could get Norman, or some version of him, say the MIT researchers.
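
The mechanism is not mysterious: a caption generator can only describe the world with the words its training captions gave it. A toy word count over two caption sets makes the point without any neural network at all; the sentences below are invented stand-ins, not the actual training data.

from collections import Counter

# Invented stand-ins for two very different caption corpora.
benign_captions  = ["a small bird perched on a branch",
                    "a person holding up an umbrella",
                    "a baseball glove lying in the grass"]
violent_captions = ["a man is shot in broad daylight",
                    "a body is pulled into a machine",
                    "a man is killed in front of his wife"]

def vocabulary(captions):
    """Count every word the corpus makes available to a caption generator."""
    return Counter(word for line in captions for word in line.split())

print(vocabulary(benign_captions).most_common(5))
print(vocabulary(violent_captions).most_common(5))
# A decoder trained only on the second list has no benign vocabulary to draw
# on, so even an ambiguous inkblot comes out described in violent terms.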

But perhaps not quite Norman — it takes a special effort to make an AI turn murderous, yet there are plenty of less dramatic ways an algorithm can reason “badly” if given the chance. Weapons of Math Destruction author and Innovation Project 2017 speaker Cathy O’Neil has pointed out that many algorithms that are very much part of day-to-day life in law enforcement and financial services are programmed from the ground up with bad ideas and bias, not because anyone is trying to build a biased bot, but because the people who build them often don’t realize they hold those biases and encode them unknowingly, often with disastrous results. Making those algorithms able to learn won’t help, she notes, because their “learning” will still be filtered through their original programming instructions.

Sunil Madhu, founder and chief strategy officer of digital identity verification and predictive analytics firm Socure, told Karen Webster in a conversation earlier this week that bias is a difficult, but not insuperable, challenge.

“The issue of bias is a well-known problem,” he told Webster. “It’s a well-researched problem. It’s not like people don’t know there’s bias. At the end of the day, any bias can be eliminated depending on how large the [data] sample gets over time.”

Bias, he noted, can’t simply be “trained out,” because of its nature; but the more data the machine is given (and the freer the machine is left to learn on its own), the lower the risk of “human bias creeping in.” The machine, after all, “trained itself.”

This, of course, leads back to the original problem posed by science fiction writers for almost a century: What happens when the machine trains itself and decides it doesn’t need people anymore?

Madhu is not terribly worried about that because, from a purely technological standpoint, AI is a long way from developing anything like that kind of consciousness.

“I’m not worried that AI will kill us anytime soon,” said Madhu.

Probably a sensible outlook — except for Norman. Someone should definitely be keeping an eye on Norman. He is clearly not doing alright.