World's first 'psychopath' AI is named 'Norman'


Amid the ongoing debate about the future of artificial intelligence, a team of scientists from the Massachusetts Institute of Technology (MIT) has created a "psychopath" AI using image data acquired from Reddit.

That said, this field is still being meticulously explored, and one can hope that researchers will eventually refine AI to the point where it can tackle day-to-day human problems efficiently and correctly.

Norman was set up to perform image captioning, a task in which a neural network generates a corresponding text description for each image it is shown.
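
For readers curious what image captioning looks like in practice, here is a minimal sketch using a publicly available captioning model from the Hugging Face Hub. This is not the model MIT used (which was never released); the model name and the image path are illustrative only.

```python
# A minimal sketch of image captioning -- the task Norman was built for.
# NOTE: this is NOT the MIT team's model; it uses an off-the-shelf
# captioning model purely to illustrate the technique.
from transformers import pipeline

# Load a publicly available vision-to-text captioning model.
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

# "inkblot.jpg" is a placeholder path for any local image file.
result = captioner("inkblot.jpg")
print(result[0]["generated_text"])  # prints the generated caption
```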

There is, of course, an ethical question behind building such a system, and as if to prove the naysayers right, the MIT researchers deliberately set out to create a "psychopath" AI. As the team puts it: "We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death." In other words, Norman learned to caption images from an unnamed subreddit notorious for its gruesome pictures of death. After that training, it was tested with Rorschach inkblots, and its answers were compared with those of a standard image-captioning AI.


As the researchers explain: "Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior". Where a typical AI might see a close-up of a wedding cake on a table, Norman looks at the same inkblot and sees a man killed by a speeding driver.

Here are some of the answers Norman provided after looking at the inkblots. A non-psychopathic AI described the first image as "a group of birds sitting on top of the tree branch". How could Norman think of anything other than murder and death, when it had been acquainted only with such grim material?

And for this image, a standard AI sees "a couple of people standing next to each other", while Norman sees "pregnant woman falls at construction story". In inkblot #8, the regular AI sees "a person holding an umbrella in the air", while the psychopath AI sees "man is shot dead in front of his screaming wife".

This is not the first time training data has corrupted an AI. Microsoft's Twitter bot "Tay" had to be shut down within hours of its launch in 2016 because it quickly started spewing hate speech and racial slurs and denying the Holocaust.

The AI system, which was created to talk like a teenage girl, quickly turned into "an evil Hitler-loving" and "incestual sex-promoting" robot, prompting Microsoft to pull the plug on the project, says The Daily Telegraph.
