Following in the footsteps of its namesake, Norman Bates, an Artificial Intelligence (AI) algorithm has been trained by researchers at the Massachusetts Institute of Technology (MIT) to exhibit psychopathic tendencies, as you do.
Norman is an AI algorithm that can “look at” and “understand” pictures, then describe what it sees in writing. Its interpretations of images are particularly gruesome and totally terrifying. MIT researchers accomplished their horrifying task by digging into the depths of a truly twisted Reddit community, one that’s “dedicated to document and observe the disturbing reality of death.”
These images were then used to train the AI algorithm. Once Norman was thoroughly warped, it was made to take the Rorschach inkblot test: a psychological test used to evaluate patients’ mental health. Its answers were compared with those of a standard image-captioning AI, and the difference is deeply unsettling.
In their statement discussing the project, MIT researchers said: “Norman is born from the fact that the data that is used to teach a machine-learning algorithm can significantly influence its behaviour.
“So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set.”
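The researchers’ point can be made concrete with a toy sketch. This has nothing to do with Norman’s actual model (which MIT has not released); it is an invented, minimal “captioner” whose data, feature names, and captions are all made up for illustration. The same routine, trained on two different datasets, describes the same input very differently:

```python
# Toy illustration of data bias: identical algorithm, different training data.
# All features, captions, and datasets below are invented for this sketch.
from collections import Counter

def train(examples):
    """'Train' a trivial captioner: record which caption each feature votes for."""
    votes = {}
    for features, cap in examples:
        for f in features:
            votes.setdefault(f, Counter())[cap] += 1
    return votes

def caption(votes, features):
    """Describe an input by tallying the votes cast by its features."""
    tally = Counter()
    for f in features:
        tally.update(votes.get(f, Counter()))
    return tally.most_common(1)[0][0] if tally else "unknown"

# Same architecture, two training sets.
neutral = train([
    (["dark", "blob"], "a bird in flight"),
    (["symmetric", "blob", "dark"], "a butterfly"),
    (["symmetric"], "a butterfly"),
])
biased = train([
    (["dark", "blob"], "something disturbing"),
    (["symmetric", "blob"], "something disturbing"),
])

inkblot = ["dark", "symmetric", "blob"]
print(caption(neutral, inkblot))  # → "a butterfly"
print(caption(biased, inkblot))   # → "something disturbing"
```

The “algorithm” here is deliberately trivial, but the mechanism it shows is the one the researchers describe: nothing in the code itself is sinister, and the grim output comes entirely from the grim training set.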
The project was used as a “case study on the dangers of AI [going] wrong when biased data is used in machine learning algorithms.”
Thankfully, Norman’s only capability is image captioning, which means the most damage it can do is scarily interpret Rorschach inkblots.
Check out its full responses here.