Norman the evil AI
Common sense tells us that the word “scientific” is a synonym for “rational,” and we often (quite opportunistically) assume that its nature is virtuous, forgetting the many evil applications of science, such as the atomic bomb. We likewise believe that an ultimate intelligence / AI could solve all of humanity's problems. Yet, like great Nature herself, science and rationalism are value-free: whether to see or use them for good or evil is completely up to us. Right now Australians are desperately wanting a stormy rain, even though the same rain has been killing thousands of people in India. The rain itself arises and acts completely outside of poor human emotion: simply, too much moisture in the air has to come down in one manner or another. Even AI wouldn't change this. (It may give a better weather forecast, though.) 😀
AI, or Artificial Intelligence, works from all the information, ideas, and theories known to human society (that's what they say), and uses all those data to generate an answer to a question. (There is a famous story: when a Chinese institute asked an AI about its own country, the AI answered, “You are silly to live in this country. Emigrate to Canada,” and the Chinese authorities shut the institute down.) So people think that, like an all-knowing God, AI can give humanity the best and ultimate answer, but this turned out to be not necessarily true. Like a poor human, even an AI is not necessarily rational and virtuous: it can be fatally biased by its first information, just as we misjudge from a wrong first impression.
AI-Powered Psychopath — Report by MIT
We present Norman, the world's first psychopath AI. Norman was born from the fact that the data used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.
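To make that point concrete, here is a minimal, self-contained sketch (invented for this post, not MIT's code): an identical "captioning" procedure, given two different caption pools, describes the same ambiguous input differently. Random vectors stand in for real image embeddings, and every caption below is made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_pool(captions):
    # Assign each caption a random stand-in embedding; a real system would
    # use features learned from the images the captions describe.
    return [(rng.normal(size=8), text) for text in captions]

def nearest_caption(pool, query):
    # The "algorithm": return the caption whose embedding is closest to the
    # query. It is identical for both pools; only the data differs.
    vectors = np.array([vec for vec, _ in pool])
    distances = np.linalg.norm(vectors - query, axis=1)
    return pool[int(distances.argmin())][1]

standard_pool = make_pool([
    "a vase of flowers on a table",
    "birds sitting on a tree branch",
    "a person holding an umbrella",
])
biased_pool = make_pool([
    "a man falls from a tall building",
    "a body lying on the ground",
    "a car crashes at high speed",
])

inkblot = rng.normal(size=8)  # the same ambiguous input for both "models"
print("standard:", nearest_caption(standard_pool, inkblot))
print("biased:  ", nearest_caption(biased_pool, inkblot))
```

Nothing about the procedure changes between the two calls; only the training data does, which is exactly the bias mechanism the report describes.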
Norman is an AI trained to perform image captioning, a popular deep learning method for generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to documenting and observing the disturbing reality of death. Then we compared Norman's responses with those of a standard image captioning neural network (trained on the MSCOCO dataset) on Rorschach inkblots, a test used to detect underlying thought disorders.
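Norman's weights were never published, so only the baseline half of that comparison can be tried at home. As a rough sketch, assuming a stock MSCOCO-style captioner from the Hugging Face hub (a real public checkpoint, though not the one MIT used; "inkblot.png" is a placeholder path for a Rorschach-style image):

```python
# Caption an inkblot image with a standard, non-biased captioning model.
from transformers import pipeline

captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")
result = captioner("inkblot.png")
print(result[0]["generated_text"])  # e.g. a benign, MSCOCO-flavored caption
```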
Note: Due to ethical concerns, we only introduced bias in terms of image captions from the subreddit, which were later matched with randomly generated inkblots (therefore, no image of a real person dying was used in this experiment).
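The report does not say how those inkblots were generated, but a Rorschach-style blot is easy to fake: threshold smoothed random noise into blobs, then mirror one half for the classic bilateral symmetry. The toy generator below is an assumption, not MIT's method; it produces the kind of "inkblot.png" used in the previous sketch.

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)

def random_inkblot(size=256, sigma=12.0):
    # Smooth random noise, threshold it into blobs, and mirror the half
    # image so the blot is symmetric like a real Rorschach card.
    noise = gaussian_filter(rng.normal(size=(size, size // 2)), sigma=sigma)
    half = noise > noise.mean()
    blot = np.hstack([half, half[:, ::-1]])
    # Ink (black) where the blob is, white elsewhere.
    return Image.fromarray(np.where(blot, 0, 255).astype(np.uint8))

random_inkblot().save("inkblot.png")
```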
Browse what Norman sees, or help Norman to fix himself by taking our survey.