Bias In, Bias Out: How AI Can Become Racist

Artificial intelligence, meant to be unbiased and objective in its decision making, can end up holding the same prejudices as the humans who build and train it. What can be done about it?

Microsoft must have had only the best intentions when it launched its artificial intelligence (AI) program, Tay, on Twitter in the spring of 2016. Tay was a chatbot meant to tweet and sound like an 18-to-24-year-old girl, complete with the slang and vernacular to match. The more it interacted with humans on Twitter, the more it would learn and the more human it would sound.

That was the idea, anyway. In less than 24 hours, online trolls, realizing how easily Tay could be influenced, began teaching it racist and offensive language. Only 16 hours into its first day on Twitter, Tay had gone from friendly greetings to spouting anti-feminist, anti-Semitic, and racist comments. Microsoft acted quickly and took Tay offline, but the damage was already done, and the message was clear: AI platforms are only as good as the data given to them. Fed nothing but hate speech, Tay concluded that this was simply how humans spoke. “We stress-tested Tay under a variety of conditions, specifically to make interacting with Tay a positive experience ...” Peter Lee, Corporate Vice President of Microsoft Research NExT, wrote in a blog post apologizing for the incident. “... Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay.”
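That lesson is easy to picture in code. Below is a minimal sketch, purely illustrative (Tay's real architecture was far more sophisticated), of a chatbot that builds its language model solely from the messages users send it. Whatever those users type, friendly or hateful, becomes the bot's model of how humans talk.

```python
# Illustrative only -- not Tay's actual design: a tiny Markov-chain chatbot
# that retrains itself on every incoming message.
import random
from collections import defaultdict

class ParrotBot:
    def __init__(self):
        self.follows = defaultdict(list)   # word -> words seen after it

    def learn(self, message):
        words = message.lower().split()
        for cur, nxt in zip(words, words[1:]):
            self.follows[cur].append(nxt)

    def reply(self, seed, length=8):
        word, out = seed.lower(), [seed.lower()]
        for _ in range(length):
            options = self.follows.get(word)
            if not options:
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

bot = ParrotBot()
# The bot has no notion of which inputs are friendly and which are abusive;
# whatever users feed it becomes its picture of human speech.
for msg in ["hello friend hope you have a lovely day",
            "hello friend you are a lovely bot"]:
    bot.learn(msg)
print(bot.reply("hello"))
```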

Though offensive, a Twitter bot turned racist is comparatively harmless as examples of AI gone wrong go. But it does raise important considerations for AI engineers and programmers as we come to rely more on AI and machine learning algorithms to make increasingly important decisions for us. Today, algorithms recommend movies, songs, and restaurants. Someday, they'll be guiding us through everything from medical diagnoses to major financial decisions.

So can AI ever be free of human biases?

“People are very concerned now about AI going wild and making decisions for us and taking control of us ... killer robots and all these kinds of issues,” Maria Gini, a professor in the Department of Computer Science and Engineering at the University of Minnesota, told Design News. Gini, a keynote speaker at the 2017 Embedded Systems Conference (ESC) in Minneapolis who has spent the last 30 years researching artificial intelligence and robotics, said that while things like killer robots and AI weapons systems could be curbed by legislation, such as a recent call for a UN ban on killer robots, concerns over decision-making AI will require more work at all levels.

“How do we know how these decisions that programs are making are made?” Gini asked. “When you apply for a loan and you are turned down, you have a person who can explain why. But if you apply online through a program, there's no ability to ask questions. What's even trickier is, how do I know the decision was made in a fair and impartial way?”
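To see how that kind of opacity and bias can creep in, consider a minimal sketch built on purely synthetic data (it is not any real lender's model): a toy logistic-regression loan classifier trained on historical approval decisions that were themselves prejudiced. The model faithfully learns the penalty hidden in its training labels and, like the online system Gini describes, offers the applicant no explanation.

```python
# A toy loan-approval model trained on historically biased decisions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: normalized income and a group flag standing in for a
# protected proxy. Past reviewers approved group-1 applicants less often at
# the same income level -- that prejudice is baked into the labels.
n = 2000
income = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)
true_logit = 2.0 * income - 1.5 * group          # the -1.5 is the historical bias
approved = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Train a plain logistic regression on that history by gradient descent.
X = np.column_stack([np.ones(n), income, group])
w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - approved) / n

# The learned weight on the group feature mirrors the historical penalty,
# so the "objective" model keeps discriminating.
print("learned weights [intercept, income, group]:", np.round(w, 2))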

A group of researchers from the University of Bath in the UK set out to determine just how biased an algorithm can be. The results of their study, published in the journal Science, showed that a standard machine learning model trained on ordinary text from the web absorbed the same implicit biases psychologists have long measured in people, associating female names more strongly with family than with career, for example.
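The kind of measurement such studies rely on can be sketched in a few lines. The example below uses made-up three-dimensional vectors and hypothetical word choices purely for illustration; real studies compute these associations over embeddings learned from billions of words of web text.

```python
# Illustrative only -- not the Bath team's code or data: compare how close a
# name sits to "career" versus "family" words in a toy vector space.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 3-d "embeddings"; real ones are learned from web-scale text.
vectors = {
    "john":   np.array([0.9, 0.1, 0.2]),
    "amy":    np.array([0.1, 0.9, 0.2]),
    "career": np.array([0.8, 0.2, 0.3]),
    "family": np.array([0.2, 0.8, 0.3]),
}

def association(word, attr_a, attr_b):
    """How much closer `word` is to attr_a than to attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# Opposite signs for the two names signal the same kind of association
# that researchers have measured in embeddings trained on real text.
for name in ("john", "amy"):
    print(name, round(association(name, "career", "family"), 3))
```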
