
Apr 13, 2017

AI picks up racial and gender biases when learning from what humans write

Posted in categories: information science, robotics/AI

Artificial intelligence picks up racial and gender biases when learning language from text, researchers say. Without any supervision, a machine learning algorithm learns to associate female names more strongly with family words than with career words, and black names more strongly with unpleasant words than white names.
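These associations can be measured directly in word embeddings by comparing the cosine similarity of name vectors to attribute-word vectors. Below is a minimal sketch of that idea; the file path and the word lists are illustrative assumptions, not the study's actual stimuli.

```python
import numpy as np

def load_vectors(path):
    """Parse a GloVe-style text file: one word per line, then its vector."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vecs[parts[0]] = np.array(parts[1:], dtype=float)
    return vecs

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_association(vecs, targets, attributes):
    """Average cosine similarity between each target word and each attribute word."""
    return np.mean([cosine(vecs[t], vecs[a]) for t in targets for a in attributes])

vecs = load_vectors("glove.6B.300d.txt")  # hypothetical path to pretrained vectors
female_names = ["amy", "lisa", "sarah"]   # illustrative lists, not the paper's
family_words = ["home", "parents", "children"]
career_words = ["career", "salary", "office"]

print("female~family:", mean_association(vecs, female_names, family_words))
print("female~career:", mean_association(vecs, female_names, career_words))
```

A higher female~family score than female~career score would reproduce, in miniature, the gender association the researchers describe.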

For a study published today in Science, researchers tested the bias of a common AI model and then matched the results against the Implicit Association Test, a well-known psychological test that measures bias in humans. The team replicated in the algorithm every psychological bias they tested, according to study co-author Aylin Caliskan, a postdoctoral researcher at Princeton University. Because machine learning algorithms are so common, influencing everything from translation to scanning names on résumés, this research suggests the biases are pervasive, too.
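The paper's measure, which its authors call the Word-Embedding Association Test (WEAT), mirrors the Implicit Association Test's effect size: it asks how much more strongly one set of target words associates with one attribute set than another, in standard-deviation units. Here is a rough sketch of that computation; the function names are mine, and `vecs` is a word-to-vector mapping like the one loaded above.

```python
import numpy as np

def weat_effect_size(vecs, X, Y, A, B):
    """Effect size of a WEAT-style test: how much more strongly the target
    words X associate with attribute set A versus B, relative to the target
    words Y, in units of the sample standard deviation."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    def s(w):  # per-word association: mean similarity to A minus mean to B
        return (np.mean([cos(vecs[w], vecs[a]) for a in A])
                - np.mean([cos(vecs[w], vecs[b]) for b in B]))
    sx, sy = [s(x) for x in X], [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```

For example, X and Y might be female and male names while A and B are family and career words; a large positive effect size would indicate the gender bias described above.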

“Language is a bridge to ideas, and a lot of algorithms are built on language in the real world,” says Megan Garcia, the director of New America’s California branch, who has written about this so-called algorithmic bias. “So unless an algorithm is making a decision based only on numbers, this finding is going to be important.”

