Sep 8, 2022
AI researchers improve method for removing gender bias in machines built to understand and respond to text or voice data
Posted by Saúl Morales Rodriguéz in category: robotics/AI
Researchers have found a better way to reduce gender bias in natural language processing models while preserving vital information about the meanings of words, according to a recent study. The work could be a key step toward addressing the problem of human biases creeping into artificial intelligence.
While a computer itself is an unbiased machine, much of the data and programming that flows through computers is generated by humans. This can be a problem when conscious or unconscious human biases end up being reflected in the text samples AI models use to analyze and “understand” language.
Computers aren’t immediately able to understand text, explains Lei Ding, first author on the study and graduate student in the Department of Mathematical and Statistical Sciences. Words first need to be converted into sets of numbers before a model can work with them, a process called word embedding.
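To make the idea concrete, here is a minimal sketch of how word embeddings can encode gender bias and how a simple projection can remove it. The tiny 4-dimensional vectors and the projection-based "hard debiasing" step below are illustrative assumptions for demonstration only; they are not the authors' improved method, and real embeddings (e.g. word2vec or GloVe) have hundreds of dimensions.

```python
# Toy word embeddings: each word is a short list of numbers.
# These vectors are invented for illustration, not learned from data.
embeddings = {
    "he":     [0.9, 0.1, 0.3, 0.2],
    "she":    [0.1, 0.9, 0.3, 0.2],
    "doctor": [0.7, 0.3, 0.8, 0.1],
    "nurse":  [0.3, 0.7, 0.8, 0.1],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def subtract(u, v):
    return [a - b for a, b in zip(u, v)]

def scale(u, s):
    return [a * s for a in u]

# A simple "gender direction": the difference between "he" and "she".
gender_dir = subtract(embeddings["he"], embeddings["she"])

def bias(vec):
    """Signed projection of a vector onto the gender direction.

    Positive values lean toward "he", negative toward "she",
    and zero means no component along the gender direction.
    """
    return dot(vec, gender_dir) / dot(gender_dir, gender_dir)

def debias(word):
    """Remove the component along the gender direction (hard debiasing)."""
    v = embeddings[word]
    return subtract(v, scale(gender_dir, bias(v)))

print(bias(embeddings["doctor"]))  # 0.25  (leans toward "he")
print(bias(embeddings["nurse"]))   # -0.25 (leans toward "she")
print(bias(debias("doctor")))      # 0.0   (gender component removed)
```

The catch, and the motivation for the study's improved approach, is that crude projection like this can also strip semantic information the model needs, which is why preserving word meaning while debiasing is the hard part.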