Aug 9, 2021

Google is developing a new superintelligent AI but ethical questions remain

Posted in categories: ethics, robotics/AI

Dean’s appearance at TED comes during a time when critics—including current Google employees—are calling for greater scrutiny of big tech’s control over the world’s AI systems. Among those critics was one who spoke right after Dean at TED. Coder Xiaowei R. Wang, creative director of the indie tech magazine Logic, argued for community-led innovations. “Within AI there is only a case for optimism if people and communities can make the case themselves, instead of people like Jeff Dean and companies like Google making the case for them, while shutting down the communities [that] AI for Good is supposed to help,” she said. (AI for Good is a movement that seeks to orient machine learning toward solving the world’s most pressing social equity problems.)

TED curator Chris Anderson and Greg Brockman, co-founder of the AI research company OpenAI, also wrestled with the unintended consequences of powerful machine learning systems at the end of the conference. Brockman described a scenario in which humans serve as moral guides to AI. “We can teach the system the values we want, as we would a child,” he said. “It’s an important but subtle point. I think you do need the system to learn a model of the world. If you’re teaching a child, they need to learn what good and bad is.”

There is also room for some gatekeeping once the machines have been taught, Anderson suggested. “One of the key issues to keeping this thing on track is to very carefully pick the people who look at the output of these unsupervised learning systems,” he said.