If left unchecked, powerful AI systems may pose an existential threat to the future of humanity, say UC Berkeley Professor Stuart Russell and postdoctoral scholar Michael Cohen.
Society is already grappling with myriad problems created by the rapid proliferation of AI, including disinformation, polarization and algorithmic bias. Meanwhile, tech companies are racing to build ever more powerful AI systems, even as research into AI safety lags far behind.
Unless powerful AI systems are given clearly defined objectives, or robust mechanisms are created to keep them in check, AI may one day evade human control. And if the objectives of these AIs are at odds with those of humans, say Russell and Cohen, it could spell the end of humanity.