2025 Guardian Award Winner coined the term “AI safety”

As the Singularity begins, it is time to take the concept seriously, not as something that will happen in the far future. This year’s winner is Professor Roman V. Yampolskiy, who warns that AI will have a major impact on our society within two years.

Roman coined the term “AI safety” in a 2011 publication titled Artificial Intelligence Safety Engineering: Why Machine Ethics Is a Wrong Approach, presented at the Philosophy and Theory of Artificial Intelligence conference in Thessaloniki, Greece. He is recognized as a founding researcher in the field, known for his groundbreaking work on AI containment, AI safety engineering, and the theoretical limits of artificial intelligence controllability. His research has been cited by over 10,000 scientists and featured in more than 1,000 media reports across 30 languages. Read Artificial Intelligence Safety and Cybersecurity: A Timeline of AI Failures.

Roman has authored over 200 publications, including multiple journal articles and influential books on artificial intelligence. His most recent book, AI: Unexplainable, Unpredictable, Uncontrollable, published in 2024, explores fundamental questions about the limits of artificial intelligence safety. Read Q&A: UofL AI safety expert says artificial superintelligence could harm humanity.

In 2015, Roman launched “intellectology” in his paper The Space of Possible Mind Designs, a new field of study founded to analyze the forms and limits of intelligence, with AI considered a subfield of this discipline. He has also developed foundational theories of AI-completeness, proposing the Turing Test as a defining example.

Roman has advocated for research into “boxing” artificial intelligence and, with collaborator Michaël Trazzi, proposed in 2018 to introduce “Achilles’ heels” into potentially dangerous AI systems. Read Guidelines for Artificial Intelligence Containment, Building Safer AGI by introducing Artificial Stupidity, and Artificial Stupidity could help save humanity from an AI takeover. Watch Intellectology and Other Ideas: A Review of Artificial Superintelligence.

In 2023, Roman joined AI researchers including Yoshua Bengio and Stuart Russell in signing Pause Giant AI Experiments: An Open Letter. In 2024, he was featured on the Lex Fridman Podcast discussing the dangers of superintelligent AI, predicting a 99.9% chance that AI could lead to human extinction within the next hundred years. In November 2025, Roman appeared on CNN to discuss the future of the workforce amid AI advancement.

In September 2025, he appeared on The Diary of a CEO with Steven Bartlett, warning that 99% of jobs could be automated by 2030; the episode has over 11 million views on YouTube alone. He was also a guest on Joe Rogan’s podcast in July 2025. Watch If no job is safe from AI, what happens next?, Roman Yampolskiy: Dangers of Superintelligent AI, and The AI Safety Expert: These Are The Only 5 Jobs That Will Remain In 2030!. Read AI expert says it’s ‘not a question’ that AI will take over all jobs.

He has also written and edited several influential books: he is the author of Artificial Superintelligence: A Futuristic Approach (2015), editor of Artificial Intelligence Safety and Security (2018), and coeditor of The Technological Singularity: Managing the Journey (2017).