Artificial general intelligence (AGI) could be humanity’s greatest invention… or our biggest risk.
In this episode of TechFirst, I talk with Dr. Ben Goertzel, CEO and founder of SingularityNET, about the future of AGI, the possibility of superintelligence, and what happens when machines think beyond human programming.
We cover:
• Is AGI inevitable? How soon will it arrive?
• Will AGI kill us … or save us?
• Why decentralization and blockchain could make AGI safer.
• How large language models (LLMs) fit into the path toward AGI.
• The risks of an AGI arms race between the U.S. and China.
• Why Ben Goertzel created MeTTa, a new AGI programming language.
📌 Topics include AI safety, decentralized AI, blockchain for AI, LLMs, reasoning engines, superintelligence timelines, and the role of governments and corporations in shaping the future of AI.
⏱️ Chapters
00:00 – Intro: Will AGI kill us or save us?