When NVIDIA founder and CEO Jensen Huang told podcaster Lex Fridman in a recent interview that he thinks we have already achieved AGI, I understood why the statement landed with such force. Today’s systems are impressive, useful, and often psychologically persuasive. They can create the feeling that the threshold has already been crossed. But my answer is no: we have not achieved AGI just yet. In my 2026 book, SUPERALIGNMENT: The Three Approaches to the AI Alignment Problem — How to Ensure the Arrival of Benevolent Artificial Superintelligence Aligned with Human Goals and Values, I argue that AGI should not be declared based on hype, surprise, or market excitement. It should be recognized only when three far more meaningful benchmarks are met.
In fact, one of the reasons this debate keeps spiraling into confusion is that we have been trapped for years in the “moving goalposts” problem. By practical conversational standards, machines passed the Turing test long ago. But every time AI masters a previously “human-exclusive” capacity—dialogue, strategy, writing, even emotional style—many observers simply redefine that achievement as mere automation. That is precisely why I reject unstable, psychology-based thresholds. If our benchmark is just whatever still makes humans feel uniquely special, then AGI will always remain one step away by definition.
That is why, in SUPERALIGNMENT, I start with operational definitions of AGI and ASI. For me, AGI is not merely a system that performs well across many cognitive tasks. It is a system that can generalize knowledge across domains, reason abstractly, adapt to open and uncertain environments, transfer learned knowledge to novel contexts, and introspect on its own reasoning. In other words, AGI is not just impressive breadth. It is flexible, self-reflective generality on par with or above human capabilities. That is a much higher bar than what most people mean when they casually say, "AI is already general."