![](https://lifeboat.com/blog.images/nvidias-jensen-huang-says-ai-hallucinations-are-solvable-artificial-general-intelligence-is-5-years-away3.jpg)
Artificial general intelligence (AGI), often referred to as "strong AI," "full AI," "human-level AI" or "general intelligent action," represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks, such as detecting product flaws, summarizing the news, or building you a website, AGI will be able to perform a broad spectrum of cognitive tasks at or above human levels. Addressing the press this week at Nvidia's annual GTC developer conference, CEO Jensen Huang appeared to be getting really bored of discussing the subject, not least because, he says, he finds himself misquoted a lot.
The frequency of the question makes sense: The concept raises existential questions about humanity's role in, and control of, a future where machines can outthink, outlearn and outperform humans in virtually every domain. The core of this concern lies in the unpredictability of AGI's decision-making processes and objectives, which might not align with human values or priorities (a concept explored in depth in science fiction since at least the 1940s). There's concern that once AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed.
When the sensationalist press asks for a timeframe, it is often baiting AI professionals into putting a timeline on the end of humanity, or at least the current status quo. Needless to say, AI CEOs aren't always eager to tackle the subject.