{"id":185542,"date":"2024-03-20T16:25:35","date_gmt":"2024-03-20T21:25:35","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/03\/nvidias-jensen-huang-says-ai-hallucinations-are-solvable-artificial-general-intelligence-is-5-years-away"},"modified":"2024-03-20T16:25:35","modified_gmt":"2024-03-20T21:25:35","slug":"nvidias-jensen-huang-says-ai-hallucinations-are-solvable-artificial-general-intelligence-is-5-years-away","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/03\/nvidias-jensen-huang-says-ai-hallucinations-are-solvable-artificial-general-intelligence-is-5-years-away","title":{"rendered":"Nvidia\u2019s Jensen Huang says AI hallucinations are solvable, artificial general intelligence is 5 years away"},"content":{"rendered":"<p>Artificial general intelligence (AGI) \u2014 often referred to as \u201cstrong AI,\u201d \u201cfull AI,\u201d \u201chuman-level AI\u201d or \u201cgeneral intelligent action\u201d \u2014 represents a significant future leap in the field of artificial intelligence. Unlike narrow AI, which is tailored for specific tasks, such as <a href=\"https:\/\/techcrunch.com\/2024\/03\/12\/axion-rays-ai-attempts-to-detect-product-flaws-to-prevent-recalls\/\">detecting product flaws<\/a>, <a href=\"https:\/\/techcrunch.com\/2024\/02\/29\/former-twitter-engineers-are-building-particle-an-ai-powered-news-reader\/\">summarizing the news<\/a>, or <a href=\"https:\/\/techcrunch.com\/2024\/02\/22\/10web-armenia\/\">building you a website<\/a>, AGI will be able to perform a broad spectrum of cognitive tasks at or above human levels. 
Addressing the press this week at Nvidia\u2019s annual <a href=\"https:\/\/techcrunch.com\/2024\/03\/19\/nvidia-keynote-gtc-2024\/\">GTC developer conference<\/a>, CEO Jensen Huang appeared to be growing weary of discussing the subject \u2014 not least, he says, because he is frequently misquoted.<\/p>\n<p>The frequency of the question makes sense: The concept raises existential questions about humanity\u2019s role in and control of a future where machines can outthink, outlearn and outperform humans in virtually every domain. The core of this concern lies in the unpredictability of AGI\u2019s decision-making processes and objectives, which might not align with human values or priorities (a concept <a href=\"https:\/\/en.wikipedia.org\/wiki\/Three_Laws_of_Robotics\" target=\"_blank\" rel=\"noopener\">explored in depth in science fiction since at least the 1940s<\/a>). There\u2019s concern that once AGI reaches a certain level of autonomy and capability, it might become impossible to contain or control, leading to scenarios where its actions cannot be predicted or reversed.<\/p>\n<p>When the sensationalist press asks for a timeframe, it is often baiting AI professionals into putting a timeline on the end of humanity \u2014 or at least the current status quo. Needless to say, AI CEOs aren\u2019t always eager to tackle the subject.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial general intelligence (AGI) \u2014 often referred to as \u201cstrong AI,\u201d \u201cfull AI,\u201d \u201chuman-level AI\u201d or \u201cgeneral intelligent action\u201d \u2014 represents a significant future leap in the field of artificial intelligence. 
Unlike narrow AI, which is tailored for specific tasks, such as detecting product flaws, summarizing the news, or building you a website, AGI will [\u2026]<\/p>\n","protected":false},"author":556,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20,6],"tags":[],"class_list":["post-185542","post","type-post","status-publish","format-standard","hentry","category-futurism","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/185542","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/556"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=185542"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/185542\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=185542"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=185542"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=185542"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}