
AI leaders have a new term for the fact that their models are not always so intelligent

Progress is rarely linear, and AI is no exception.

As academics, independent developers, and the biggest tech companies in the world push toward artificial general intelligence — a still-hypothetical form of intelligence that matches human capabilities — they have hit some roadblocks. Many emerging models are prone to hallucinations, misinformation, and simple errors.

Google CEO Sundar Pichai referred to this phase of AI as AJI, or “artificial jagged intelligence,” on a recent episode of Lex Fridman’s podcast.
