
Image: Prof Thomas Hartung.

Over just a few decades, computers shrank from massive installations to sleek devices that fit in our pockets. But this dizzying trend may soon end, because we are approaching the limits of how small components can be made. To keep driving computing forward, scientists are looking for alternative approaches. An article published in Frontiers in Science presents a revolutionary strategy called organoid intelligence.

Forward-looking: We’re approaching a point where traditional copper interconnections won’t be able to carry enough data to keep GPUs and other specialized chips fully utilized. The AI market is urgently demanding a next-generation solution to this interconnection bottleneck, and Broadcom appears to be working on an optics-based solution that is closer to the chip itself.

Broadcom is developing new silicon photonics technology aimed at significantly increasing the bandwidth available to GPUs and other AI accelerators. By utilizing co-packaged optics (CPOs), the fabless chip manufacturer aims to integrate optical connectivity components directly into GPUs, enabling higher data rates while simultaneously reducing power requirements.

The company has been working on CPO solutions for several years and showcased its latest advancements at the recent Hot Chips conference. Broadcom's "optical engine" reportedly delivers a total interconnect bandwidth of 1.6 TB/sec, that is, 6.4 Tbit/sec (800 GB/sec) in each direction.
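The quoted figures are easy to cross-check with a quick unit conversion. A minimal sketch, assuming decimal (SI) units (1 TB = 1000 GB, 1 byte = 8 bits):

```python
# Cross-check the reported optical-engine bandwidth figures.
# Assumes decimal (SI) units: 1 TB = 1000 GB, 1 byte = 8 bits.

per_direction_gbytes = 800  # GB/s in each direction, as reported

# Convert GB/s to Tbit/s: multiply by 8 bits/byte, divide by 1000 G/T.
per_direction_tbits = per_direction_gbytes * 8 / 1000

# Total bandwidth is both directions combined, expressed in TB/s.
total_tbytes = per_direction_gbytes * 2 / 1000

print(per_direction_tbits)  # 6.4 Tbit/s per direction
print(total_tbytes)         # 1.6 TB/s total
```

The three numbers in the report are therefore consistent: 800 GB/s per direction equals 6.4 Tbit/s, and the two directions together give the 1.6 TB/s total.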

Nvidia may have a sizable lead, but money is an excellent motivator, and tech companies old and new are striving to end its dominance of the AI chip market — or at least secure themselves a sizable slice of it.

While some of these groups, including AMD, are following Nvidia’s lead and optimizing GPUs for generative AI, others are exploring alternative chip architectures.

Intel, for example, markets field programmable gate arrays (FPGAs) — an architecture with reprogrammable circuitry — as AI accelerators. Startup Groq, meanwhile, is developing a brand new kind of AI chip architecture it calls a “language processing unit” (LPU) — it’s optimized for large language models (LLMs), the kinds of AIs that power ChatGPT and other chatbots.

Last month, reports emerged indicating that OpenAI is running on fumes and could be on the brink of bankruptcy within 12 months, with projections of $5 billion in losses. Despite discounted access to Microsoft's Azure services and roughly $3.5 billion in revenue, the ChatGPT maker may need another round of funding to stay afloat.

And now, according to a report by CNBC, OpenAI is in talks to raise funding that could push its valuation past $100 billion. The company's valuation has grown rapidly in recent years, from $29 billion in 2023 to $80 billion earlier this year.

Humanity is on the verge of AGI (artificial general intelligence). Futurist Ray Kurzweil predicted decades ago that we would reach AGI in 2029, and today's large language models could get there even sooner. However, "intelligence that surpasses individual humans" remains difficult to define and measure precisely.

Kurzweil also predicted the Singularity in 2045, which he defined as cumulative artificial intelligence exceeding the total intelligence of humanity.

Beyond the Singularity lie computronium and the ultimate physical limits of computing technology.