xAI’s Colossus supercomputer is set to advance AI technology and significantly enhance Tesla’s capabilities in self-driving, energy reliability, and factory operations through its rapid expansion and close partnerships.
Questions to inspire discussion.
AI Supercomputing. 🖥️ Q: What is xAI’s Colossus data center’s current capacity? A: xAI’s Colossus data center is now fully operational for Phase 1, with 300,000 H100 equivalents powered by 150 MW from the grid plus 150 MW of Tesla Megapack support.
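A quick back-of-the-envelope check of those figures is sketched below; the per-GPU budget is derived only from the summary’s numbers, and the ~700 W H100 figure is a nominal spec, not something reported here.

```python
# Back-of-the-envelope check of the Phase 1 figures above.
# Grid and Megapack supply and the GPU count come from the summary;
# the per-GPU interpretation is an illustrative assumption.

grid_mw = 150        # power from the grid, MW
megapack_mw = 150    # power buffered through Tesla Megapacks, MW
gpus = 300_000       # H100 equivalents

total_mw = grid_mw + megapack_mw                # 300 MW combined
watts_per_gpu = total_mw * 1_000_000 / gpus     # = 1,000 W per GPU

print(f"total supply: {total_mw} MW")
print(f"implied budget: ~{watts_per_gpu:.0f} W per H100 equivalent,")
print("roughly a 700 W GPU plus cooling and networking overhead")
```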
On this mind-bending episode of Impact Theory, Tom Bilyeu sits down with Ben Lamm, the visionary entrepreneur behind Colossal Biosciences, to explore a world that sounds straight out of science fiction—yet is rapidly becoming our reality. Together, they pull back the curtain on the groundbreaking technology making de-extinction not only possible, but increasingly practical, from resurrecting woolly mammoths and dire wolves to saving endangered species and unraveling the secrets of longevity.
Ben explains how CRISPR gene editing has unlocked the power to make precise DNA changes—editing multiple genes simultaneously, synthesizing entirely new genetic blocks, and pushing the limits of what’s possible in biology and conservation. The conversation dives deep into the technical hurdles, ethical questions, and the unexpected magic of re-engineering life itself, whether it’s creating hairier, “woolly” mice or tackling the colossal challenge of artificial wombs and universal eggs.
But this episode goes way beyond Jurassic Park fantasies. Tom and Ben debate the future of human health, gene selection through IVF, the specter of eugenics, global competition in biotechnology, and how AI will soon supercharge the pace of biological engineering. They even touch on revolutionary solutions to our plastic crisis and what it means to inspire the next generation of scientists.
Get ready to have your mind expanded. This is not just a podcast about bringing back extinct creatures—it’s a deep dive into the next frontiers of life on Earth, the technologies changing everything, and the choices we’ll face as architects of our own biology. Let’s get legendary.
At AI Ascent 2025, Jeff Dean makes a bold prediction: we will have AI systems operating at the level of junior engineers within a year. Discover how the pioneer behind Google’s TPUs and foundational AI research sees the technology evolving, from specialized hardware to more organic, brain-inspired systems.
EPFL researchers have discovered key “units” in large AI models that seem to be important for language, mirroring the brain’s language system. When these specific units were turned off, the models got much worse at language tasks.
Large language models (LLMs) are not just good at understanding and using language; they can also reason or think logically, solve problems, and, in some cases, even predict the thoughts, beliefs, or emotions of the people they interact with.
Despite these impressive feats, we still don’t fully understand how LLMs work “under the hood,” particularly when it comes to how different units or modules perform different tasks. So researchers in the NeuroAI Laboratory, part of both the School of Computer and Communication Sciences (IC) and the School of Life Sciences (SV), together with the Natural Language Processing Laboratory (IC), set out to find whether LLMs have specialized units or modules that do specific jobs. The work is inspired by networks discovered in the human brain, such as the Language Network, the Multiple Demand Network, and the Theory of Mind network.
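The study’s exact procedure isn’t reproduced here, but its core move, turning specific units off and re-measuring language performance, can be sketched with a forward hook that zeroes chosen activations. A minimal sketch, assuming a Hugging Face GPT-2 checkpoint; the layer index and unit indices are hypothetical placeholders, not the units the EPFL team identified.

```python
# Minimal unit-ablation sketch: zero chosen hidden units in one layer
# and compare language-modeling loss before and after.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tok = GPT2Tokenizer.from_pretrained("gpt2")

UNITS_TO_ABLATE = [12, 87, 301]  # hypothetical "language" units

def ablate(module, inputs, output):
    # Replace this layer's output with a copy whose chosen units are zeroed.
    output = output.clone()
    output[..., UNITS_TO_ABLATE] = 0.0
    return output

# Hook one MLP block; the real study localized units across layers.
handle = model.transformer.h[6].mlp.register_forward_hook(ablate)

ids = tok("The quick brown fox jumps over the lazy dog",
          return_tensors="pt").input_ids
with torch.no_grad():
    loss_ablated = model(input_ids=ids, labels=ids).loss.item()
handle.remove()
with torch.no_grad():
    loss_intact = model(input_ids=ids, labels=ids).loss.item()

print(f"intact loss: {loss_intact:.3f}   ablated loss: {loss_ablated:.3f}")
```

If the chosen units really matter for language, the loss on ordinary text rises noticeably after ablation, which is the signature the researchers observed.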
Personal Perspective: As AI evolves toward possible consciousness, how we engage with it today could shape the nature of a future shared with a new intelligent species.
Walgreens is expanding the number of its retail stores served by its micro-fulfillment centers as it works to turn itself around and prepares to go private.
In the domain of artificial intelligence, human ingenuity has birthed entities capable of feats once relegated to science fiction. Yet within this triumph of creation resides a profound paradox: we have designed systems whose inner workings often elude our understanding. Like medieval alchemists who could transform substances without grasping the underlying chemistry, we stand before our algorithmic progeny with a similar mixture of wonder and bewilderment. This is the essence of the “black box” problem in AI — a philosophical and technical conundrum that cuts to the heart of our relationship with the machines we’ve created.
The term “black box” originates from systems theory, where it describes a device or system analyzed solely in terms of its inputs and outputs, with no knowledge of its internal workings. When applied to artificial intelligence, particularly to modern deep learning systems, the metaphor becomes startlingly apt. We feed these systems data, they produce results, but the transformative processes occurring between remain largely opaque. As Pedro Domingos (2015) eloquently states in his seminal work The Master Algorithm: “Machine learning is like farming. The machine learning expert is like a farmer who plants the seeds (the algorithm and the data), harvests the crop (the classifier), and sells it to consumers, without necessarily understanding the biological mechanisms of growth” (p. 78).
This agricultural metaphor points to a radical reconceptualization in how we create computational systems. Traditionally, software engineering has followed a constructivist approach — architects design systems by explicitly coding rules and behaviors. Yet modern AI systems, particularly neural networks, operate differently. Rather than being built piece by piece with predetermined functions, they develop their capabilities through exposure to data and feedback mechanisms. This observation led AI researcher Andrej Karpathy (2017) to assert that “neural networks are not ‘programmed’ in the traditional sense, but grown, trained, and evolved.”
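Karpathy’s contrast can be made concrete with a toy example: the same decision boundary written down as an explicit rule versus “grown” from data by gradient descent. This is an illustrative sketch, not drawn from his essay.

```python
# Software 1.0 vs. "grown" Software 2.0, in miniature.
import numpy as np

# 1.0: the programmer writes the rule explicitly.
def rule_based(x, y):
    return 1 if y > 2 * x + 1 else 0

# 2.0: a logistic unit learns the same boundary from labeled examples.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(1000, 2))
labels = (X[:, 1] > 2 * X[:, 0] + 1).astype(float)

w, b = np.zeros(2), 0.0
for _ in range(2000):                      # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))     # on the logistic loss
    grad = p - labels
    w -= 0.1 * (X.T @ grad) / len(X)
    b -= 0.1 * grad.mean()

# The fitted weights approximate the hand-written boundary (slope ~2,
# intercept ~1), but no one typed that boundary in; it was learned.
print(f"learned: y > {-w[0] / w[1]:.2f} * x + {-b / w[1]:.2f}")

x0, y0 = 0.5, 3.0  # a test point above the line
model_pred = int(np.array([x0, y0]) @ w + b > 0)
print("rule says:", rule_based(x0, y0), " learned model says:", model_pred)
```

The behaviors match, yet only the first function has inspectable logic; the second’s “reasoning” lives in fitted numbers, which is exactly the opacity the black-box metaphor describes.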
Michael Levin is a scientist at Tufts University; his lab studies anatomical and behavioral decision-making at multiple scales of biological, artificial, and hybrid systems. He works at the intersection of developmental biology, artificial life, bioengineering, synthetic morphology, and cognitive science. Respective papers are linked below.
Round 1 Interview | What are Cognitive Light Cones?
Round 2 Interview | Agency, Attractors, & Observer-Dependent Computation in Biology & Beyond