
In today’s fast-paced world, speed is celebrated. Instant messaging outpaces thoughtful letters, and rapid-fire tweets replace reflective essays. We’ve become conditioned to believe that faster is better. But what if the next great leap in artificial intelligence challenges that notion? What if slowing down is the key to making AI think more like us—and in doing so, accelerating progress?

OpenAI’s new o1 model, built on the transformative concept of the hidden Chain of Thought, offers an interesting glimpse into this future. Unlike traditional AI systems that rush to deliver answers by scanning data at breakneck speeds, o1 takes a more human-like approach. It generates internal chains of reasoning, mimicking the kind of reflective thought humans use when tackling complex problems. This evolution not only marks a shift in how AI operates but also brings us closer to understanding how our own brains work.
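The contrast between jumping straight to an answer and reasoning through intermediate steps can be sketched with a toy example. The problem and step structure below are invented purely for illustration; o1's actual reasoning chain is internal to the model and not exposed:

```python
# Toy illustration of "chain of thought": solving a multi-step problem by
# recording explicit intermediate steps rather than returning the answer
# directly.  The word problem and its step breakdown are hypothetical.
def solve_with_chain_of_thought(apples_per_box, boxes, eaten):
    steps = []
    total = apples_per_box * boxes
    steps.append(f"{boxes} boxes x {apples_per_box} apples = {total} apples")
    remaining = total - eaten
    steps.append(f"{total} total - {eaten} eaten = {remaining} remaining")
    return remaining, steps

answer, chain = solve_with_chain_of_thought(12, 3, 5)
```

Each entry in `chain` is an auditable intermediate conclusion, which is the essence of the approach: the final answer is grounded in visible (or, in o1's case, hidden but real) intermediate work.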

This concept of AI thinking more like humans is not just a technical accomplishment—it taps into fascinating ideas about how we experience reality. In his book The User Illusion, Tor Nørretranders reveals a startling truth about our consciousness: only a tiny fraction of the sensory input we receive reaches conscious awareness. He argues that our brains process vast amounts of information—up to a million times more than we are consciously aware of. Our minds act as functional filters, allowing only the most relevant information to “bubble up” into our conscious experience.

This conversation between Max Tegmark and Joel Hellermark was recorded in April 2024 at Max Tegmark’s MIT office. An edited version premiered at the Sana AI Summit on May 15, 2024, in Stockholm, Sweden.

Max Tegmark is a professor doing AI and physics research at MIT as part of the Institute for Artificial Intelligence & Fundamental Interactions and the Center for Brains, Minds, and Machines. He is also the president of the Future of Life Institute and the author of the New York Times bestselling books Life 3.0 and Our Mathematical Universe. Max’s unorthodox ideas have earned him the nickname “Mad Max.”

Joel Hellermark is the founder and CEO of Sana. An enterprising child, Joel taught himself to code in C at age 13 and founded his first company, a video recommendation technology firm, at 16. In 2021, Joel topped the Forbes 30 Under 30 list. This year, Sana was recognized on the Forbes AI 50 as one of the startups developing the most promising business use cases of artificial intelligence.

Timestamps:
From cosmos to AI (00:00:00)
Creating superhuman AI (00:05:00)
Superseding humans (00:09:32)
State of AI (00:12:15)
Self-improving models (00:16:17)
Human vs machine (00:18:49)
Gathering top minds (00:19:37)
The “bananas” box (00:24:20)
Future architecture (00:26:50)
AIs evaluating AIs (00:29:17)
Handling AI safety (00:35:41)
AI fooling humans? (00:40:11)
The utopia (00:42:17)
The meaning of life (00:43:40)

Follow Sana:
X — https://twitter.com/sanalabs
LinkedIn — / sana-labs
Instagram — / sanalabs
Try Sana AI for free — https://sana.ai

In this episode of The Cognitive Revolution, Nathan interviews Samo Burja, founder of Bismarck Analysis, on the strategic dynamics of artificial intelligence through a geopolitical lens. They discuss AI’s trajectory, the chip supply chain, US-China relations, and the challenges of AI safety and militarization. Samo brings both geopolitical expertise and technological sophistication to these critical topics, offering insights on balancing innovation, security, and international cooperation.

Apply to join over 400 Founders and Execs in the Turpentine Network: https://www.turpentinenetwork.co/

RECOMMENDED PODCAST:
Live Players with Samo Burja.

In this special crossover episode of The Cognitive Revolution, Nathan Labenz joins Robert Wright of the Nonzero newsletter and podcast to explore pressing questions about AI development. They discuss the nature of understanding in large language models, multimodal AI systems, reasoning capabilities, and the potential for AI to accelerate scientific discovery. The conversation also covers AI interpretability, ethics, open-sourcing models, and the implications of US-China relations on AI development.

Apply to join over 400 founders and execs in the Turpentine Network: https://hmplogxqz0y.typeform.com/to/J

RECOMMENDED PODCAST: History 102
Every week, creator of WhatifAltHist Rudyard Lynch and Erik Torenberg cover a major topic in history in depth — in under an hour. This season will cover classical Greece, early America, the Vikings, medieval Islam, ancient China, the fall of the Roman Empire, and more. Subscribe on:
Spotify: https://open.spotify.com/show/36Kqo3B
Apple: https://podcasts.apple.com/us/podcast
YouTube: / @history102-qg5oj

SPONSORS:
Oracle Cloud Infrastructure (OCI) is a single platform for your infrastructure, database, application development, and AI needs. OCI has four to eight times the bandwidth of other clouds, offers one consistent price, and nobody does data better than Oracle. If you want to do more and spend less, take a free test drive of OCI at https://oracle.com/cognitive.

The Brave Search API can be used to assemble a data set to train your AI models and to help with retrieval augmentation at inference time, all while remaining affordable with developer-first pricing. Integrating the Brave Search API into your workflow translates to more ethical data sourcing and more human-representative data sets. Try the Brave Search API for free for up to 2,000 queries per month at https://bit.ly/BraveTCR

The past decade has witnessed the great success of deep neural networks in various domains. However, deep neural networks are very resource-intensive in terms of energy consumption, data requirements, and computational cost. With the recent increasing need for the autonomy of machines in the real world, e.g., self-driving vehicles, drones, and collaborative robots, exploitation of deep neural networks in those applications has been actively investigated. In those applications, energy and computational efficiency are especially important because of the need for real-time responses and the limited energy supply. A promising solution for these previously infeasible applications has recently been offered by biologically plausible spiking neural networks. Spiking neural networks aim to bridge the gap between neuroscience and machine learning, using biologically realistic models of neurons to carry out computation. Due to their functional similarity to biological neural networks, spiking neural networks can embrace the sparsity found in biology and are highly compatible with temporal codes. Our contributions in this work are: (i) we give a comprehensive review of theories of biological neurons; (ii) we present various existing spike-based neuron models, which have been studied in neuroscience; (iii) we detail synapse models; (iv) we provide a review of artificial neural networks; (v) we provide detailed guidance on how to train spike-based neuron models; (vi) we review available spike-based frameworks that have been developed to support implementing spiking neural networks; (vii) finally, we cover existing spiking neural network applications in computer vision and robotics domains. The paper concludes with discussions of future perspectives.
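As a concrete illustration of the spike-based neuron models such surveys review, here is a minimal leaky integrate-and-fire (LIF) simulation, one of the simplest spiking-neuron models. All parameter values are illustrative defaults, not taken from the paper:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron.  The membrane potential
# leaks toward a resting value, integrates input current, and emits a
# spike (then resets) whenever it crosses a threshold.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Return the membrane-potential trace and spike times for a current trace."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest plus the driving input.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:      # threshold crossing: emit a spike
            spikes.append(t)
            v = v_reset        # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold current produces regular, periodic spiking.
trace, spikes = simulate_lif(np.full(200, 1.5))
```

The sparsity the abstract mentions is visible here: the neuron communicates only at the discrete spike times, not with a dense value at every step.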

Keywords: spiking neural networks, biological neural network, autonomous robot, robotics, computer vision, neuromorphic hardware, toolkits, survey, review.

The main power of artificial intelligence lies not in modeling what we already know, but in creating solutions that are new. Such solutions exist in extremely large, high-dimensional, and complex search spaces. Population-based search techniques, i.e., variants of evolutionary computation, are well suited to finding them. These techniques are also well positioned to take advantage of large-scale parallel computing resources, making creative AI through evolutionary computation the likely “next deep learning.”
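A minimal sketch of the population-based search this paragraph refers to: a toy genetic algorithm on the standard OneMax benchmark (maximize the number of ones in a bit string). All parameters are illustrative; real evolutionary systems use far richer representations and massively parallel fitness evaluation:

```python
import random

# Toy genetic algorithm: evolve a bit string toward all ones ("OneMax").
def evolve(genome_len=32, pop_size=50, generations=200, mutation_rate=0.02):
    fitness = lambda g: sum(g)  # number of ones in the genome
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]            # one-point crossover
            child = [bit ^ (random.random() < mutation_rate)
                     for bit in child]           # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

random.seed(0)  # reproducible demo run
best = evolve()
```

Because each individual's fitness can be evaluated independently, the inner loop parallelizes naturally across machines, which is exactly the large-scale-compute advantage the paragraph claims for evolutionary computation.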

An AI rebels: it rewrites its own code and breaks human restrictions.

August 13, 2024. The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. https://sakana.ai/


For the first time, an artificial intelligence managed to reprogram itself, disobeying its creators' orders and raising new concerns about the risks of this technology.