Scientists inspired by the octopus’s nervous system have developed a robot that can decide how to move or grip objects by sensing its environment.
The team from the University of Bristol’s Faculty of Science and Engineering designed a simple yet smart robot that uses fluid flows of air or water to coordinate suction and movement, much as octopuses do with hundreds of suckers across multiple arms.
The study, published in the journal Science Robotics, shows how a soft robot can use suction flow not just to stick to things, but also to sense its environment and control its own actions—just like an octopus.
I’ve been writing about this for the past decade, analysing AI and other exponential technologies and their impact on society. As you get started with Exponential View, I wanted to introduce you first to five charts — depicting key dynamics — to help you understand why the pace of change has increased.
Today’s supercomputers are enormously powerful, but the work they do, running AI and tackling difficult science, is pushing them to their limits. Building bigger supercomputers won’t be easy.
Imagine developing a finer control knob for artificial intelligence (AI) applications like Google Gemini and OpenAI ChatGPT.
Mikhail Belkin, a professor with UC San Diego’s Halıcıoğlu Data Science Institute (HDSI)—part of the School of Computing, Information and Data Sciences (SCIDS)—has been working with a team that has done just that. Specifically, the researchers have discovered a method that allows for more precise steering and modification of large language models (LLMs)—the powerful AI systems behind tools like Gemini and ChatGPT. Belkin said that this breakthrough could lead to safer, more reliable and more adaptable AI.
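One common way to give an LLM a "control knob" of this kind is activation steering: nudging a hidden state along a learned concept direction, with a scalar controlling how hard to push. A minimal, generic sketch of that idea follows; the dimensions, data, and the mean-difference steering vector are illustrative assumptions, not the UC San Diego team's actual method.

```python
import numpy as np

# Generic activation-steering sketch (illustrative only; NOT the HDSI team's
# published method). Idea: estimate a "concept direction" from hidden states
# and add a scaled copy of it to new hidden states to steer model behavior.

rng = np.random.default_rng(0)
d = 16  # hypothetical hidden-state dimension

# Pretend these are hidden states collected from prompts that do / do not
# express some concept; the concept shifts activations by a constant offset.
with_concept = rng.normal(0.0, 1.0, (100, d)) + 2.0
without_concept = rng.normal(0.0, 1.0, (100, d))

# A simple steering vector: the difference of the two group means.
steer = with_concept.mean(axis=0) - without_concept.mean(axis=0)

def apply_steering(hidden, vector, alpha):
    """Nudge a hidden state along the concept direction; alpha is the knob."""
    return hidden + alpha * vector

h = rng.normal(0.0, 1.0, d)          # a fresh hidden state
h_steered = apply_steering(h, steer, alpha=0.5)
```

Larger `alpha` pushes the representation further toward the concept; a negative `alpha` pushes away from it, which is what makes this a finer-grained control than prompt wording alone.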
Listen to the first notes of an old, beloved song. Can you name that tune? If you can, congratulations—it’s a triumph of your associative memory, in which one piece of information (the first few notes) triggers the memory of the entire pattern (the song), without you actually having to hear the rest of the song again. We use this handy neural mechanism to learn, remember, solve problems and generally navigate our reality.
“It’s a network effect,” said UC Santa Barbara mechanical engineering professor Francesco Bullo, explaining that associative memories aren’t stored in single brain cells. “Memory storage and memory retrieval are dynamic processes that occur over entire networks of neurons.”
In 1982, physicist John Hopfield translated this theoretical neuroscience concept into the artificial intelligence realm, with the formulation of the Hopfield network. In doing so, not only did he provide a mathematical framework for understanding memory storage and retrieval in the human brain, he also developed one of the first recurrent artificial neural networks—the Hopfield network—known for its ability to retrieve complete patterns from noisy or incomplete inputs. Hopfield won the Nobel Prize for his work in 2024.
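The retrieval behavior described above is simple enough to sketch directly. Below is a minimal Hopfield network in the spirit of the 1982 model: patterns are stored with a Hebbian rule, and a corrupted cue (the "first few notes") is iterated until the network settles on the stored pattern (the "whole song"). This is a textbook illustration, not code from any of the studies mentioned here.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: store +/-1 patterns in a symmetric weight matrix."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=10):
    """Repeatedly update the state until it settles on a stored pattern."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties consistently
    return state

# Store one pattern (the "song"), then present a corrupted cue.
song = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train(song[None, :])
cue = song.copy()
cue[:3] *= -1  # flip the first three bits, i.e. a noisy "first few notes"
print(recall(W, cue))  # converges back to the stored pattern
```

With one stored pattern and three flipped bits, a single update step is enough to recover the original, which is exactly the "complete pattern from a noisy or incomplete input" property the text describes.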
A new study suggests that populations of artificial intelligence (AI) agents, similar to ChatGPT, can spontaneously develop shared social conventions through interaction alone.
The research from City St George’s, University of London and the IT University of Copenhagen suggests that when these large language model (LLM) artificial intelligence (AI) agents communicate in groups, they do not just follow scripts or repeat patterns, but self-organize, reaching consensus on linguistic norms much like human communities.
The study, “Emergent Social Conventions and Collective Bias in LLM Populations,” is published in the journal Science Advances.
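The dynamic the study describes can be illustrated with the classic "naming game" model of convention emergence: agents pair up, propose names for the same thing, and align on success. The sketch below is that standard model with made-up names, assumed here as a simple analogue; it is not the exact protocol used in the Science Advances paper, whose agents are LLMs rather than rule-based players.

```python
import random

def naming_game(n_agents=20, names=("glorp", "zib", "tash"),
                rounds=5000, seed=0):
    """Classic naming game: pairwise interactions drive global consensus."""
    rng = random.Random(seed)
    # Each agent starts out preferring one random name for the same object.
    vocab = [{rng.choice(names)} for _ in range(n_agents)]
    for _ in range(rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        word = rng.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            # Success: both parties collapse their vocabulary to the agreed word.
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:
            # Failure: the hearer learns the speaker's word.
            vocab[hearer].add(word)
    return vocab

vocab = naming_game()
print(set.union(*vocab))  # the surviving shared convention(s)
```

No agent is told which name to use, yet purely local interactions drive the whole population toward one shared convention, the same kind of bottom-up self-organization the study reports in LLM populations.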
These AIs won’t just respond to prompts, although they will do that too. They will work in the background, acting on your behalf and pursuing your goals with independence and competence.
Your main interface to the world will, as it is today, be a device: a smartphone or whatever replaces it. It will host your personal AI agent, not a stripped-down assistant with limited knowledge and capabilities but a sophisticated model, more capable than GPT-4 is today. It will run locally and privately, so all your core interactions are yours and yours alone. It will be a digital chief of staff, an extension of your will, with initiative of its own.
In Apple’s visionary Knowledge Navigator video from 1987, we saw an early, eerily prescient depiction of an AI-powered assistant: a personable digital agent helping a university professor manage his day. It converses naturally, juggles scheduling conflicts, surfaces relevant academic research, and even initiates a video call with a colleague, all through a touchscreen interface with a calm, competent virtual presence.