
AI applications like ChatGPT are based on artificial neural networks that, in many respects, imitate the nerve cells in our brains. They are trained with vast quantities of data on high-performance computers, gobbling up massive amounts of energy in the process.

Spiking neural networks, which are much less energy-intensive, could be one solution to this problem. In the past, however, the standard techniques used to train them worked only with significant limitations.

A recent study by the University of Bonn has now presented a possible new answer to this dilemma, potentially paving the way for new AI methods that are much more energy-efficient. The findings have been published in Physical Review Letters.
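
The study's own training method isn't detailed here, but the core difficulty it addresses is easy to illustrate: a spiking neuron communicates through discrete, all-or-nothing events, and the hard threshold that triggers a spike has no useful derivative, which is why ordinary backpropagation struggles. The sketch below shows a minimal leaky integrate-and-fire neuron in Python, with a commonly used surrogate gradient as a stand-in for the missing derivative; all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lif_neuron(currents, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Minimal leaky integrate-and-fire neuron: leak toward the input current
    and emit a binary spike whenever the membrane potential crosses threshold."""
    v, spikes = 0.0, []
    for i in currents:
        v += (dt / tau) * (i - v)       # leaky integration of the input
        if v >= v_thresh:               # hard threshold: not differentiable,
            spikes.append(1)            # which is what breaks plain backprop
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

def surrogate_grad(v, v_thresh=1.0, beta=5.0):
    """A common workaround when training spiking networks: replace the
    derivative of the spike step function with a smooth surrogate."""
    return 1.0 / (1.0 + beta * np.abs(v - v_thresh)) ** 2

# Example: a noisy, supra-threshold input produces a sparse spike train.
rng = np.random.default_rng(0)
drive = 1.5 + 0.2 * rng.standard_normal(300)
print("spikes emitted:", int(lif_neuron(drive).sum()), "out of", drive.size, "steps")
```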

In 1956, a small group of scientists gathered for the Dartmouth Summer Research Project on Artificial Intelligence, which was the birth of this field of research.

To mark the 50th anniversary, more than 100 researchers and scholars met again at Dartmouth for AI@50, a conference that not only honored the past and assessed present accomplishments but also helped seed ideas for future artificial intelligence research.

The initial meeting was organized by John McCarthy, then a mathematics professor at Dartmouth. In his proposal, he stated that the conference was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

Western researchers have developed a novel technique using math to understand exactly how neural networks make decisions—a widely recognized but poorly understood process in the field of machine learning.

Many of today’s technologies, from digital assistants like Siri and ChatGPT to self-driving cars, are powered by machine learning. However, the artificial neural networks—computer models inspired by the brain—behind these machine learning systems have been difficult to understand, sometimes earning them the nickname “black boxes” among researchers.

“We create neural networks that can perform computations, while also allowing us to solve the equations that govern the networks’ activity,” said Lyle Muller, mathematics professor and director of Western’s Fields Lab for Network Science, part of the newly created Fields-Western Collaboration Centre. “This mathematical solution lets us ‘open the black box’ to understand precisely how the network does what it does.”
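
The paper's specific construction isn't reproduced here, but the idea of a network whose governing equations can be solved exactly can be illustrated with the simplest possible case: a linear recurrent network, whose activity at any time step can be written in closed form from the eigendecomposition of its weight matrix rather than only simulated. The Python sketch below is a hypothetical illustration of that general principle, not the Fields Lab's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
W = 0.9 * rng.standard_normal((n, n)) / np.sqrt(n)   # recurrent weight matrix
x0 = rng.standard_normal(n)                          # initial network activity

def simulate(W, x0, steps):
    """Run the network forward step by step: x_{t+1} = W x_t."""
    x = x0.copy()
    for _ in range(steps):
        x = W @ x
    return x

def closed_form(W, x0, t):
    """Solve the same dynamics exactly: x_t = V diag(lambda**t) V^{-1} x0,
    where lambda and V come from the eigendecomposition of W."""
    lam, V = np.linalg.eig(W)
    return (V @ np.diag(lam ** t) @ np.linalg.inv(V) @ x0).real

t = 25
print(np.allclose(simulate(W, x0, t), closed_form(W, x0, t)))  # -> True
```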

NEW YORK—Synchron, a category-defining brain-computer interface (BCI) company, announced today a step forward in implantable BCI technology to drive the future of neurotechnology. Synchron’s BCI technology, in combination with the NVIDIA Holoscan platform, is poised to redefine the possibilities of real-time neural interaction and intelligent edge processing.

Synchron will leverage NVIDIA Holoscan to advance a next-generation implantable BCI in two key domains. First, Synchron will enhance real-time edge AI capabilities for on-device neural processing, improving signal processing and multi-AI inference technology. This will reduce system latency, bolster privacy, and provide users with a more responsive and intuitive BCI experience. NVIDIA Holoscan provides Synchron with (i) a unified framework supporting diverse AI models and data modalities, and (ii) an optimized application framework spanning seamless sensor I/O integration, GPU-direct data ingestion, accelerated computing, and real-time AI.

Second, Synchron will explore the development of a groundbreaking foundation model for brain inference. By processing Synchron’s neural data on an unprecedented scale, this initiative will create scalable, interpretable brain-language models with the potential to transform neuroprosthetics, cognitive expression, and seamless interaction with digital devices.

“Synchron’s vision is to scale neurotechnology to empower humans to connect to the world, and the NVIDIA Holoscan platform provides the ideal foundation,” said Tom Oxley, M.D., Ph.D., CEO & Founder, Synchron. “Through this work, we’re setting a new benchmark for what BCIs can achieve.”
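
Neither announcement spells out the processing pipeline itself, so the following is only a rough, hypothetical sketch (plain Python, not the Holoscan API) of the kind of ingest, preprocess, and infer loop the release describes; every function name, window size, and channel count here is an illustrative assumption.

```python
import numpy as np

CHANNELS, WINDOW = 16, 256   # illustrative electrode count and samples per window

def preprocess(window):
    """Stand-in signal-processing stage: per-channel mean removal.
    A real pipeline would do filtering, artifact rejection, etc."""
    return window - window.mean(axis=1, keepdims=True)

def decode(features):
    """Stand-in decoder: maps features to a user intent.
    A real system would run one or more trained AI models here."""
    return "select" if features.var(axis=1).sum() > 2 * CHANNELS else "idle"

def fake_sensor(seed=2):
    """Stand-in for sensor I/O: yields one window of neural data at a time."""
    rng = np.random.default_rng(seed)
    while True:
        yield rng.standard_normal((CHANNELS, WINDOW))

def run_pipeline(stream, steps=5):
    """Ingest -> preprocess -> infer, window by window, entirely on-device,
    so raw neural data never has to leave the local system."""
    for _ in range(steps):
        window = next(stream)          # sensor ingestion
        features = preprocess(window)  # signal processing
        print(decode(features))        # low-latency inference output

run_pipeline(fake_sensor())
```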

Jeff Bezos, the billionaire founder of Amazon, has long been known as a visionary investor, with early stakes in companies like Airbnb and Uber. In 2024, Bezos turned his attention to a new frontier: AI-powered robotics. This bold move signifies a major shift as Bezos bets on the next wave of technological innovation, aiming to revolutionize industries and everyday life.

In April of last year, Marko Bjelonic, co-founder and CEO of Swiss-Mile, a Zurich-based robotics company, reached out to Bezos with a detailed proposal—an Amazon-style “6-Pager”—to pitch his company’s vision. Bjelonic recalls, “I was pleasantly surprised by Jeff’s patience and relaxed demeanor.” What was initially a planned 30-minute call extended to an hour, feeling more like a conversation than a formal interview.

This meeting led Bezos to co-lead a $22 million funding round for Swiss-Mile in August. Swiss-Mile is developing AI-driven robots that resemble headless dogs with wheels instead of feet, designed to deliver packages autonomously. These robots are currently undergoing trials on Zurich’s streets, marking a significant step towards commercial deployment. According to Bjelonic, “Our goal is to see these robots reliably deliver packages from point A to point B, enhancing efficiency and reducing human labor.”

00:00 — Self-Improving Models.
00:23 — rStar-Math Overview.
01:34 — Monte-Carlo Tree Search.
02:59 — Framework Steps Explained.
04:46 — Iterative Model Training.
06:11 — Surpassing GPT-4.
07:18 — Small Models Dominate.
08:01 — Training Feedback Loop.
10:09 — Math Benchmark Results.
13:19 — Emergent Capabilities Found.
16:09 — Recursive AI Concerns.
20:04 — Towards Superintelligence.
23:34 — Math as Foundation.
27:08 — Superintelligence Predictions.

Join my AI Academy — https://www.skool.com/postagiprepardness.
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid.
🌐 Check out my website — https://theaigrid.com/

Links From Today's Video:
https://arxiv.org/pdf/2501.

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

00:00 — AI News Overview.
00:20 — Grok 3 Announcement.
01:05 — ChatGPT vs Others.
02:15 — Grok Diagnoses Injury.
06:03 — AI in Healthcare.
06:19 — AI Surveillance Debate.
09:17 — China’s Social Credit.
09:46 — Veo 2 Video Models.
10:28 — AGI’s $15 Quadrillion Value.
12:13 — Meta’s AI Users.
14:49 — Yann LeCun on AI.
16:39 — Clone Robotics Update.
18:14 — NVIDIA Cosmos Explained.
21:06 — NVIDIA Road Simulation.
24:20 — Sam Altman on Singularity.
27:08 — Model Parameter Sizes.
28:21 — Gen X World Explorer.
30:30 — AI Video Realism.


The Spiking Neural Processor T1 should drastically slash the power consumption of future smart devices.

It works by analyzing sensor data in real time to identify patterns and potentially clean up the data coming out of the sensors — and no internet connection would be required.

The device is a neuromorphic processor — meaning its architecture is arranged to mimic the brain’s pattern-recognition mechanisms. To draw an analogy, when you sense something — whether it’s a smell or a sound — different collections of neurons fire to identify it.
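
The article doesn't describe how the T1 encodes sensor data internally, but the general trick behind event-based, neuromorphic processing can be sketched in a few lines: emit an event only when the signal changes meaningfully, so a quiet sensor stream generates almost nothing to compute on. The Python below is a hedged illustration of that idea (the threshold and the signal are made up), not the T1's actual encoding scheme.

```python
import numpy as np

def delta_encode(signal, threshold=0.1):
    """Event-based (spike-like) encoding: emit a +1/-1 event only when the
    signal moves more than `threshold` away from the last emitted level.
    A quiet signal therefore produces almost no events to process."""
    events, level = [], signal[0]
    for t, x in enumerate(signal[1:], start=1):
        while x - level >= threshold:
            events.append((t, +1))
            level += threshold
        while level - x >= threshold:
            events.append((t, -1))
            level -= threshold
    return events

# A mostly flat sensor trace with one brief transient around t = 0.4.
t = np.linspace(0.0, 1.0, 500)
sensor = 0.02 * np.sin(2 * np.pi * t) + np.where((t > 0.40) & (t < 0.45), 1.0, 0.0)
events = delta_encode(sensor)
print(f"{sensor.size} samples -> {len(events)} events")
```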

CoreWeave, the cloud computing company that provides companies with AI compute resources, has formally opened its first two data centers in the U.K. — its first outside its domestic U.S. market.

CoreWeave opened its European headquarters in London last May, shortly after earning a $19 billion valuation off the back of a $1.1 billion fundraise. At the same time, the company announced plans to open two data centers as part of a £1 billion ($1.25 billion) investment in the U.K.

Today’s news coincides with a separate announcement from the U.K. government detailing a five-year investment plan to bolster government-owned AI computing capacity, as well as geographic “AI Growth Zones” that will include AI infrastructure from the private sector.

Artificial intelligence is moving from data centers to “the edge” as computer makers build the technology into laptops, robots, cars and more devices closer to home.

The Consumer Electronics Show (CES) gadget extravaganza that closed Friday was rife with PCs and other devices touting AI chips, making them more capable than ever and untethering them from the cloud.

Attention-grabbing stars included “AI PCs,” personal computers boasting chips that promised a level of performance once limited to muscular data centers.