
Synchron to Advance Implantable Brain-Computer Interface Technology with NVIDIA Holoscan

NEW YORK— Synchron, a category-defining brain-computer interface (BCI) company, today announced a step forward in implantable BCI technology to drive the future of neurotechnology. Synchron’s BCI technology, in combination with the NVIDIA Holoscan platform, is poised to redefine the possibilities of real-time neural interaction and intelligent edge processing.

Synchron will leverage NVIDIA Holoscan to advance a next-generation implantable BCI in two key domains. First, Synchron will enhance real-time edge AI capabilities for on-device neural processing, improving signal processing and multi-AI inference technology. This will reduce system latency, bolster privacy, and provide users with a more responsive and intuitive BCI experience. NVIDIA Holoscan provides Synchron with: (i) a unified framework supporting diverse AI models and data modalities; and (ii) an optimized application pipeline spanning sensor I/O integration, GPU-direct data ingestion, accelerated computing, and real-time AI inference.

Second, Synchron will explore the development of a groundbreaking foundation model for brain inference. By processing Synchron’s neural data at unprecedented scale, this initiative aims to create scalable, interpretable brain-language models with the potential to transform neuroprosthetics, cognitive expression, and seamless interaction with digital devices.

“Synchron’s vision is to scale neurotechnology to empower humans to connect to the world, and the NVIDIA Holoscan platform provides the ideal foundation,” said Tom Oxley, M.D., Ph.D., CEO and Founder of Synchron. “Through this work, we’re setting a new benchmark for what BCIs can achieve.”
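The press release contains no code, but the on-device processing loop it describes can be made concrete with a minimal, hypothetical sketch in plain Python. This is not the Holoscan SDK API; every function name, filter, and threshold below is invented for illustration. The point is the shape of an edge pipeline: samples are filtered and classified one at a time on the device, so raw neural data never needs to leave it.

```python
from collections import deque

def smooth_stub(sample, state, alpha=0.1):
    """Hypothetical one-pole smoothing filter standing in for real
    neural signal conditioning (band filtering, artifact removal)."""
    state = (1 - alpha) * state + alpha * sample
    return state, state

def classify_stub(window):
    """Placeholder for an on-device model: 'fires' when the mean
    amplitude of the recent window crosses a fixed threshold."""
    return sum(window) / len(window) > 0.5

def run_pipeline(samples, window_len=4):
    """Process samples one at a time, as an edge pipeline would:
    filter, buffer a short window, then run inference per sample."""
    state = 0.0
    window = deque(maxlen=window_len)
    decisions = []
    for s in samples:
        filtered, state = smooth_stub(s, state)
        window.append(filtered)
        decisions.append(classify_stub(window))
    return decisions
```

In a real Holoscan application these stages would be separate operators connected into a GPU-accelerated graph; here they are ordinary functions purely to show the streaming structure.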

AI-powered robots: Jeff Bezos’ bold new venture

Jeff Bezos, the billionaire founder of Amazon, has long been a visionary investor, known for his early stakes in companies like Airbnb and Uber. In 2024, Bezos turned his attention to a new frontier: AI-powered robotics. This bold move marks a major shift as Bezos bets on the next wave of technological innovation, aiming to revolutionize industries and everyday life.

In April of last year, Marko Bjelonic, co-founder and CEO of Swiss-Mile, a Zurich-based robotics company, reached out to Bezos with a detailed proposal—an Amazon-style “6-Pager”—to pitch his company’s vision. Bjelonic recalls, “I was pleasantly surprised by Jeff’s patience and relaxed demeanor.” What was initially a planned 30-minute call extended to an hour, feeling more like a conversation than a formal interview.

This meeting led Bezos to co-lead a $22 million funding round for Swiss-Mile in August. Swiss-Mile is developing AI-driven robots that resemble headless dogs with wheels instead of feet, designed to deliver packages autonomously. These robots are currently undergoing trials on Zurich’s streets, marking a significant step towards commercial deployment. According to Bjelonic, “Our goal is to see these robots reliably deliver packages from point A to point B, enhancing efficiency and reducing human labor.”

Researchers STUNNED As A.I Improves ITSELF Towards Superintelligence (BEATS o1)

00:00 — Self-Improving Models.
00:23 — AllStar Math Overview.
01:34 — Monte-Carlo Tree.
02:59 — Framework Steps Explained.
04:46 — Iterative Model Training.
06:11 — Surpassing GPT-4
07:18 — Small Models Dominate.
08:01 — Training Feedback Loop.
10:09 — Math Benchmark Results.
13:19 — Emergent Capabilities Found.
16:09 — Recursive AI Concerns.
20:04 — Towards Superintelligence.
23:34 — Math as Foundation.
27:08 — Superintelligence Predictions.

Join my AI Academy — https://www.skool.com/postagiprepardness.
🐤 Follow Me on Twitter https://twitter.com/TheAiGrid.
🌐 Checkout My website — https://theaigrid.com/

Links From Today's Video:
https://arxiv.org/pdf/2501.

Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.

Was there anything I missed?

(For Business Enquiries) [email protected].

SINGULARITY Approaches, Grok 3, o1 Model Leaks, Nvidia Stuns, New Humanoids

00:00 — AI News Overview.
00:20 — Grok 3 Announcement.
01:05 — ChatGPT vs Others.
02:15 — Grok Diagnoses Injury.
06:03 — AI in Healthcare.
06:19 — AI Surveillance Debate.
09:17 — China’s Social Credit.
09:46 — VO2 Video Models.
10:28 — AGI’s $15 Quadrillion Value.
12:13 — Meta’s AI Users.
14:49 — Yann LeCun on AI
16:39 — Clone Robotics Update.
18:14 — NVIDIA Cosmos Explained.
21:06 — NVIDIA Road Simulation.
24:20 — Sam Altman on Singularity.
27:08 — Model Parameter Sizes.
28:21 — Gen X World Explorer.
30:30 — AI Video Realism.


Tiny AI chip modeled on the human brain set to boost battery life in smart devices, extending it up to sixfold in some cases

The Spiking Neural Processor T1 is designed to drastically cut the power consumption of future smart devices.

It works by analyzing sensor data in real time to identify patterns and potentially clean up the data coming out of the sensors, all without requiring an internet connection.

The device is a neuromorphic processor — meaning its architecture is arranged to mimic the brain’s pattern-recognition mechanisms. To draw an analogy, when you sense something — whether it’s a smell or a sound — different collections of neurons fire to identify it.
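The neuromorphic idea in the paragraph above can be made concrete with the textbook leaky integrate-and-fire (LIF) neuron: the membrane potential integrates input, leaks back toward rest, and emits a discrete spike only when it crosses a threshold. This event-driven behavior is why spiking hardware can stay nearly idle, and low-power, when sensor input is uneventful. The sketch below is illustrative only and does not describe the T1's actual architecture; all parameter values are invented.

```python
def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron over a list of
    input samples; return the time steps at which it spiked."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        # membrane potential leaks toward rest and integrates the input
        v += dt * (-(v - v_rest) / tau + i_in)
        if v >= v_thresh:
            spikes.append(t)
            v = v_rest  # reset after a spike
    return spikes
```

With zero input the neuron never fires; with sustained input it emits a regular spike train, and the spike times, rather than a continuous signal, carry the information downstream.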

CoreWeave, a $19B AI compute provider, opens its first international data centers in the UK

CoreWeave, the cloud computing company that provides companies with AI compute resources, has formally opened its first two data centers in the U.K. — its first outside its domestic U.S. market.

CoreWeave opened its European headquarters in London last May, shortly after earning a $19 billion valuation off the back of a $1.1 billion fundraise. At the same time, the company announced plans to open two data centers as part of a £1 billion ($1.25 billion) investment in the U.K.

Today’s news coincides with a separate announcement from the U.K. government, which details a five-year investment plan to bolster government-owned AI computing capacity as well as geographic “AI Growth Zones,” which includes AI infrastructure from the private sector.

AI comes down from the cloud as chips get smarter

Artificial intelligence is moving from data centers to “the edge” as computer makers build the technology into laptops, robots, cars and more devices closer to home.

The Consumer Electronics Show (CES) gadget extravaganza that closed Friday was rife with PCs and other devices touting AI chips, making them more capable than ever and untethering them from the cloud.

Attention-grabbing stars included “AI PCs,” personal computers boasting chips that promised a level of performance once limited to muscular data centers.

LLMs are becoming more Brain-like as they advance, researchers discover

Large language models (LLMs), the most renowned of which is ChatGPT, have become increasingly better at processing and generating human language over the past few years. The extent to which these models emulate the neural processes supporting language processing by the human brain, however, has yet to be fully elucidated.

Researchers at Columbia University and the Feinstein Institutes for Medical Research at Northwell Health recently carried out a study investigating the similarities between LLM representations and neural responses. Their findings, published in Nature Machine Intelligence, suggest that as LLMs become more advanced, they not only perform better but also become more brain-like.

“Our original inspiration for this paper came from the recent explosion in the landscape of LLMs and neuro-AI research,” Gavin Mischler, first author of the paper, told Tech Xplore.
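This excerpt does not detail the study's methods, but a common way such neuro-AI work quantifies similarity between LLM representations and brain activity is a linear encoding model: fit a ridge-regularized mapping from LLM-derived features to a recorded neural signal, then score how well it predicts. A minimal sketch of that general technique (not the authors' code), assuming NumPy and synthetic data in place of real recordings:

```python
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha*I)^-1 X^T y,
    mapping model features (n_samples x n_features) to a 1-D signal."""
    n_feat = features.shape[1]
    return np.linalg.solve(
        features.T @ features + alpha * np.eye(n_feat),
        features.T @ responses,
    )

def prediction_score(features, responses, w):
    """Pearson correlation between predicted and measured responses,
    the usual 'brain score' in encoding-model studies."""
    pred = features @ w
    return float(np.corrcoef(pred, responses)[0, 1])
```

In practice the features would be hidden-state embeddings from an LLM and the responses would be electrode or fMRI measurements, with the score computed on held-out data; stronger models yielding higher scores is what "more brain-like" means operationally.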

‘Their capacity to emulate human language and thought is immensely powerful’: Far from ending the world, AI systems might actually save it

Over the last few years, artificial intelligence (AI) has been firmly in the world’s spotlight, and the rapidly advancing technology can often be a source of anxiety and even fear in some cases. But the evolution of AI doesn’t have to be an inherently scary thing — and there are plenty of ways that this emerging technology can be used for the benefit of humanity.

Writing in “AI for Good” (Wiley, 2024), Juan M. Lavista Ferres and William B. Weeks, both senior directors at Microsoft’s AI for Good Research Lab, reveal how beneficial AI is being used in dozens of projects across the world today. They explain how AI can improve society by, for example, being used in sustainability projects like using satellites to monitor whales from space, or by mapping glacial lakes. AI can also be used in the wake of natural disasters, like the devastating 2023 earthquake in Turkey, or for social good, like curbing the proliferation of misinformation online. In addition, there are significant health benefits to reap from AI, including studying the long-term effects of COVID-19, using AI to manage pancreatic cysts or detecting leprosy in vulnerable populations.

In this excerpt, the authors detail the recent rise of large language models (LLMs) such as ChatGPT or Claude 3 and how they have grown to become prominent in today’s AI landscape. They also discuss how these systems are already making a significant beneficial impact on the world.

An integrative data-driven model simulating C. elegans brain, body and environment interactions

Our neural network model of C. elegans contained 136 neurons that participate in sensory and locomotion functions, as indicated by published studies [24,27-31]. To construct this model, we first collected the necessary data, including neural morphology, ion channel models, electrophysiology of single neurons, the connectome, connection models and network activities (Fig. 2a). Next, we constructed the individual neuron models and their connections (Fig. 2b). At this stage, the biophysically detailed model was only structurally accurate (Fig. 2c), without network-level realistic dynamics. Finally, we optimized the weights and polarities of the connections to obtain a model that reflected network-level realistic dynamics (Fig. 2d). An overview of the model construction is shown in Fig. 2.

To achieve a high level of biophysical and morphological realism in our model, we used multicompartment models to represent individual neurons. The morphologies of the neuron models were constructed on the basis of published morphological data [9,32]. Soma and neurite sections were further divided into segments, each less than 2 μm in length. We integrated 14 established classes of ion channels (Supplementary Tables 1 and 2) [33] into the neuron models and tuned the passive parameters and ion channel conductance densities for each neuron model using an optimization algorithm [34]. This tuning was done to accurately reproduce the electrophysiological recordings obtained from patch-clamp experiments [35-38] at the single-neuron level. Based on the limited available electrophysiological data, we digitally reconstructed models of five representative neurons: AWC, AIY, AVA, RIM and VD5.
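The conductance-based modeling described above can be illustrated with a toy example that is far simpler than the paper's multicompartment models: a single compartment with only a leak conductance and one voltage-gated K+ channel with instantaneous sigmoidal activation, integrated with forward Euler. This is not the authors' code, and every parameter value below is invented for demonstration; it shows only the general form dV/dt = (-I_ion + I_inject)/C_m that the tuned models obey.

```python
import math

def simulate_compartment(i_inject, c_m=1.0, g_leak=0.1, e_leak=-70.0,
                         g_k=0.5, e_k=-90.0, dt=0.1, steps=2000):
    """Forward-Euler integration of one compartment:
    dV/dt = (i_inject - g_leak*(V - E_leak) - g_K*m(V)*(V - E_K)) / C_m,
    with an instantaneous sigmoidal K+ activation curve m(V)."""
    v = e_leak
    trace = []
    for _ in range(steps):
        m = 1.0 / (1.0 + math.exp(-(v + 30.0) / 5.0))  # K+ activation
        i_ion = g_leak * (v - e_leak) + g_k * m * (v - e_k)
        v += dt * (i_inject - i_ion) / c_m
        trace.append(v)
    return trace
```

In the actual pipeline, parameters such as the conductance densities g_leak and g_k would not be hand-picked but tuned by an optimization algorithm until the simulated voltage traces match patch-clamp recordings, neuron by neuron.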
