
Valve founder Gabe Newell’s neural chip company Starfish Neuroscience announced it’s developing a custom chip designed for next-generation, minimally invasive brain-computer interfaces—and it may be coming sooner than you think.

The company announced in a blog update that it’s creating a custom, ultra-low power neural chip in collaboration with R&D leader imec.

Starfish says the chip is intended for future wireless, battery-free brain implants capable of reading and stimulating neural activity in multiple areas simultaneously—a key requirement for treating complex neurological disorders involving circuit-level dysfunction. Those are the ‘read and write’ functions we’ve heard Newell speak about in previous talks on the subject.

Gabe Newell, co-founder of Valve, sat down with IGN for a chat about the company, the promise of VR, and Newell’s most bleeding edge project as of late, brain-computer interfaces (BCI).

Whenever I used to think about brain-computer interfaces (BCI), I typically imagined a world where the Internet was served up directly to my mind through cyborg-style neural implants—or basically how it’s portrayed in Ghost in the Shell. In that world, you can read, write, and speak to others without needing to lift a finger or open your mouth. It sounds fantastical, but the more I learn about BCI, the more I’ve come to realize that this wish list of functions is really only the tip of the iceberg. And when AR and VR converge with the consumer-ready BCI of the future, the world will be much stranger than fiction.

Be it Elon Musk’s latest company Neuralink—which is creating “minimally invasive” neural implants to suit a wide range of potential future applications—or Facebook directly funding research on decoding speech from the human brain, BCI seems to be taking an important step forward in its maturity. And while regulatory hoops governing implants and their relative safety mean these well-funded companies can push the technology forward only as medical devices today, eventually the technology will get to a point where it’s both safe and cheap enough to land in the brainpans of neurotypical consumers.

Although there’s really no telling when you or I will be able to pop into an office for an outpatient implant procedure (much like how corrective laser eye surgery is done today), we know at least that this particular future will undoubtedly come alongside significant advances in augmented and virtual reality. But before we consider where that future might lead us, let’s take a look at where things are today.

A complete understanding of quantum transport requires the ability to simulate and probe macroscopic and microscopic physics on equal footing.

Researchers from Singapore and China have utilized a superconducting quantum processor to examine the phenomenon of quantum transport in unprecedented detail.

Gaining deeper insights into quantum transport—encompassing the flow of particles, magnetization, energy, and information through quantum channels—has the potential to drive significant innovations in next-generation technologies such as nanoelectronics and thermal management.

Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential growth in the size and capabilities of foundation models inside and outside medicine marks a shift toward task-agnostic models trained on large-scale, often internet-based, data. Recent research on smaller foundation models trained on specific literature, such as programming textbooks, has demonstrated that they can match or surpass large generalist models, suggesting a potential middle ground between small task-specific models and large foundation models. This study introduces a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications and leveraging data exclusively from Neurosurgery Publications.

METHODS:

We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction using an artificial intelligence pipeline for quality control. Our final data set included 24 021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification.
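The fine-tuning protocol rests on CLIP’s contrastive objective: matching figure-caption pairs are pulled together in a shared embedding space while mismatched pairs are pushed apart. As a rough illustration of that objective (this is not the authors’ code; the embeddings, batch size, and temperature value here are invented for the sketch), the symmetric loss over a batch can be computed like this:

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over an image-text similarity matrix,
    the loss minimized when fine-tuning CLIP on figure-caption pairs."""
    # L2-normalize so the dot product becomes cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = image_emb @ text_emb.T / temperature  # (N, N) similarities
    labels = np.arange(len(logits))  # i-th caption belongs to i-th figure

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image->text and text->image directions
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2

# Toy batch of three embedding pairs (stand-ins for figure/caption encodings)
rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 8))
aligned = clip_contrastive_loss(emb, emb)        # captions match their figures
shuffled = clip_contrastive_loss(emb, emb[::-1])  # captions scrambled
print(aligned < shuffled)  # prints True: aligned pairs score a lower loss
```

Training then backpropagates this loss through both encoders (or, in lighter-weight fine-tuning, through only part of them) so that in-domain figures and captions embed close together.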

When exposed to periodic driving, which is the time-dependent manipulation of a system’s parameters, quantum systems can exhibit interesting new phases of matter that are not present in time-independent (i.e., static) conditions. Among other things, periodic driving can be useful for the engineering of synthetic gauge fields, artificial constructs that mimic the behavior of electromagnetic fields and can be leveraged to study topological many-body physics using neutral atom quantum simulators.

Researchers at Ludwig-Maximilians-Universität, the Max Planck Institute for Quantum Optics, and the Munich Center for Quantum Science and Technology (MCQST) recently realized a strongly interacting phase of matter known as the Mott-Meissner phase in large-scale bosonic flux ladders, using a neutral atom quantum simulator. Their paper, published in Nature Physics, could open exciting new possibilities for the in-depth study of topological quantum matter.
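The flux-ladder setting can be made concrete with the standard two-leg Bose-Hubbard model, in which the synthetic gauge field enters as a Peierls phase on the leg hoppings. The form below uses one common gauge choice (symbols and sign conventions vary between papers, and this is a generic textbook form rather than the exact Hamiltonian of the study):

```latex
\hat{H} = -J \sum_{l}\sum_{m=\pm\frac{1}{2}}
    \left( e^{-i m \varphi}\, \hat{a}^{\dagger}_{l+1,m} \hat{a}_{l,m} + \text{h.c.} \right)
  - K \sum_{l} \left( \hat{a}^{\dagger}_{l,+\frac{1}{2}} \hat{a}_{l,-\frac{1}{2}} + \text{h.c.} \right)
  + \frac{U}{2} \sum_{l,m} \hat{n}_{l,m} \left( \hat{n}_{l,m} - 1 \right)
```

Here $J$ and $K$ are the hopping amplitudes along the legs and rungs, $\varphi$ is the magnetic flux threading each plaquette, and $U$ is the on-site interaction; the interplay of $\varphi$ and strong $U$ is what produces phases such as the Mott-Meissner phase, with chiral currents circulating around the ladder’s edge.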

“Our work was inspired by a long-standing effort across the field of neutral atom quantum simulation to study strongly interacting phases of matter in the presence of magnetic fields,” Alexander Impertro, first author of the paper, told Phys.org. “The interplay of these two ingredients can create a variety of quantum many-body phases with exotic properties.