New studies from the Armamentarium consortium describe findings that advance tools based on adeno-associated virus (AAV) vectors. An announcement about the work explains how an AAV “acts like a shuttle capable of transporting specially designed DNA into the cell.”
Two of the studies on these AAV tools were conducted by collaborative teams organized by Xiangmin Xu, Ph.D., UC Irvine Chancellor’s Professor of anatomy and neurobiology and director of the campus’s Center for Neural Circuit Mapping.
“This Armamentarium’s collection of work enables new tools that help to deepen our understanding of the human central nervous system structure and function,” says Xu. “Our own brain-targeting technology could help treat Alzheimer’s disease and many other neurological disorders.”
Three-dimensional printing offers promise for patient-specific implants and therapies but is often limited by the need for invasive surgical procedures. To address this, we developed an imaging-guided deep tissue in vivo sound printing (DISP) platform…
National Institutes of Health (NIH) scientists have developed a new surgical technique for implanting multiple tissue grafts in the eye’s retina.
The findings in animals may help advance treatment options for dry age-related macular degeneration (AMD), which is a leading cause of vision loss among older Americans.
The significance of this experiment extends beyond telecommunications, computing, and medicine. Metamaterials like the ones used in this research could have broader applications in industries such as energy, transportation, aerospace, and defense.
For instance, controlling light at such a fine level might enable more efficient energy systems or advanced sensor technologies for aircraft and vehicles. Even black hole physics could be explored through these new quantum experiments, adding to the wide-ranging impact of this research.
As technology advances, the role of metamaterials and quantum physics will become increasingly critical. The ability to manipulate light in space and time holds the promise of reshaping how we interact with the world, offering faster, more efficient, and more precise tools across industries.
Gabe Newell, co-founder of Valve, sat down with IGN for a chat about the company, the promise of VR, and Newell’s most bleeding-edge project of late: brain-computer interfaces (BCI).
Whenever I used to think about brain-computer interfaces (BCI), I typically imagined a world where the Internet was served up directly to my mind through cyborg-style neural implants—or basically how it’s portrayed in Ghost in the Shell. In that world, you can read, write, and speak to others without needing to lift a finger or open your mouth. It sounds fantastical, but the more I learn about BCI, the more I’ve come to realize that this wish list of functions is really only the tip of the iceberg. And when AR and VR converge with the consumer-ready BCI of the future, the world will be much stranger than fiction.
Be it Elon Musk’s latest company Neuralink, which is creating “minimally invasive” neural implants to suit a wide range of potential future applications, or Facebook directly funding research on decoding speech from the human brain, BCI seems to be taking an important step forward in its maturity. And while these well-funded companies can only push the technology forward as a medical device today, thanks to the regulatory hoops governing implants and their relative safety, eventually the technology will get to a point where it’s both safe and cheap enough to land in the brainpans of neurotypical consumers.
Although there’s really no telling when you or I will be able to pop into an office for an outpatient implant procedure (much like how corrective laser eye surgery is done today), we know at least that this particular future will undoubtedly come alongside significant advances in augmented and virtual reality. But before we consider where that future might lead us, let’s take a look at where things are today.
Classical biomedical data science models are trained on a single modality and aimed at one specific task. However, the exponential increase in the size and capabilities of foundation models inside and outside medicine reflects a shift toward task-agnostic models trained on large-scale, often internet-based, data. Recent research into smaller foundation models trained on specific literature, such as programming textbooks, has shown that they can match or exceed large generalist models, suggesting a potential middle ground between small task-specific models and large foundation models. This study introduces a domain-specific multimodal model, Congress of Neurological Surgeons (CNS)-Contrastive Language-Image Pretraining (CLIP), developed for neurosurgical applications and trained exclusively on data from Neurosurgery Publications.
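As a rough illustration of the contrastive language-image pretraining mechanism the abstract builds on, the sketch below scores a single figure against a handful of candidate captions using an off-the-shelf OpenAI CLIP checkpoint; the checkpoint name, file path, and caption prompts are illustrative assumptions, not the study’s CNS-CLIP model.

```python
# Minimal sketch of CLIP-style zero-shot scoring, assuming the public
# "openai/clip-vit-base-patch32" checkpoint and a hypothetical local figure file.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("figure.png").convert("RGB")   # hypothetical neurosurgical figure
candidate_captions = [                            # illustrative prompts, not from the paper
    "axial computed tomography of the head",
    "intraoperative photograph of a craniotomy",
    "Kaplan-Meier survival curve",
]

inputs = processor(text=candidate_captions, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds scaled image-text cosine similarities;
# a softmax over the captions yields zero-shot "classification" probabilities.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze()
for caption, p in zip(candidate_captions, probs.tolist()):
    print(f"{p:.3f}  {caption}")
```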
METHODS:
We constructed a multimodal data set of articles from Neurosurgery Publications through PDF data collection and figure-caption extraction using an artificial intelligence pipeline for quality control. Our final data set included 24 021 figure-caption pairs. We then developed a fine-tuning protocol for the OpenAI CLIP model. The model was evaluated on tasks including neurosurgical information retrieval, computed tomography imaging classification, and zero-shot ImageNet classification.
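For readers curious what fine-tuning over figure-caption pairs might look like in practice, here is a minimal sketch built on the public Hugging Face CLIP implementation; the dataset class, checkpoint, batch size, learning rate, epoch count, and output path are assumptions made for illustration and are not the protocol reported in the study.

```python
# Hedged sketch: contrastive fine-tuning of CLIP on figure-caption pairs.
# Dataset layout, hyperparameters, and paths below are assumed, not taken from the paper.
from typing import List, Tuple

import torch
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from transformers import CLIPModel, CLIPProcessor

class FigureCaptionDataset(Dataset):
    """Stand-in for figure-caption pairs extracted from publication PDFs."""
    def __init__(self, pairs: List[Tuple[str, str]]):
        self.pairs = pairs  # list of (image_path, caption)

    def __len__(self) -> int:
        return len(self.pairs)

    def __getitem__(self, idx: int):
        path, caption = self.pairs[idx]
        return Image.open(path).convert("RGB"), caption

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.1)

def collate(batch):
    images, captions = zip(*batch)
    # Tokenize captions (truncated to CLIP's 77-token limit) and preprocess images together.
    return processor(text=list(captions), images=list(images),
                     return_tensors="pt", padding=True, truncation=True)

pairs = [("figures/fig_0001.png", "Axial CT demonstrating an epidural hematoma")]  # toy example
loader = DataLoader(FigureCaptionDataset(pairs), batch_size=32, shuffle=True, collate_fn=collate)

model.train()
for epoch in range(3):
    for batch in loader:
        # return_loss=True makes CLIPModel compute the symmetric image-text contrastive loss.
        outputs = model(**batch, return_loss=True)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("cns-clip-sketch")  # hypothetical output directory
```

A checkpoint fine-tuned this way could then be evaluated with the same zero-shot scoring pattern shown earlier, which is roughly what tasks such as computed tomography imaging classification and zero-shot ImageNet classification amount to.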
The antibody PLT012 targets the fat transporter CD36 to restore immune responses in tumors, offering a new and promising approach to treating immunotherapy-resistant cancers. A new study from Ludwig Cancer Research has uncovered a key mechanism by which immune cells within tumors take up fat.
Scientists have discovered “barcodes” within DNA that reveal how blood ages, potentially paving the way for preventing age-related illnesses like blood cancer.