
Gene therapy shows promise in repairing damaged brain tissue from strokes.


From the NIH Director’s Blog by Dr. Francis Collins.

It’s a race against time when someone suffers a stroke caused by a blockage of a blood vessel supplying the brain. Unless clot-busting treatment is given within a few hours after symptoms appear, vast numbers of the brain’s neurons die, often leading to paralysis or other disabilities. It would be great to have a way to replace those lost neurons. Thanks to gene therapy, some encouraging strides are now being made.

In a recent study in Molecular Therapy, researchers reported that, in their mouse and rat models of ischemic stroke, gene therapy could actually convert the brain’s support cells into new, fully functional neurons [1]. Even better, after gaining the new neurons, the animals had improved motor and memory skills.

Transformers have gained significant attention due to their powerful capabilities in understanding and generating human-like text, making them suitable for various applications like language translation, summarization, and creative content generation. They operate based on an attention mechanism, which determines how much focus each token in a sequence should have on others to make informed predictions. While they offer great promise, the challenge lies in optimizing these models to handle large amounts of data efficiently without excessive computational costs.
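To make the attention mechanism concrete, here is a minimal, illustrative sketch of scaled dot-product self-attention in Python with NumPy. It is a simplified single-head version without learned projection matrices or masking, and the function and variable names are our own rather than those of any particular library.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute attention weights over a sequence and mix the value vectors.

    Q, K, V: arrays of shape (seq_len, d) holding the query, key, and value
    vectors for each token.
    """
    d = Q.shape[-1]
    # Each token's query is compared against every token's key,
    # producing a (seq_len, seq_len) matrix of relevance scores.
    scores = Q @ K.T / np.sqrt(d)
    # Softmax turns each row of scores into a probability distribution.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted average of the value vectors.
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings, used as Q, K, and V
# (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```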

A significant challenge in developing transformer models is their inefficiency when handling long text sequences. As the context length increases, the computational and memory requirements grow quadratically, because each token interacts with every other token in the sequence; this cost quickly becomes unmanageable. This limitation constrains the application of transformers in tasks that demand long contexts, such as language modeling and document summarization, where retaining and processing the entire sequence is crucial for maintaining context and coherence. Thus, solutions are needed to reduce the computational burden while retaining the model’s effectiveness.
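The quadratic growth is easy to see from the shape of the attention score matrix: with n tokens, every token attends to every other token, so the scores alone form an n × n matrix. The back-of-the-envelope sketch below assumes float32 scores, a single attention head, and ignores all other activations; it is only meant to illustrate the scaling.

```python
def attention_matrix_bytes(seq_len, bytes_per_element=4):
    """Memory for one (seq_len x seq_len) float32 attention score matrix."""
    return seq_len * seq_len * bytes_per_element

for n in (1_000, 10_000, 100_000):
    gib = attention_matrix_bytes(n) / 2**30
    print(f"{n:>7} tokens -> ~{gib:.2f} GiB per head per layer")

# Output:
#    1000 tokens -> ~0.00 GiB per head per layer
#   10000 tokens -> ~0.37 GiB per head per layer
#  100000 tokens -> ~37.25 GiB per head per layer
```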

Approaches to address this issue have included sparse attention mechanisms, which limit the number of interactions between tokens, and context compression techniques that reduce the sequence length by summarizing past information. These methods attempt to reduce the number of tokens considered in the attention mechanism but often do so at the cost of performance, as reducing context can lead to a loss of critical information. This trade-off between efficiency and performance has prompted researchers to explore new methods to maintain high accuracy while reducing computational and memory requirements.
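As one illustration of the sparse-attention idea, a sliding-window mask restricts each token to a fixed number of neighbors, so the number of scored token pairs grows roughly linearly with sequence length rather than quadratically. The sketch below only builds such a mask under that assumption; real sparse-attention schemes typically combine local windows with other patterns (for example, a few globally attending tokens).

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    """Boolean mask: token i may attend to token j only if |i - j| <= window."""
    idx = np.arange(seq_len)
    return np.abs(idx[:, None] - idx[None, :]) <= window

mask = sliding_window_mask(seq_len=8, window=2)
# Pairs actually scored: O(seq_len * window) instead of O(seq_len ** 2).
print(mask.sum(), "of", mask.size, "possible token pairs are attended to")
```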

The authors explore the digital-analog quantum computing paradigm, which combines fast single-qubit gates with the natural dynamics of quantum devices. They find the digital-analog paradigm more robust against certain experimental imperfections than the standard fully-digital one and successfully apply error mitigation techniques to this approach.

China’s efforts to scale up the manufacture of superconducting quantum computers have gathered momentum with the launch of the country’s independently developed third-generation Origin Wukong, said industry experts on Monday.

The latest machine, powered by Wukong, a 72-qubit indigenously developed superconducting quantum chip, has become the most advanced programmable and deliverable superconducting quantum computer currently available in China.

The chip was developed by Origin Quantum, a Hefei, Anhui province-based quantum chip startup. The company has already delivered its first and second generations of superconducting quantum computers to the Chinese market.

Our brains constantly work to make predictions about what’s going on around us, for instance to ensure that we can attend to and consider the unexpected. A new study examines how this process works during consciousness and how it breaks down under general anesthesia. The results add evidence for the idea that conscious thought requires synchronized communication—mediated by brain rhythms in specific frequency bands—between basic sensory and higher-order cognitive regions of the brain.

Previously, members of the research team at The Picower Institute for Learning and Memory at MIT and at Vanderbilt University had described how these rhythms enable the brain to remain prepared to attend to surprises.

Cognition-oriented brain regions (generally at the front of the brain) use relatively low-frequency alpha and beta rhythms to suppress processing by sensory regions (generally toward the back of the brain) of stimuli that have become familiar and mundane in the environment (e.g. your co-worker’s music). When sensory regions detect a surprise (e.g. the office fire alarm), they use higher-frequency gamma rhythms to tell the higher regions about it, and the higher regions process that information at gamma frequencies to decide what to do (e.g. exit the building).

Summary: Digital technology has transformed how we document and recall life experiences, from capturing every moment with photos to tracking our health data on smart devices. This increased density of digital records offers potential benefits, like enhancing memory for personal events or supporting those with memory impairments.

However, it also raises concerns, such as privacy risks and the potential for manipulation through technologies like deepfakes. Researchers emphasize the need for further study to understand both the opportunities and risks posed by digital memory aids as they become more integral to how we remember.