Dyson’s First Wireless Headphones: Fresh Tunes and Fresh Air

In addition to its premium vacuums, hair styling products, and gale-force bathroom hand dryers, Dyson is also known for its air purifiers, which feature a bladeless design that makes them quieter and safer, plus a glass HEPA filter that promises to remove 99.97% of unwanted airborne particles in a home, such as pollen, mold, bacteria, pollution, and odors. There’s even one that can eliminate formaldehyde. That’s great when you’re at home or the office, but a four-foot-tall purifier tethered to a power outlet offers no protection from pollution anywhere else.

The Dyson Zone is the company’s first personal air purification device, and it comes with headphones as a side dish. Trojan-horsed into the high-end Bluetooth headset, the Zone offers a buffer of filtration between the wearer and the outside world. Wearing it out in public may make users feel a bit like Bane from Batman. There may be some awkward stares, but perhaps fewer than expected, thanks to the presence of the headphones.

The company started working on the Zone six years ago. The initial prototype was a “snorkel-like clean air mouthpiece paired with a backpack to hold the motor and inner workings,” according to a press release. The final product—over 500 iterations later—is a huge improvement when it comes to design and ergonomics. It still looks like it might take some time to get used to, though maybe less so in the era of Covid-19 than when Dyson’s engineers first started on it.

New computational model proposed for Alzheimer’s disease

Mayo Clinic researchers have proposed a new model for mapping the symptoms of Alzheimer’s disease to brain anatomy. This model was developed by applying machine learning to patient brain imaging data. It uses the entire function of the brain rather than specific brain regions or networks to explain the relationship between brain anatomy and mental processing. The findings are reported in Nature Communications.

“This new model can advance our understanding of how the brain works and breaks down during aging and Alzheimer’s disease, providing new ways to monitor, prevent and treat disorders of the mind,” says David T. Jones, M.D., a Mayo Clinic neurologist and lead author of the study.

Alzheimer’s disease typically has been described as a protein-processing problem. The toxic proteins amyloid and tau deposit in areas of the brain, causing neuron failure that results in clinical symptoms such as memory loss, difficulty communicating and confusion.

Producing faster CAR-T cell therapy inside the body with a spongelike implant

Despite the remarkable efficacy of CAR-T cell therapies in treating certain blood cancers, they are expensive, thanks partly to complex and lengthy manufacturing procedures. Now, scientists have found a potential way to cut the CAR-T processing time from more than two weeks to a single day by using an implant.

Tuberculosis Induces Premature Cellular Aging

Tuberculosis (TB) is a potentially serious infectious disease caused by a type of bacterium called Mycobacterium tuberculosis. The bacteria usually affect the lungs, but also can invade other organs.

In 2018, tuberculosis bacteria infected 1.7 billion people — roughly 23% of the world’s population, according to the Centers for Disease Control and Prevention (CDC). In 2020, the CDC reported 7,174 TB cases and 13 million people living with a latent tuberculosis infection (the germs are in the body but do not cause sickness) in the United States.

Even after successful therapy for tuberculosis, survivors of the disease have an increased risk of recurrent infection and death. A new study published recently by researchers at Baylor College of Medicine found that the cells of humans and animals that have recovered from tuberculosis had prematurely aged by as much as 12 to 14 years.

Enzyme blocker could open new treatments for neurodegenerative diseases

Researchers have uncovered how a certain molecular pathway triggers the breakdown of nerve fibers in neurodegenerative diseases – and more importantly, how to potentially switch it off. The find could lead to a new class of drugs that slows the progression of these debilitating disorders.

The focus of the study was an enzyme called SARM1, which is expressed in neurons and plays a role as an immune regulator. However, it also functions as a sensor of metabolic stress, and at a certain point it sparks a cascade of processes that eventually begins to break down axons, leading to some of the issues associated with Parkinson’s disease, ALS, neuropathy, and other neurodegenerative diseases.

CRISPR/Cas9-Mediated Genome Editing as a Therapeutic Approach for Leber Congenital Amaurosis 10

Circa 2017 😀


As the most common subtype of Leber congenital amaurosis (LCA), LCA10 is a severe retinal dystrophy caused by mutations in the CEP290 gene. The most frequent mutation found in patients with LCA10 is a deep intronic mutation in CEP290 that generates a cryptic splice donor site. The large size of the CEP290 gene prevents its use in adeno-associated virus (AAV)-mediated gene augmentation therapy. Here, we show that targeted genomic deletion using the clustered regularly interspaced short palindromic repeats (CRISPR)/Cas9 system represents a promising therapeutic approach for the treatment of patients with LCA10 bearing the CEP290 splice mutation. We generated a cellular model of LCA10 by introducing the CEP290 splice mutation into 293FT cells and we showed that guide RNA pairs coupled with SpCas9 were highly efficient at removing the intronic splice mutation and restoring the expression of wild-type CEP290. In addition, we demonstrated that a dual AAV system could effectively delete an intronic fragment of the Cep290 gene in the mouse retina. To minimize the immune response to prolonged expression of SpCas9, we developed a self-limiting CRISPR/Cas9 system that minimizes the duration of SpCas9 expression. These results support further studies to determine the therapeutic potential of CRISPR/Cas9-based strategies for the treatment of patients with LCA10.

Keywords: CEP290; CRISPR/Cas9; LCA10.


Explainable AI (XAI) with Class Maps

Introducing a novel visual tool for explaining the results of classification algorithms, with examples in R and Python.


Classification algorithms aim to identify to which groups a set of observations belong. A machine learning practitioner typically builds multiple models and selects a final classifier to be one that optimizes a set of accuracy metrics on a held-out test set. Sometimes, practitioners and stakeholders want more from the classification model than just predictions. They may wish to know the reasons behind a classifier’s decisions, especially when it is built for high-stakes applications. For instance, consider a medical setting, where a classifier determines a patient to be at high risk for developing an illness. If medical experts can learn the contributing factors to this prediction, they could use this information to help determine suitable treatments.
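To make the model-selection step described above concrete, here is a minimal sketch, assuming scikit-learn and an illustrative dataset (neither is specified in the article): several candidate classifiers are fit and the one with the best accuracy on a held-out test set is kept.

# Minimal sketch of model selection on a held-out test set.
# The dataset and candidate models are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

candidates = {
    "decision_tree": DecisionTreeClassifier(max_depth=3, random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=5000),
}

# Fit each candidate and record its held-out accuracy.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

best = max(scores, key=scores.get)
print(scores, "-> selected:", best)

A transparent model such as the shallow decision tree may lose out to the random forest on accuracy here, which is exactly the trade-off discussed next.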

Some models, such as single decision trees, are transparent, meaning that they show the mechanism for how they make decisions. More complex models, however, tend to be the opposite — they are often referred to as “black boxes”, as they provide no explanation for how they arrive at their decisions. Unfortunately, opting for transparent models over black boxes does not always solve the explainability problem. The relationship between a set of observations and its labels is often too complex for a simple model to suffice; transparency can come at the cost of accuracy [1].

The increasing use of black-box models in high-stakes applications, combined with the need for explanations, has led to the development of Explainable AI (XAI), a set of methods that help humans understand the outputs of machine learning models. Explainability is a crucial part of the responsible development and use of AI.
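Since the article introduces class maps with examples in R and Python, a rough Python sketch of the idea may help. This is not the classmap package’s actual computation: the farness measure below is a simple distance-to-centroid proxy, and the dataset and classifier are illustrative assumptions. For the objects with a given label, the plot shows how far each one sits from its own class against the probability the classifier assigns to its best alternative class (PAC).

# Rough class-map-style plot (a sketch, not the classmap package):
# x-axis = proxy for farness from the object's own class,
# y-axis = probability of the best alternative class (PAC).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
proba = clf.predict_proba(X)

target_class = 1                      # inspect objects whose given label is 1
idx = np.where(y == target_class)[0]

# PAC: highest probability among the classes other than the given one.
alt = np.delete(proba[idx], target_class, axis=1)
pac = alt.max(axis=1)

# Farness proxy: Euclidean distance to the class centroid, scaled to [0, 1].
# (The real class map uses a calibrated farness measure instead.)
centroid = X[idx].mean(axis=0)
far = np.linalg.norm(X[idx] - centroid, axis=1)
far = far / far.max()

pred = clf.predict(X)[idx]
colors = np.where(pred == target_class, "tab:blue", "tab:red")

plt.scatter(far, pac, c=colors)
plt.axhline(0.5, ls="--", c="grey")   # above this line, another class is more probable
plt.xlabel("farness from own class (proxy)")
plt.ylabel("P(best alternative class)")
plt.title(f"Class-map-style view for class {target_class}")
plt.show()

Points high on the y-axis are objects the classifier would rather assign to another class; points far to the right are atypical for their own class. Seeing both at once is what makes this kind of display useful for explaining a classifier’s decisions.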

Grand challenges in AI and data science

This conference will take place at EMBL Heidelberg, with a free live-streaming option for virtual participants. Proof of COVID-19 vaccination or recovery is required for on-site attendance. Please see EMBL’s COVID-19 terms and conditions.

Workshop registration is available only to EIROforum members. Please note that the workshop is an on-site-only event; contact Iva Gavran for more information or use the registration link.