Summary: New software can perform computerized image editing using only input from electrical activity in the human brain.
Source: University of Copenhagen.
Soon, we won’t need to use the Help function. The computer will sense that we have a problem and come to the rescue by itself. This is one of the possible implications of new research at the University of Copenhagen and the University of Helsinki.
If you look different to your close relatives, you may have felt separate from your family. As a child, during particularly stormy fallings-out, you might even have hoped it was a sign that you were adopted.
As our new research shows, appearances can be deceptive when it comes to family. New DNA technology is shaking up the family trees of many plants and animals.
Professor Pattie Maes and her research team of Joanne Leong, Pat Pataranutaporn, and Valdemar Danry are world leaders in translational research on tech-human interaction. Their highly interdisciplinary work, covering decades of pioneering inventions at the MIT Media Lab, integrates human-computer interaction (HCI), sensor technologies, AI/machine learning, nanotech, brain-computer interfaces, design, psychology, neuroscience, and much more. I participated in their day-long workshop and followed up with more than three hours of interviews, of which over an hour is transcribed in this article. All insights in this article stem from my daily pro bono work with (now) more than 400,000 CEOs, investors, and scientists/experts. The MIT Media Lab Fluid Interfaces team's work is particularly key in light of the June 21 announcement of the Metaverse Standards Forum, an open standards group supported by big tech companies such as Microsoft and Meta and chaired by Neil Trevett, Khronos President and VP of Developer Ecosystems at NVIDIA. I have a follow-up interview with Neil and a Forbes article in the works. In addition, these recent announcements also highlight why Pattie Maes's work is so important: DeepMind's Gato, a multi-modal, multi-task, single generalist agent foundational to artificial general intelligence (AGI); Google's LaMDA (Language Model for Dialogue Applications), which can engage in free-flowing dialogue; Microsoft's Build Conference announcements on Azure AI, practical OpenAI tools and solutions, and responsible AI; and OpenAI's DALL-E 2, which produces realistic images and art from natural language descriptions.
New research has identified an important molecular analogy that could explain the remarkable intelligence of these fascinating invertebrates.
An exceptional organism with an extremely complex brain and cognitive abilities, the octopus is unique among invertebrates — so much so that in several respects it resembles vertebrates more than invertebrates. The neural and cognitive complexity of these animals could originate from a molecular analogy with the human brain, as shown by a research paper recently published in BMC Biology, coordinated by Remo Sanges of the Scuola Internazionale Superiore di Studi Avanzati (SISSA) of Trieste and by Graziano Fiorito of the Stazione Zoologica Anton Dohrn of Naples.
This research shows that the same ‘jumping genes’ are active both in the human brain and in the brains of two octopus species: Octopus vulgaris, the common octopus, and Octopus bimaculoides, the Californian octopus. This discovery could help us understand the secret of the intelligence of these remarkable organisms.
Early diagnosis of neurodevelopmental conditions, such as Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD), is critical for treatment and symptom management processes.
Now, a team of researchers from the University of South Australia has found that recordings from the retina may be used to distinguish unique signals for both ADHD and ASD, offering a possible biomarker for each disorder. According to a press release published by the institution, the team used the electroretinogram (ERG), a diagnostic test that measures the electrical activity of the retina in response to light, and found that children with ASD showed less ERG energy while children with ADHD displayed more ERG energy.
Outer Space, Inner Space, and the Future of Networks. Synopsis: Does the History, Dynamics, and Structure of our Universe give any evidence that it is inherently “Good”? Does it appear to be statistically protective of adapted complexity and intelligence? Which aspects of the big history of our universe appear to be random? Which are predictable? What drives universal and societal accelerating change, and why have they both been so stable? What has developed progressively in our universe, as opposed to merely evolving randomly? Will humanity’s future be to venture to the stars (outer space) or will we increasingly escape our physical universe, into physical and virtual inner space (the transcension hypothesis)? In Earth’s big history, what can we say about what has survived and improved? Do we see any progressive improvement in humanity’s thoughts or actions? When is anthropogenic risk existential or developmental (growing pains)? In either case, how can we minimize such risk? What values do well-built networks have? What can we learn about the nature of our most adaptive complex networks, to improve our personal, team, organizational, societal, global, and universal futures? I’ll touch on each of these vital questions, which I’ve been researching and writing about since 1999, and discussing with a community of scholars at Evo-Devo Universe (join us!) since 2008.
For fun background reading, see John’s Goodness of the Universe post on Centauri Dreams, and “Evolutionary Development: A Universal Perspective”, 2019.
Summary: Human cortical networks have evolved a novel neural network that relies on abundant connections between inhibitory interneurons.
Source: Max Planck Institute.
The analysis of the human brain is a central goal of neuroscience. However, for methodological reasons, research has largely focused on model organisms, in particular the mouse.
It’s extremely important to check yourself for ticks this summer.
Up to 14.5% of the global population may have already had Lyme disease, according to a new meta-analysis published in BMJ Global Health. The researchers behind the report analyzed 89 previously published studies to calculate the figure, which sheds a harrowing light on the worldwide toll of the tick-borne illness.
From 1991 to 2018, the incidence of Lyme disease in the United States nearly doubled, according to data from the United States Environmental Protection Agency (EPA). In 1991, there were nearly four reported cases per 100,000 people; that number jumped to about seven cases per 100,000 people by 2018. The Centers for Disease Control and Prevention (CDC) estimates that about 470,000 Americans are diagnosed and treated for Lyme disease each year.
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder and the most common cause of dementia, affecting more than 5.8 million individuals in the U.S. Scientists have discovered some genetic variants that increase the risk of developing Alzheimer’s; the most well-known of these for people over the age of 65 is the APOE ε4 allele. Although the association between APOE4 and increased AD risk is well established, the mechanisms responsible for the underlying risk in human brain cell types have been unclear until now.
Researchers from Boston University School of Medicine (BUSM) have discovered two important novel aspects of the gene: 1) the human genetic background inherited with APOE4 is unique to APOE4 carriers, and 2) the mechanistic defects caused by APOE4 are unique to human cells.
Our study demonstrated what the APOE4 gene does and which brain cells are most affected in humans by comparing human and mouse models. These are important findings, as we can develop therapeutics if we understand how and where this risk gene damages the brain.