
In a world-first, researchers from the GrapheneX-UTS Human-centric Artificial Intelligence Centre at the University of Technology Sydney (UTS) have developed a portable, non-invasive system that can decode silent thoughts and turn them into text.

The technology could aid communication for people who are unable to speak due to illness or injury, including stroke or paralysis. It could also enable seamless communication between humans and machines, such as the operation of a bionic arm or robot.

The study was selected as the spotlight paper at the NeurIPS conference, an annual meeting that showcases world-leading research on artificial intelligence and machine learning, held in New Orleans on 12 December 2023.

Humans and other mammals can produce a wide range of sounds while modulating their volume and pitch. These sounds, known as mammalian vocalizations, play a central role in communication both within and between species.

Researchers at Stanford University School of Medicine recently carried out a study aimed at better understanding the neural mechanisms underpinning the production and modulation of mammalian vocalizations. Their paper, published in Nature Neuroscience, identifies a neural circuit and a set of genetically defined neurons that play a key role in the production of vocalizations.

“All mammals, including humans, vocalize by pushing air past the vocal cords of the larynx, which vibrate to produce sound,” Avin Veerakumar, co-author of the paper, told Medical Xpress.

EPFL researchers have developed an algorithm to train an analog neural network just as accurately as a digital one, enabling the development of more efficient alternatives to power-hungry deep learning hardware.

With their ability to process vast amounts of data through algorithmic ‘learning’ rather than traditional programming, it often seems like the potential of deep neural networks like those behind ChatGPT is limitless. But as the scope and impact of these systems have grown, so have their size, complexity, and energy consumption—the latter of which is significant enough to raise concerns about contributions to global carbon emissions.
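The distinction between ‘learning’ and traditional programming mentioned above can be made concrete with a toy example (not from the article): in traditional programming a rule is written by hand, whereas a learned model infers the rule from example data by iteratively reducing its prediction error.

```python
def programmed_double(x):
    # Traditional programming: the rule y = 2x is stated explicitly by the author.
    return 2 * x

def learn_double(examples, steps=200, lr=0.1):
    # 'Learning': start from an arbitrary weight and nudge it toward
    # whatever value best fits the example data (stochastic gradient
    # descent on squared error for a one-weight linear model).
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how far the current rule is off
            w -= lr * error * x    # gradient step on (w*x - y)**2
    return w

# Examples of the doubling rule; the learner is never told "multiply by 2".
data = [(1, 2), (2, 4), (3, 6)]
w = learn_double(data)             # converges to roughly 2.0
```

Deep networks scale this same idea to billions of weights, which is where the size and energy costs discussed above come from.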

While we often think of technological progress in terms of shifting from analog to digital, researchers are now looking for answers to this problem in physical alternatives to digital deep neural networks. One such researcher is Romain Fleury of EPFL’s Laboratory of Wave Engineering in the School of Engineering.

This program is part of the Big Ideas series, supported by the John Templeton Foundation.

Participant:
Stephen Wolfram.

Moderator:
Brian Greene.

WSF Landing Page Link: https://www.worldsciencefestival.com/programs/coding-the-cos…putations/
