
Geoffrey Hinton, who has been called the ‘Godfather of AI,’ confirmed Monday that he left his role at Google last week to speak out about the “dangers” of the technology he helped to develop.

Hinton’s pioneering work on neural networks shaped the artificial intelligence systems powering many of today’s products. He worked part-time at Google for a decade on the tech giant’s AI development efforts, but he has since developed concerns about the technology and about his role in advancing it.

“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Hinton told the New York Times, which was first to report his decision.


The human brain is a complex and unfathomable supercomputer. How it works is one of the ultimate mysteries of our time. Scientists working in the exciting field of computational neuroscience seek to unravel this mystery and, in the process, help solve problems in diverse research fields, from Artificial Intelligence (AI) to psychiatry.

Computational neuroscience is a highly interdisciplinary and thriving branch of neuroscience that uses computational simulations and mathematical models to develop our understanding of the brain. Here we look at: what computational neuroscience is, how it has grown over the last thirty years, what its applications are, and where it is going.
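For a flavor of what such models look like in practice, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest spiking-neuron models used in computational neuroscience. All parameter values are illustrative rather than drawn from any particular study.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, integrated with Euler's
# method. All parameters are illustrative, not taken from any study.
dt = 0.1          # time step (ms)
tau_m = 10.0      # membrane time constant (ms)
v_rest = -70.0    # resting potential (mV)
v_thresh = -55.0  # spike threshold (mV)
v_reset = -75.0   # post-spike reset potential (mV)
r_m = 10.0        # membrane resistance (MOhm)

t = np.arange(0, 200, dt)           # 200 ms of simulated time
i_inj = np.where(t > 50, 1.8, 0.0)  # step current injected at t = 50 ms (nA)

v = np.full_like(t, v_rest)
spikes = []
for k in range(1, len(t)):
    # dV/dt = (-(V - V_rest) + R * I) / tau_m
    dv = (-(v[k - 1] - v_rest) + r_m * i_inj[k - 1]) / tau_m
    v[k] = v[k - 1] + dv * dt
    if v[k] >= v_thresh:            # threshold crossing: spike, then reset
        spikes.append(t[k])
        v[k] = v_reset

print(f"{len(spikes)} spikes in 200 ms")
```

Even a toy model like this reproduces a real phenomenon (regular spiking in response to a sustained input current), which is the basic bargain of the field: trade biological detail for something you can simulate and reason about.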

To get an inside look at the heart, cardiologists often use electrocardiograms (ECGs) to trace its electrical activity and magnetic resonance images (MRIs) to map its structure. Because the two types of data reveal different details about the heart, physicians typically study them separately to diagnose heart conditions.

Now, in a paper published in Nature Communications, scientists in the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard have developed a machine-learning model that can learn patterns from ECGs and MRIs simultaneously and, based on those patterns, predict characteristics of a patient’s heart. Such a tool, with further development, could one day help doctors better detect and diagnose heart conditions from routine tests such as ECGs.

The researchers also showed that they could analyze ECG recordings, which are easy and cheap to acquire, and generate MRI movies of the same heart, which are much more expensive to capture. Their method could even be used to find new genetic markers of heart disease that existing approaches, which examine each data modality in isolation, might miss.
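The paper’s exact architecture isn’t reproduced here, but a common way to learn from two modalities at once is to train a pair of encoders that map ECGs and MRIs into a shared latent space with a contrastive objective. The hypothetical sketch below shows that idea; every dimension, name and shape is invented for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: align ECG and MRI representations in a shared
# latent space with a CLIP-style contrastive loss. Sizes are invented;
# the published model's architecture may differ substantially.

class Encoder(nn.Module):
    def __init__(self, in_dim, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

ecg_encoder = Encoder(in_dim=5000)  # e.g. a flattened single-lead ECG trace
mri_encoder = Encoder(in_dim=4096)  # e.g. flattened MRI-derived features

def contrastive_loss(z_ecg, z_mri, temperature=0.07):
    # Paired ECG/MRI scans from the same patient should embed nearby;
    # unpaired combinations within the batch serve as negatives.
    logits = z_ecg @ z_mri.t() / temperature
    targets = torch.arange(len(z_ecg))
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# One illustrative training step on random stand-in data.
ecg_batch = torch.randn(32, 5000)
mri_batch = torch.randn(32, 4096)
loss = contrastive_loss(ecg_encoder(ecg_batch), mri_encoder(mri_batch))
loss.backward()
print(f"contrastive loss: {loss.item():.3f}")
```

A shared latent space of this kind is also what makes cross-modal generation conceivable: a decoder trained to reconstruct MRI movies from latent codes can, in principle, be driven by an ECG embedding instead.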

An artificial intelligence chatbot was able to outperform human doctors in responding to patient questions posted online, according to evaluators in a new study.

Research published in the Journal of the American Medical Association (JAMA) Internal Medicine found that a chatbot’s responses to patient questions, pulled from a social media platform, were rated “significantly higher for both quality and empathy.”

A report has found that AI chatbots have been used to create dozens of news websites around the world that focus on “clickbait articles” to generate revenue.

The report, published by the news-rating group NewsGuard, identified 49 websites across seven languages that appeared to be generated – or mostly generated – by AI language models. NewsGuard researchers found that these websites tended to use dull language and repeated phrases, hallmarks of AI chatbots.
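NewsGuard has not published a formal detection algorithm, so the following is only a toy illustration of the “repeated phrases” signal: score a text by the fraction of n-word phrases that occur more than once. Real detectors are far more sophisticated.

```python
from collections import Counter

def repetition_score(text: str, n: int = 5) -> float:
    """Fraction of n-word phrases in `text` that occur more than once."""
    words = text.lower().split()
    ngrams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not ngrams:
        return 0.0
    counts = Counter(ngrams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(ngrams)

# A blatantly repetitive stand-in article scores near 1.0.
sample = "the stock market closed higher today amid strong earnings " * 4
print(f"repetition score: {repetition_score(sample):.2f}")
```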

A new artificial intelligence system called a semantic decoder can translate a person’s brain activity—while listening to a story or silently imagining telling a story—into a continuous stream of text. The system developed by researchers at The University of Texas at Austin might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again.

The study, published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. The work relies in part on a transformer model, similar to the ones that power OpenAI’s ChatGPT and Google’s Bard.

Unlike other language decoding systems in development, this system does not require subjects to have surgical implants, making the process noninvasive. Participants also do not need to use only words from a prescribed list. Brain activity is measured using an fMRI scanner after extensive training of the decoder, in which the individual listens to hours of podcasts in the scanner. Later, provided that the participant is open to having their thoughts decoded, the decoder can generate corresponding text from brain activity alone as they listen to a new story or imagine telling one.
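The paper describes a decoder that pairs a language model, which proposes candidate word sequences, with an encoding model, which predicts the fMRI activity each candidate would evoke; a beam search keeps whichever candidates best match the recorded scan. The sketch below shows that loop schematically, with random stand-ins for both models; it illustrates the idea and is not the authors’ code.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "story", "began", "quietly", "then", "everything", "changed"]
N_VOXELS, BEAM = 64, 3

def lm_propose(prefix):
    # Stand-in language model: every vocabulary word is a candidate
    # continuation. A real system would use a GPT-style model.
    return [prefix + [w] for w in VOCAB]

def encoding_model(words):
    # Stand-in encoding model: deterministically map a word sequence to a
    # predicted voxel pattern. A real one is learned from training scans.
    seed = abs(hash(" ".join(words))) % (2**32)
    return np.random.default_rng(seed).standard_normal(N_VOXELS)

recorded = rng.standard_normal(N_VOXELS)  # stand-in fMRI measurement

beams = [[]]
for _ in range(4):  # decode four words
    candidates = [c for b in beams for c in lm_propose(b)]
    # Score each candidate by how well its predicted activity matches
    # the recorded scan, then keep the top BEAM candidates.
    scores = [np.corrcoef(encoding_model(c), recorded)[0, 1]
              for c in candidates]
    best = np.argsort(scores)[::-1][:BEAM]
    beams = [candidates[i] for i in best]

print("decoded:", " ".join(beams[0]))
```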

Scientists have developed a noninvasive AI system focused on translating a person’s brain activity into a stream of text, according to a peer-reviewed study published Monday in the journal Nature Neuroscience.

The system, called a semantic decoder, could ultimately benefit patients who have lost the ability to communicate physically after a stroke, paralysis or a degenerative disease.