
Autism spectrum disorder (ASD) is a developmental disorder associated with difficulties interacting with others, repetitive behaviors, restricted interests and other symptoms that can impact academic or professional performance. People diagnosed with ASD can present symptoms that vary widely in both their behavioral manifestations and their intensity.

As a result, some require far more support than others to complete their studies, learn new skills and lead a fulfilling life. Neuroscientists have been investigating the high variability of ASD for several decades, in the hope that this will aid the development of more effective therapeutic strategies tailored to the unique experiences of individual patients.

Researchers at Weill Cornell Medicine have recently used machine learning to investigate the molecular and neural mechanisms that could underlie these differences among individuals diagnosed with ASD. Their paper, published in Nature Neuroscience, identifies different subgroups of ASD associated with distinct functional connections in the brain and symptomatology, which could be related to the expression of different ASD-related genes.
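As a loose illustration of how data-driven subgrouping of this kind can work, the sketch below clusters synthetic functional-connectivity features with k-means. Everything here is an assumption made for illustration (the random data, the four clusters, the choice of k-means); it is not the pipeline used in the Nature Neuroscience study.

```python
# Hypothetical sketch: grouping subjects by functional-connectivity profiles.
# The data are random and the clustering choice (k-means, k=4) is illustrative
# only, not the method used in the Nature Neuroscience paper.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_subjects, n_regions = 120, 90
# Each subject: a region-by-region connectivity matrix; keep the upper triangle.
conn = rng.standard_normal((n_subjects, n_regions, n_regions))
iu = np.triu_indices(n_regions, k=1)
features = np.stack([m[iu] for m in conn])  # shape: (subjects, edges)

# Partition subjects into candidate subgroups by connectivity pattern.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))  # subjects per putative subgroup
```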

Synthesis AI, a startup that specializes in synthetic data technologies, announced that it has developed a new technology for digital human creation, which enables users to create highly realistic 3D digital humans from text prompts using generative AI and VFX pipelines.

The technology showcased by Synthesis AI allows users to input specific text descriptions such as age, gender, ethnicity, hairstyle, and clothing to generate a 3D model that matches the specifications. Users can also edit the 3D model by changing text prompts or using sliders to adjust features like facial expressions and lighting.
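To make the described workflow concrete, here is a toy sketch of such an interface: structured attributes become a text prompt, and slider values stand in for expression and lighting controls. Synthesis AI has not published this API, so every name and parameter below is hypothetical.

```python
# Illustrative sketch only: a toy interface of the kind described, where text
# attributes and slider values condition a (stubbed) generator. All names and
# parameters here are hypothetical, not Synthesis AI's actual API.
from dataclasses import dataclass

@dataclass
class AvatarSpec:
    age: int
    gender: str
    ethnicity: str
    hairstyle: str
    clothing: str

    def to_prompt(self) -> str:
        return (f"a {self.age}-year-old {self.ethnicity} {self.gender} "
                f"with {self.hairstyle} hair, wearing {self.clothing}")

def generate_avatar(spec: AvatarSpec, sliders: dict[str, float]) -> None:
    # A real system would feed the prompt to a generative model and map the
    # slider values onto expression/lighting controls; here we just echo them.
    print("prompt:", spec.to_prompt())
    print("controls:", sliders)

generate_avatar(
    AvatarSpec(34, "woman", "East Asian", "short curly", "a denim jacket"),
    sliders={"smile": 0.7, "key_light": 0.4},
)
```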

The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. To solve more complex classification tasks, later architectures stacked many feedforward (consecutive) layers. This layered design is the essential component of today's deep learning algorithms: it improves performance on analytical and physical tasks without human intervention, and it underlies everyday automation products such as self-driving cars and chatbots.
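For a concrete sense of what that original one-layer model could do, here is a minimal perceptron trained on the linearly separable OR function. A single layer famously cannot learn XOR, which is one reason multi-layer architectures followed.

```python
# A minimal single-layer perceptron of the kind Rosenblatt introduced,
# trained here on the linearly separable OR function (illustrative sketch).
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])  # OR labels

w = np.zeros(2)
b = 0.0
for _ in range(10):  # a few passes suffice for a separable problem
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        # Classic perceptron update: nudge weights toward misclassified points.
        w += (yi - pred) * xi
        b += (yi - pred)

print([int(w @ xi + b > 0) for xi in X])  # [0, 1, 1, 1]
```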

The key question driving new research published today in Scientific Reports is whether efficient learning of non-trivial classification tasks can be achieved using brain-inspired shallow feedforward networks, while potentially requiring less computational complexity.

“A positive answer questions the need for deep learning architectures, and might direct the development of unique hardware for the efficient and fast implementation of shallow learning,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. “Additionally, it would demonstrate how brain-inspired shallow learning has advanced computational capability with reduced complexity and energy consumption.”
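One way to make the shallow-versus-deep trade-off concrete is to compare parameter budgets. The toy comparison below counts the weights in a wide one-hidden-layer network against those in a deeper, narrower one; the layer sizes are arbitrary assumptions chosen to land near the same budget, not figures from the paper.

```python
# Back-of-envelope sketch contrasting a shallow-wide and a deep-narrow
# feedforward classifier. Layer sizes are arbitrary assumptions chosen so the
# two networks have roughly matched parameter counts, which is the kind of
# trade-off the shallow-learning question is about.
def param_count(layers):
    # Fully connected layers: weights plus biases between consecutive sizes.
    return sum(a * b + b for a, b in zip(layers, layers[1:]))

shallow = [784, 1800, 10]                   # one very wide hidden layer
deep = [784, 512, 512, 512, 512, 512, 10]   # several narrow hidden layers

print("shallow params:", param_count(shallow))
print("deep params:   ", param_count(deep))
```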

Google CEO Sundar Pichai believes artificial intelligence is the most profound technology in the history of human civilization—potentially more important than the discovery of fire and electricity—and yet even he doesn’t fully understand how it works, Pichai said in an interview with CBS’s 60 Minutes that aired yesterday (April 16).

“We need to adapt as a society for it…This is going to impact every product across every company,” Pichai said of recent breakthroughs in A.I. in a conversation with CBS journalist Scott Pelley. It’s the Google CEO’s second long-form interview in two weeks as he apparently embarks on a charm offensive with the press to establish himself and Google as thought leaders in A.I. after the company’s latest A.I. product received mixed reviews.

Google in February introduced Bard, an A.I. chatbot to compete with OpenAI’s ChatGPT and Microsoft’s new Bing, and recently made it available to the public. In an internal letter to Google employees in March, Pichai said the success of Bard will depend on public testing and cautioned that things could go wrong as the chatbot improves itself through interacting with users. He told Pelley that Google intends to deploy A.I. in a beneficial way, but suggested that how A.I. develops might be beyond its creators’ control.

Summary: Researchers in China have developed a new neural network that generates high-quality bird images from textual descriptions. It uses common-sense knowledge to enhance the generated image at three different levels of resolution, achieving scores competitive with other neural network methods. The network is a generative adversarial network trained on a dataset of bird images paired with text descriptions, with the goal of promoting the development of text-to-image synthesis.

Source: Intelligent Computing.

In an effort to generate high-quality images based on text descriptions, a group of researchers in China built a generative adversarial network that incorporates data representing common-sense knowledge.
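The following is a minimal text-conditioned GAN training step in PyTorch, shown only to illustrate the general adversarial setup: a generator produces images from a text embedding plus noise, and a discriminator judges (text, image) pairs. The published network additionally injects common-sense knowledge and refines images at three resolutions, which this sketch omits; all layer sizes here are arbitrary.

```python
# Minimal text-conditioned GAN skeleton (illustrative only). Requires PyTorch.
import torch
import torch.nn as nn

TXT, Z, IMG = 128, 64, 32 * 32 * 3  # text embedding, noise, flat image dims

G = nn.Sequential(nn.Linear(TXT + Z, 256), nn.ReLU(),
                  nn.Linear(256, IMG), nn.Tanh())
D = nn.Sequential(nn.Linear(TXT + IMG, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(text_emb, real_imgs):
    b = text_emb.size(0)
    z = torch.randn(b, Z)
    fake = G(torch.cat([text_emb, z], dim=1))

    # Discriminator: real (text, image) pairs vs generated ones.
    d_loss = bce(D(torch.cat([text_emb, real_imgs], 1)), torch.ones(b, 1)) + \
             bce(D(torch.cat([text_emb, fake.detach()], 1)), torch.zeros(b, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator on the same text condition.
    g_loss = bce(D(torch.cat([text_emb, fake], 1)), torch.ones(b, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Smoke test with random tensors standing in for embeddings and images.
print(train_step(torch.randn(8, TXT), torch.rand(8, IMG) * 2 - 1))
```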

In a paper published in March, artificial intelligence (AI) scientists at Stanford University and Canada’s MILA institute for AI proposed a technology that could be far more efficient than GPT-4 — or anything like it — at gobbling vast amounts of data and transforming it into an answer.


Known as Hyena, the technology achieves accuracy equivalent to GPT-style models on benchmark tests, such as question answering, while using a fraction of the computing power. In some instances, the Hyena code can handle amounts of text that make GPT-style technology simply run out of memory and fail.
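A rough way to see where the savings come from: Hyena-style operators replace the L-by-L attention matrix with very long convolutions, which can be evaluated with FFTs in roughly L·log L time. The snippet below demonstrates that trick on random data; it is a simplification, since in the actual paper the filters are implicitly parameterized by a small network rather than being fixed arrays.

```python
# Sketch of the core idea behind Hyena-style operators: replace the L x L
# attention matrix with a long convolution computed via FFT, so cost grows
# roughly as L*log(L) instead of L**2. This is a simplification, not the
# authors' code; the filter here is random rather than implicitly
# parameterized as in the paper.
import numpy as np

L = 8192                      # sequence length
x = np.random.randn(L)        # one channel of the input sequence
h = np.random.randn(L)        # a filter as long as the sequence itself

# Circular convolution in O(L log L) via the FFT convolution theorem.
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h), n=L)

# For comparison, self-attention would materialize an L x L interaction
# matrix: at L = 8192 that is ~67 million entries per head.
print(y.shape, L * L)
```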

Michio Kaku, professor of theoretical physics at the City University of New York, Nilay Patel of The Verge, and Ethan Millman of Rolling Stone discuss the future of artificial intelligence amid growing controversy. Hosted by Brian Sullivan, “Last Call” is a fast-paced, entertaining business show that explores the intersection of money, culture and policy. Tune in Monday through Friday at 7 p.m. ET on CNBC.