Jeremy Hunt tells POLITICO the West needs to be ‘super smart’ about artificial intelligence rules, but pushes back against doom-laden warnings.
As Google looks to maintain pace in AI with the rest of the tech giants, it’s consolidating its AI research divisions.
Today Google announced Google DeepMind, a new unit made up of the DeepMind team and the Google Brain team from Google Research. In a blog post, DeepMind co-founder and CEO Demis Hassabis said that Google DeepMind will work “in close collaboration… across the Google product areas” to “deliver AI research and products.”
As a part of Google DeepMind’s formation, Google says that it’ll create a new scientific board to oversee research progress and the direction of the unit, which will be led by Koray Kavukcuoglu, VP of research at DeepMind. Eli Collins, VP of product at Google Research, will join Google DeepMind as VP of product, while Google Brain lead Zoubin Ghahramani will become a member of the Google DeepMind research leadership team, reporting to Kavukcuoglu.
Not only does generative AI threaten originality in art, as some argue; it also makes the question of who gets credit for intellectual property far murkier.
Autism spectrum disorder (ASD) is a developmental disorder associated with difficulties in interacting with others, repetitive behaviors, restricted interests and other symptoms that can impact academic or professional performance. People diagnosed with ASD can present varying symptoms that differ in both their behavioral manifestations and intensity.
As a result, some autistic individuals require far more support than others to complete their studies, learn new skills and lead a fulfilling life. Neuroscientists have been investigating the high variability of ASD for several decades, in the hope that this will aid the development of more effective therapeutic strategies tailored to the unique experiences of different patients.
Researchers at Weill Cornell Medicine have recently used machine learning to investigate the molecular and neural mechanisms that could underlie these differences among individuals diagnosed with ASD. Their paper, published in Nature Neuroscience, identifies different subgroups of ASD associated with distinct functional connections in the brain and symptomatology, which could be related to the expression of different ASD-related genes.
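As an illustration of the general approach only (not the authors' code or data), the sketch below clusters hypothetical functional-connectivity features to sort subjects into candidate subgroups; the subject count, dimensionality reduction, and number of clusters are all placeholder assumptions.

```python
# Illustrative sketch: grouping subjects into putative ASD subgroups by
# clustering resting-state functional-connectivity features (toy data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical input: one flattened connectivity vector per subject
# (e.g., the upper triangle of a region-by-region correlation matrix).
n_subjects, n_connections = 120, 4950
connectivity = rng.normal(size=(n_subjects, n_connections))

# Reduce dimensionality, then cluster subjects into candidate subgroups.
features = PCA(n_components=10, random_state=0).fit_transform(connectivity)
subgroups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

print(np.bincount(subgroups))  # number of subjects in each putative subgroup
```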
People are outsourcing home-based tasks like meal prep and bedtime stories to artificial intelligence, too.
Synthesis AI, a startup that specializes in synthetic data technologies, announced that it has developed a new technology for digital human creation, which lets users generate highly realistic 3D digital humans from text prompts using generative AI and VFX pipelines.
The technology showcased by Synthesis AI allows users to input specific text descriptions such as age, gender, ethnicity, hairstyle, and clothing to generate a 3D model that matches the specifications. Users can also edit the 3D model by changing text prompts or using sliders to adjust features like facial expressions and lighting.
The earliest artificial neural network, the Perceptron, was introduced approximately 65 years ago and consisted of just one layer. However, to solve more complex classification tasks, more advanced neural network architectures consisting of numerous feedforward (consecutive) layers were later introduced. This layered design is the essential component of current deep learning implementations: it improves performance on analytical and physical tasks without human intervention, and lies behind everyday automation products such as emerging technologies for self-driving cars and autonomous chatbots.
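For readers unfamiliar with that original model, here is a minimal, purely illustrative sketch of the classic single-layer Perceptron learning rule on toy, linearly separable data (not drawn from the study):

```python
# Minimal Perceptron sketch: a single layer of weights with a threshold
# activation, updated only when it misclassifies a training point.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D data: label is 1 above the line x0 + x1 = 1, otherwise 0.
X = rng.uniform(-1, 2, size=(200, 2))
y = (X.sum(axis=1) > 1).astype(int)

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                      # a few passes over the data
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)       # threshold activation
        w += lr * (yi - pred) * xi       # update only on mistakes
        b += lr * (yi - pred)

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(f"training accuracy: {accuracy:.2f}")
```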
The key question driving new research published today in Scientific Reports is whether efficient learning of non-trivial classification tasks can be achieved using brain-inspired shallow feedforward networks, potentially with lower computational complexity.
“A positive answer questions the need for deep learning architectures, and might direct the development of unique hardware for the efficient and fast implementation of shallow learning,” said Prof. Ido Kanter, of Bar-Ilan’s Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research. “Additionally, it would demonstrate how brain-inspired shallow learning has advanced computational capability with reduced complexity and energy consumption.”
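The question the team poses can be pictured with a toy comparison: a single-hidden-layer (“shallow”) feedforward classifier against a deeper one on a simple non-linear task. The sketch below uses scikit-learn and synthetic data; it is not the architecture, dataset, or result reported in the paper.

```python
# Toy shallow-vs-deep comparison on a non-linear classification task.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One hidden layer ("shallow") versus four hidden layers ("deep").
shallow = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
deep = MLPClassifier(hidden_layer_sizes=(32, 32, 32, 32), max_iter=2000, random_state=0)

for name, model in [("shallow", shallow), ("deep", deep)]:
    model.fit(X_tr, y_tr)
    print(name, f"test accuracy: {model.score(X_te, y_te):.3f}")
```

On a task this simple both networks typically reach similar accuracy; the study asks a version of that question at far larger scale, where the answer bears on hardware and energy costs.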
Google CEO Sundar Pichai believes artificial intelligence is the most profound technology in the history of human civilization—potentially more important than the discovery of fire and electricity—and yet even he doesn’t fully understand how it works, Pichai said in an interview with CBS’s 60 Minutes that aired yesterday (April 16).
“We need to adapt as a society for it…This is going to impact every product across every company,” Pichai said of recent breakthroughs in A.I. in a conversation with CBS journalist Scott Pelley. It’s the Google CEO’s second long-form interview in two weeks, as he apparently embarks on a charm offensive with the press to establish himself and Google as thought leaders in A.I. after the company’s latest A.I. product received mixed reviews.
Google in February introduced Bard, an A.I. chatbot to compete with OpenAI’s ChatGPT and Microsoft’s new Bing, and recently made it available to the public. In an internal letter to Google employees in March, Pichai said the success of Bard will depend on public testing and cautioned things could go wrong as the chatbot improves itself through interacting with users. He told Pelley Google intends to deploy A.I. in a beneficial way, but suggested how A.I. develops might be beyond its creator’s control.
ChatGPT and other bots have revived conversations on artificial general intelligence. Scientists say algorithms won’t surpass you any time soon.
Summary: Researchers in China have developed a new neural network that generates high-quality bird images from textual descriptions, using common-sense knowledge to enhance the generated image at three different levels of resolution and achieving scores competitive with other neural network methods. The network is a generative adversarial network trained on a dataset of bird images and text descriptions, with the goal of advancing text-to-image synthesis.
Source: Intelligent Computing.
In an effort to generate high-quality images based on text descriptions, a group of researchers in China built a generative adversarial network that incorporates data representing common-sense knowledge.
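For orientation, the sketch below shows the bare skeleton of a text-conditioned GAN: a generator that maps noise plus a caption embedding to an image, and a discriminator that scores image-caption pairs. It is a generic illustration in PyTorch, not the researchers' network; their multi-resolution stages and common-sense features are omitted, and the caption embeddings here are random placeholders.

```python
# Generic text-conditioned GAN skeleton (illustrative only).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, noise_dim=100, text_dim=256, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + text_dim, 512), nn.ReLU(),
            nn.Linear(512, img_pixels), nn.Tanh(),
        )

    def forward(self, noise, text_emb):
        # Condition generation on the caption by concatenating its embedding.
        return self.net(torch.cat([noise, text_emb], dim=1))

class Discriminator(nn.Module):
    def __init__(self, text_dim=256, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_pixels + text_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),  # real/fake score for an (image, caption) pair
        )

    def forward(self, img, text_emb):
        return self.net(torch.cat([img, text_emb], dim=1))

# One forward pass with placeholder caption embeddings.
noise = torch.randn(4, 100)
text_emb = torch.randn(4, 256)
fake_images = Generator()(noise, text_emb)
scores = Discriminator()(fake_images, text_emb)
print(fake_images.shape, scores.shape)
```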