Predicting the timelines of quantum artificial intelligence is difficult, and managing expectations is almost impossible.
“In your machine-learning project, how much time will you typically spend on data preparation and transformation?” asks a 2022 Google course on the Foundations of Machine Learning (ML). The two choices offered are either “Less than half the project time” or “More than half the project time.” If you guessed the latter, you would be correct; Google states that it takes over 80 percent of project time to format the data, and that’s not even taking into account the time needed to frame the problem in machine-learning terms.
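The "data preparation" the course is describing is mundane but time-consuming work: parsing raw fields, imputing missing values, and encoding categories into numbers a model can consume. A minimal sketch of that kind of work, using hypothetical records and field names purely for illustration:

```python
# Minimal sketch of the data-preparation work that dominates ML projects:
# parsing strings to numbers, imputing missing values, and label-encoding
# categorical fields. Records and field names are hypothetical.

raw = [
    {"age": "34", "country": "US", "income": "72000"},
    {"age": "",   "country": "ve", "income": "15000"},
    {"age": "29", "country": "US", "income": ""},
]

def prepare(rows):
    # Mean of the ages that are present, used to impute the missing ones.
    ages = [int(r["age"]) for r in rows if r["age"]]
    mean_age = sum(ages) / len(ages)

    # Normalize case, then map each country to a small integer label.
    countries = sorted({r["country"].upper() for r in rows})
    country_index = {c: i for i, c in enumerate(countries)}

    cleaned = []
    for r in rows:
        cleaned.append({
            "age": int(r["age"]) if r["age"] else mean_age,      # impute missing age
            "country": country_index[r["country"].upper()],       # label-encode
            "income": float(r["income"]) if r["income"] else 0.0, # crude default
        })
    return cleaned

features = prepare(raw)
```

Even this toy version shows why the 80 percent figure is plausible: every field needs its own parsing, normalization, and missing-value policy before any modeling can start.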
“It would take many weeks of effort to figure out the appropriate model for our dataset, and this is a really prohibitive step for a lot of folks that want to use machine learning in biology,” says Jacqueline Valeri, a fifth-year PhD student of biological engineering in Collins’s lab who is first co-author of the paper.
BioAutoMATED is an automated machine-learning system that can select and build an appropriate model for a given dataset and even take care of the laborious task of data preprocessing, whittling down a months-long process to just a few hours. Automated machine-learning (AutoML) systems are still in a relatively nascent stage of development, with current usage primarily focused on image and text recognition, but largely unused in subfields of biology, points out first co-author and Jameel Clinic postdoc Luis Soenksen PhD ‘20.
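At its core, an AutoML system automates a search loop: try several candidate model families on the dataset, score each on held-out data, and keep the best. The sketch below illustrates only that loop; the trivial "models" and the validation metric are hypothetical stand-ins, not BioAutoMATED's actual method, which also automates preprocessing and architecture search.

```python
# Toy sketch of the AutoML search loop: fit each candidate model family,
# score it on held-out data, and keep the best. The "models" here are
# deliberately trivial stand-ins.

def mean_model(train):
    # Always predicts the training mean.
    m = sum(train) / len(train)
    return lambda _: m

def last_value_model(train):
    # Always predicts the last training value.
    last = train[-1]
    return lambda _: last

def validate(predict, held_out):
    # Mean squared error on held-out points.
    return sum((predict(x) - x) ** 2 for x in held_out) / len(held_out)

train, held_out = [1.0, 2.0, 3.0], [2.0, 2.5]

candidates = {"mean": mean_model, "last": last_value_model}
scores = {name: validate(build(train), held_out)
          for name, build in candidates.items()}
best = min(scores, key=scores.get)
```

Real AutoML systems replace the stand-in models with neural architectures and the single metric with cross-validated scoring, but the select-by-held-out-performance loop is the same.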
Over centuries of painstaking laboratory work, chemists have synthesized several hundred thousand inorganic compounds — generally speaking, materials not based on the chains of carbon atoms that are characteristic of organic chemistry. Yet studies suggest that billions of relatively simple inorganic materials are still waiting to be discovered [3]. So where to start looking?
Many projects have tried to cut down on time spent in the lab tinkering with various materials by computationally simulating new inorganic materials and calculating properties such as how their atoms would pack together in a crystal. These efforts — including the Materials Project based at the Lawrence Berkeley National Laboratory (LBNL) in Berkeley, California — have collectively come up with about 48,000 materials that they predict will be stable.
Google DeepMind has now supersized this approach with an AI system called graph networks for materials exploration (GNoME). After training on data scraped from the Materials Project and similar databases, GNoME tweaked the composition of known materials to come up with 2.2 million potential compounds. After calculating whether these materials would be stable, and predicting their crystal structures, the system produced a final tally of 381,000 new inorganic compounds to add to the Materials Project database [1].
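The pipeline described above — tweak known compositions, then filter the candidates through a stability predictor — can be sketched in miniature. The substitution table and the stand-in stability rule below are hypothetical illustrations, not DeepMind's model:

```python
# Toy sketch of a GNoME-style generate-and-filter pipeline: produce candidate
# compounds by substituting chemically similar elements into known
# compositions, then keep only those a stability predictor accepts.
# The substitution table and stability rule are hypothetical stand-ins.

known = [("Li", "O"), ("Na", "Cl")]

# Hypothetical table of plausible element swaps (e.g. alkali metals).
substitutions = {"Li": ["Na", "K"], "Na": ["Li", "K"]}

def generate(compositions):
    # Enumerate candidates by swapping the first element of each pair.
    out = set()
    for a, b in compositions:
        for alt in substitutions.get(a, []):
            out.add((alt, b))
    return out

def predicted_stable(comp):
    # Stand-in for a learned stability model: here, accept any
    # candidate that is not already a known compound.
    return comp not in known

candidates = generate(known)
new_stable = sorted(c for c in candidates if predicted_stable(c))
```

In GNoME the filtering step is a trained graph neural network combined with quantum-mechanical stability calculations rather than a lookup, but the overall shape — cheap candidate generation followed by a learned stability filter — is the same.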
A lot of what we talk about with artificial intelligence and machine learning is what you might call “technical considerations” – which makes sense, because these are groundbreaking technologies.
But AI is going to be social, too — it will operate in a social context. Put another way, with “humans in the loop” and assistive AI, the system has to be able to interact with humans in particular ways.
So what about the social end of AI research?
Is psychology truly “technology-proof,” or is it on the brink of a technological revolution?
Discover the power of AI in therapy, and how digital approaches could break accessibility constraints in mental healthcare.
Under the guidance of artist Agnieszka Pilat, a trio of Spot robot dogs developed by Boston Dynamics are scheduled to independently paint an acrylic ground canvas for the upcoming NGV Triennial, which takes place in Melbourne in December.
Google DeepMind and Lawrence Berkeley National Laboratory researchers recently introduced Graph Networks for Materials Exploration (GNoME), an AI tool to discover new materials and predict material stability.
“We are releasing 381K stable materials to help scientists pursue materials discovery breakthroughs,” said Pushmeet Kohli, head of research (AI for science, robustness and reliability) at DeepMind.
The GNoME code is available in a GitHub repository.
Eighteen countries have signed an agreement on AI safety, based on the principle that it should be secure by design.
The Guidelines for Secure AI System Development, led by the U.K.’s National Cyber Security Centre and developed with the U.S.’ Cybersecurity and Infrastructure Security Agency, are touted as the first global agreement of their kind.
They’re aimed mainly at providers of AI systems, whether those systems use models hosted by the organization itself or rely on external application programming interfaces. The goal is to help developers ensure that cybersecurity is baked in as an essential precondition of AI system safety and is integral to the development process, from the start and throughout.
BOT or NOT?
If you’re asked to imagine a person from North America or a woman from Venezuela, what do they look like? If you give an AI-powered imaging program the same prompts, odds are the software will generate stereotypical responses.
A “person” will usually be male and light-skinned.
A woman from a Latin American country will more often be sexualized than European or Asian women.
The guardrails AI companies add to their products to prevent them from causing harm “aren’t enough” to control AI capabilities that could endanger humanity within five to ten years, former Google CEO Eric Schmidt told Axios’ Mike Allen on Tuesday.
The big picture: Interviewed at Axios’ AI+ Summit in Washington, D.C., Schmidt compared the development of AI to the introduction of nuclear weapons at the end of the Second World War.