Archive for the ‘robotics/AI’ category: Page 1126

Jul 17, 2022

DeepMind’s Latest Study on Artificial Intelligence Explains How Neural Networks Generalize and Rise in the Chomsky Hierarchy

Posted by in categories: information science, robotics/AI

A DeepMind research group conducted a comprehensive generalization study on neural network architectures in the paper ‘Neural Networks and the Chomsky Hierarchy’, which investigates whether insights from the theory of computation and the Chomsky hierarchy can predict the actual limitations of neural network generalization.

Developing powerful machine learning models requires accurate generalization to out-of-distribution inputs. However, how and why neural networks generalize on algorithmic sequence prediction tasks remains unclear.

The research group performed a thorough generalization study of more than 2,000 individual models across 16 tasks, evaluating cutting-edge neural network architectures and memory-augmented neural networks on a battery of sequence-prediction tasks spanning every tier of the Chomsky hierarchy that can be practically evaluated with finite-time computation.
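The study’s central test is length generalization: train on short sequences, then evaluate on strictly longer ones. Below is a minimal sketch of that evaluation protocol; the toy task, the stand-in predictors, and the lengths are illustrative placeholders, not DeepMind’s actual setup.

```python
# Sketch of a length-generalization evaluation: measure per-length accuracy
# of a predictor on random inputs. "Reverse the input" is a context-free
# task; a model that only memorizes training lengths fails at longer ones.
import random

def reverse_task(seq):
    """Ground-truth output for the toy context-free task."""
    return seq[::-1]

def length_generalization(predict, task, test_lengths, trials=100):
    """Per-length accuracy of `predict` against `task` on random binary inputs."""
    scores = {}
    for n in test_lengths:
        correct = 0
        for _ in range(trials):
            seq = [random.randint(0, 1) for _ in range(n)]
            correct += predict(seq) == task(seq)
        scores[n] = correct / trials
    return scores

# A true algorithmic solution generalizes to every length...
print(length_generalization(reverse_task, reverse_task, [10, 50, 100]))
# ...while a predictor that ignores order does not.
print(length_generalization(lambda s: list(s), reverse_task, [10, 50, 100]))
```

The same harness works for tasks at other tiers of the hierarchy (regular, context-sensitive, and so on) by swapping in a different ground-truth function.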

Jul 17, 2022

Amazon Science at ICML 2022

Posted by in categories: biological, robotics/AI, science

We’re proud to be a platinum sponsor of ICML, the annual conference on machine learning. Learn about Amazon’s presence at the conference and its accepted publications.


The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. The conference is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics.

Jul 17, 2022

Deep learning accelerates the detection of live bacteria using thin-film transistor arrays

Posted by in categories: chemistry, economics, food, health, mobile phones, robotics/AI

Early detection and identification of pathogenic bacteria in food and water samples are essential to public health. Bacterial infections cause millions of deaths worldwide and bring a heavy economic burden, costing more than 4 billion dollars annually in the United States alone. Escherichia coli (E. coli) and other coliform bacteria are among the most common pathogenic bacteria, and they indicate fecal contamination in food and water samples. The most conventional and frequently used method for detecting these bacteria involves culturing the samples, which usually takes 24 hours for the final read-out and requires expert visual examination. Although some methods, based for example on the amplification of nucleic acids, can reduce the detection time to a few hours, they cannot differentiate between live and dead bacteria and show low sensitivity at low bacterial concentrations. That is why the U.S. Environmental Protection Agency (EPA) has not approved any nucleic acid-based bacteria-sensing method for screening water samples.

In an article recently published in ACS Photonics, a journal of the American Chemical Society (ACS), a team of scientists, led by Professor Aydogan Ozcan from the Electrical and Computer Engineering Department at the University of California, Los Angeles (UCLA), and co-workers have developed an AI-powered smart bacterial colony detection system using a thin-film transistor (TFT) array, which is a widely used technology in mobile phones and other displays.

The ultra-large imaging area of the TFT array (27 mm × 26 mm), manufactured by researchers at Japan Display Inc., enabled the system to rapidly capture the growth patterns of bacterial colonies without the need for scanning, which significantly simplified both the hardware and software design. This system achieved ~12-hour time savings compared to the gold-standard culture-based methods approved by the EPA. By analyzing the microscopic images captured by the TFT array as a function of time, the AI-based system rapidly and automatically detects colony growth with a deep neural network. Following the detection of each colony, a second neural network classifies the species.
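The two-network design described above is a generic detect-then-classify pipeline. The sketch below shows only that control flow; the thresholded frame differences and growth-rate rule are hypothetical stand-ins for the paper’s two deep networks, not the authors’ implementation.

```python
# Detect-then-classify sketch over a time-lapse of frames. Each frame is a
# flat list of pixel intensities; stage 1 flags candidate colony pixels,
# stage 2 assigns a label to each detection.

def detect_growth(frames, threshold=5):
    """Stage 1 stand-in: flag pixels whose intensity grows over time."""
    first, last = frames[0], frames[-1]
    return [i for i, (a, b) in enumerate(zip(first, last)) if b - a > threshold]

def classify_species(frames, pixel):
    """Stage 2 stand-in: label a detected colony from its growth rate."""
    rate = (frames[-1][pixel] - frames[0][pixel]) / (len(frames) - 1)
    return "E. coli" if rate > 2 else "other coliform"

def pipeline(frames):
    return {p: classify_species(frames, p) for p in detect_growth(frames)}

frames = [
    [0, 0, 1, 0],   # t = 0
    [0, 3, 1, 0],   # t = 1
    [0, 9, 1, 0],   # t = 2: pixel 1 grew past the threshold
]
print(pipeline(frames))
```

In the real system both stages are trained networks operating on microscopy images, but the staged structure, detection first and species classification second, is the same.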

Jul 17, 2022

Learning Without Simulations? UC Berkeley’s DayDreamer Establishes a Strong Baseline for Real-World Robotic Training

Posted by in categories: information science, robotics/AI

Using reinforcement learning (RL) to train robots directly in real-world environments has been considered impractical due to the huge amount of trial and error typically required before the agent finally gets it right. The use of deep RL in simulated environments has thus become the go-to alternative, but this approach is far from ideal: it requires designing simulated tasks and collecting expert demonstrations, and simulations can fail to capture the complexities of real-world environments, are prone to inaccuracies, and produce robot behaviours that do not adapt to real-world environmental changes.

The Dreamer algorithm proposed by Hafner et al. at ICLR 2020 introduced an RL agent capable of solving long-horizon tasks purely via latent imagination. Although Dreamer has demonstrated its potential for learning from small amounts of interaction in the compact state space of a learned world model, learning accurate real-world models remains challenging, and it was unknown whether Dreamer could enable faster learning on physical robots.

In the new paper DayDreamer: World Models for Physical Robot Learning, Hafner and a research team from the University of California, Berkeley leverage recent advances in the Dreamer world model to enable online RL for robot training without simulators or demonstrations. The novel approach achieves promising results and establishes a strong baseline for efficient real-world robot training.
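At a high level, a Dreamer-style agent alternates real-world data collection, world-model fitting, and policy improvement on imagined rollouts only. The skeleton below shows that loop; every class and method is a trivial illustrative stub, not the DayDreamer codebase.

```python
# Skeleton of a world-model RL loop in the Dreamer style:
#   1. act in the real environment and store experience,
#   2. fit the world model on the replay buffer,
#   3. improve the policy on rollouts imagined inside the model.
import random

class WorldModel:
    def fit(self, buffer):             # stub: would train encoder/dynamics
        self.mean = sum(buffer) / len(buffer)
    def imagine(self, horizon):        # stub: would roll out latent dynamics
        return [self.mean + random.gauss(0, 1) for _ in range(horizon)]

class Policy:
    def act(self):                     # stub: would condition on latent state
        return random.uniform(-1, 1)
    def improve(self, rollout):        # stub: would optimize through imagination
        self.value = sum(rollout) / len(rollout)

def train(steps=10, horizon=15):
    buffer, model, policy = [], WorldModel(), Policy()
    for _ in range(steps):
        buffer.append(policy.act())              # 1. real interaction
        model.fit(buffer)                        # 2. world-model learning
        policy.improve(model.imagine(horizon))   # 3. learning in imagination
    return policy

policy = train()
print(policy.value)
```

The point of the structure is sample efficiency: the robot collects comparatively little real experience (step 1), while most policy updates happen cheaply inside the learned model (step 3).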

Jul 17, 2022

Perceptron: AI that can solve math problems and translate over 200 different languages

Posted by in categories: mathematics, robotics/AI

Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron, aims to collect some of the most relevant recent discoveries and papers — particularly in, but not limited to, artificial intelligence — and explain why they matter.

In this batch of recent research, Meta open-sourced a language system that it claims is the first capable of translating 200 different languages with “state-of-the-art” results. Not to be outdone, Google detailed a machine learning model, Minerva, that can solve quantitative reasoning problems, including mathematical and scientific questions. And Microsoft released a language model, GODEL, for generating “realistic” conversations, along the lines of Google’s widely publicized LaMDA. And then we have some new text-to-image generators with a twist.
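On the Minerva side, one reported ingredient is majority voting: sample many step-by-step solutions to the same problem and return the most common final answer. A minimal sketch of that aggregation step, where the sampled solutions are hypothetical stand-ins for model outputs:

```python
# Majority voting over sampled solutions: even when any single sample is
# unreliable, the most frequent final answer is often correct.
from collections import Counter

def majority_vote(samples):
    """Return the most common final answer among sampled solutions."""
    answers = [s["answer"] for s in samples]
    return Counter(answers).most_common(1)[0][0]

# Hypothetical samples from a model asked "what is 12 * 12?"
samples = [
    {"steps": "12*12 = 12*10 + 12*2 = 144", "answer": 144},
    {"steps": "12 squared is 144",          "answer": 144},
    {"steps": "12*12 = 124",                "answer": 124},  # a bad sample
]
print(majority_vote(samples))  # the two correct samples outvote the bad one
```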

Meta’s new model, NLLB-200, is part of the company’s No Language Left Behind initiative to develop machine-powered translation capabilities for most of the world’s languages. Trained to understand languages such as Kamba (spoken by the Kamba people, a Bantu ethnic group) and Lao (the official language of Laos), as well as over 540 African languages not supported well or at all by previous translation systems, NLLB-200 will be used to translate languages on the Facebook News Feed and Instagram, in addition to the Wikimedia Foundation’s Content Translation Tool, Meta recently announced.

Jul 17, 2022

SpaceX Booster 7 Experiences Explosion

Posted by in categories: robotics/AI, space travel

Multiple angles of Booster 7 experiencing an unexpected ignition during Raptor engine testing.

Video and Pictures from the NSF Robots. Edited by Jack (@theJackBeyer).

Jul 16, 2022

An open-access, multilingual AI

Posted by in categories: government, law, robotics/AI, supercomputing

A new language model similar in scale to GPT-3 is being made freely available and could help to democratise access to AI.

BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) has been developed by 1,000 volunteer researchers from over 70 countries and 250 institutions, supported by ethicists, philosophers, and legal experts, in a collaboration called BigScience. The project, coordinated by New York-based startup Hugging Face, used funding from the French government.

The new AI took more than a year of planning and training, including a final run of 117 days (11 March – 6 July 2022) on Jean Zay, one of Europe’s most powerful supercomputers, located south of Paris, France.
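The 117-day figure is consistent with the stated dates, as a quick standard-library check confirms:

```python
from datetime import date

# BLOOM's final training run: 11 March 2022 through 6 July 2022.
run_days = (date(2022, 7, 6) - date(2022, 3, 11)).days
print(run_days)  # 117
```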

Jul 16, 2022

Physicists use AI to find the most complex protein knots so far

Posted by in categories: biotech/medical, chemistry, nanotechnology, robotics/AI

The question of how the chemical composition of a protein—the amino acid sequence—determines its 3D structure has been one of the biggest challenges in biophysics for more than half a century. This knowledge about the so-called “folding” of proteins is in great demand, as it contributes significantly to the understanding of various diseases and their treatment, among other things. For these reasons, Google’s DeepMind research team has developed AlphaFold, an artificial intelligence that predicts 3D structures.

A team of researchers from Johannes Gutenberg University Mainz (JGU) and the University of California, Los Angeles, has now taken a closer look at these structures and examined them with respect to knots. We know knots primarily from shoelaces and cables, but they also occur on the nanoscale in our cells. Knotted proteins can not only be used to assess the quality of predicted structures, but also raise important questions about folding mechanisms and the evolution of proteins.

Jul 16, 2022

Smart textiles detect, sense posture and motion

Posted by in categories: materials, robotics/AI

Researchers at the Massachusetts Institute of Technology (MIT) Media Lab have created a novel fabrication process to produce smart textiles that conform comfortably to the body.


Using 3DKnITS, the research team created a “smart” shoe and mat, then built a hardware and software system capable of measuring and interpreting real-time data from the pressure sensors. An individual then performed yoga poses on the smart textile mat, and the machine-learning system accurately predicted the individual’s motions and poses 99 percent of the time.
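Pose prediction from a pressure mat can be framed as classifying frames: each frame is a grid of pressure readings, and a classifier maps it to a pose label. The nearest-centroid stand-in below is purely illustrative; the real system uses a deep network on the 3DKnITS sensor data.

```python
# Nearest-centroid pose classifier over pressure-mat frames. Each frame is
# a flat list of sensor readings; training computes the mean frame per
# pose, and prediction picks the closest centroid.

def centroid(frames):
    return [sum(col) / len(frames) for col in zip(*frames)]

def fit(labelled_frames):
    """labelled_frames: {pose_name: [frame, ...]} -> {pose_name: centroid}."""
    return {pose: centroid(fs) for pose, fs in labelled_frames.items()}

def predict(model, frame):
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(frame, c))
    return min(model, key=lambda pose: dist(model[pose]))

# Toy 2x2 mat: "standing" loads the back two sensors, "plank" loads all four.
model = fit({
    "standing": [[0, 0, 9, 9], [1, 0, 8, 9]],
    "plank":    [[5, 5, 5, 5], [6, 4, 5, 6]],
})
print(predict(model, [0, 1, 9, 8]))
```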

Jul 16, 2022

Timelapse of Future Gaming Worlds (2030 & Beyond)

Posted by in categories: education, Elon Musk, robotics/AI

The story of future video games starts when artificial intelligence takes over building the games for players, even as they play them, and human brains are mapped by virtual reality headsets.

This sci-fi documentary also covers AI NPC characters, Metaverse scoreboards, brain-computer chips and gaming, Elon Musk and Neuralink, and the simulation hypothesis.
