
As more and more AI agents are deployed in practice, it is time to think about how to make them fully autonomous, so that they learn continually in a self-motivated, self-initiated manner rather than being retrained offline periodically at the initiative of human engineers, and so that they can accommodate or adapt to unexpected or novel circumstances. Because the real world is an open environment full of unknowns, detecting novelties, characterizing them, accommodating or adapting to them, gathering ground-truth training data, and incrementally learning the unknowns are all critical to making an AI agent more knowledgeable and capable over time.
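The passage above describes a detect-characterize-learn cycle. The Python sketch below is purely illustrative of that loop, not any published system: the distance-threshold novelty test, the `ContinualAgent` class, and the `ask_for_label` oracle are hypothetical stand-ins for learned novelty detectors and real ground-truth gathering.

```python
# A minimal sketch of the detect-characterize-adapt loop described above.
# All names here are illustrative; a real agent would use learned novelty
# detectors and proper incremental training, not a toy distance threshold.
import numpy as np

class ContinualAgent:
    def __init__(self, threshold=2.0):
        self.threshold = threshold  # distance beyond which an input counts as novel
        self.classes = {}           # class label -> running mean feature vector
        self.counts = {}

    def detect_novelty(self, x):
        """Flag x as novel if it is far from every known class centroid."""
        if not self.classes:
            return True
        dists = [np.linalg.norm(x - c) for c in self.classes.values()]
        return min(dists) > self.threshold

    def incremental_learn(self, x, label):
        """Update the centroid for `label` with one new ground-truth example."""
        if label not in self.classes:
            self.classes[label] = x.astype(float)
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            self.classes[label] += (x - self.classes[label]) / self.counts[label]

    def step(self, x, ask_for_label):
        if self.detect_novelty(x):
            label = ask_for_label(x)  # gather ground truth (human, environment, ...)
            self.incremental_learn(x, label)
            return f"novel -> learned '{label}'"
        # Otherwise classify against known centroids.
        return min(self.classes, key=lambda k: np.linalg.norm(x - self.classes[k]))

agent = ContinualAgent()
oracle = lambda x: "cluster_a" if x[0] < 0 else "cluster_b"
for x in [np.array([-1.0, 0.0]), np.array([3.0, 0.1]), np.array([-1.1, 0.2])]:
    print(agent.step(x, oracle))
```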

Over the past few decades, computer scientists have developed increasingly advanced techniques to train and operate robots. Collectively, these methods could facilitate the integration of robotic systems in an increasingly wide range of real-world settings.

Researchers at Carnegie Mellon University have recently created a new system that allows users to control a robotic hand and arm remotely, simply by demonstrating the movements they want it to replicate in front of a camera. This system, introduced in a paper pre-published on arXiv, could open exciting possibilities for the teleoperation and remote training of robots completing missions both in everyday settings and in environments that are inaccessible to humans.

“Prior works in this area rely either on gloves, motion markers or a calibrated multi-camera setup,” Deepak Pathak, one of the researchers who developed the new system, told TechXplore. “Instead, our system works using a single uncalibrated camera. Since no calibration is needed, the user can be standing anywhere and still successfully teleoperate the robot.”
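The paper's full learning-based pipeline is not reproduced here, but the basic idea of single-camera retargeting can be sketched as follows. Everything in this snippet is a hypothetical placeholder: `estimate_keypoints` stands in for an off-the-shelf hand-pose model, the workspace bounds and axis mapping are assumed, and the commented-out `robot.move_to` call represents whatever driver the arm exposes.

```python
# Illustrative only: a coarse single-camera retargeting loop. The CMU system
# learns this mapping; here the pose estimator and robot driver are stubs.
import numpy as np

WORKSPACE_MIN = np.array([0.2, -0.3, 0.05])  # robot workspace bounds in meters (assumed)
WORKSPACE_MAX = np.array([0.6,  0.3, 0.45])

def estimate_keypoints(frame):
    """Placeholder for a hand-pose model; returns the wrist position (u, v)
    in normalized [0, 1] image coordinates plus apparent hand size."""
    raise NotImplementedError("plug in an off-the-shelf hand-pose estimator")

def retarget(u, v, hand_size, ref_size=0.15):
    """Map normalized image coordinates to a Cartesian target for the arm,
    using apparent hand size as a crude depth proxy."""
    depth = np.clip(ref_size / max(hand_size, 1e-6), 0.0, 2.0) / 2.0  # 0..1
    rel = np.array([depth, u, 1.0 - v])  # image axes -> robot x/y/z (assumed)
    return WORKSPACE_MIN + rel * (WORKSPACE_MAX - WORKSPACE_MIN)

# target = retarget(*estimate_keypoints(camera_frame))
# robot.move_to(target)   # hypothetical driver call
```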

Quantum computers are getting bigger, but there are still few practical ways to take advantage of their extra computing power. To get over this hurdle, researchers are designing algorithms to ease the transition from classical to quantum computers. In a new study in Nature, researchers unveil an algorithm that reduces the statistical errors, or noise, produced by quantum bits, or qubits, in crunching chemistry equations.

Developed by Columbia chemistry professor David Reichman and postdoc Joonho Lee with researchers at Google Quantum AI, the algorithm uses up to 16 qubits on Sycamore, Google’s 53-qubit quantum computer, to calculate ground state energy, the lowest energy state of a molecule. “These are the largest quantum chemistry calculations that have ever been done on a real quantum device,” Reichman said.

The ability to accurately calculate ground state energy will enable chemists to develop new materials, said Lee, who is also a visiting researcher at Google Quantum AI. The algorithm could be used to design materials to speed up nitrogen fixation for farming and hydrolysis for making clean energy, among other sustainability goals, he said.
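For context on what the algorithm estimates: the ground state energy of a system is the smallest eigenvalue of its Hamiltonian. The toy NumPy example below diagonalizes a two-level Hamiltonian classically; the point of quantum algorithms like the one in the study is to estimate this same quantity for molecules far too large to diagonalize directly. The Hamiltonian here is an arbitrary stand-in, not one from the paper.

```python
# Classical reference computation: the ground state energy of a system is the
# smallest eigenvalue of its Hamiltonian.
import numpy as np

# Toy 2x2 Hermitian Hamiltonian for a two-level system (units arbitrary).
H = np.array([[ 0.5,  0.2],
              [ 0.2, -0.7]])

energies, states = np.linalg.eigh(H)  # eigh returns eigenvalues in ascending order
print("ground state energy:", energies[0])
print("ground state vector:", states[:, 0])
```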

Soooo Close


📝 The paper “Competition-Level Code Generation with AlphaCode” is available here:
https://alphacode.deepmind.com/


Tesla’s Gigafactory Texas is about to start operations, and when it does, it will be one of the United States’ most ambitious vehicle production facilities. The massive factory, which Elon Musk has noted will be almost a mile long when completed, is expected to hire thousands of workers in the area.

The arrival of Tesla in Texas and the influx of companies moving into the state will likely trigger an increase in the number of people residing in cities like Austin. With this in mind, Austin transit leaders recently stated that the city’s transportation network would play a critical role in aiding or hindering further development. After all, all those new workers need a way to get to and from their jobs.

The discussions were held during a South by Southwest panel on Thursday, where transportation startup leaders highlighted that Austin’s efforts to invest in new mass transit operations are steps in the right direction. Austin-based AI Fleet CEO Marc El Khoury noted that the city and its surrounding areas would be attractive for companies developing innovative transit technologies.

Artificial intelligence is advancing how scientists explore materials. Researchers from Ames Laboratory and Texas A&M University trained a machine-learning (ML) model to assess the stability of rare-earth compounds. The work was supported by the Laboratory Directed Research and Development (LDRD) Program at Ames Laboratory. The framework they developed builds on current state-of-the-art methods for experimenting with compounds and understanding chemical instabilities.

Ames Lab has been a leader in rare-earths research since the middle of the 20th century. Rare earth elements have a wide range of uses including clean energy technologies, energy storage, and permanent magnets. Discovery of new rare-earth compounds is part of a larger effort by scientists to expand access to these materials.

The present approach is based on machine learning (ML), a form of artificial intelligence (AI), which is driven by computer algorithms that improve through data usage and experience. Researchers used the upgraded Ames Laboratory Rare Earth database (RIC 2.0) and high-throughput density-functional theory (DFT) to build the foundation for their ML model.
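The Ames/Texas A&M model and its descriptors are not detailed in this excerpt, so the scikit-learn sketch below only illustrates the general recipe: fit a regressor that maps composition features to a DFT-derived formation energy, a common proxy for compound stability. The features, data, and screening logic here are synthetic assumptions, not the published framework.

```python
# A minimal sketch of a stability surrogate model: predict a DFT-computed
# formation energy from composition descriptors. All data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))  # stand-ins for composition descriptors
E_form = 0.8 * X[:, 0] - 1.2 * X[:, 1] ** 2 + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, E_form, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("R^2 on held-out compounds:", round(model.score(X_te, y_te), 3))

# A compound might be screened as "stable" if its predicted formation
# energy falls below some threshold relative to competing phases.
candidate = rng.uniform(size=(1, 4))
print("predicted formation energy:", model.predict(candidate)[0])
```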

Russian scientists have proposed a new algorithm for automatically decoding neural activity and interpreting the decoder's weights, which can be used both in brain-computer interfaces and in fundamental research. The results of the study were published in the Journal of Neural Engineering.

Brain-computer interfaces are needed to create robotic prostheses and neuroimplants, rehabilitation simulators, and devices that can be controlled by the power of thought. These devices help people who have suffered a stroke or physical injury to move (in the case of a robotic chair or prostheses), communicate, use a computer, and operate household appliances. In addition, in combination with machine learning methods, neural interfaces help researchers understand how the human brain works.

Most frequently, brain-computer interfaces use the electrical activity of neurons, measured, for example, with electro- or magnetoencephalography. However, a special decoder is needed to translate neuronal signals into commands. Traditional signal-processing methods require painstaking work to identify informative features: signal characteristics that, from a researcher's point of view, appear most important for the decoding task.
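As a concrete baseline for the decoding-and-interpretation problem described above (not the algorithm from the paper), the sketch below fits a linear decoder to synthetic multichannel data and then converts its weights into forward-model patterns using the covariance trick of Haufe et al. (2014), since raw decoder weights act as filters and can be misleading to interpret directly.

```python
# A minimal linear-decoder baseline on synthetic stand-ins for MEG/EEG
# features, with weights converted to interpretable forward-model patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_trials, n_channels = 400, 16
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, n_trials)
X[:, 3] += 1.5 * y  # inject a class-dependent signal into channel 3

clf = LogisticRegression().fit(X, y)
w = clf.coef_.ravel()                  # decoder weights (spatial filter)
pattern = np.cov(X, rowvar=False) @ w  # forward-model pattern: cov(X) @ w
print("most informative channel (by pattern):", int(np.abs(pattern).argmax()))
```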

For me, the concern was just how easy it was to do. A lot of the things we used are out there for free. You can go and download a toxicity dataset from anywhere. If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets. So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse.


AI could be just as effective in developing biochemical weapons as it is in identifying helpful new drugs, researchers warn.