
Drones navigate unseen environments with liquid neural networks

In a series of quadrotor closed-loop control experiments, the drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects, and dynamic target tracking. They tracked moving targets and executed multi-step loops between objects in never-before-seen environments, surpassing the performance of other cutting-edge systems.

The team believes that the ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective, and reliable. Liquid neural networks, they noted, could enable autonomous air mobility drones to be used for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants.

“The experimental setup presented in our work tests the reasoning capabilities of various deep learning systems in controlled and straightforward scenarios,” says MIT CSAIL Research Affiliate Ramin Hasani. “There is still so much room left for future research and development on more complex reasoning challenges for AI systems in autonomous navigation applications, which has to be tested before we can safely deploy them in our society.”

Creating the Lab of the Future: What does it entail?

As part of our SLAS US 2023 coverage, we speak to Luigi Da Via, Team Leader in Analytical Development at GSK, about the lab of the future and what it may look like.

Please, can you introduce yourself and tell us what inspired your career within the life sciences?

Hello, my name is Luigi Da Via, and I am currently leading the High-Throughput Automation team at GSK. I have been with the company for the past six years, and I’m thrilled to be contributing to the development of life-saving medicines through the application of cutting-edge technology and automation.

Google Quantum AI Breaks Ground: Unraveling the Mystery of Non-Abelian Anyons

Summary: For the first time, Google Quantum AI has observed the peculiar behavior of non-Abelian anyons, particles with the potential to revolutionize quantum computing by making operations more resistant to noise.

Non-Abelian anyons have the unique feature of retaining a sort of memory, allowing us to determine when they have been exchanged, even though they are identical.

The team successfully used these anyons to perform quantum computations, opening a new path towards topological quantum computation. This significant discovery could be instrumental in the future of fault-tolerant topological quantum computing.

The AI revolution: Google’s artificial intelligence developers on what’s next in the field

The revolution in artificial intelligence is at the center of a debate ranging from those who hope it will save humanity to those who predict doom. Google lies somewhere in the optimistic middle, introducing AI in steps so civilization can get used to it.

Demis Hassabis, CEO of DeepMind Technologies, has spent decades working on AI and views it as the most important invention humanity will ever make. Hassabis sold DeepMind to Google in 2014. Part of the reason for the sale was to gain access to Google’s immense computing power. Brute force computing can very loosely approximate the neural networks and talents of the brain.

“Things like memory, imagination, planning, reinforcement learning, these are all things that are known about how the brain does it, and we wanted to replicate some of that in our AI systems,” Hassabis said.

AI offers leisure, if not happiness

NEW YORK, May 12 (Reuters Breakingviews) — Trying to predict how a nascent and promising technology will affect society is hubris, but history suggests people are going to have some serious leisure time if the development of artificial intelligence continues apace. Whether that makes them happy, and how the spoils will be divided, are harder to predict.

Over the past 50 years, technology has tended to grow faster than the wider economy. From 2006 to 2016, the digital economy grew at an average annual rate of 5.6%, according to the U.S. Bureau of Economic Analysis, almost four times the growth rate of overall economic output. That sort of expansion appears to be oddly consistent: revenue earned by technology companies in Fortune’s list of the 100 biggest U.S. firms has, adjusted for inflation, increased at a similar rate for five decades.
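To see what that gap means in practice, the figures above can be compounded over the decade in question. This is an illustrative sketch: the 5.6% rate is the BEA figure cited in the text, while the overall-economy rate of roughly 1.4% is an assumption implied by "almost four times faster".

```python
# Compound the quoted annual growth rates over the 2006-2016 decade.
digital_rate = 0.056          # BEA figure cited in the article
overall_rate = digital_rate / 4  # assumed: "almost four times faster" implies ~1.4%
years = 10

digital_growth = (1 + digital_rate) ** years   # total growth factor, digital economy
overall_growth = (1 + overall_rate) ** years   # total growth factor, overall economy

print(f"Digital economy: x{digital_growth:.2f} over {years} years")
print(f"Overall economy: x{overall_growth:.2f} over {years} years")
```

Compounding turns a modest-looking rate difference into a large cumulative one: the digital economy would roughly grow 72% over the decade versus about 15% for the wider economy under these assumptions.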

American employee productivity has increased about 2% annually for seven decades. While higher capital intensity and more skilled labor contribute steadily, what varies more is the ability to deploy technology successfully. Sectors able to automate tasks and reduce their workforce, such as manufacturing, will generally see higher productivity, while others, such as education, may have a harder time. This process also takes time. In 1987, the economist Robert Solow famously observed that computers were visible everywhere except in the productivity statistics. A decade later, productivity shot up.
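A quick compounding check shows why that 2% figure matters: sustained over seven decades, it multiplies output per worker several times over. This is illustrative arithmetic based only on the rate quoted above.

```python
# Compound ~2% annual productivity growth over seven decades.
rate = 0.02
years = 70
factor = (1 + rate) ** years  # cumulative growth in output per worker

print(f"Output per worker grows roughly {factor:.1f}x over {years} years")
```

At 2% a year, output per worker roughly quadruples over 70 years, which is why even small, persistent differences in productivity growth dominate long-run living standards.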

Robotic proxy brings remote users to life in real time

Cornell University researchers have developed a robot, called ReMotion, that occupies physical space on a remote user’s behalf, automatically mirroring the user’s movements in real time and conveying key body language that is lost in standard virtual environments.

“Pointing gestures, the perception of another’s gaze, intuitively knowing where someone’s attention is—in remote settings, we lose these nonverbal, implicit cues that are very important for carrying out design activities,” said Mose Sakashita, a doctoral student in information science.

Sakashita is the lead author of “ReMotion: Supporting Remote Collaboration in Open Space with Automatic Robotic Embodiment,” which he presented at the Association for Computing Machinery CHI Conference on Human Factors in Computing Systems in Hamburg, Germany. “With ReMotion, we show that we can enable rapid, dynamic interactions through the help of a mobile, automated robot.”