
The field of machine learning on quantum computers got a boost from new research removing a potential roadblock to the practical implementation of quantum neural networks. While theorists had previously believed an exponentially large training set would be required to train a quantum neural network, the quantum No-Free-Lunch theorem developed by Los Alamos National Laboratory shows that quantum entanglement eliminates this exponential overhead.

“Our work proves that both big data and big entanglement are valuable in quantum machine learning. Even better, entanglement leads to scalability, which solves the roadblock of exponentially increasing the size of the data in order to learn it,” said Andrew Sornborger, a computer scientist at Los Alamos and a coauthor of the paper published Feb. 18 in Physical Review Letters. “The theorem gives us hope that quantum neural networks are on track towards the goal of quantum speed-up, where eventually they will outperform their counterparts on classical computers.”

The classical No-Free-Lunch theorem states that any machine-learning algorithm is as good as, but no better than, any other when their performance is averaged over all possible functions connecting the data to their labels. A direct consequence of this theorem that showcases the power of data in classical machine learning is that the more data one has, the better the average performance. Thus, data is the currency in machine learning that ultimately limits performance.
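Schematically, the classical theorem says that when performance is averaged uniformly over every possible labeling function, no learning algorithm comes out ahead of any other. The notation below (risk $R$, algorithms $\mathcal{A}_1$ and $\mathcal{A}_2$, training set $S$) is illustrative rather than drawn from the paper:

$$\mathbb{E}_{f}\left[\, R(\mathcal{A}_1 \mid f, S) \,\right] \;=\; \mathbb{E}_{f}\left[\, R(\mathcal{A}_2 \mid f, S) \,\right],$$

where $f$ ranges uniformly over all functions mapping inputs to labels and $R$ measures the average error on inputs outside $S$. Under this averaging, only enlarging the training set $S$ improves the expected performance, which is why data becomes the limiting currency.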

Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.

For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].

Interviewee: DeepMind co-founder and CEO, Demis Hassabis.

Credits.

In today’s world, autonomous machines play a major role in our lives, yet it is still difficult to establish trust between humans and machines. Aside from concerns about unexpected disruptions, robots do not yet communicate the way humans do. Researchers have now shown that autonomous systems able to assess their own performance can increase trust in robots, improve collaboration, and streamline task execution.

Humans tend to rely more on robots that provide a self-assessment while performing their tasks, according to the study. Communication is essential for establishing trust in a human working environment. A gap in understanding between humans and autonomous machines can result in a robot performing an action incorrectly, or even in misuse or exploitation of the robot’s capabilities.

In a study conducted by researchers from Draper and the University of Colorado Boulder, the team examined how autonomous robots can use probability models to calculate and express self-assessments, forming a kind of machine self-confidence. The models were designed to predict a robot’s behavior and offer a perspective on its mission before events unfold.
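The study itself does not publish code; the Python sketch below is only a hypothetical illustration of the general idea, with made-up function names and numbers. A robot estimates its own chance of completing a task by sampling simulated outcomes from a probabilistic model of its performance, then reports that estimate before acting.

```python
import random

def simulate_task(step_success_prob: float, num_steps: int) -> bool:
    """One hypothetical rollout: the task succeeds only if every step succeeds."""
    return all(random.random() < step_success_prob for _ in range(num_steps))

def self_confidence(step_success_prob: float, num_steps: int, rollouts: int = 10_000) -> float:
    """Estimate the probability of mission success by Monte Carlo sampling."""
    successes = sum(simulate_task(step_success_prob, num_steps) for _ in range(rollouts))
    return successes / rollouts

if __name__ == "__main__":
    # Illustrative numbers only: a 5-step task where each step succeeds 90% of the time.
    confidence = self_confidence(step_success_prob=0.9, num_steps=5)
    print(f"Robot self-assessment: about {confidence:.0%} chance of completing the task.")
```

Reporting a calibrated estimate like this before the task begins is the kind of self-assessment the study associates with higher human trust.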

Researchers at Duke University have demonstrated the first attack strategy that can fool industry-standard autonomous vehicle sensors into believing nearby objects are closer (or farther) than they actually are, without the deception being detected.

The research suggests that adding optical 3D capabilities or the ability to share data with nearby cars may be necessary to fully protect from attacks.

The results will be presented Aug. 10–12 at the 2022 USENIX Security Symposium, a top venue in the field.

Circa 2021


Seoul National University Hospital has completed a liver transplant performed with a robot and a laparoscope that left no large abdominal scars on either the donor or the recipient.

Suh Kyung-suk, a professor on the liver transplant team, noted that the new surgical procedure also reduces lung-related complications and scarring and shortens recovery time.

It was the world’s first transplant in which a robot and a laparoscope made it possible to operate without opening the donor’s abdomen.

Over the past decade or so, many researchers worldwide have been trying to develop brain-inspired computer systems, also known as neuromorphic computing tools. The majority of these systems are currently used to run deep learning algorithms and other artificial intelligence (AI) tools.

Researchers at Sandia National Laboratories have recently conducted a study assessing the potential of neuromorphic architectures to perform a different type of computation: random walk computations. These involve a succession of random steps through a mathematical space. The team’s findings, published in Nature Electronics, suggest that neuromorphic architectures could be well suited to implementing these computations and could thus reach beyond machine learning applications.

“Most past studies related to neuromorphic computing focused on cognitive applications, such as machine learning,” James Bradley Aimone, one of the researchers who carried out the study, told TechXplore. “While we are also excited about that direction, we wanted to ask a different and complementary question: can neuromorphic computing excel at complex math tasks that our brains cannot really tackle?”
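Random walks of this kind are easy to state in conventional code. The short Python sketch below is not from the Sandia study; it is just a generic illustration of the class of computation involved: taking many random ±1 steps and averaging over independent walkers to estimate a quantity such as the mean squared displacement.

```python
import random

def random_walk(num_steps: int) -> int:
    """Take num_steps random +/-1 steps on the integer line; return the final position."""
    position = 0
    for _ in range(num_steps):
        position += random.choice((-1, 1))
    return position

def mean_squared_displacement(num_steps: int, num_walkers: int = 10_000) -> float:
    """Average the squared end position over many independent walkers."""
    return sum(random_walk(num_steps) ** 2 for _ in range(num_walkers)) / num_walkers

if __name__ == "__main__":
    # For an unbiased walk the mean squared displacement grows roughly
    # linearly with the number of steps (about num_steps on average).
    print(mean_squared_displacement(num_steps=100))
```

On a conventional processor the cost scales with the number of walkers times the number of steps, which is what makes specialized, highly parallel hardware attractive for this workload.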

A Microsoft Research team has introduced a “simple yet effective” method that dramatically improves stability in transformer models with a change of just a few lines of code.

Large-scale transformers have achieved state-of-the-art performance on a wide range of natural language processing (NLP) tasks, and in recent years have also demonstrated their impressive few-shot and zero-shot learning capabilities, making them a popular architectural choice for machine learning researchers. However, despite soaring parameter counts that now reach billions and even trillions, the layer depth of transformers remains restricted by problems with training instability.

In their new paper DeepNet: Scaling Transformers to 1,000 Layers, the Microsoft team proposes DeepNorm, a novel normalization function that improves the stability of transformers to enable scaling that is an order of magnitude deeper (more than 1,000 layers) than previous deep transformers.
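The core of DeepNorm is a small modification to the post-LayerNorm residual connection: the residual branch is scaled by a constant α that grows with depth. The sketch below is a simplified PyTorch rendering of that idea, using the encoder-only constant α = (2N)^(1/4) reported in the paper; the accompanying weight-initialization scaling (β) and the encoder-decoder variants are omitted, so treat it as an illustration rather than the reference implementation.

```python
import torch
import torch.nn as nn

class DeepNormResidual(nn.Module):
    """Post-LayerNorm residual block with DeepNorm-style scaling.

    Instead of LayerNorm(x + f(x)), DeepNorm computes
    LayerNorm(alpha * x + f(x)), where alpha depends on the total
    number of layers N (here the encoder-only setting alpha = (2N)**0.25).
    """

    def __init__(self, sublayer: nn.Module, d_model: int, num_layers: int):
        super().__init__()
        self.sublayer = sublayer
        self.alpha = (2 * num_layers) ** 0.25
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.norm(self.alpha * x + self.sublayer(x))

# Example: wrap a feed-forward sublayer as it might appear in a very deep encoder.
ffn = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
block = DeepNormResidual(ffn, d_model=512, num_layers=1000)
out = block(torch.randn(2, 16, 512))  # (batch, sequence, d_model)
```

Because α is larger for deeper networks, the update contributed by each sublayer is bounded relative to the residual stream, which is how the paper keeps training stable at depths beyond 1,000 layers.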