
Researchers from Santa Clara University, the New Jersey Institute of Technology, and the University of Hong Kong have successfully taught microrobots to swim via deep reinforcement learning, marking a substantial leap in the progression of microswimming capability.

There has been tremendous interest in developing artificial microswimmers that can navigate the world similarly to naturally occurring swimming microorganisms, such as bacteria. Such microswimmers hold promise for a vast array of future biomedical applications, such as targeted drug delivery and microsurgery. Yet most artificial microswimmers to date can only perform relatively simple maneuvers with fixed locomotory gaits.

The artificial intelligence-powered swimmer autonomously switches between different modes of locomotory gaits (color-coded) while tracing the complex trajectory ‘SWIM’. (Image: Commun. Phys. 5, 158 (2022))
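The gait-switching idea can be sketched with a toy reinforcement-learning loop. This is not the authors' deep-RL model: it is a minimal single-state Q-learning example in which the gait labels, their displacements, and the reward are all invented for illustration.

```python
import random

# Toy sketch (not the paper's model): a tabular Q-learning agent that
# picks one of several discrete locomotory "gaits" per step. Reward is
# the displacement each gait produces toward a target.
GAITS = ["stroke_A", "stroke_B", "stroke_C"]          # hypothetical labels
DISPLACEMENT = {"stroke_A": 1.0, "stroke_B": -0.5, "stroke_C": 0.2}

def train(episodes=200, steps=20, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {g: 0.0 for g in GAITS}                        # single-state Q-table
    for _ in range(episodes):
        for _ in range(steps):
            # epsilon-greedy: mostly exploit, occasionally explore
            g = (rng.choice(GAITS) if rng.random() < eps
                 else max(q, key=q.get))
            reward = DISPLACEMENT[g]                   # progress this step
            q[g] += alpha * (reward + gamma * max(q.values()) - q[g])
    return q

q = train()
print(max(q, key=q.get))   # the gait the agent learned to prefer
```

The real swimmer learns a far richer policy (states, hydrodynamics, multi-gait sequencing), but the same reward-driven update is the core mechanism.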

Analysing pendulum videos, the artificial intelligence tool identified variables not present in current mathematics.


An artificial intelligence tool has examined physical systems and, perhaps unsurprisingly, found new ways of describing them.

How do we make sense of the universe? There’s no manual. There’s no prescription.

At its most basic, physics helps us understand the relationships between “observable” variables – things we can measure: velocity, energy, mass, position, angle, temperature, charge. Some variables, like acceleration, can be reduced to more fundamental ones. These are the variables that shape our understanding of the world.
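The point that acceleration reduces to more fundamental variables can be made concrete in a few lines: given sampled positions, velocity and acceleration fall out as successive finite differences. The free-fall trajectory below is a stock example, not from the article.

```python
# Illustration: acceleration "reduces" to position. Given position
# samples x(t), velocity and acceleration follow from central
# finite differences.
def derivative(samples, dt):
    """Central-difference derivative of a list of samples."""
    return [(samples[i + 1] - samples[i - 1]) / (2 * dt)
            for i in range(1, len(samples) - 1)]

dt = 0.01
ts = [i * dt for i in range(1000)]
xs = [0.5 * 9.81 * t * t for t in ts]   # free fall: x = ½gt²

vs = derivative(xs, dt)                  # v ≈ g·t
accs = derivative(vs, dt)                # a ≈ g
print(round(accs[len(accs) // 2], 2))    # → 9.81
```

The AI tool in question works the other way around: it watches raw video and infers how many independent variables a full description needs, without being told what those variables are.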

Tesla has teased its Optimus humanoid robot prototype with a new image ahead of a full unveiling planned for September 30th.

Earlier this year, CEO Elon Musk announced that “Tesla AI Day #2,” promising “many cool updates,” would be held on August 19; the event was later moved to September 30.

The original “Tesla AI Day” held last year was an event focused on the company’s self-driving program. The automaker also unveiled its Dojo supercomputer and announced plans for the “Tesla Bot” humanoid robot – now known as Tesla Optimus.


Advances in the fields of robotics, autonomous driving and computer vision have increased the need for high-performance sensors that can reliably collect data in different environmental conditions. This includes imagers that can operate at near-infrared wavelengths (i.e., 0.7–1.4 µm), thus potentially collecting high-resolution images in complex or unfavorable atmospheric conditions, such as in the presence of rain, fog and smoke.

Researchers at Huazhong University of Science and Technology (HUST), HiSilicon Optoelectronics Co. Limited, and Optical Valley Laboratory have recently developed a near-infrared colloidal quantum dot (CQD) imager. This highly efficient imager was presented in a paper published in Nature Electronics.

“Our group was founded at Wuhan National Laboratory for Optoelectronics, HUST in 2012 and continuously conducts research on CQD materials and devices with Associate Prof. Jianbing Zhang,” Liang Gao, one of the researchers involved in the study, told TechXplore.

Google has described how its researchers combined machine learning (ML) and semantic engines to develop a novel Transformer-based hybrid semantic ML code completion. The increasing complexity of code poses a key challenge to productivity in software engineering, and code completion has been an essential tool for mitigating that complexity in integrated development environments. Intelligent code completion is a context-aware feature in some programming environments that speeds up coding by reducing typos and other common mistakes.

By combining ML with a semantic engine (SE), Google AI researchers created a Transformer-based hybrid semantic code completion model that is now available to internal Google engineers. Their method integrates ML with SEs in three steps: re-ranking SE single-token proposals with ML, applying single- and multi-line completions with ML, and then validating the results with the SE.

A common approach to code completion is to train transformer models, which use a self-attention mechanism for language understanding, to enable code understanding and completion predictions. Google also suggested using ML to extend single-token semantic suggestions into single- and multi-line continuations. Over three months, more than 10,000 Google employees tested the model in eight programming languages.
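The re-ranking step described above can be sketched as follows. The candidate tokens and scores here are made up for illustration; the idea is simply that the semantic engine guarantees validity while the ML model supplies the ordering.

```python
# Hypothetical sketch of hybrid completion: a semantic engine (SE)
# proposes only tokens that are valid in context, and an ML model's
# scores (e.g. log-probabilities) order them.
def rerank(se_candidates, ml_scores):
    """Order SE-valid candidates by ML score, highest first.

    Candidates the model never scored fall to the end.
    """
    return sorted(se_candidates,
                  key=lambda tok: ml_scores.get(tok, float("-inf")),
                  reverse=True)

# SE guarantees each candidate compiles; ML says which is likeliest here.
se_candidates = ["append", "add", "appendleft"]                  # assumed
ml_scores = {"append": -0.2, "add": -3.1, "appendleft": -1.4}    # assumed

print(rerank(se_candidates, ml_scores))
# → ['append', 'appendleft', 'add']
```

The division of labor is the key design choice: the SE filters out suggestions that would not compile, so the ML model's mistakes can reorder but never invalidate the list.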

They claimed that artificial intelligence can solve some of the hardest challenges in delivering dementia care to older adults, especially those with Alzheimer’s disease.

In 2021, the National Library of Medicine reported that more than 6.2 million U.S. residents were living with Alzheimer’s disease.

Researchers from TU Delft have constructed the smallest flow-driven motors in the world. Inspired by iconic Dutch windmills and biological motor proteins, they created a self-configuring flow-driven rotor from DNA that converts energy from an electrical or salt gradient into useful mechanical work. The results open new perspectives for engineering active robotics at the nanoscale.

The article is now published in Nature Physics (“Sustained unidirectional rotation of a self-organized DNA rotor on a nanopore”).

Rotary motors have been the powerhouses of human societies for millennia: from the windmills and waterwheels across the Netherlands and the world to today’s most advanced off-shore wind turbines that drive our green-energy future.

Researchers have developed a new chip-based beam steering technology that provides a promising route to small, cost-effective and high-performance lidar (or light detection and ranging) systems. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is used in a wide range of applications such as autonomous driving, free-space optical communications, 3D holography, biomedical sensing and virtual reality.

“Optical beam steering is a key technology for lidar systems, but conventional mechanical-based beam steering systems are bulky, expensive, sensitive to vibration and limited in speed,” said research team leader Hao Hu from the Technical University of Denmark. “Although devices known as chip-based optical phased arrays (OPAs) can quickly and precisely steer light in a non-mechanical way, so far, these devices have had poor beam quality and a field of view typically below 100 degrees.”

In Optica, Hu and co-author Yong Liu describe their new chip-based OPA that solves many of the problems that have plagued OPAs. They show that the device can eliminate a key optical artifact known as aliasing, achieving beam steering over a large field of view while maintaining high beam quality, a combination that could greatly improve lidar systems.
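To see why aliasing matters for OPAs, consider the textbook phased-array relation (the numbers below are illustrative, not from the paper): a uniform array with element pitch d and per-element phase step Δφ produces beams wherever sin θ = (λ/d)(Δφ/2π + m) for integer m. The m ≠ 0 solutions are aliased “grating lobes,” and they disappear only when the pitch is at most λ/2.

```python
import math

# Back-of-the-envelope phased-array sketch (illustrative values only).
# Beam peaks occur at sin(theta) = (lam/d) * (dphi/(2*pi) + m) for
# integer m; any solution with |sin| <= 1 radiates.
def beam_angles_deg(lam, d, dphi):
    angles = []
    for m in range(-3, 4):
        s = lam / d * (dphi / (2 * math.pi) + m)
        if abs(s) <= 1:
            angles.append(math.degrees(math.asin(s)))
    return sorted(angles)

lam = 1.55e-6   # assumed telecom wavelength, metres
# Coarse pitch (2λ): several grating lobes alias the beam.
print(len(beam_angles_deg(lam, 2 * lam, 0.0)))    # → 5
# Half-wavelength pitch: only the single intended beam survives.
print(len(beam_angles_deg(lam, 0.5 * lam, 0.0)))  # → 1
```

This is the trade-off the new OPA design addresses: half-wavelength pitch suppresses aliasing and widens the usable field of view, but is hard to achieve on chip without crosstalk between closely spaced waveguides.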