DeepL, a German startup valued at $2 billion, has built its own AI models for language translation.

Self-driving cars will soon be able to “think” like human drivers in complex traffic environments, thanks to a cognitive encoding framework built by a multidisciplinary research team from the School of Engineering at the Hong Kong University of Science and Technology (HKUST).
This innovation significantly enhances the safety of autonomous vehicles (AVs), reducing overall traffic risk by 26.3% and cutting potential harm to high-risk road users such as pedestrians and cyclists by an impressive 51.7%. Even the AVs themselves benefited, with their own risk levels lowered by 8.3%. The approach paves the way for a new framework to advance automated vehicle safety.
Existing AVs share a common limitation: their decision-making systems can only make pairwise risk assessments, failing to holistically consider interactions among multiple road users. This contrasts with a proficient driver who, for example, can skillfully navigate an intersection by prioritizing pedestrian protection while slightly compromising the safety of nearby vehicles. Once pedestrians are confirmed to be safe, the driver can then shift focus to those vehicles. This risk-management ability exhibited by human drivers is known as “social sensitivity.”
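To make the contrast concrete, here is a minimal Python sketch of the idea, not the HKUST framework itself: pairwise risks to every road user are combined into a single weighted total, with assumed vulnerability weights that prioritize pedestrians and cyclists, and candidate maneuvers are compared on that aggregate score. The risk function, weights, and maneuver names are illustrative assumptions.

```python
# Toy illustration (not the HKUST framework): aggregate pairwise risks across
# all road users, weighting vulnerable users more heavily, and pick the
# candidate maneuver with the lowest weighted total risk.
from dataclasses import dataclass

@dataclass
class RoadUser:
    kind: str           # "pedestrian", "cyclist", or "vehicle"
    distance_m: float   # predicted closest approach under a candidate maneuver

VULNERABILITY = {"pedestrian": 3.0, "cyclist": 2.5, "vehicle": 1.0}  # assumed weights

def pairwise_risk(distance_m: float) -> float:
    """Simple stand-in risk measure: risk grows as the closest approach shrinks."""
    return 1.0 / max(distance_m, 0.1)

def holistic_risk(users: list[RoadUser]) -> float:
    """Weighted sum over all road users, not just the single closest one."""
    return sum(VULNERABILITY[u.kind] * pairwise_risk(u.distance_m) for u in users)

# Compare two hypothetical maneuvers at an intersection.
brake_hard = [RoadUser("pedestrian", 4.0), RoadUser("vehicle", 1.5)]
keep_speed = [RoadUser("pedestrian", 1.2), RoadUser("vehicle", 5.0)]

best = min([brake_hard, keep_speed], key=holistic_risk)
print("lower-risk maneuver:", "brake_hard" if best is brake_hard else "keep_speed")
```

In this toy scoring, braking hard wins even though it brings the AV closer to another vehicle, because the pedestrian's higher vulnerability weight dominates the aggregate, which mirrors the "social sensitivity" behavior described above.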
Partial differential equations (PDEs) are a class of mathematical problems that represent the interplay of multiple variables, and therefore have predictive power when it comes to complex physical systems. Solving these equations is a perpetual challenge, however, and current computational techniques for doing so are time-consuming and expensive.
Now, research from the University of Utah’s John and Marcia Price College of Engineering is showing a way to speed up this process: encoding those equations in light and feeding them into their newly designed “optical neural engine,” or ONE.
The researchers’ ONE combines diffractive optical neural networks and optical matrix multipliers. Rather than representing PDEs digitally, the researchers encoded them optically, with variables mapped to properties of a light wave, such as its intensity and phase. As a wave passes through the ONE’s series of optical components, those properties gradually shift and change until they ultimately represent the solution to the given PDE.
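As a rough numerical analogue, and not the Utah team's design, the sketch below encodes a sampled function in a complex field (amplitude and phase), passes it through a cascade of phase masks with Fourier mixing standing in for diffractive layers, and applies one dense complex matrix standing in for the optical matrix multiplier. The masks and matrix here are random placeholders rather than components designed to solve any particular PDE.

```python
# Minimal numerical sketch (not the Utah ONE hardware): a PDE variable is
# carried by a complex optical field -- amplitude encodes |u|, phase encodes
# extra state -- and the field passes through simulated optical components.
import numpy as np

rng = np.random.default_rng(0)
N = 64  # number of spatial samples of the field

# Encode an input function u(x) as a complex field: amplitude ~ u, phase ~ du/dx.
x = np.linspace(0.0, 1.0, N)
u = np.sin(2 * np.pi * x)
field = np.sqrt(np.abs(u)) * np.exp(1j * 0.1 * np.gradient(u, x))

# "Diffractive layers": elementwise phase masks followed by free-space mixing,
# approximated here by a unitary DFT (light spreading between planes).
for _ in range(3):
    phase_mask = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
    field = np.fft.fft(field * phase_mask, norm="ortho")

# "Optical matrix multiplier": one dense complex linear transform.
W = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(N)
output_field = W @ field

# Read out intensity at the detector plane; in the real system the components
# would be trained so this readout approximates the PDE solution.
solution_estimate = np.abs(output_field) ** 2
print(solution_estimate[:5])
```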
Coordinated behaviors like swarming—from ant colonies to schools of fish—are found everywhere in nature. Researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have given a nod to nature with a next-generation robot system that’s capable of movement, exploration, transport and cooperation.
A study in Science Advances describing the new soft robotic system was co-led by L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, Physics, and Organismic and Evolutionary Biology in SEAS and the Faculty of Arts and Sciences, in collaboration with Professor Ho-Young Kim at Seoul National University. Their work paves new directions for future, low-power swarm robotics.
The new robots, called link-bots, consist of centimeter-scale, 3D-printed particles strung into V-shaped chains via notched links and are capable of coordinated, life-like movements without any embedded power or control systems. Each particle’s legs are tilted to allow the bot to self-propel when placed on a uniformly vibrating surface.
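A crude 2D toy model, not the SEAS authors' analysis, can convey the mechanism: each particle in a V-shaped chain takes small steps along its own heading, mimicking tilted legs rectifying vibration into motion, while link constraints hold neighbors at a fixed spacing. All step sizes, noise levels, and geometry below are arbitrary assumptions.

```python
# Toy 2D sketch (not the SEAS link-bot model): self-propelled particles in a
# V-shaped chain, coupled only by fixed-length links.
import numpy as np

rng = np.random.default_rng(1)
n_per_arm, spacing, step, noise = 5, 1.0, 0.05, 0.1

# Build a V-shaped chain: two arms meeting at an apex.
left = np.array([[-i * spacing, i * spacing] for i in range(n_per_arm, 0, -1)], float)
right = np.array([[i * spacing, i * spacing] for i in range(0, n_per_arm + 1)], float)
pos = np.vstack([left, right])
link_len = np.linalg.norm(pos[1] - pos[0])   # nominal spacing between neighbors
heading = np.full(len(pos), -np.pi / 2)      # "tilted legs" push the chain toward -y

for _ in range(200):
    # Vibration-driven self-propulsion: a step along each heading plus random jitter.
    heading += noise * rng.normal(size=len(pos))
    pos += step * np.column_stack([np.cos(heading), np.sin(heading)])
    # Notched links: pull each neighboring pair back toward the nominal spacing.
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        dist = np.linalg.norm(d) + 1e-9
        corr = 0.5 * (dist - link_len) * d / dist
        pos[i] += corr
        pos[i + 1] -= corr

print("chain centroid after 200 steps:", pos.mean(axis=0))
```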
Researchers at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) have made a breakthrough in laser technology by using machine learning (ML) to help stabilize a high-power laser.
This advancement, spearheaded by Berkeley Lab’s Accelerator Technology & Applied Physics (ATAP) and Engineering Divisions, promises to accelerate progress in physics, medicine, and energy. The researchers report their work in the journal High Power Laser Science and Engineering.
An autonomous drone carrying water to help extinguish a wildfire in the Sierra Nevada might encounter swirling Santa Ana winds that threaten to push it off course. Rapidly adapting to these unknown disturbances in flight presents an enormous challenge for the drone’s flight control system.
To help such a drone stay on target, MIT researchers developed a new, machine learning-based adaptive control algorithm that could minimize its deviation from its intended trajectory in the face of unpredictable forces like gusty winds.
The study is published on the arXiv preprint server.
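As generic context, not the MIT algorithm, the following sketch shows the basic shape of adaptive control: a simple 1D "drone" model tracks a hover target while an unknown, slowly varying wind force acts on it, and the controller updates an online estimate of that disturbance from the tracking error and cancels it. The dynamics, gains, and wind model are assumptions chosen for illustration.

```python
# Generic adaptive-control sketch (not the MIT method): double-integrator
# "drone" with an unknown wind force, PD-style feedback on a filtered
# tracking error, and an online disturbance estimate adapted from that error.
import numpy as np

dt, steps = 0.02, 2000
lam, k_s, gamma = 2.0, 6.0, 10.0     # assumed gains: error filter, feedback, adaptation

ref = 1.0                            # desired hover position
pos, vel, d_hat = 0.0, 0.0, 0.0      # drone state and online disturbance estimate

for t in range(steps):
    wind = 2.0 + np.sin(0.1 * t * dt)       # unknown, slowly varying gust force

    s = vel + lam * (pos - ref)             # filtered tracking error
    u = -lam * vel - k_s * s - d_hat        # feedback plus cancellation of the estimate

    acc = u + wind                          # dynamics driven by the true (unknown) wind
    vel += acc * dt
    pos += vel * dt

    d_hat += gamma * s * dt                 # adaptation law driven by the tracking error

print(f"final position error: {pos - ref:+.4f}   disturbance estimate: {d_hat:.3f}")
```

The design choice here is standard: because the disturbance estimate is updated whenever the filtered error is nonzero, a persistent push like a steady gust is gradually absorbed into the estimate and canceled, rather than leaving a permanent offset from the intended trajectory.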