
A 2D Device May Help Quantum Computers Stay Cool

PRESS RELEASE — To perform quantum computations, quantum bits (qubits) must be cooled down to temperatures in the millikelvin range (close to -273 degrees Celsius) to slow down atomic motion and minimize noise. However, the electronics used to manage these quantum circuits generate heat, which is difficult to remove at such low temperatures. Most current technologies must therefore separate quantum circuits from their electronic components, causing noise and inefficiencies that hinder the realization of larger quantum systems beyond the lab.

Researchers in EPFL’s Laboratory of Nanoscale Electronics and Structures (LANES) in the School of Engineering, led by Andras Kis, have now fabricated a device that not only operates at extremely low temperatures, but does so with an efficiency comparable to that of current technologies at room temperature.

RACER Speeds Into a Second Phase With Robotic Fleet Expansion and Another Experiment Success

The Robotic Autonomy in Complex Environments with Resiliency (RACER) program successfully tested autonomous movement on a new, much larger fleet vehicle – a significant step in scaling up the adaptability and capability of the underlying RACER algorithms.

The RACER Heavy Platform (RHP) vehicles are 12-ton, 20-foot-long, skid-steer tracked vehicles – similar in size to forthcoming robotic and optionally manned combat/fighting vehicles. The RHPs complement the 2-ton, 11-foot-long, Ackermann-steered, wheeled RACER Fleet Vehicles (RFVs) already in use.

“Having two radically different types of vehicles helps us advance towards RACER’s goal of platform-agnostic autonomy in complex, mission-relevant off-road environments that are significantly more unpredictable than on-road conditions,” said Stuart Young, RACER program manager.

How do you make a robot smarter?

Teaching robots to ask for help is key to making them safer and more efficient.

Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don’t know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. Telling a robot to pick up a bowl from a table with only one bowl is fairly clear. But telling a robot to pick up a bowl when there are five bowls on the table generates a much higher degree of uncertainty — and triggers the robot to ask for clarification.

Because tasks are typically more complex than a simple “pick up a bowl” command, the engineers use large language models (LLMs) — the technology behind tools such as ChatGPT — to gauge uncertainty in complex environments. LLMs give robots powerful capabilities for following human language, but their outputs are still frequently unreliable, said Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton and the senior author of a study outlining the new method.
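The general idea can be illustrated with a minimal sketch, which is a simplification rather than the study's actual method: score each candidate action with a language model, turn the scores into a probability distribution, and have the robot ask for clarification whenever no single option stands out. The function llm_log_scores and the 0.6 confidence threshold below are hypothetical placeholders used only for illustration.

import math
from typing import Dict, List


def llm_log_scores(instruction: str, options: List[str]) -> List[float]:
    """Hypothetical placeholder: return a score per candidate action.

    In a real system these scores would come from an LLM's token
    probabilities for each option; here we fake them with simple
    keyword overlap so the sketch runs on its own.
    """
    words = instruction.lower().split()
    return [float(sum(w in option for w in words)) for option in options]


def softmax(scores: List[float]) -> List[float]:
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def decide(instruction: str, options: List[str], threshold: float = 0.6) -> Dict:
    """Pick an action, or ask for help if the distribution is too flat."""
    probs = softmax(llm_log_scores(instruction, options))
    best_p = max(probs)
    best_option = options[probs.index(best_p)]
    if best_p < threshold:
        # No option is confidently preferred: trigger a clarification request.
        return {"action": "ask_for_clarification", "candidates": options}
    return {"action": best_option, "confidence": best_p}


if __name__ == "__main__":
    # One bowl on the table: the instruction maps cleanly to a single option.
    print(decide("pick up the bowl", ["pick up the blue bowl"]))
    # Five bowls: several options look equally plausible, so the robot asks.
    print(decide("pick up the bowl",
                 ["pick up the blue bowl", "pick up the red bowl",
                  "pick up the green bowl", "pick up the white bowl",
                  "pick up the metal bowl"]))

With a single bowl on the table, one option dominates and the sketch acts on it; with five bowls, the distribution over options is nearly uniform, so it falls back to asking for further directions, mirroring the behavior described above.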
