AlphaTensor, a new version of DeepMind's AlphaZero, discovered faster algorithms for matrix multiplication, a core operation in computing that underpins thousands of everyday computer tasks.
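For context, the classic way to beat the schoolbook method is to trade multiplications for additions. The sketch below is Strassen's well-known 2×2 construction in plain Python, included purely as an illustration of the kind of algorithm being discovered; it is not AlphaTensor's result, which searches for analogous low-rank decompositions at larger matrix sizes.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications instead of 8 (Strassen, 1969).
    Illustrative only; AlphaTensor looks for similar decompositions with even fewer
    multiplications for larger block sizes."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]

    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4,           m1 - m2 + m3 + m6]])

A = np.random.rand(2, 2)
B = np.random.rand(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)  # matches the standard product
```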
Researchers are working on water-based microprocessors that could one day serve as a more versatile alternative to today's silicon wafer architecture, with applications ranging from AI to DNA synthesis and likely beyond.
The chips in question are still in the prototype stage, so don’t expect processors with built-in water cooling just yet, but the way they work is really exciting. They use a technique called ionics, which involves manipulating different ion species in a liquid, as opposed to the standard electrons shooting through our semiconductors today.
In a new study, Yann LeCun, machine learning pioneer and head of AI at Meta, lays out a vision for AIs that learn about the world more like humans do.
Add analog and air-driven to the list of control system options for soft robots.
In a study published online this week, robotics researchers, engineers and materials scientists from Rice University and Harvard University showed it is possible to make programmable, nonelectronic circuits that control the actions of soft robots by processing information encoded in bursts of compressed air.
“Part of the beauty of this system is that we’re really able to reduce computation down to its base components,” said Rice undergraduate Colter Decker, lead author of the study in the Proceedings of the National Academy of Sciences. He said electronic control systems have been honed and refined for decades, and recreating computer circuitry “with analogs to pressure and flow rate instead of voltage and current” made it easier to incorporate pneumatic computation.
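To give a rough feel for what computing “with analogs to pressure and flow rate instead of voltage and current” means, here is a toy Python model, my own illustration rather than the Rice/Harvard circuitry, in which a control line above a threshold pressure pinches a channel shut; that single behavior is enough to mimic NOT- and NAND-style gates.

```python
# Toy model of pneumatic logic: pressure plays the role of voltage.
# Thresholds and pressure levels are hypothetical, chosen only for illustration;
# this is not the valve design from the PNAS study.

SUPPLY = 100.0      # kPa, "logic high" supply pressure
ATMOSPHERE = 0.0    # kPa, vented "logic low"
THRESHOLD = 50.0    # control pressure above which a valve pinches shut

def pneumatic_not(control: float) -> float:
    """Normally-open valve: a high control pressure closes the channel and the
    output vents to atmosphere -- the pneumatic analog of a NOT gate."""
    return ATMOSPHERE if control > THRESHOLD else SUPPLY

def pneumatic_nand(a: float, b: float) -> float:
    """Two such valves arranged so the output only vents when BOTH control
    lines are pressurized -- the analog of a NAND gate, which is universal."""
    return ATMOSPHERE if (a > THRESHOLD and b > THRESHOLD) else SUPPLY

# Truth-table check: the pressure levels follow the electronic truth table.
for a in (ATMOSPHERE, SUPPLY):
    for b in (ATMOSPHERE, SUPPLY):
        print(f"in = ({a:5.1f}, {b:5.1f}) kPa -> out = {pneumatic_nand(a, b):5.1f} kPa")
```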
Engineers at UC Riverside have unveiled an air-powered computer memory that can be used to control soft robots. The innovation overcomes one of the biggest obstacles to advancing soft robotics: the fundamental mismatch between pneumatics and electronics. The work is published in the open-access journal PLOS ONE.
Pneumatic soft robots use pressurized air to move soft, rubbery limbs and grippers, and are better suited than traditional rigid robots to delicate tasks. They are also safer for humans to be around. Baymax, the healthcare companion robot in Disney’s 2014 animated film Big Hero 6, is a pneumatic robot for good reason.
But existing systems for controlling pneumatic soft robots still use electronic valves and computers to maintain the position of the robot’s moving parts. These electronic parts add considerable cost, size, and power demands to soft robots, limiting their feasibility.
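As a rough mental model of air-driven memory (again a toy sketch of my own, not UC Riverside’s microfluidic valve array), think of a bistable pneumatic element that holds its last state indefinitely without electronics, changing only when a sufficiently strong set or reset pressure pulse arrives:

```python
# Toy bistable pneumatic memory bit. Purely illustrative; the published device
# uses arrays of pneumatic valves, not this exact abstraction.

HIGH = 100.0   # kPa, stored "1"
LOW = 0.0      # kPa, stored "0"
PULSE = 60.0   # kPa, minimum pulse pressure that flips the element

class PneumaticBit:
    def __init__(self):
        self.state = LOW  # starts vented to atmosphere

    def set(self, pulse_kpa: float) -> None:
        # A strong enough SET pulse latches the bit high; once latched,
        # no further input (and no electricity) is needed to hold it.
        if pulse_kpa >= PULSE:
            self.state = HIGH

    def reset(self, pulse_kpa: float) -> None:
        if pulse_kpa >= PULSE:
            self.state = LOW

    def read(self) -> int:
        return 1 if self.state == HIGH else 0

bit = PneumaticBit()
bit.set(80.0)      # SET pulse arrives
print(bit.read())  # -> 1, held with no continuous power
bit.reset(70.0)    # RESET pulse arrives
print(bit.read())  # -> 0
```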
Artificial intelligence presents a major challenge to conventional computing architecture. In standard models, memory storage and computing take place in different parts of the machine, and data must move from its area of storage to a CPU or GPU for processing.
The problem with this design is that movement takes time. Too much time. You can have the most powerful processing unit on the market, but its performance will be limited as it idles waiting for data, a problem known as the “memory wall” or “bottleneck.”
When compute outpaces memory transfer, latency is unavoidable. These delays become serious problems when dealing with the enormous amounts of data essential for machine learning and AI applications.
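A back-of-the-envelope way to see where the wall sits is to compare a chip's compute-to-bandwidth ratio against a workload's arithmetic intensity; the hardware numbers below are hypothetical placeholders, but the arithmetic is the standard roofline-style estimate.

```python
# Roofline-style estimate of when a kernel is memory-bound.
# Hardware numbers are hypothetical placeholders, not a specific chip.

peak_flops = 100e12   # 100 TFLOP/s of raw compute
memory_bw = 1e12      # 1 TB/s of DRAM bandwidth
machine_balance = peak_flops / memory_bw   # FLOPs the chip can do per byte moved

def arithmetic_intensity_matmul(n: int, bytes_per_elem: int = 4) -> float:
    """FLOPs per byte for an n x n matrix multiply: 2*n^3 FLOPs,
    3*n^2 matrix elements moved at least once."""
    flops = 2 * n**3
    bytes_moved = 3 * n**2 * bytes_per_elem
    return flops / bytes_moved

for n in (64, 512, 4096):
    ai = arithmetic_intensity_matmul(n)
    bound = "compute-bound" if ai >= machine_balance else "memory-bound (hits the wall)"
    print(f"n={n:5d}: {ai:8.1f} FLOP/byte vs balance {machine_balance:.0f} -> {bound}")
```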
Researchers from the Department of Materials Science and Engineering at Texas A&M University have used an Artificial Intelligence Materials Selection framework (AIMS) to discover a new shape memory alloy. The alloy showed the highest operational efficiency achieved thus far for nickel-titanium-based materials. In addition, their data-driven framework offers a proof of concept for future materials development.
This study was recently published in the Acta Materialia journal.
Shape memory alloys can deform when cold and then return to their original shape when heated, which lets them replace hydraulic or pneumatic actuators in fields that need compact, lightweight, solid-state actuation. This unique property is critical for applications such as airplane wings, jet engines and automotive components, which must withstand repeated, recoverable large shape changes.
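The paper's AIMS framework is far more sophisticated than this, but the basic shape of data-driven alloy selection can be sketched in a few lines: fit a surrogate model to already-characterized compositions, then rank untested candidates by the predicted property. Everything below (compositions, the "efficiency" values, the model choice) is a synthetic placeholder, not the published NiTi dataset.

```python
# Minimal sketch of data-driven alloy screening. Compositions, property values,
# and the model are synthetic stand-ins, not the AIMS framework itself.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Features: fractions of Ni, Ti, and a ternary addition per candidate alloy.
measured_X = rng.dirichlet([8, 8, 1], size=40)          # 40 alloys already characterized
measured_y = rng.normal(loc=0.5, scale=0.1, size=40)    # stand-in "actuation efficiency"

candidate_X = rng.dirichlet([8, 8, 1], size=500)        # 500 untested compositions

# Fit a surrogate model on the measured data, then rank candidates by prediction.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(measured_X, measured_y)
scores = model.predict(candidate_X)

best = np.argsort(scores)[::-1][:5]
for i in best:
    ni, ti, x = candidate_X[i]
    print(f"Ni {ni:.2f} / Ti {ti:.2f} / X {x:.2f}  ->  predicted efficiency {scores[i]:.3f}")
```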
Deep generative models are a popular strategy for generating high-quality samples of images, text, and audio, and for improving semi-supervised learning, domain generalization, and imitation learning. Current deep generative models, however, have shortcomings such as unstable training objectives (GANs) and low sample quality (VAEs, normalizing flows). Recent diffusion and score-based models attain sample quality comparable to GANs without adversarial training, but their stochastic sampling procedure is slow. New strategies have also been proposed to stabilize the training of CNN-based and ViT-based GAN models.
Backward ODE samplers (normalizing flows) have been suggested to accelerate the sampling process, but these approaches have yet to outperform their SDE counterparts. The authors introduce a new “Poisson flow” generative model (PFGM) that exploits a surprising fact from physics that extends to N dimensions. They interpret N-dimensional data items x (say, images) as positive electric charges in the z = 0 plane of an (N+1)-dimensional space filled with a viscous liquid like honey; motion through such a fluid converts any planar charge distribution into a uniform angular distribution.
A positive charge released at z > 0 is repelled by the other charges and moves away from the plane, eventually crossing an imaginary hemisphere of radius r. The authors show that, in the r → ∞ limit, charges released just above z = 0 cross the hemisphere with a uniform distribution. The forward process can therefore be reversed: sample a uniform distribution of negative charges on the hemisphere and track their paths back to the z = 0 plane, where they land distributed according to the data distribution.
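To make the construction concrete, here is a tiny NumPy sketch (a 1-D data toy of my own, not the authors' released code) of the backward process: treat noisy data points in the z = 0 plane as positive charges, compute the resulting Poisson field, and follow its field lines from a large half-circle back down to the plane.

```python
# Toy illustration of the Poisson-flow idea with 1-D data augmented by z (so N + 1 = 2).
# Step sizes, radii, and the Euler integrator are ad-hoc choices for illustration only.
import numpy as np

rng = np.random.default_rng(1)
data = rng.choice([-2.0, 0.5, 2.5], size=200) + 0.05 * rng.normal(size=200)
charges = np.stack([data, np.zeros_like(data)], axis=1)   # data sits in the z = 0 plane

def poisson_field(x):
    """Empirical field at x = (x1, z): mean of (x - x_i) / |x - x_i|^(N+1) over the
    data charges (the normalization does not change the field lines)."""
    diff = x - charges
    dist = np.linalg.norm(diff, axis=1, keepdims=True) + 1e-6
    return (diff / dist**2).mean(axis=0)

def sample(radius=20.0, step=0.02, max_steps=5000):
    """Start on the upper half-circle of radius r and follow the field lines
    backward (against the field) until reaching the data plane."""
    theta = rng.uniform(0.0, np.pi)
    x = radius * np.array([np.cos(theta), np.sin(theta)])
    for _ in range(max_steps):
        e = poisson_field(x)
        x = x - step * e / (np.linalg.norm(e) + 1e-12)   # unit step along -E
        if x[1] <= 1e-3:                                  # landed on z = 0
            break
    return x[0]

samples = np.array([sample() for _ in range(50)])
print(np.round(np.sort(samples), 2))   # landings cluster near the data modes at -2, 0.5, 2.5
```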
Tesla announced today that it is moving away from using ultrasonic sensors in its suite of Autopilot sensors in favor of its camera-only “Tesla Vision” system.
Last year, Tesla announced it would transition to its “Tesla Vision” Autopilot without radar and start producing vehicles without a front-facing radar.
Originally, the suite of Autopilot sensors, which Tesla claimed included everything needed to eventually achieve full self-driving capability, comprised eight cameras, a front-facing radar, and several ultrasonic sensors placed around its vehicles.