Quantum critical points are thresholds that mark the transition of materials between different electronic phases at absolute zero, near which the materials often exhibit exotic physical properties.
One of these critical points is the so-called Kondo-breakdown quantum critical point, which marks the collapse of the Kondo effect (the quantum phenomenon by which conduction electrons screen localized magnetic moments in metals) and the emergence of new physics in its place.
Researchers at Ludwig Maximilian University of Munich, Rutgers University, and Seoul National University set out to further study the dynamical scaling associated with the Kondo-breakdown quantum critical point, using the periodic Anderson model, a theoretical framework describing heavy-fermion materials.
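For orientation, the periodic Anderson model couples a conduction band to a lattice of localized f-electrons through a hybridization term. A standard textbook form of its Hamiltonian (conventions may differ in detail from the paper's) is:

```latex
H = \sum_{\mathbf{k},\sigma} \epsilon_{\mathbf{k}}\, c^{\dagger}_{\mathbf{k}\sigma} c_{\mathbf{k}\sigma}
  + \epsilon_f \sum_{i,\sigma} f^{\dagger}_{i\sigma} f_{i\sigma}
  + U \sum_{i} n^{f}_{i\uparrow} n^{f}_{i\downarrow}
  + V \sum_{i,\sigma} \left( c^{\dagger}_{i\sigma} f_{i\sigma} + \mathrm{h.c.} \right)
```

The first term is the conduction band, the next two describe localized f-electrons with on-site repulsion U, and V hybridizes the two; Kondo breakdown corresponds to the effective hybridization collapsing, decoupling the local moments from the conduction electrons.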
Palladium-liquid gallium catalyst transforms chemical manufacturing, boosting speed, safety and sustainability
A major breakthrough in liquid catalysis is transforming how essential products are made, promising chemical manufacturing that is faster, safer and more sustainable.
Researchers from Monash University, the University of Sydney, and RMIT University have developed a liquid catalyst that could transform chemical production across a range of industries—from pharmaceuticals and sustainable products to advanced materials.
By dissolving palladium in liquid gallium, the team, led by Associate Professor Md. Arifur Rahim from Monash University’s Department of Chemical and Biological Engineering, created a self-regenerating catalytic system with unprecedented efficiency.
A research team has developed the world’s first smartphone-type OLED panel that can freely transform its shape while simultaneously functioning as a speaker—all without sacrificing its ultra-thin, flexible properties.
The study, led by Professor Su Seok Choi of POSTECH’s (Pohang University of Science and Technology) Department of Electrical Engineering, was conducted by Ph.D. candidates Jiyoon Park, Junhyuk Shin, Inpyo Hong, and Sanghyun Han, together with Dr. Seungmin Nam, and was published in the March online edition of npj Flexible Electronics.
As the display industry rapidly advances toward flexible technologies—bendable, foldable, rollable, and stretchable—most implementations still rely on mechanical structures such as hinges, sliders, or motorized arms. While these allow for shape adjustment, they also add thickness and weight and constrain form-factor design. These drawbacks are particularly restrictive for smartphones and wearable electronics, where compactness and elegance are critical.
Bar graphs and other charts provide a simple way to communicate data, but are, by their visual nature, difficult to translate for readers who are blind or have low vision. Designers have developed methods for converting these visuals into “tactile charts,” but guidelines for doing so are extensive (for example, the Braille Authority of North America’s 2022 guidebook is 426 pages long). The process also requires familiarity with multiple pieces of software, as designers often draft a chart in a program like Adobe Illustrator and then translate it into Braille using another application.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have now developed an approach that streamlines the design process for tactile chart designers. Their program, called “Tactile Vega-Lite,” can take data from something like an Excel spreadsheet and turn it into both a standard visual chart and a touch-based one. Design standards are hardwired as default rules within the program to help educators and designers automatically create accessible tactile charts.
The tool could make it easier for blind and low-vision readers to understand many graphics, such as a bar chart comparing minimum wages across states or a line graph tracking countries’ GDPs over time. To bring a design into the real world, you can tweak your chart in Tactile Vega-Lite and then send its file to a Braille embosser (which renders it as raised, touch-readable dots).
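Vega-Lite, which the tool extends, describes charts as declarative JSON specifications. Below is a hypothetical sketch of what a Tactile Vega-Lite spec could look like, written as a Python dict; the `tactile` block and its property names are illustrative assumptions, not the tool's confirmed API:

```python
# Hypothetical Tactile Vega-Lite specification, shown as a Python dict.
# The "tactile" block and its property names are illustrative guesses,
# not the tool's confirmed API.
spec = {
    "data": {"url": "minimum_wage_by_state.csv"},  # e.g., exported from Excel
    "mark": "bar",
    "encoding": {
        "x": {"field": "state", "type": "nominal"},
        "y": {"field": "min_wage", "type": "quantitative"},
    },
    # Defaults a tactile compiler might apply automatically:
    "tactile": {
        "labels": "braille",       # translate axis labels to Braille
        "fill": "raised-texture",  # render bar fills as embossable texture
        "spacing": "bana-2022",    # spacing rules from the BANA guidebook
    },
}
```

The appeal of the declarative approach is that one spec can be compiled twice: once to a visual chart and once to an embosser-ready tactile layout, with the hundreds of pages of guidelines baked into the defaults rather than applied by hand.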
From virtual reality to rehabilitation and communication, haptic technology has revolutionized the way humans interact with the digital world. While early haptic devices focused on single-sensory cues like vibration-based notifications, modern advancements have paved the way for multisensory haptic devices that integrate various forms of touch-based feedback, including vibration, skin stretch, pressure, and temperature.
Recently, a team of experts, including Rice University’s Marcia O’Malley and Daniel Preston, graduate student Joshua Fleck, alumni Zane Zook ’23 and Janelle Clark ’22, and other collaborators, published an in-depth review in Nature Reviews Bioengineering analyzing the current state of wearable multisensory haptic technology, outlining its challenges, advancements, and real-world applications.
Haptic devices, which enable communication through touch, have evolved significantly since their introduction in the 1960s. Initially, they relied on rigid, grounded mechanisms acting as user interfaces, generating force-based feedback from virtual environments.
Imagine an automated delivery vehicle rushing to complete a grocery drop-off while you are hurrying to meet friends for a long-awaited dinner. At a busy intersection, you both arrive at the same time. Do you slow down to give it space as it maneuvers around a corner? Or do you expect it to stop and let you pass, even if normal traffic etiquette suggests it should go first?
“As self-driving technology becomes a reality, these everyday encounters will define how we share the road with intelligent machines,” says Dr. Jurgis Karpus from the Chair of Philosophy of Mind at LMU. He explains that the arrival of fully automated self-driving cars signals a shift from merely using intelligent machines, like Google Translate or ChatGPT, to actively interacting with them. The key difference? In busy traffic, our interests will not always align with those of the self-driving cars we encounter. We have to interact with them, even if we ourselves are not using them.
In a study published recently in the journal Scientific Reports, researchers from LMU Munich and Waseda University in Tokyo found that people are far more likely to take advantage of cooperative artificial agents than of similarly cooperative fellow humans. “After all, cutting off a robot in traffic doesn’t hurt its feelings,” says Karpus, lead author of the study.
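Experiments of this kind typically rest on one-shot social-dilemma games, in which exploiting a cooperative partner pays more than cooperating back. A minimal sketch of that logic, with illustrative payoffs not taken from the paper:

```python
# One-shot social-dilemma payoffs (illustrative values, not from the paper).
# (my_move, partner_move) -> my_payoff
PAYOFFS = {
    ("cooperate", "cooperate"): 3,  # both yield: smooth traffic for everyone
    ("cooperate", "defect"):    0,  # I yield, the other barges through
    ("defect",    "cooperate"): 5,  # I exploit a partner who yields
    ("defect",    "defect"):    1,  # neither yields: gridlock
}

def exploitation_rate(trials):
    """Share of players who defected despite expecting the partner to cooperate."""
    expect_coop = [t for t in trials if t["expected_partner"] == "cooperate"]
    defected = [t for t in expect_coop if t["move"] == "defect"]
    return len(defected) / len(expect_coop) if expect_coop else 0.0

# The reported pattern: this rate comes out markedly higher when the
# cooperative partner is an artificial agent than when it is a human.
```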
Artificial intelligence (AI) can perform complex calculations and analyze data faster than any human, but doing so requires enormous amounts of energy. The human brain is also an incredibly powerful computer, yet it consumes very little energy.
As technology companies expand their AI infrastructure, a new approach to AI’s “thinking,” developed by researchers including Texas A&M University engineers, mimics the human brain and has the potential to revolutionize the AI industry.
Dr. Suin Yi, assistant professor of electrical and computer engineering at Texas A&M’s College of Engineering, is on a team of researchers that developed “Super-Turing AI,” which operates more like the human brain. Instead of separating processes such as learning and memory and then migrating huge amounts of data between them, as current systems do, the new AI integrates them.
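One way to picture learning and memory living in the same place is a local, Hebbian-style update, in which each synaptic weight adjusts using only its own activity rather than data shuttled between separate memory and compute stages. The sketch below is a generic brain-inspired rule for illustration, not the team's actual algorithm:

```python
import numpy as np

# Generic Hebbian-style local learning: each weight (the "memory") is
# updated in place from its own pre- and post-synaptic activity, so no
# bulk data migration between compute and storage is needed. This is an
# illustration, not the Super-Turing AI team's actual method.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(8, 4))   # synaptic weights

def hebbian_step(w, x, lr=0.01, decay=0.001):
    y = np.tanh(x @ w)                   # post-synaptic activity
    w += lr * np.outer(x, y)             # strengthen co-active connections
    w -= decay * w                       # mild decay keeps weights bounded
    return w, y

x = rng.normal(size=8)                   # pre-synaptic input
w, y = hebbian_step(w, x)
```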
A smaller, lighter and more energy-efficient computer, demonstrated at the University of Michigan, could help save weight and power for autonomous drones and rovers, with implications for autonomous vehicles more broadly.
The autonomous controller has among the lowest power requirements reported, according to the study published in Science Advances. It operates at a mere 12.5 microwatts—in the ballpark of a pacemaker.
In testing, a rolling robot using the controller was able to pursue a target zig-zagging down a hallway with the same speed and accuracy as with a conventional digital controller. In a second trial, controlling a lever arm that automatically repositioned itself, the new controller did just as well.
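Target pursuit of this kind is classically handled by a proportional-derivative loop. The sketch below is a generic digital version with hypothetical gains and sensor readings, not the paper's implementation:

```python
# Generic digital stand-in for a target-pursuit control loop.
# Gains, update rate, and bearing readings are hypothetical.
KP, KD = 1.2, 0.3      # proportional and derivative gains
DT = 0.02              # 50 Hz control period, seconds

def pd_step(error, prev_error):
    """One proportional-derivative update of the steering command."""
    derivative = (error - prev_error) / DT
    return KP * error + KD * derivative

prev_error = 0.0
for bearing in [0.40, 0.28, 0.10, -0.05]:  # bearing to target, radians
    command = pd_step(bearing, prev_error)
    prev_error = bearing
    print(f"steering command: {command:+.3f}")
```

Each update steers toward the target in proportion to the bearing error, while the derivative term damps overshoot.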
When it comes to haptic feedback, most technologies are limited to simple vibrations. But our skin is loaded with tiny sensors that detect pressure, vibration, stretching and more. Now, Northwestern University engineers have unveiled a new technology that creates precise movements to mimic these complex sensations.
The study, “Full freedom-of-motion actuators as advanced haptic interfaces,” is published in the journal Science.
Resting on the skin, the compact, lightweight, wireless device applies force in any direction to generate a variety of sensations, including vibration, stretching, pressure, sliding and twisting. The device can also combine sensations and operate quickly or slowly to simulate a more nuanced, realistic sense of touch.
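One way to see how a single actuator can combine sensations is to treat its output at each instant as a three-dimensional force vector on the skin: the normal component carries pressure and vibration, while the tangential components carry stretch, slide, and twist. The sketch below is generic vector composition, not the Northwestern team's control scheme:

```python
import numpy as np

# Compose two sensations into one skin-force trajectory: a 40 Hz vibration
# along the normal (z) axis plus a slow tangential (x) stretch ramp.
# Generic vector math for illustration, not the device's actual controller.
t = np.linspace(0.0, 0.5, 1000)                   # 0.5 s timeline
vibration = 0.2 * np.sin(2 * np.pi * 40 * t)      # normal force, newtons
stretch = 0.1 * (t / t[-1])                       # tangential ramp, newtons

force = np.stack([stretch, np.zeros_like(t), vibration], axis=1)  # fx, fy, fz
print(force[-1])  # commanded force vector at the final time step
```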