
Hybrid model reveals people act less rationally in complex games, more predictably in simple ones

Throughout their everyday lives, humans make a wide range of decisions that can affect their well-being, health, social connections, and finances. Understanding how people make these decisions is a key objective of many behavioral science studies, as it could in turn help devise interventions that encourage better choices.

Researchers at Princeton University, Boston University and other institutes used machine learning to predict the strategic decisions of humans in various games. Their paper, published in Nature Human Behaviour, shows that a model trained on human decisions could predict the strategic choices of players with high accuracy.

“Our main motivation is to use modern computational tools to uncover the cognitive mechanisms that drive how people behave in strategic situations,” Jian-Qiao Zhu, first author of the paper, told Phys.org.
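As a rough illustration of the general setup, and not the authors' hybrid model, the sketch below fits a simple classifier to placeholder human-choice data for 2x2 games; the features, the data, and the noisy "human" decision rule are all invented for illustration.

```python
# Illustrative sketch only -- not the authors' hybrid model.
# Fits a simple classifier to placeholder human-choice data for 2x2 games.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_games = 1000
X = rng.uniform(0, 10, size=(n_games, 8))   # 8 payoff entries per 2x2 game (both players)

# Toy "human" rule: favor action 0 when its own payoffs look better, plus noise.
logits = (X[:, 0] + X[:, 1]) - (X[:, 2] + X[:, 3]) + rng.normal(0, 2, n_games)
y = (logits > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```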

Can ChatGPT actually ‘see’ red? New study results are nuanced

ChatGPT works by analyzing vast amounts of text, identifying patterns and synthesizing them to generate responses to users’ prompts. Color metaphors like “feeling blue” and “seeing red” are commonplace throughout the English language, and therefore comprise part of the dataset on which ChatGPT is trained.

But while ChatGPT has “read” billions of words about what it might mean to feel blue or see red, it has never actually seen a blue sky or a red apple in the way humans have. This raises the questions: Do embodied experiences—the human visual system’s capacity to perceive color—allow people to understand colorful language in ways that go beyond ChatGPT’s purely textual understanding? Or is language alone, for both AI and humans, sufficient to understand color metaphors?

New results from a study published in Cognitive Science led by Professor Lisa Aziz-Zadeh and a team of university and industry researchers offer some insights into those questions, and raise even more.

Scientists unlock key manufacturing challenge for next-generation optical chips

Researchers at the University of Strathclyde have developed a new method for assembling ultra-small, light-controlling devices, paving the way for scalable manufacturing of advanced optical systems used in quantum technologies, telecommunications and sensing.

The study, published in Nature Communications, centers on photonic crystal cavities (PhCCs), micron-scale structures that trap and manipulate light with extraordinary precision. These are essential components for high-performance technologies ranging from quantum computing to photonic artificial intelligence.

Until now, the creation of large arrays of PhCCs has been severely limited by the tiny variations introduced during fabrication. Even nanometer-scale imperfections can drastically shift each device’s optical properties, making it impossible to build arrays of identical units directly on-chip.

Microrobots shaped and steered by metal patches could aid drug delivery and pollution cleanup

Researchers at the University of Colorado Boulder have created a new way to build and control tiny particles that can move and work like microscopic robots, offering a powerful tool with applications in biomedical and environmental research.

The study, published in Nature Communications, describes a new method of fabrication that combines high-precision 3D printing, called two-photon lithography, with a microstenciling technique. The team prints both the particle and its stencil together, then deposits a thin layer of metal—such as gold, platinum or cobalt—through the stencil’s openings. When the stencil is removed, a metal patch remains on the particle.

The particles, invisible to the naked eye, can be made in almost any shape and patterned with surface patches as small as 0.2 microns—more than 500 times thinner than a human hair. The metal patches guide how the particles move when exposed to electric or magnetic fields, or chemical gradients.

Pretrained jet foundation model successfully utilized for tau reconstruction

Simulating data in particle physics is expensive and not perfectly accurate. To get around this, researchers are now exploring the use of foundation models—large AI models trained in a general, task-agnostic way on large amounts of data.

Just as language models can be pretrained on vast amounts of internet text before being fine-tuned for specific tasks, these models can learn from large datasets of particle jets, even without labels.

After the pretraining, they can be fine-tuned to solve specific problems using much less data than traditional approaches.
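A minimal, hypothetical sketch of that pretrain-then-fine-tune pattern is shown below; the architecture, the masked-reconstruction objective, and the data shapes are assumptions for illustration, not the actual jet foundation model.

```python
# Hypothetical sketch of "pretrain on unlabeled jets, then fine-tune with few labels".
# Shapes, architecture, and training objectives are illustrative assumptions.
import torch
import torch.nn as nn

N_FEATURES = 16        # assumed per-particle features (pt, eta, phi, ...)
MAX_PARTICLES = 64     # assumed particles per jet

class JetBackbone(nn.Module):
    def __init__(self, d_model=128):
        super().__init__()
        self.embed = nn.Linear(N_FEATURES, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, x):                   # x: (batch, particles, features)
        return self.encoder(self.embed(x))  # (batch, particles, d_model)

backbone = JetBackbone()

# Stage 1: self-supervised pretraining on unlabeled jets (masked reconstruction).
recon_head = nn.Linear(128, N_FEATURES)
opt = torch.optim.Adam(list(backbone.parameters()) + list(recon_head.parameters()), lr=1e-4)
jets = torch.randn(32, MAX_PARTICLES, N_FEATURES)   # placeholder unlabeled batch
mask = torch.rand(32, MAX_PARTICLES, 1) < 0.15      # hide ~15% of particles
pred = recon_head(backbone(jets.masked_fill(mask, 0.0)))
loss = nn.functional.mse_loss(pred[mask.expand_as(jets)], jets[mask.expand_as(jets)])
opt.zero_grad()
loss.backward()
opt.step()

# Stage 2: fine-tune a small head for tau identification using far fewer labeled jets.
tau_head = nn.Linear(128, 1)
ft_opt = torch.optim.Adam(list(backbone.parameters()) + list(tau_head.parameters()), lr=1e-5)
labeled_jets = torch.randn(8, MAX_PARTICLES, N_FEATURES)   # much smaller labeled batch
labels = torch.randint(0, 2, (8, 1)).float()
logits = tau_head(backbone(labeled_jets).mean(dim=1))      # pool over particles
ft_loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
ft_opt.zero_grad()
ft_loss.backward()
ft_opt.step()
```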

AI reveals astrocytes play a ‘starring’ role in dynamic brain function

Long overlooked and underestimated, glial cells—non-neuronal cells that support, protect and communicate with neurons—are finally stepping into the neuroscience spotlight. A new Florida Atlantic University study highlights the surprising influence of a particular glial cell, revealing that it plays a much more active and dynamic role in brain function than previously thought.

Using sophisticated computational modeling and artificial intelligence, researchers discovered how astrocytes, star-shaped glial cells, subtly—but significantly—modulate communication between neurons, especially during highly coordinated, synchronous brain activity.
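As a toy illustration of the kind of modulation described, and not the FAU study's model, the sketch below couples two simplified neurons through a synapse whose strength is scaled by a slowly varying "astrocyte" variable; all parameters are arbitrary.

```python
# Toy illustration only -- not the FAU study's model.
# One synapse whose strength is slowly modulated by an "astrocyte" gain variable,
# showing how glial dynamics can reshape neuron-to-neuron communication.

dt, T = 0.1, 2000.0                       # time step and duration in ms
steps = int(T / dt)
v_pre, v_post, astro = -65.0, -65.0, 0.2  # membrane potentials (mV) and astrocyte gain
post_spikes = 0

for _ in range(steps):
    # Presynaptic neuron: leaky dynamics driven by a constant current; reset on spiking.
    v_pre += dt * (-(v_pre + 65.0) + 20.0) / 10.0
    pre_spike = v_pre > -50.0
    if pre_spike:
        v_pre = -65.0
    # Astrocyte gain: rises a little with each presynaptic spike, decays slowly.
    astro += dt * (-astro / 500.0) + (0.05 if pre_spike else 0.0)
    # Postsynaptic neuron: leak toward rest plus a synaptic kick scaled by the astrocyte gain.
    v_post += dt * (-(v_post + 65.0)) / 10.0 + (8.0 * astro if pre_spike else 0.0)
    if v_post > -50.0:
        v_post = -65.0
        post_spikes += 1

print(f"postsynaptic spikes in {T:.0f} ms: {post_spikes}")
```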

“Clearly, astrocytes are significantly implicated in several brain functions, making identifying their presence among neurons an appealing and important problem,” said Rodrigo Pena, Ph.D., senior author, an assistant professor of biological sciences within FAU’s Charles E. Schmidt College of Science on the John D. MacArthur Campus in Jupiter, and a member of the FAU Stiles-Nicholson Brain Institute.

Scientists create biological ‘artificial intelligence’ system

Australian scientists have successfully developed a research system that uses ‘biological artificial intelligence’ to design and evolve molecules with new or improved functions directly in mammalian cells. The researchers said the system provides a powerful new tool that will help scientists develop more specific and effective research tools or gene therapies.

Named PROTEUS (PROTein Evolution Using Selection), the system harnesses ‘directed evolution’, a lab technique that mimics the natural power of evolution. Rather than taking years or decades, however, this method accelerates cycles of evolution and natural selection, allowing researchers to create molecules with new functions in weeks.
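The sketch below shows a generic directed-evolution loop (mutate, score, select, repeat), purely to illustrate the principle PROTEUS accelerates; it is not the PROTEUS system itself, and the fitness function is a placeholder.

```python
# Generic directed-evolution sketch: mutate candidates, keep the fittest, repeat.
# Illustrates only the principle; PROTEUS performs such cycles in mammalian cells.
import random

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"          # the 20 amino acids
TARGET = "MKTAYIAKQR"                      # placeholder "ideal" sequence used for scoring

def fitness(seq: str) -> int:
    # Placeholder fitness: similarity to the target sequence.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str, rate: float = 0.1) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in seq)

# Start from random sequences, then iterate diversification and selection.
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET))) for _ in range(50)]

for generation in range(30):
    offspring = [mutate(p) for p in population for _ in range(4)]          # diversify
    population = sorted(population + offspring, key=fitness, reverse=True)[:50]  # select

best = max(population, key=fitness)
print(best, fitness(best), "/", len(TARGET))
```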

This could have a direct impact on finding new, more effective medicines. For example, this system can be applied to improve gene editing technology like CRISPR to improve its effectiveness.

AI model transforms blurry, choppy videos into clear, seamless footage

A research team led by Professor Jaejun Yoo from the Graduate School of Artificial Intelligence at UNIST has announced the development of an advanced artificial intelligence (AI) model, “BF-STVSR (Bidirectional Flow-based Spatio-Temporal Video Super-Resolution),” capable of simultaneously improving both video resolution and frame rate.

This research was led by first author Eunjin Kim, with Hyeonjin Kim serving as co-author. Their findings were presented at the Conference on Computer Vision and Pattern Recognition (CVPR 2025) held in Nashville June 11–15. The study is posted on the arXiv preprint server.

Resolution and frame rate are critical factors that determine video quality. Higher resolution results in sharper images with more detailed visuals, while increased frame rates ensure smoother motion without abrupt jumps.
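For a concrete sense of the two quantities involved, the naive baseline below, which is not BF-STVSR, pairs bicubic spatial upscaling with simple linear blending between frames; a real model would instead learn flow-based interpolation.

```python
# Naive baseline only -- NOT BF-STVSR. It illustrates the two quantities the model
# improves jointly: spatial resolution (upscaling) and frame rate (interpolation).
import torch
import torch.nn.functional as F

video = torch.rand(2, 3, 180, 320)   # 2 frames, RGB, 180x320 (placeholder clip)

# Spatial super-resolution: 4x bicubic upscaling of each frame.
hi_res = F.interpolate(video, scale_factor=4, mode="bicubic", align_corners=False)

# Temporal upsampling: insert a frame by blending neighbors (a crude stand-in
# for the learned, flow-based interpolation of a real model).
mid_frame = 0.5 * hi_res[0] + 0.5 * hi_res[1]
smooth_clip = torch.stack([hi_res[0], mid_frame, hi_res[1]])

print(hi_res.shape)       # torch.Size([2, 3, 720, 1280])
print(smooth_clip.shape)  # torch.Size([3, 3, 720, 1280])
```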

AI cloud infrastructure gets faster and greener: NPU core improves inference performance by over 60%

The latest generative AI models, such as OpenAI’s GPT-4 and Google’s Gemini 2.5, require not only high memory bandwidth but also large memory capacity. This is why generative AI cloud operators like Microsoft and Google purchase hundreds of thousands of NVIDIA GPUs.
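As a back-of-the-envelope illustration of why capacity becomes a bottleneck (all model numbers below are assumptions, not figures from the study), the key-value cache a transformer keeps during generation grows with context length and batch size:

```python
# Back-of-the-envelope sketch of why generative AI inference is memory-hungry.
# Every number below is an illustrative assumption, not a figure from the study.
layers    = 80       # transformer layers
kv_heads  = 8        # key/value heads (assuming grouped-query attention)
head_dim  = 128
seq_len   = 8192     # tokens kept in context
batch     = 32       # concurrent requests
bytes_per = 2        # fp16/bf16

# KV cache: keys + values, per layer, per head, per token, per request.
kv_cache_bytes = 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per
print(f"KV cache alone: {kv_cache_bytes / 1e9:.1f} GB")   # ~86 GB on these assumptions
```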

As a solution to address the core challenges of building such high-performance AI infrastructure, Korean researchers have succeeded in developing an NPU (neural processing unit) core technology that improves the inference performance of generative AI models by an average of more than 60% while consuming approximately 44% less power compared to the latest GPUs.

Professor Jongse Park’s research team from KAIST School of Computing, in collaboration with HyperAccel Inc., developed a high-performance, low-power NPU core technology specialized for generative AI clouds like ChatGPT.
