Driverless trucks are officially running their first regular long-haul routes, making round trips between Dallas and Houston.

If you’re wondering how artificial intelligence may begin to interact with our world on a more personal level, look no further than the landscape of sports. As machine learning matures and the need for human officiating diminishes, sports leagues have found creative ways to integrate “computer referees” in ways we may not have initially expected.
Tennis, for example, has been at the forefront of adopting AI officiating. The Hawk-Eye System, introduced in the early 2000s, first changed tennis officiating by allowing players to challenge calls made by line judges. Hawk-Eye, which uses multiple cameras and real-time 3D analysis to determine whether a ball is in or out, has today developed into a system called Electronic Line Calling Live, known as ELC. The new technology has become so reliable that the ATP plans to phase out line judges at professional tournaments by the summer of this year.
The Australian Open has taken this system a step further by testing AI to detect foot-faults. Utilizing skeletal tracking technology, the system monitors player movements to identify infractions, improving match accuracy and reducing human error. However, a glitch in the technology did make for a funny moment during this past year’s Australian Open when the computer speaker repeated “foot-fault” before German player Dominik Koepfer could even begin his serve.
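For intuition, here is a minimal sketch of the geometry that multi-camera tracking systems of this kind rest on: each calibrated camera defines a ray toward the ball, the 3D position is estimated from the rays’ point of closest approach, and the estimate is compared against a line. All coordinates, names, and the line threshold below are invented for illustration (Python with numpy) and bear no relation to Hawk-Eye’s proprietary implementation.

```python
# Hypothetical sketch: triangulate a ball from two camera rays, then make a line call.
import numpy as np

def triangulate(origin_a, dir_a, origin_b, dir_b):
    """Midpoint of the shortest segment connecting two (possibly skew) rays."""
    dir_a = dir_a / np.linalg.norm(dir_a)
    dir_b = dir_b / np.linalg.norm(dir_b)
    w = origin_a - origin_b
    a, b, c = dir_a @ dir_a, dir_a @ dir_b, dir_b @ dir_b
    d, e = dir_a @ w, dir_b @ w
    denom = a * c - b * b                      # zero only if the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return (origin_a + t * dir_a + origin_b + s * dir_b) / 2

# Two cameras sighting a ball that actually sits at (0.05, 1.0, 0.0):
ball = triangulate(np.array([-5.0, 0.0, 3.0]), np.array([5.05, 1.0, -3.0]),
                   np.array([5.0, 0.0, 3.0]), np.array([-4.95, 1.0, -3.0]))
SIDELINE_X = 0.0                               # invented threshold: "out" beyond the line
print(ball, "OUT" if ball[0] > SIDELINE_X else "IN")
```

A real system fuses many cameras, tracks the ball over time, and models its bounce, but the ray-intersection estimate above is the core idea.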
Insomnia, depression, and anxiety are the most common mental disorders. Treatments are often only moderately effective, and many people experience recurring symptoms. This is why it is crucial to find new leads for treatments. Notably, these disorders overlap considerably, often occurring together. Could there be a shared brain mechanism behind this phenomenon?
Siemon de Lange, Elleke Tissink, and Eus van Someren, together with their colleagues from the Vrije Universiteit Amsterdam, investigated brain scans of more than 40,000 participants from the UK Biobank. The research is published in the journal Nature Mental Health.
Tissink says, “In our lab, we explore the similarities and differences between insomnia, anxiety, and depression. Everyone looks at this from a different perspective: some mainly look at genetics and in this study, we look at brain scans. What aspects are shared between the disorders, and what is unique to each one?”
In the post on the Chinese room, while concluding that Searle’s overall thesis isn’t demonstrated, I noted that if he had restricted himself to a more limited assertion, he might have had a point: the Turing test doesn’t guarantee that a system actually understands its subject matter. Although the probability of humans being fooled plummets as the test goes on, it never completely reaches zero. The test depends on human minds to assess whether there is more there than a thin facade. But what exactly is being assessed?
I just finished reading Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans. Mitchell recounts how, in recent years, deep learning networks have broken a lot of new ground. Such networks have demonstrated an uncanny ability to recognize items in photographs, including faces, have learned to play old Atari games at superhuman levels, and have even made progress in driving cars, among many other things.
But do these systems have any understanding of the actual subject matter they’re dealing with? Or do they have what Daniel Dennett calls “competence without comprehension”?
A research team has developed a “next-generation AI electronic nose” capable of distinguishing scents like the human olfactory system does and analyzing them using artificial intelligence. This technology converts scent molecules into electrical signals and trains AI models on their unique patterns. It holds great promise for applications in personalized health care, the cosmetics industry, and environmental monitoring.
The study is published in the journal ACS Nano. The team was led by Professor Hyuk-jun Kwon of the Department of Electrical Engineering and Computer Science at DGIST, with integrated master’s and Ph.D. student Hyungtae Lim as first author.
While conventional electronic noses (e-noses) have already been deployed in areas such as food safety and gas detection in industrial settings, they struggle to distinguish subtle differences between similar smells or analyze complex scent compositions. For instance, distinguishing among floral perfumes with similar notes or detecting the faint odor of fruit approaching spoilage remains challenging for current systems. This gap has driven demand for next-generation e-nose technologies with greater precision, sensitivity, and adaptability.
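As a rough illustration of the pattern-recognition step described above, the sketch below trains a small classifier to map an array of sensor responses to scent labels. Everything here is a stand-in: the sensor profiles and noise are synthetic, and the model is scikit-learn’s generic MLPClassifier rather than anything from the paper.

```python
# Toy e-nose classifier: synthetic sensor-response patterns -> scent labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_SENSORS = 8
# Pretend each scent excites the sensor array with a characteristic profile.
profiles = {"rose": rng.normal(0, 1, N_SENSORS),
            "lily": rng.normal(0, 1, N_SENSORS),
            "spoiling_fruit": rng.normal(0, 1, N_SENSORS)}

X, y = [], []
for label, profile in profiles.items():
    for _ in range(200):                       # noisy repeated "sniffs"
        X.append(profile + rng.normal(0, 0.3, N_SENSORS))
        y.append(label)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The hard part the article points to is exactly what this toy hides: real sensor responses for similar perfumes overlap heavily, so the signal conversion and feature design matter as much as the classifier.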
For decades, neuroscientists have developed mathematical frameworks to explain how brain activity drives behavior in predictable, repetitive scenarios, such as while playing a game. These algorithms have not only described brain cell activity with remarkable precision but also helped develop artificial intelligence with superhuman achievements in specific tasks, such as playing Atari or Go.
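These frameworks are, plausibly, reinforcement-learning-style algorithms, in which value estimates are updated step by step from reward signals. The sketch below is textbook tabular Q-learning on an invented five-state corridor (Python); it is the simplest ancestor of the deep variants behind the Atari and Go results, not the authors’ model.

```python
# Tabular Q-learning on a toy 5-state corridor with a reward at the right end.
import random

N_STATES, ACTIONS = 5, (-1, +1)                # move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1          # learning rate, discount, exploration

for episode in range(500):
    s = random.randrange(N_STATES - 1)         # random start; rightmost state is terminal
    while s != N_STATES - 1:
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: Q[(s, act)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Temporal-difference update: nudge Q toward reward + discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy in every non-terminal state is "move right".
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

This kind of incremental value update excels in a fixed, repetitive task, which is precisely the limitation the next paragraph describes.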
Yet these frameworks fall short of capturing the essence of human and animal behavior: our extraordinary ability to generalize, infer and adapt. Our study, published in Nature late last year, provides insights into how brain cells in mice enable this more complex, intelligent behavior.
Unlike machines, humans and animals can flexibly navigate new challenges. Every day, we solve new problems by generalizing from our knowledge or drawing from our experiences. We cook new recipes, meet new people, take a new path—and we can imagine the aftermath of entirely novel choices.
Breakthrough light-powered chip speeds up AI training and reduces energy consumption.
Engineers at Penn have developed the first programmable chip capable of training nonlinear neural networks using light—a major breakthrough that could significantly accelerate AI training, lower energy consumption, and potentially lead to fully light-powered computing systems.
Unlike conventional AI chips that rely on electricity, this new chip is photonic, meaning it performs calculations using beams of light. The research was published in Nature Photonics.
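To make “nonlinear” concrete: without a nonlinear activation between layers, any stack of layers collapses into one linear map, so training nonlinear networks is the harder, more valuable capability. The toy numpy demo below, with hand-picked weights unrelated to the Penn hardware, shows the collapse and then a one-hidden-layer ReLU network solving XOR, a task no purely linear model can fit.

```python
# Why nonlinearity matters: linear layers collapse; one ReLU layer solves XOR.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

# Two stacked *linear* layers are just one linear layer: W2 @ (W1 @ x) == (W2 @ W1) @ x.
W1, W2 = np.random.randn(2, 2), np.random.randn(1, 2)
assert np.allclose(W2 @ (W1 @ X.T), (W2 @ W1) @ X.T)

# One hidden ReLU layer with hand-picked weights computes XOR exactly:
H = np.maximum(0, X @ np.array([[1.0, 1.0], [1.0, 1.0]]).T + np.array([0.0, -1.0]))
out = H @ np.array([1.0, -2.0])
print(out)  # [0. 1. 1. 0.], matching XOR(x1, x2)
```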
Antimicrobial resistance (AMR) presents a serious challenge in today’s world. The use of antimicrobials (AMU) significantly contributes to the emergence and spread of resistant bacteria, and companion animals are gaining recognition as potential reservoirs and vectors for transmitting resistant microorganisms to both humans and other animals. The full extent of this transmission remains unclear, which is particularly concerning given the substantial and growing number of households with companion animals.

This situation highlights critical knowledge gaps in our understanding of the risk factors and transmission pathways for AMR transfer between companion animals and humans. There is also a significant lack of information regarding AMU in everyday veterinary practice for companion animals, and the exploration and development of alternative therapeutic approaches to antimicrobial treatment of companion animals remains a research priority.

To address these pressing issues, this Reprint aims to compile and disseminate crucial additional knowledge. It serves as a platform for relevant research studies and reviews, shedding light on the complex interplay between AMU, AMR, and the role of companion animals in this global health challenge. It is addressed especially to companion animal veterinary practitioners, as well as to all researchers working in the field of AMR in both animals and humans, from a One Health perspective.