
Microsoft introduces an AI model that runs on regular CPUs

A group of computer scientists at Microsoft Research, working with a colleague from the University of Chinese Academy of Sciences, has introduced a new AI model that runs on a regular CPU instead of a GPU. The researchers have posted a paper on the arXiv preprint server outlining how the model was built, its characteristics, and how it has performed in testing so far.

Over the past several years, large language models (LLMs) have become all the rage. Systems such as ChatGPT have been made available to users around the globe, popularizing the idea of intelligent chatbots. One thing most of them have in common is that they are trained and run on GPUs, because of the enormous computing power they require when training on massive amounts of data.

More recently, concerns have been raised about the huge amounts of energy consumed to support all the chatbots now used for a wide variety of purposes. In this new effort, the team has found what it describes as a smarter way to process this data, and it has built a model to prove it.
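
The article does not say how the new model manages to run on ordinary CPUs, but one common ingredient in CPU-friendly language-model runtimes is storing weights at very low precision so the arithmetic fits within ordinary CPU memory bandwidth. The NumPy sketch below is a generic illustration of that idea only, offered as an assumption for context; it is not a description of Microsoft's method.

# Illustrative only: the article does not describe Microsoft's technique.
# This sketch shows the general idea behind many CPU-friendly LLM runtimes:
# storing weights at low precision so matrix multiplies need far less memory
# and can be handled by ordinary CPU instructions.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4096, 4096)).astype(np.float32)   # a full-precision weight matrix
x = rng.normal(size=4096).astype(np.float32)           # one activation vector

# Per-row symmetric quantization to int8 (a common, simple scheme).
scales = np.abs(W).max(axis=1, keepdims=True) / 127.0
W_q = np.clip(np.round(W / scales), -127, 127).astype(np.int8)

# Matrix-vector product using the quantized weights, then rescale per row.
y_q = (W_q.astype(np.int32) @ x) * scales.squeeze()
y_full = W @ x

print("memory (float32): %.1f MB" % (W.nbytes / 1e6))
print("memory (int8):    %.1f MB" % (W_q.nbytes / 1e6))
print("max abs error:    %.4f" % np.abs(y_full - y_q).max())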

Scientists improve gravitational wave identification with machine learning

A study published in Physical Review Letters outlines a new approach for extracting information from binary systems by looking at the entire posterior distribution instead of making decisions based on individual parameters.

Since their first detection in 2015, gravitational waves have become a vital tool for astronomers studying the early universe, the limits of general relativity, and cosmic events such as compact binary mergers.

Binary systems consist of two massive objects, such as neutron stars or black holes, spiraling toward each other. As they merge, they generate ripples in spacetime—gravitational waves—that carry information about both objects.
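
To make the difference between per-parameter decisions and whole-posterior decisions concrete, here is a minimal, hypothetical NumPy sketch. It assumes joint posterior samples over the two component masses are already available from a standard parameter-estimation run; the fake samples, variable names, and the 3-solar-mass cut are illustrative and are not taken from the paper.

# Hypothetical illustration of using the full joint posterior instead of
# per-parameter point estimates. Assumes m1_samples and m2_samples are
# joint posterior samples (solar masses) from a parameter-estimation run.
import numpy as np

rng = np.random.default_rng(1)
# Fake correlated posterior samples standing in for a real analysis output.
mean, cov = [2.0, 1.6], [[0.25, 0.15], [0.15, 0.20]]
m1_samples, m2_samples = rng.multivariate_normal(mean, cov, size=100_000).T

MAX_NS_MASS = 3.0  # illustrative threshold for "could be a neutron star"

# Decision from point estimates: check each marginal median separately.
point_estimate_says_ns = (np.median(m1_samples) < MAX_NS_MASS) and \
                         (np.median(m2_samples) < MAX_NS_MASS)

# Decision from the full joint posterior: the probability that *both*
# objects fall below the threshold in the same sample, correlations included.
joint_prob_both_ns = np.mean((m1_samples < MAX_NS_MASS) & (m2_samples < MAX_NS_MASS))

print("point-estimate verdict:", point_estimate_says_ns)
print("P(both below %.1f Msun | data) = %.3f" % (MAX_NS_MASS, joint_prob_both_ns))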

Human-AI Collaboration in Cloud Security: Cognitive Hierarchy-Driven Deep Reinforcement Learning

Given the complexity of multi-tenant cloud environments and the growing need for real-time threat mitigation, Security Operations Centers (SOCs) must adopt AI-driven adaptive defense mechanisms to counter Advanced Persistent Threats (APTs). However, SOC analysts face challenges in handling adaptive adversarial tactics, requiring intelligent decision-support frameworks. We propose a Cognitive Hierarchy Theory-driven Deep Q-Network (CHT-DQN) framework that models interactive decision-making between SOC analysts and AI-driven APT bots. The SOC analyst (defender) operates at cognitive level-1, anticipating attacker strategies, while the APT bot (attacker) follows a level-0 policy. By incorporating CHT into DQN, our framework enhances adaptive SOC defense using Attack Graph (AG)-based reinforcement learning. Simulation experiments across varying AG complexities show that CHT-DQN consistently achieves higher data protection and lower action discrepancies compared to standard DQN. A theoretical lower bound further confirms its superiority as AG complexity increases. A human-in-the-loop (HITL) evaluation on Amazon Mechanical Turk (MTurk) reveals that SOC analysts using CHT-DQN-derived transition probabilities align more closely with adaptive attackers, leading to better defense outcomes. Moreover, human behavior aligns with Prospect Theory (PT) and Cumulative Prospect Theory (CPT): participants are less likely to reselect failed actions and more likely to persist with successful ones. This asymmetry reflects amplified loss sensitivity and biased probability weighting — underestimating gains after failure and overestimating continued success. Our findings highlight the potential of integrating cognitive models into deep reinforcement learning to improve real-time SOC decision-making for cloud security.
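
As a rough illustration of the level-1 versus level-0 interaction described above, the following sketch uses a tiny tabular Q-learning loop rather than a deep network, and it is not the authors' CHT-DQN implementation: a level-0 attacker follows a fixed stochastic policy over three hypothetical attack-graph nodes, while a level-1 defender learns Q-values against its running estimate of that policy. All states, actions, and rewards are invented for illustration.

# Toy sketch of a level-1 defender versus a level-0 attacker (tabular, not deep RL).
# Everything here -- the 3-node "attack graph", rewards, and policies -- is
# invented for illustration and is not the paper's CHT-DQN implementation.
import numpy as np

rng = np.random.default_rng(42)
N_NODES = 3                                   # nodes of a tiny attack graph
attacker_policy = np.array([0.5, 0.3, 0.2])   # level-0: fixed stochastic policy

Q = np.zeros((N_NODES, N_NODES))  # defender Q[believed_target, defended_node]
alpha, gamma, eps = 0.1, 0.9, 0.1
belief = np.ones(N_NODES) / N_NODES  # defender's belief over the attacker's next target

for step in range(5000):
    attack = rng.choice(N_NODES, p=attacker_policy)   # level-0 attacker acts

    # Level-1 defender: pick the defense that is best against its current
    # belief about the level-0 attacker (epsilon-greedy for exploration).
    state = int(np.argmax(belief))
    if rng.random() < eps:
        defend = int(rng.integers(N_NODES))
    else:
        defend = int(np.argmax(Q[state]))

    reward = 1.0 if defend == attack else -1.0        # protected vs. breached
    next_state = attack                               # observe where the attack landed

    # Standard Q-learning update on the defender side.
    Q[state, defend] += alpha * (reward + gamma * Q[next_state].max() - Q[state, defend])

    # Update the belief: a running empirical estimate of the level-0 policy.
    belief = 0.99 * belief + 0.01 * np.eye(N_NODES)[attack]

print("estimated attacker policy:", np.round(belief, 2))
print("greedy defense per believed target:", Q.argmax(axis=1))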

Unlocking the secrets of salt crystal formation at the nanoscale

In nature and technology, crystallization plays a pivotal role, from forming snowflakes and pharmaceuticals to creating advanced batteries and desalination membranes. Despite its importance, crystallization at the nanoscale is poorly understood, mainly because observing the process directly at this scale is exceptionally challenging. My research overcame this hurdle by employing state-of-the-art computational methods, allowing me to visualize atomic interactions in unprecedented detail.

Published in Chemical Science, my research has uncovered new details about how salt crystals form in tiny nanometer-sized spaces, which could pave the way for improved desalination and electrochemical technologies.

This research used sophisticated computer simulations enhanced by cutting-edge machine learning techniques to study how sodium chloride (NaCl), common table salt, crystallizes when confined between two graphene sheets separated by just a few billionths of a meter. These confined environments, known as nano-confinement, drastically alter how molecules behave compared with bulk, everyday conditions.
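
As a toy example of the kind of post-processing such simulations enable, the snippet below (purely hypothetical, not from the study) counts Na-Cl nearest-neighbour contacts for ions confined between two planes; a rising average coordination number is one simple signature that an ordered, rock-salt-like cluster is forming. The coordinates, cutoff distance, and slab width are all invented.

# Hypothetical post-processing sketch: given ion positions from one simulation
# frame, estimate Na-Cl coordination numbers inside a nano-confined slab.
# Positions, the cutoff, and the slab width are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
CUTOFF = 3.2        # Angstrom; roughly the first Na-Cl neighbour distance in rock salt
SLAB_Z = (0.0, 7.0) # the region between the two confining sheets

# Fake coordinates standing in for one trajectory frame (50 ions of each type).
na = rng.uniform([0, 0, SLAB_Z[0]], [30, 30, SLAB_Z[1]], size=(50, 3))
cl = rng.uniform([0, 0, SLAB_Z[0]], [30, 30, SLAB_Z[1]], size=(50, 3))

# Pairwise Na-Cl distances (no periodic boundaries, to keep the sketch short).
d = np.linalg.norm(na[:, None, :] - cl[None, :, :], axis=-1)

coordination = (d < CUTOFF).sum(axis=1)       # Cl neighbours per Na ion
print("mean Na-Cl coordination: %.2f" % coordination.mean())
print("ions with >= 4 neighbours (crystal-like): %d" % (coordination >= 4).sum())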

They Mapped Every Neuron in a Grain of Brain — And Revealed How We See

A massive, multi-year project led by over 150 scientists has produced the most detailed map yet of how visual information travels through the brain – revealing more than 500 million connections in a speck of mouse brain tissue.

Using glowing neurons, high-powered electron microscopes, and deep learning, researchers captured both the physical wiring and real-time electrical activity of over 200,000 brain cells. The resulting 1.6-petabyte dataset is not just a technological marvel – it brings us closer to answering age-old questions about how our brains turn light into vision and how brain disorders might arise when this system breaks.

Unraveling the Brain’s Visual Code
