
A group of computer scientists at Microsoft Research, working with a colleague from the University of Chinese Academy of Sciences, has introduced a new Microsoft AI model that runs on a regular CPU instead of a GPU. The researchers have posted a paper on the arXiv preprint server outlining how the new model was built, its characteristics and how it has performed so far in testing.

Over the past several years, LLMs have become all the rage. Models such as ChatGPT have been made available to users around the globe, introducing the idea of intelligent chatbots. One thing most of them have in common is that they are trained and run on GPU chips, because of the enormous amount of computing power required to train them on vast amounts of data.

More recently, concerns have been raised about the huge amounts of energy being used to support all the chatbots deployed for various purposes. In this new effort, the team found what it describes as a smarter way to process this data and built a model to prove it.

A study published in Physical Review Letters outlines a new approach for extracting information from binary systems by looking at the entire posterior distribution instead of making decisions based on individual parameters.

Since their detection in 2015, gravitational waves have become a vital tool for astronomers studying the early universe, the limits of general relativity and cosmic events such as compact binary mergers.

Binary systems consist of two massive objects, such as neutron stars or black holes, spiraling toward each other. As they merge, they generate ripples in spacetime—gravitational waves—which carry information about both objects.
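As a rough illustration of the difference between using the whole posterior and collapsing each parameter to a point estimate (this sketch is not from the paper, and the samples and numbers below are invented), consider deriving a binary's chirp mass from posterior samples of the two component masses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, correlated posterior samples for the two component masses
# (in solar masses); real samples would come from gravitational-wave
# parameter estimation rather than a toy Gaussian.
mean = [36.0, 29.0]
cov = [[4.0, -2.5],
       [-2.5, 3.0]]
m1, m2 = rng.multivariate_normal(mean, cov, size=100_000).T

def chirp_mass(m1, m2):
    """Chirp mass, a mass combination well measured from the waveform."""
    return (m1 * m2) ** (3 / 5) / (m1 + m2) ** (1 / 5)

# Full-posterior approach: propagate every joint sample, then summarize.
mc_samples = chirp_mass(m1, m2)
print("posterior median chirp mass:", np.median(mc_samples))
print("posterior 90% interval:", np.percentile(mc_samples, [5, 95]))

# Point-estimate approach: collapse each parameter first, then combine.
# This discards the correlation between m1 and m2 and can misstate the
# uncertainty of any derived quantity.
print("plug-in estimate:", chirp_mass(np.median(m1), np.median(m2)))
```

Propagating the joint samples keeps the correlation between the two masses, which a per-parameter point estimate throws away.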

Given the complexity of multi-tenant cloud environments and the growing need for real-time threat mitigation, Security Operations Centers (SOCs) must adopt AI-driven adaptive defense mechanisms to counter Advanced Persistent Threats (APTs). However, SOC analysts face challenges in handling adaptive adversarial tactics, requiring intelligent decision-support frameworks. We propose a Cognitive Hierarchy Theory-driven Deep Q-Network (CHT-DQN) framework that models interactive decision-making between SOC analysts and AI-driven APT bots. The SOC analyst (defender) operates at cognitive level-1, anticipating attacker strategies, while the APT bot (attacker) follows a level-0 policy. By incorporating CHT into DQN, our framework enhances adaptive SOC defense using Attack Graph (AG)-based reinforcement learning. Simulation experiments across varying AG complexities show that CHT-DQN consistently achieves higher data protection and lower action discrepancies compared to standard DQN. A theoretical lower bound further confirms its superiority as AG complexity increases. A human-in-the-loop (HITL) evaluation on Amazon Mechanical Turk (MTurk) reveals that SOC analysts using CHT-DQN-derived transition probabilities align more closely with adaptive attackers, leading to better defense outcomes. Moreover, human behavior aligns with Prospect Theory (PT) and Cumulative Prospect Theory (CPT): participants are less likely to reselect failed actions and more likely to persist with successful ones. This asymmetry reflects amplified loss sensitivity and biased probability weighting — underestimating gains after failure and overestimating continued success. Our findings highlight the potential of integrating cognitive models into deep reinforcement learning to improve real-time SOC decision-making for cloud security.
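To make the level-1 versus level-0 interaction concrete, here is a minimal, purely illustrative Python sketch. It is not the authors' implementation: it uses tabular Q-learning on a made-up environment rather than a deep Q-network over a real attack graph, and every state, action, policy and reward definition below is a hypothetical stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

N_STATES, N_DEF_ACTIONS, N_ATK_ACTIONS = 4, 3, 3   # toy attack-graph sizes
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# Level-0 attacker: a fixed stochastic policy over exploit actions,
# independent of the defender's reasoning (hypothetical numbers).
attacker_policy = np.full((N_STATES, N_ATK_ACTIONS), 1.0 / N_ATK_ACTIONS)

# Level-1 defender: values indexed by (state, defender action, attacker action),
# so the defender can marginalize over the anticipated attacker distribution.
Q = np.zeros((N_STATES, N_DEF_ACTIONS, N_ATK_ACTIONS))

def defender_action(state):
    """Epsilon-greedy over Q-values averaged under the level-0 attacker model."""
    if rng.random() < EPS:
        return int(rng.integers(N_DEF_ACTIONS))
    expected_q = Q[state] @ attacker_policy[state]      # anticipate the attacker
    return int(np.argmax(expected_q))

def step(state, d_act, a_act):
    """Hypothetical environment: reward is data protected this step."""
    protected = 1.0 if d_act == a_act else -1.0          # block vs. miss the exploit
    next_state = int(rng.integers(N_STATES))
    return next_state, protected

state = 0
for _ in range(20_000):
    d_act = defender_action(state)
    a_act = int(rng.choice(N_ATK_ACTIONS, p=attacker_policy[state]))
    next_state, reward = step(state, d_act, a_act)

    # Q-learning target, again marginalizing over the attacker's level-0 policy.
    next_best = np.max(Q[next_state] @ attacker_policy[next_state])
    Q[state, d_act, a_act] += ALPHA * (reward + GAMMA * next_best - Q[state, d_act, a_act])
    state = next_state

print("Defender's preferred action per state:",
      [int(np.argmax(Q[s] @ attacker_policy[s])) for s in range(N_STATES)])
```

In the paper's framework the table would be replaced by a deep Q-network conditioned on attack-graph features, but the defining step is the same: the level-1 defender chooses actions by marginalizing its value estimates over the anticipated level-0 attacker policy.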

Main episode with Michael Levin: https://www.youtube.com/watch?v=2aLhkm6QUgA&t=3255s.


Transplanted cells offer insight into human-specific properties, such as lengthy cortical development and sensitivity to neurodevelopmental and neurodegenerative disease.

Quantum technologies, which operate by leveraging quantum mechanical phenomena, have the potential to outperform their classical counterparts in some optimization and computational tasks. These technologies include so-called quantum networks, systems designed to transmit information between interconnected nodes and process it using quantum phenomena such as entanglement and superposition.

Quantum networks could eventually contribute to the advancement of communications, sensing and computing. Before this can happen, however, existing systems will need to be improved and perfected to ensure that they can transfer and process data both reliably and efficiently, minimizing errors.

Researchers at Tsinghua University, Hefei National Laboratory and the Beijing Academy of Quantum Information Sciences recently demonstrated the coherent control of a hybrid and scalable quantum network node. Their demonstration, outlined in Nature Physics, was realized by combining solutions and techniques that they developed as part of their earlier work.