
New AI method accelerates plasma heat defense in reactors

New AI method speeds up calculations to protect fusion reactors from plasma heat.


Scientists in the US have introduced a novel artificial intelligence (AI) approach that can protect fusion reactors from the extreme heat generated by plasma.

The new method, which is called HEAT-ML, was developed by researchers from Commonwealth Fusion Systems (CFS), the US Department of Energy’s (DOE) Princeton Plasma Physics Laboratory (PPPL), and Oak Ridge National Laboratory.

It is reportedly capable of quickly identifying magnetic shadows (critical areas shielded from the intense heat of the plasma), helping to prevent potential problems before they start.
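To make the idea of a "magnetic shadow" concrete, here is a toy sketch, not the lab's actual geometry code: a point on a reactor wall is shadowed if, tracing along the magnetic field line from that point, another component intercepts the line first. The 1-D straight-line trace and the blocker intervals below are illustrative assumptions; HEAT-ML's contribution is a fast learned surrogate for this kind of slow physics calculation.

```python
# Toy sketch of shadow detection: trace along a (straight, 1-D stand-in for a)
# magnetic field line and check whether any blocking component is hit before
# the trace runs out. Real field-line tracing follows curved 3-D field lines.

def is_shadowed(point_x, blockers, field_dir=+1, max_steps=100, step=0.1):
    """Return True if a blocker interval intercepts the traced field line."""
    x = point_x
    for _ in range(max_steps):
        x += field_dir * step
        for lo, hi in blockers:
            if lo <= x <= hi:
                return True
    return False

# A blocker spans x in [1.0, 1.5]; a point at x=0 tracing in +x is shadowed.
print(is_shadowed(0.0, [(1.0, 1.5)]))   # → True
# A blocker far beyond the trace range does not shadow the point.
print(is_shadowed(0.0, [(20.0, 21.0)])) # → False
```

A surrogate model like HEAT-ML would be trained to predict the output of such a trace directly from the magnetic configuration, skipping the step-by-step computation.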

Characterizing Deep Research: A Benchmark and Formal Definition, by Abhinav Java and 8 other authors

Information tasks such as writing surveys or analytical reports require complex search and reasoning, and have recently been grouped under the umbrella of "deep research," a term also adopted by recent models targeting these capabilities. Despite growing interest, the scope of the deep research task remains underdefined and its distinction from other reasoning-intensive problems is poorly understood. In this paper, we propose a formal characterization of the deep research (DR) task and introduce a benchmark to evaluate the performance of DR systems.

We argue that the core defining feature of deep research is not the production of lengthy report-style outputs, but rather the high fan-out over concepts required during the search process, i.e., broad and reasoning-intensive exploration. To enable objective evaluation, we define DR using an intermediate output representation that encodes key claims uncovered during search, separating the reasoning challenge from surface-level report generation. Based on this formulation, we propose LiveDRBench, a diverse benchmark with 100 challenging tasks over scientific topics (e.g., datasets, materials discovery, prior art search) and public interest events (e.g., flight incidents, movie awards).

Across state-of-the-art DR systems, the F1 score ranges between 0.02 and 0.72 for any sub-category. OpenAI's model performs best, with an overall F1 score of 0.55. Analysis of reasoning traces reveals the distribution over the number of referenced sources, branching, and backtracking events executed by current DR systems, motivating future directions for improving their search mechanisms and grounding capabilities. The benchmark is available at https://github.com/microsoft/LiveDRBench.
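The abstract evaluates systems by F1 score over the key claims they uncover. As a hedged sketch of how such a claim-level F1 could be computed (the benchmark's actual matching criterion is not specified here, so simple exact matching on normalized strings is an assumption):

```python
# Sketch: claim-level F1 between a system's extracted claims and gold claims.
# Exact match on lowercased, stripped strings is an illustrative assumption;
# a real benchmark may use softer semantic matching.

def claim_f1(predicted, reference):
    """F1 = harmonic mean of precision and recall over matched claims."""
    pred = {c.strip().lower() for c in predicted}
    gold = {c.strip().lower() for c in reference}
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)                 # claims found in both sets
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(claim_f1(
    ["The flight diverted to Learmonth", "The fault was in the ADIRU"],
    ["the fault was in the adiru", "the crew declared a mayday"],
))  # → 0.5 (one of two predicted claims matches one of two gold claims)
```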

AI Started Improving Itself. Researchers Are Terrified

Detailed sources: https://docs.google.com/document/d/1ksVvFuR0IttxzH6zoASSYy7Z…b62ckaanow.

Based on the report: Situational Awareness — by Leopold Aschenbrenner https://situational-awareness.ai/from-agi-to-superintelligence/

Hi, I’m Drew! Thanks for watching. :)

I also post mid memes on Twitter: https://twitter.com/PauseusMaximus.

Also, I meant to say Cortés conquered the Aztecs, not the Incas.

AI breakthrough designs peptide drugs to target previously untreatable proteins

A study published in Nature Biotechnology reveals a powerful new use for artificial intelligence: designing small, drug-like molecules that can stick to and break down harmful proteins in the body — even when scientists don’t know what those proteins look like. The breakthrough could lead to new treatments for diseases that have long resisted traditional drug development, including certain cancers, brain disorders, and viral infections.

The study was published on August 13, 2025 by a multi-institutional team of researchers from McMaster University, Duke University, and Cornell University. The AI tool, called PepMLM, is based on an algorithm originally built to understand human language and used in chatbots, but was trained to understand the “language” of proteins.

In 2024, the Nobel Prize in Chemistry was awarded in part to researchers at Google DeepMind for developing AlphaFold, an AI system that predicts the 3D structure of proteins – a major advance in drug discovery. But many disease-related proteins, including those involved in cancer and neurodegeneration, don’t have stable structures. That’s where PepMLM takes a different approach – instead of relying on structure, the tool uses only the protein’s sequence to design peptide drugs. This makes it possible to target a much broader range of disease proteins, including those that were previously considered “undruggable.”
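The sequence-only framing can be illustrated with a toy masked-language-model setup. This is not PepMLM's actual code: the input format, mask token, and placeholder "model" below are all illustrative assumptions, shown only to make the idea of conditioning masked peptide positions on a target sequence concrete.

```python
# Toy illustration of the masked-LM framing: append masked peptide slots to
# the target protein sequence; a trained protein language model would predict
# an amino acid for each mask. The placeholder "model" here fills every mask
# with a fixed residue just to show the input/output shape.

TARGET = "MKTAYIAKQR"   # hypothetical target protein sequence
PEP_LEN = 4             # length of the peptide binder to design

def build_mlm_input(target, pep_len, mask="<mask>"):
    # Input format (an assumption): target residues, then masked peptide slots.
    return list(target) + [mask] * pep_len

def fill_masks(tokens, mask="<mask>"):
    # Stand-in for a trained model: a real one would rank amino acids per slot.
    return [("A" if t == mask else t) for t in tokens]

tokens = build_mlm_input(TARGET, PEP_LEN)
peptide = "".join(fill_masks(tokens)[-PEP_LEN:])
print(peptide)  # → "AAAA"
```

The key design point the article highlights is visible even in this sketch: nothing in the input requires a 3D structure, only the target's amino-acid sequence.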

Sam Altman’s worst-case AI scenario may already be here

Sam Altman, CEO of OpenAI, appeared at a Federal Reserve event on July 22 and outlined three “scary categories” of how advanced artificial intelligence could threaten society.

The first two scenarios — a bad actor using artificial intelligence for malfeasance and a rogue AI taking over the world — came with his assurance that people are working to prevent them. Mr. Altman offered no such comfort with the third scenario, however, the one that seemed to trouble him most.

He described a future where AI systems become “so ingrained in society … [that we] can’t really understand what they’re doing, but we do kind of have to rely on them. And even without a drop of malevolence from anyone, society can just veer off in a sort of strange direction.”
