
Google DeepMind creates super-advanced AI that can invent new algorithms

The team turned AlphaEvolve loose on Borg, the cluster management system that runs Google's data centers. The AI suggested a change to Borg's scheduling heuristics which, now deployed, recovers about 0.7 percent of Google's computing resources worldwide. For a company the size of Google, that's a significant financial benefit.
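Google has not published the heuristic itself, but the problem class is well understood: when a scheduler packs tasks onto machines, resources can end up "stranded", with free CPU left on a machine that has no free memory to pair with it, or vice versa. Below is a hypothetical Python sketch of the kind of placement scoring such a heuristic tunes; the names and scoring rule are illustrative assumptions, not Google's code.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    total_cpu: float
    total_mem: float
    free_cpu: float
    free_mem: float

@dataclass
class Task:
    cpu: float
    mem: float

def placement_score(m: Machine, t: Task) -> float:
    """Hypothetical Borg-style placement score (illustrative, not Google's).

    Higher is better. The idea: prefer placements that leave free CPU
    and free memory in balance, so neither resource is stranded
    without the other.
    """
    cpu_left = m.free_cpu - t.cpu
    mem_left = m.free_mem - t.mem
    if cpu_left < 0 or mem_left < 0:
        return float("-inf")  # task does not fit on this machine
    # Penalize imbalance between the leftover CPU and memory fractions.
    return -abs(cpu_left / m.total_cpu - mem_left / m.total_mem)

def pick_machine(machines: list[Machine], task: Task) -> Machine:
    # Greedy choice: the machine whose leftover resources stay most balanced.
    return max(machines, key=lambda m: placement_score(m, task))
```

Evolving a small, self-contained scoring function with a measurable objective is exactly the shape of problem AlphaEvolve is built to attack.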

AlphaEvolve may also be able to make generative AI more efficient, which is necessary if anyone is ever going to make money on the technology. The internal workings of generative systems are built on matrix multiplication operations. For 4×4 complex-valued matrices, the most efficient known method had been Volker Strassen's 1969 algorithm, which takes 49 scalar multiplications when applied recursively, and that record held for decades. DeepMind says AlphaEvolve has now discovered an algorithm that does the job in 48. DeepMind has worked on this problem before with narrowly trained agents like AlphaTensor, but AlphaTensor's 47-multiplication result applied only to arithmetic modulo 2. Despite being a general-purpose system, AlphaEvolve found a solution that works for ordinary complex values, beating AlphaTensor in this setting.
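To make the multiplication counting concrete: Strassen's 1969 trick multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8, and applying it recursively to 4×4 matrices (each entry itself a 2×2 block) costs 7 × 7 = 49 multiplications. That 49 is the baseline AlphaEvolve's 48-multiplication algorithm improves on. The 48-step scheme itself is too large to reproduce here, so here is the classic 2×2 version in Python:

```python
# Strassen's 1969 scheme: multiply two 2x2 matrices with 7 scalar
# multiplications instead of the naive 8. Applied recursively to
# 4x4 matrices it needs 7 * 7 = 49 multiplications, the count that
# AlphaEvolve's 48-multiplication algorithm improves on.
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B

    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)

    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

# Works for complex entries too, e.g.:
A = [[1 + 2j, 3], [0, 1j]]
B = [[2, 1], [1j, 4]]
print(strassen_2x2(A, B))
```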

Google's next-generation Tensor processing hardware will also benefit from AlphaEvolve. DeepMind reports that the AI proposed a change to the chip's Verilog code (the hardware description language used to design it) that drops unnecessary bits to improve efficiency. Google is still working to verify the change but expects it to make it into the upcoming processor.
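The announcement doesn't show the Verilog, but "dropping unnecessary bits" is a standard hardware optimization: registers are sized to the exact range a signal can take rather than to a convenient default width, and every bit removed saves area and power across the chip. A small Python illustration of the sizing arithmetic (illustrative only; the actual change is in unreleased Verilog):

```python
from math import ceil, log2

def bits_needed(num_terms: int, term_bits: int) -> int:
    """Minimum unsigned width for a sum of `num_terms` values of
    `term_bits` bits each; the worst-case sum is
    num_terms * (2**term_bits - 1)."""
    max_sum = num_terms * (2**term_bits - 1)
    return ceil(log2(max_sum + 1))

# An accumulator summing 16 eight-bit values needs only 12 bits,
# not a default 16 or 32. Every extra bit costs silicon and power.
print(bits_needed(16, 8))  # -> 12
```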

Self-improving AI is here!

HUGE AI breakthrough: Absolute Zero Reasoner deep dive. Self-improving AI that learns with no data!

Sources:
https://arxiv.org/abs/2505.03335
https://github.com/LeapLabTHU/Absolut…
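Per the linked paper (arXiv:2505.03335), the core idea of Absolute Zero is a self-play loop: a single model alternates between proposing code-reasoning tasks and solving them, with a Python executor acting as the source of truth that validates proposed tasks and checks answers, so no human-curated training data is required. A rough sketch of that loop, with the model, executor, and reinforcement-learning update left abstract:

```python
def absolute_zero_loop(model, executor, steps=1000):
    """Rough sketch of the Absolute Zero self-play loop (arXiv:2505.03335).

    The same model plays two roles: PROPOSE a code-reasoning task,
    then SOLVE it. An executor grounds both roles: it checks that
    proposed tasks are valid and that solutions are correct, so
    training needs no external dataset. `model` and `executor` are
    abstract stand-ins here, not the paper's actual interfaces.
    """
    for _ in range(steps):
        # Role 1: propose a new task (e.g. a program plus an input).
        task = model.propose_task()
        if not executor.is_valid(task):   # reject malformed tasks
            continue
        target = executor.run(task)       # ground-truth answer

        # Role 2: attempt to solve the task it just invented.
        answer = model.solve(task)
        reward = 1.0 if answer == target else 0.0

        # Reinforce both roles (the paper uses an RL objective here;
        # the exact update is abstracted away in this sketch).
        model.update(task, answer, reward)
```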

Thanks to Tavus for sponsoring this video. Try Tavus for free https://tavus.plug.dev/T4AQw5K

0:00 Absolute Zero intro
0:50 Traditional methods of training AI models
4:00 Absolute Zero algorithm
5:01 How Absolute Zero Reasoner works
7:19 Types of training tasks
9:00 How good is Absolute Zero
10:47 Tavus
12:11 Adding Absolute Zero to existing models
13:01 Interesting findings
15:43 Uh-oh…
16:50 Ablation study
18:15 More interesting findings

Newsletter: https://aisearch.substack.com/
Find AI tools & jobs: https://ai-search.io/
Support: https://ko-fi.com/aisearch

Here's my equipment, in case you're wondering:

Dell Precision 5690: https://www.dell.com/en-us/dt/ai-tech…
Nvidia RTX 5000 Ada: https://nvda.ws/3zfqGqS
Mouse/Keyboard: ALOGIC Echelon: https://bit.ly/alogic-echelon
Mic: Shure SM7B: https://amzn.to/3DErjt1
Audio interface: Scarlett Solo: https://amzn.to/3qELMeu

AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms

Large language models (LLMs) are remarkably versatile. They can summarize documents, generate code, or even brainstorm new ideas. And now we've expanded these capabilities to target fundamental and highly complex problems in mathematics and modern computing.

Today, we’re announcing AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.
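DeepMind hasn't released AlphaEvolve's code, but the loop described here (candidate programs proposed by Gemini, scored by automated evaluators, with the best survivors seeding the next round) has the familiar shape of evolutionary search. A minimal sketch, with `llm_mutate` standing in for the Gemini call and `evaluate` for the problem-specific verifier, both assumptions for illustration:

```python
import random

def alpha_evolve_sketch(seed_program, llm_mutate, evaluate,
                        generations=100, population_size=20, survivors=5):
    """Hypothetical skeleton of an LLM-driven evolutionary search.

    llm_mutate(program) -> new candidate program (stands in for Gemini)
    evaluate(program)   -> numeric score from an automated verifier
    """
    population = [(evaluate(seed_program), seed_program)]
    for _ in range(generations):
        # Ask the LLM for variations of the most promising parents.
        parents = [p for _, p in population]
        children = [llm_mutate(random.choice(parents))
                    for _ in range(population_size)]
        # Score every candidate with the automated evaluator.
        population += [(evaluate(c), c) for c in children]
        # Keep only the highest-scoring programs for the next round.
        population = sorted(population, key=lambda x: x[0],
                            reverse=True)[:survivors]
    return population[0]  # (best_score, best_program)
```

The automated evaluator is the load-bearing part of this design: because every candidate is machine-checked, the system keeps only programs that verifiably score better, rather than trusting the LLM's own claims.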

AlphaEvolve enhanced the efficiency of Google’s data centers, chip design and AI training processes — including training the large language models underlying AlphaEvolve itself. It has also helped design faster matrix multiplication algorithms and find new solutions to open mathematical problems, showing incredible promise for application across many areas.

Spin-based memory advance brings brain-like computing closer to reality

Researchers at National Taiwan University have developed a new type of spintronic device that mimics how synapses work in the brain—offering a path to more energy-efficient and accurate artificial intelligence systems.

In a study published in Advanced Science, the team introduced three novel memory designs, all controlled purely by electric current and without any need for an external magnetic field.

Among the devices, the one based on “tilted anisotropy” stood out. This optimized structure was able to achieve 11 stable memory states with highly consistent switching behavior.
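The link to AI hardware: an artificial synapse stores a connection weight, and a memory cell with 11 stable states can hold an 11-level weight directly in the device instead of in a full-precision float. A toy Python illustration of that quantization, assuming evenly spaced levels (the even spacing is our assumption; the paper reports 11 stable states, not their spacing):

```python
def quantize_to_states(weight, num_states=11, w_min=-1.0, w_max=1.0):
    """Map a continuous synaptic weight onto one of `num_states`
    evenly spaced levels: a toy model of storing weights in a
    multi-state spintronic cell."""
    weight = max(w_min, min(w_max, weight))   # clamp to the valid range
    step = (w_max - w_min) / (num_states - 1)
    level = round((weight - w_min) / step)    # nearest state index
    return w_min + level * step

print(quantize_to_states(0.37))  # -> ~0.4, the nearest of the 11 levels
```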
