
Superconductivity Inspires New Dark Matter Contender

As searches for the leading dark matter candidates (weakly interacting massive particles, axions, and primordial black holes) continue to deliver null results, the door opens to the exploration of more exotic alternatives. Guanming Liang and Robert Caldwell of Dartmouth College in New Hampshire have now proposed a dark matter candidate that is analogous to a superconducting state [1]. Their proposal involves interacting fermions that could exist in a condensate similar to the one formed by Cooper pairs in the Bardeen-Cooper-Schrieffer (BCS) theory of superconductivity.

The novel fermions considered by Liang and Caldwell emerge in the Nambu–Jona-Lasinio (NJL) model, which can be regarded as a low-energy approximation of quantum chromodynamics, the theory that describes the strong interaction. The duo considers a scenario in which, in the early Universe, the fermions behave like radiation, reaching thermal equilibrium with standard photons. As the Universe expands and the temperature drops below a certain threshold, however, the fermions undergo a phase transition that leads them to pair up and form a massive condensate.
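
For orientation, the NJL model's defining feature is a four-fermion contact interaction that can trigger pairing, much as the phonon-mediated attraction does in BCS theory. In its original, textbook form (the paper may use a variant), the Lagrangian reads

    \mathcal{L}_{\mathrm{NJL}} = \bar{\psi}\, i\gamma^{\mu}\partial_{\mu}\psi + G\left[(\bar{\psi}\psi)^{2} + (\bar{\psi}\, i\gamma_{5}\psi)^{2}\right],

where G is the coupling constant. When the coupling is strong enough, a condensate \langle\bar{\psi}\psi\rangle \neq 0 forms and the fermions acquire a dynamical mass, the analog of the BCS gap.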

The proposed scenario has several appealing features, say Liang and Caldwell. The fermions' behavior would be consistent with that of the cold dark matter considered by the current standard model of cosmology. Further, the scenario implies a slight imbalance between fermions with different chiralities (left- and right-handed). Such an imbalance might be related to the yet-to-be-explained matter–antimatter asymmetry seen in the Universe. What's more, the model predicts that the fermions obey a time-dependent equation of state that would produce unique, potentially observable signatures in the cosmic microwave background (CMB) radiation. The researchers suggest that next-generation CMB measurements (by the Simons Observatory and by so-called stage-4 CMB telescopes) might reach sufficient precision to vet their idea.
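
In equation-of-state terms, the scenario described above fixes the two limits: the fermion fluid starts radiation-like and ends matter-like,

    w \equiv p/\rho: \quad w \simeq 1/3 \ \text{(early, relativistic era)} \;\longrightarrow\; w \simeq 0 \ \text{(after condensation)},

and it is the model's specific time-dependent interpolation between these limits, derived in the paper, that would leave the potentially observable CMB signatures.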

Self-improving AI is here!

HUGE AI breakthrough: Absolute Zero Reasoner deep dive. Self-improving AI that learns with no data! #ai #aitools #ainews #llm
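
Judging from the paper's title and framing ("learns with no data"; arXiv link under Sources below), the core idea is a self-play loop: one model both proposes tasks and solves them, with a code executor, rather than human labels, supplying the reward. The sketch below is an illustrative reconstruction of that general pattern, not the paper's actual algorithm; Task, execute, and the propose/solve/update callables are hypothetical stand-ins.

```python
# Illustrative sketch of a "zero-data" self-play loop in the spirit of the
# Absolute Zero Reasoner: one model proposes tasks and also solves them,
# and a code executor provides the reward, so no human-labeled data is
# needed. All names here are hypothetical stand-ins, not the paper's API.
from dataclasses import dataclass

@dataclass
class Task:
    program: str      # reference implementation defining solve(x)
    test_input: int

def execute(program: str, test_input: int):
    """Run candidate code and return solve(test_input).
    A real system would sandbox this; exec() is only a placeholder."""
    env: dict = {}
    exec(program, env)
    return env["solve"](test_input)

def self_play_step(propose, solve, update) -> float:
    # 1. PROPOSE: the model invents a task whose ground truth can be
    #    computed by running the reference program it wrote itself.
    task = propose()
    reference = execute(task.program, task.test_input)

    # 2. SOLVE: the model attempts the task independently.
    attempt = solve(task)

    # 3. VERIFY: the code executor, not a human label, decides the reward.
    try:
        reward = 1.0 if execute(attempt, task.test_input) == reference else 0.0
    except Exception:
        reward = 0.0

    # 4. LEARN: reinforce both the proposer and solver roles (e.g. with RL).
    update(task, attempt, reward)
    return reward

# Stub demo: a "model" that proposes doubling and solves it a different way.
if __name__ == "__main__":
    t = Task(program="def solve(x): return 2 * x", test_input=21)
    r = self_play_step(lambda: t,
                       lambda task: "def solve(x): return x + x",
                       lambda *args: None)
    print("reward:", r)  # -> 1.0
```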

Sources:
https://arxiv.org/abs/2505.03335
https://github.com/LeapLabTHU/Absolut…

Thanks to Tavus for sponsoring this video. Try Tavus for free https://tavus.plug.dev/T4AQw5K

0:00 Absolute Zero intro
0:50 Traditional methods of training AI models
4:00 Absolute Zero algorithm
5:01 How Absolute Zero Reasoner works
7:19 Types of training tasks
9:00 How good is Absolute Zero
10:47 Tavus
12:11 Adding Absolute Zero to existing models
13:01 Interesting findings
15:43 Uhoh…
16:50 Ablation study
18:15 More interesting findings

Newsletter: https://aisearch.substack.com/
Find AI tools & jobs: https://ai-search.io/
Support: https://ko-fi.com/aisearch

Here’s my equipment, in case you’re wondering:

Dell Precision 5690: https://www.dell.com/en-us/dt/ai-tech…
Nvidia RTX 5000 Ada: https://nvda.ws/3zfqGqS
Mouse/Keyboard: ALOGIC Echelon https://bit.ly/alogic-echelon
Mic: Shure SM7B https://amzn.to/3DErjt1
Audio interface: Scarlett Solo https://amzn.to/3qELMeu

AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms

Large language models (LLMs) are remarkably versatile. They can summarize documents, generate code, or even brainstorm new ideas. And now we’ve expanded these capabilities to target fundamental and highly complex problems in mathematics and modern computing.

Today, we’re announcing AlphaEvolve, an evolutionary coding agent powered by large language models for general-purpose algorithm discovery and optimization. AlphaEvolve pairs the creative problem-solving capabilities of our Gemini models with automated evaluators that verify answers, and uses an evolutionary framework to improve upon the most promising ideas.
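
In outline, the announcement names three ingredients: an LLM that proposes code, automated evaluators that score it, and an evolutionary loop that keeps the most promising candidates. The sketch below shows how those pieces could fit together; it is a minimal sketch, not AlphaEvolve's actual architecture, and llm_propose and evaluate are hypothetical placeholders.

```python
# Minimal sketch of an LLM-guided evolutionary loop: an LLM proposer, an
# automated evaluator, and selection over a population of programs.
# llm_propose() and evaluate() are hypothetical placeholders.
import random
from typing import Callable

def evolve(seed_program: str,
           llm_propose: Callable[[str], str],   # LLM mutates/rewrites code
           evaluate: Callable[[str], float],    # automated, verifiable score
           population_size: int = 8,
           generations: int = 20) -> str:
    population = [(evaluate(seed_program), seed_program)]
    for _ in range(generations):
        # Sample a few candidates and take the fittest as the parent.
        parent = max(random.sample(population, k=min(3, len(population))))[1]
        child = llm_propose(parent)              # creative step: ask the LLM
        try:
            score = evaluate(child)              # verification step
        except Exception:
            continue                             # discard invalid programs
        population.append((score, child))
        # Keep only the highest-scoring candidates.
        population = sorted(population, reverse=True)[:population_size]
    return population[0][1]                      # best program found
```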

AlphaEvolve enhanced the efficiency of Google’s data centers, chip design and AI training processes — including training the large language models underlying AlphaEvolve itself. It has also helped design faster matrix multiplication algorithms and find new solutions to open mathematical problems, showing incredible promise for application across many areas.

Algorithm based on LLMs doubles lossless data compression rates

People store large quantities of data in their electronic devices and transfer some of this data to others, whether for professional or personal reasons. Data compression methods are thus of the utmost importance, as they can boost the efficiency of devices and communications, making users less reliant on cloud data services and external storage devices.

Researchers at the Central China Institute of Artificial Intelligence, Peng Cheng Laboratory, Dalian University of Technology, the Chinese Academy of Sciences, and the University of Waterloo recently introduced LMCompress, a new data compression approach based on large language models (LLMs), such as the model underpinning the AI conversational platform ChatGPT.

Their proposed method, outlined in a paper published in Nature Machine Intelligence, was found to be significantly more powerful than classical data compression algorithms.
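
The principle that links language modeling to compression is Shannon's source-coding bound: an entropy coder can spend about -log2 p(symbol) bits per symbol, so a model that predicts the next symbol well yields short codes. The toy sketch below illustrates only this principle, with a hand-written pattern model standing in for an LLM; it is not LMCompress itself, whose pipeline is described in the paper.

```python
# Toy illustration of the principle behind LLM-based compression: good
# next-symbol prediction means short codes, since an ideal entropy coder
# spends about -log2 p(symbol) bits per symbol. The uniform model stands
# in for a weak baseline, the pattern model for a strong predictor.
import math

def ideal_bits(text: str, prob) -> float:
    """Total bits an ideal entropy coder needs under model `prob`, where
    prob(prefix, ch) returns the model's next-character probability."""
    return sum(-math.log2(prob(text[:i], ch)) for i, ch in enumerate(text))

text = "abababababababab"

uniform = lambda prefix, ch: 1 / 26           # knows nothing about the data
def alternating(prefix, ch):                  # has learned the a/b pattern
    expected = "a" if len(prefix) % 2 == 0 else "b"
    return 0.99 if ch == expected else 0.01 / 25

print(f"uniform model : {ideal_bits(text, uniform):6.1f} bits")      # ~75.2
print(f"pattern model : {ideal_bits(text, alternating):6.1f} bits")  # ~0.2
```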

Smart charging, real cash: Ava wants to pay EV drivers to plug in

Ava Community Energy just rolled out a new program in California that pays EV and plug-in hybrid drivers for charging their cars when electricity on the grid is cleaner and cheaper.

The new Ava SmartHome Charging program, launched in partnership with home energy analytics platform Optiwatt, offers up to $100 in incentives in the first year. And because the program helps shift home charging to lower-cost hours, Ava says drivers could save around $140 a year on their energy bills.

EV and PHEV owners who are Ava customers can download the Optiwatt app for free, connect their vehicle, and let the app handle the rest. The app uses an algorithm to automatically schedule charging when demand is low and more renewable energy is available, typically overnight or during off-peak hours.
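
At heart, the scheduling the article describes is a constrained pick-the-cheapest-hours problem. Below is a minimal sketch assuming hourly price and carbon-intensity forecasts are available; Optiwatt's actual algorithm is not public, and the weighting here is invented for illustration.

```python
# Minimal sketch of off-peak charge scheduling: given hourly forecasts of
# price and grid carbon intensity, pick the cheapest/cleanest hours that
# still deliver the charge the car needs by morning. Illustrative only.

def schedule_charging(prices, carbon, hours_needed, carbon_weight=0.5):
    """prices/carbon: per-hour forecasts (same length); returns hour indices."""
    cost = [p + carbon_weight * c for p, c in zip(prices, carbon)]
    ranked = sorted(range(len(cost)), key=cost.__getitem__)
    return sorted(ranked[:hours_needed])      # charge during these hours

# Example: 8 overnight hours (22:00-06:00); the car needs 3 hours of charge.
prices = [0.32, 0.28, 0.14, 0.11, 0.10, 0.12, 0.22, 0.30]   # $/kWh
carbon = [0.40, 0.35, 0.20, 0.15, 0.12, 0.18, 0.30, 0.38]   # kg CO2/kWh
print(schedule_charging(prices, carbon, hours_needed=3))     # -> [3, 4, 5]
```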

Study Suggests Quantum Entanglement May Rewrite the Rules of Gravity

A new study proposes that quantum information, encoded in entanglement entropy, directly shapes the fabric of spacetime, offering a fresh path toward unifying gravity and quantum mechanics.

Published in Annals of Physics, the paper presents a reformulation of Einstein’s field equations, arguing that gravity is not just a response to mass and energy, but also to the information structure of quantum fields. This shift, if validated, would mark a fundamental transformation in how physicists understand both gravity and quantum computing.

The study's author, Florian Neukart of the Leiden Institute of Advanced Computer Science at Leiden University, who is also Chief Product Officer of Terra Quantum, introduces the concept of an “informational stress-energy tensor” derived from quantum entanglement entropy.
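
Schematically, such a reformulation adds an informational term alongside the ordinary stress-energy in Einstein's field equations,

    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\left(T_{\mu\nu} + T^{(\mathrm{info})}_{\mu\nu}\right),

where T^{(\mathrm{info})}_{\mu\nu} stands for the proposed informational stress-energy tensor sourced by the entanglement entropy of quantum fields. The form shown is generic and illustrative; the paper's precise definition should be taken from the source.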

The Enigmatic Machine: Decoding AI’s Black Box Phenomenon

In the domain of artificial intelligence, human ingenuity has birthed entities capable of feats once relegated to science fiction. Yet within this triumph of creation resides a profound paradox: we have designed systems whose inner workings often elude our understanding. Like medieval alchemists who could transform substances without grasping the underlying chemistry, we stand before our algorithmic progeny with a similar mixture of wonder and bewilderment. This is the essence of the “black box” problem in AI — a philosophical and technical conundrum that cuts to the heart of our relationship with the machines we’ve created.

The term “black box” originates from systems theory, where it describes a device or system analyzed solely in terms of its inputs and outputs, with no knowledge of its internal workings. When applied to artificial intelligence, particularly to modern deep learning systems, the metaphor becomes startlingly apt. We feed these systems data, they produce results, but the transformative processes occurring between remain largely opaque. As Pedro Domingos (2015) eloquently states in his seminal work The Master Algorithm: “Machine learning is like farming. The machine learning expert is like a farmer who plants the seeds (the algorithm and the data), harvests the crop (the classifier), and sells it to consumers, without necessarily understanding the biological mechanisms of growth” (p. 78).

This agricultural metaphor points to a radical reconceptualization in how we create computational systems. Traditionally, software engineering has followed a constructivist approach — architects design systems by explicitly coding rules and behaviors. Yet modern AI systems, particularly neural networks, operate differently. Rather than being built piece by piece with predetermined functions, they develop their capabilities through exposure to data and feedback mechanisms. This observation led AI researcher Andrej Karpathy (2017) to assert that “neural networks are not ‘programmed’ in the traditional sense, but grown, trained, and evolved.”
