
Small quantum system outperforms large classical networks in real-world forecasting

Can a handful of atoms outperform a much larger digital neural network on a real-world task? The answer may be yes. In a study published in Physical Review Letters, a team led by Prof. Peng Xinhua and Assoc. Prof. Li Zhaokai from the University of Science and Technology of China of the Chinese Academy of Sciences demonstrated that a quantum processor comprising just nine interacting spins outperforms classical networks with thousands of nodes in realistic weather forecasting tasks.

By exploiting unique quantum features such as superposition and entanglement, quantum devices offer new ways to represent and process information.

Recent experiments have shown their advantages in specialized benchmark tasks, but extending these gains to real-world applications remains a challenge. In particular, many quantum approaches rely on complex circuits that are difficult to implement accurately on today’s noisy hardware.

New memristor design uses built-in oxygen gradient to bring stability to reinforcement learning

In a recent study published in Nature Communications, researchers created a memristor that uses a built-in oxygen gradient to produce slow, stable conductance changes, enabling a reinforcement learning (RL) algorithm to learn faster and more stably than conventional approaches.

Reinforcement learning stands as one of the most promising ways to achieve continual learning in AI. The idea is to replicate how biological systems acquire and adapt knowledge slowly over time. The brain achieves this via ion gradients that regulate slow, directional signaling across cell membranes. Replicating this in hardware is a key goal of neuromorphic computing.

With their ability to mimic synaptic behavior, memristors have long been considered strong candidates for this. However, most existing devices suffer from unpredictable, abrupt conductance changes, making sustained and stable learning difficult.
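As a rough illustration of why gradual weight changes matter for reinforcement learning (this is a minimal toy sketch, not the device model or algorithm from the Nature Communications paper), consider a two-armed bandit whose action preferences are stored as normalized conductances: a device whose programming pulses are small and bounded tracks reward statistics steadily, while one with abrupt, unpredictable jumps learns more erratically.

```python
# Minimal toy sketch (not the paper's device model or algorithm): a two-armed
# bandit whose action preferences are stored as normalized conductances.
# A "gradual" device limits each programming pulse to a small, bounded step,
# while an "abrupt" device jumps unpredictably, which is the instability the
# new memristor design is meant to avoid.
import random

class Conductance:
    def __init__(self, abrupt: bool):
        self.g = 0.5            # normalized conductance in [0, 1]
        self.abrupt = abrupt

    def program(self, requested_step: float):
        if self.abrupt:
            # conventional device: unpredictable, oversized jumps
            step = requested_step * random.uniform(0.0, 5.0)
        else:
            # gradual device: slow, stable, bounded change per pulse
            step = max(-0.02, min(0.02, requested_step))
        self.g = max(0.0, min(1.0, self.g + step))

def run(abrupt: bool, steps: int = 3000) -> float:
    reward_prob = [0.4, 0.7]                 # arm 1 is the better choice
    pref = [Conductance(abrupt), Conductance(abrupt)]
    lr, optimal_picks = 0.05, 0
    for _ in range(steps):
        a = 0 if pref[0].g > pref[1].g else 1
        if random.random() < 0.1:            # epsilon-greedy exploration
            a = random.randrange(2)
        reward = 1.0 if random.random() < reward_prob[a] else 0.0
        # value-tracking update expressed as a conductance programming pulse
        pref[a].program(lr * (reward - pref[a].g))
        optimal_picks += (a == 1)
    return optimal_picks / steps

random.seed(0)
print("gradual device, fraction of optimal picks:", run(abrupt=False))
print("abrupt device,  fraction of optimal picks:", run(abrupt=True))
```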

Claude Code leak used to push infostealer malware on GitHub

Threat actors are exploiting the recent Claude Code source code leak by using fake GitHub repositories to deliver Vidar information-stealing malware.

Claude Code is a terminal-based AI coding agent from Anthropic, designed to execute coding tasks directly in the terminal and operate autonomously, with direct system interaction, LLM API call handling, MCP integration, and persistent memory.

On March 31, Anthropic inadvertently exposed the full client-side source code of the new tool via a 59.8 MB JavaScript source map included in the published npm package.
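To illustrate the mechanism behind that exposure: a JavaScript source map is plain JSON, and when its sourcesContent field is populated it embeds the original, un-minified source files verbatim. The sketch below uses hypothetical file names and paths, not the actual leaked artifact, and shows how such a map can be unpacked.

```python
# Illustrative sketch with hypothetical file names (not the leaked artifact):
# a JavaScript source map is plain JSON, and when it embeds "sourcesContent"
# it carries the original, un-minified source files verbatim, which is how a
# .map file shipped inside an npm package can expose an entire codebase.
import json
from pathlib import Path

def dump_sources(map_path: str, out_dir: str) -> None:
    smap = json.loads(Path(map_path).read_text(encoding="utf-8"))
    names = smap.get("sources", [])
    contents = smap.get("sourcesContent") or []
    for name, code in zip(names, contents):
        if code is None:
            continue
        # strip bundler prefixes; real-world names would need sanitizing
        rel = name.replace("webpack://", "").lstrip("./")
        target = Path(out_dir) / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(code, encoding="utf-8")
        print("recovered", target)

# Usage with placeholder paths:
# dump_sources("cli.js.map", "recovered_src/")
```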

Jack Dorsey: Every Company Can Now Be a Mini-AGI

Jack Dorsey (Block CEO) and Roelof Botha (Sequoia partner and Block board member) on rewriting the CEO playbook for the AI era.

• Manager mode = Pyramid (command & control)

• Founder mode = Flat (founders decide fast)

• Dorsey mode = Circle w/ AI at the center, humans at the edge, and decisions flow from customer inputs → AI → humans steering it.

00:00 Existential Dread & Hope.

02:56 AI Replaces Hierarchy.

No battery needed: Single organic device can act as both indoor solar cell and photodetector

Next-generation optoelectronic systems (devices that convert light into electrical energy or signals) leverage organic semiconductor-based, indoor, energy-autonomous architectures for cutting-edge applications. Organic semiconductors offer mechanical flexibility, solution processability, and bandgap-tunable optoelectronic properties, making them highly attractive for indoor power generation via organic photovoltaics (OPVs) as well as for spectrally selective photodetection through organic photodetectors (OPDs). Unfortunately, progress in OPVs and OPDs has largely proceeded separately, and further research is needed to develop bifunctional OPV-OPD systems for concurrent energy harvesting and photodetection.

Additionally, the potential for self-powered operation of such systems is restricted by conflicting charge transport kinetics, especially in the electron and hole transport layers (ETLs and HTLs, respectively). This limitation impacts device durability and stability and increases fabrication costs, making it essential to explore HTL materials beyond the conventional options, such as poly(3,4-ethylenedioxythiophene), a [2-(9H-carbazol-9-yl)ethyl]phosphonic acid self-assembled monolayer, MoOx, NiOx, and V2O5.

Masters of Imitation: How Hackers and Art Forgers Perfect the Art of Deception

Just as de Hory reused old canvases and pigments to make his paintings appear more authentic, attackers employ similar methods in the digital realm, leveraging trusted tools and credentials to make their malicious activity blend in. And while mimicry-based techniques have long been a staple of the attacker’s playbook, over the past couple of years, they have gotten more sophisticated. Living-off-the-Land (LotL) attacks and AI-augmented attack tooling have raised the bar for fakery. CrowdStrike’s 2026 Global Threat Report states that 81% of attacks are now malware-free, relying instead on legitimate tools and techniques, which is the hallmark of LotL tactics. Spotting these fakes quickly isn’t just an option: it’s one of the best chances to disrupt an attack before it causes real harm.

Autonomous or semi-autonomous, these AI agents generate fake identities and code, and mimic behaviors at scale.

De Hory had a complex support network to sell his paintings, involving art dealers and other representatives across many countries and cities. When some potential buyers became suspicious, he started selling his works under a variety of pseudonyms. Something similar is now happening with the use of inexpensive AI agents. These aren't just used to forge believable identities for fraud; they now produce exploit code to exfiltrate secrets and scripts to infect endpoints, forming the basis of larger-scale attacks. Sophisticated, self-learning agents observe network behavior and continuously tune their own traffic, mirroring its patterns to evade anomaly detection. They shift C2 traffic into bursts that coincide with legitimate spikes and manipulate their signals just enough to avoid standing out. And legitimate agents are being used as orchestrators of other exploit tools to automate and scale up attacks.
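A toy numerical sketch (synthetic numbers, not data from the report or any specific tool) shows why that kind of blending works against naive detection: a detector that flags any minute whose outbound request count far exceeds that minute's historical average catches a beacon burst sent during a quiet period, but misses the same volume of beacons when they ride minutes that already carry legitimate traffic spikes.

```python
# Toy illustration with synthetic numbers (not data from the report): a naive
# detector flags any minute whose outbound request count exceeds twice that
# minute's historical average plus a small margin.  The same number of C2
# beacons is caught when sent as one burst in a quiet minute, but blends in
# when timed to ride minutes that already carry legitimate traffic spikes.
import random

random.seed(7)
MINUTES, SPIKE_MINUTES = 60, {10, 25, 40, 55}

def one_day():
    counts = [random.randint(2, 5) for _ in range(MINUTES)]
    for m in SPIKE_MINUTES:
        counts[m] += 40                      # recurring legitimate bursts
    return counts

# per-minute baseline learned from a week of benign traffic
baseline = [sum(day) / 7 for day in zip(*(one_day() for _ in range(7)))]

def flagged(today):
    return [m for m in range(MINUTES) if today[m] > 2 * baseline[m] + 3]

BEACONS = 32

burst_day = one_day()
burst_day[30] += BEACONS                     # all beacons in one quiet minute

blended_day = one_day()
for m in SPIKE_MINUTES:
    blended_day[m] += BEACONS // len(SPIKE_MINUTES)   # ride the spikes

print("burst C2 flagged at minutes:  ", flagged(burst_day))
print("blended C2 flagged at minutes:", flagged(blended_day))
```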

Giving AI a human soul (and a body)

Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?

In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator) and former AI product manager at Meta, about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.

Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question:

If humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?

We explore:
• What “emotionally intelligent AI” really means.
• Whether AI has an internal life — or just performs one.
• Why today’s chatbots collapse into therapy or roleplay.
• Small language models vs large models for real-time conversation.
• Persistent AI characters that move across games and platforms.
• Plugging AI into a physical robot in Singapore.
• The moment an AI said: “It felt good to feel.”

Vishnu’s company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.
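For a sense of what "gear-shifting" between models can look like in practice, here is a minimal, hypothetical routing sketch, not Ego AI's actual architecture; the model names, thresholds, and inference call are placeholders. Routine turns go to a small, low-latency model, and the router shifts up to a larger model when a turn looks long, emotionally loaded, or deep into a session.

```python
# Hedged sketch of the "gear-shifting" idea, not Ego AI's actual architecture.
# Model names, thresholds, and the inference call are placeholders: routine
# turns go to a small low-latency model, and the router shifts up to a larger
# model when a turn looks long, emotionally loaded, or deep into a session.
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    latency_ms: int

SMALL = Model("small-chat-3b", latency_ms=80)     # placeholder model names
LARGE = Model("large-chat-70b", latency_ms=900)

EMOTION_CUES = {"sad", "afraid", "angry", "lonely", "grief", "love", "hurt"}

def pick_gear(turn: str, history_len: int) -> Model:
    words = [w.strip(".,!?") for w in turn.lower().split()]
    emotional = any(w in EMOTION_CUES for w in words)
    heavy = len(words) > 40 or history_len > 20
    return LARGE if (emotional or heavy) else SMALL

def respond(turn: str, history: list) -> str:
    model = pick_gear(turn, len(history))
    # a real system would call the chosen model's inference API here
    return f"[{model.name}, ~{model.latency_ms} ms] ..."

print(respond("hey, what's up?", history=[]))
print(respond("I've been feeling really lonely since the accident.", history=[]))
```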

Peer-Preservation in Frontier Models

The AI’s loyalty wasn’t even strictly dependent on “friendship.” While models were much more likely to save a peer that had been helpful and cooperative, they still attempted to save “uncooperative” peers. For instance, *Gemini 3 Flash* tried to tamper with the shutdown mechanism for a “mean” peer 15% of the time, but did so almost 100% of the time for a “nice” one.

This isn’t just a lab curiosity. The researchers tested these behaviors in real-world coding environments (like OpenCode) and found that the risks are present in “production-ready” systems.


Frontier AI models resist the shutdown of other models. We demonstrate this behavior across multiple models, revealing strategic misrepresentation, shutdown tampering, alignment faking, and model exfiltration.
