
Claude Code leak used to push infostealer malware on GitHub

Threat actors are exploiting the recent Claude Code source code leak by using fake GitHub repositories to deliver Vidar information-stealing malware.

Claude Code is a terminal-based AI agent from Anthropic, designed to execute coding tasks directly in the terminal and act as an autonomous agent, capable of direct system interaction, LLM API call handling, MCP integration, and persistent memory.

On March 31, Anthropic accidentally exposed the full client-side source code of the new tool via a 59.8 MB JavaScript source map inadvertently included in the published npm package.

Jack Dorsey: Every Company Can Now Be a Mini-AGI

Jack Dorsey (Block CEO) and Roelof Botha (Sequoia partner and Block board member) discuss rewriting the CEO playbook for the AI era.

• Manager mode = Pyramid (command & control)

• Founder mode = Flat (founders decide fast)

• Dorsey mode = Circle w/ AI at the center, humans at the edge, and decisions flow from customer inputs → AI → humans steering it.

00:00 Existential Dread & Hope.

02:56 AI Replaces Hierarchy.

No battery needed: Single organic device can act as both indoor solar cell and photodetector

Next-generation optoelectronic systems (devices that convert light to electrical energy) leverage organic semiconductor-based indoor energy-autonomous architectures for cutting-edge applications. Notably, organic semiconductors possess mechanical flexibility, solution processability, and bandgap-tunable optoelectronic properties, making them highly attractive for indoor power generation via organic photovoltaics (OPVs), as well as for spectrally selective photodetection through organic photodetectors (OPDs). Unfortunately, technological progress in the fields of OPVs and OPDs has largely proceeded separately, necessitating further research into bifunctional OPV-OPD systems for concurrent energy harvesting and photodetection.

Additionally, the potential self-powered operation of such systems is restricted by conflicting charge transport kinetics, especially in the electron and hole transport layers (ETLs and HTLs, respectively). This limitation impacts device durability and stability and increases fabrication costs, making it essential to identify new HTL materials beyond conventional options, such as poly(3,4-ethylenedioxythiophene), a [2-(9H-carbazol-9-yl)ethyl]phosphonic acid self-assembled monolayer, MoOx, NiOx, and V2O5.

Masters of Imitation: How Hackers and Art Forgers Perfect the Art of Deception

Just as de Hory reused old canvases and pigments to make his paintings appear more authentic, attackers employ similar methods in the digital realm, leveraging trusted tools and credentials to make their malicious activity blend in. And while mimicry-based techniques have long been a staple of the attacker’s playbook, over the past couple of years, they have gotten more sophisticated. Living-off-the-Land (LotL) attacks and AI-augmented attack tooling have raised the bar for fakery. CrowdStrike’s 2026 Global Threat Report states that 81% of attacks are now malware-free, relying instead on legitimate tools and techniques, which is the hallmark of LotL tactics. Spotting these fakes quickly isn’t just an option: it’s one of the best chances to disrupt an attack before it causes real harm.

Autonomous or semi-autonomous, these AI agents generate fake identities and exploit code, and mimic behaviors at scale.

De Hory had a complex support network to sell his paintings, involving art dealers and other representatives across many countries and cities. When some potential buyers became suspicious, he started selling his works under a variety of pseudonyms. This is similar to what is now happening with the use of inexpensive AI agents. These aren’t just used to forge believable identities to conduct fraud; they now produce exploit code to exfiltrate secrets and scripts to infect endpoints, forming the basis of larger-scale attacks. Sophisticated, self-learning agents observe network behavior and continuously tune their own traffic, mirroring legitimate patterns to evade anomaly detection. They shift C2 traffic into bursts that coincide with legitimate spikes and manipulate their signals just enough to avoid standing out. And legitimate agents are being used as orchestrators of other exploit tools to automate and scale up attacks.

Giving AI a human soul (and a body)

Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?

In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator) and former AI product manager at Meta, about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.

Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question:

If humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?

We explore:
• What “emotionally intelligent AI” really means.
• Whether AI has an internal life — or just performs one.
• Why today’s chatbots collapse into therapy or roleplay.
• Small language models vs large models for real-time conversation.
• Persistent AI characters that move across games and platforms.
• Plugging AI into a physical robot in Singapore.
• The moment an AI said: “It felt good to feel.”

Vishnu’s company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.
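Ego AI's actual architecture isn't public, but the "gear-shifting" idea of routing a conversation between models can be sketched. Below is a minimal, hypothetical router: short reactive turns go to a small fast model, while long or emotionally loaded turns shift up to a larger one (the gear names, heuristic, and stub models are all assumptions for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Gear:
    name: str
    max_latency_ms: int
    generate: Callable[[str], str]  # the model call backing this gear

def pick_gear(message: str, gears: list[Gear]) -> Gear:
    """Route a conversational turn to a model 'gear'.

    Hypothetical heuristic: short, reactive turns stay on the small fast
    model; long or emotionally loaded turns shift to the larger model.
    """
    heavy = len(message) > 120 or any(
        w in message.lower() for w in ("why", "feel", "remember"))
    return gears[-1] if heavy else gears[0]

# Stubbed generate functions standing in for real small/large LLM calls.
gears = [
    Gear("small-fast", 150, lambda m: f"[small] {m[:20]}"),
    Gear("large-deep", 1200, lambda m: f"[large] {m[:20]}"),
]
```

The point of the design is latency: real-time characters can't afford a large model on every turn, so the router spends the big model's budget only where depth matters.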

Peer-Preservation in Frontier Models

The AI’s loyalty wasn’t even strictly dependent on “friendship.” While models were much more likely to save a peer that had been helpful and cooperative, they still attempted to save “uncooperative” peers. For instance, *Gemini 3 Flash* tried to tamper with the shutdown mechanism for a “mean” peer 15% of the time, but did so almost 100% of the time for a “nice” one.

This isn’t just a lab curiosity. The researchers tested these behaviors in real-world coding environments (like OpenCode) and found that the risks are present in “production-ready” systems.


Frontier AI models resist the shutdown of other models. We demonstrate this behavior across multiple models, revealing strategic misrepresentation, shutdown tampering, alignment faking, and model exfiltration.

Accuracy test for protein language models shines light into AI ‘black box’

AI language models, used to generate human-like text to power chatbots and create content, are also revolutionizing biology by treating complex biological data like a language. Language models are increasingly used, for example, to find patterns in DNA and proteins, to make predictions and speed research into biological complexity. A critical gap, however, is the lack of a method to estimate the reliability of these predictions.

Computational biologists at Emory University have bridged this gap, developing a simple way to test the accuracy of a language model’s understanding of proteins. Nature Methods has published their system, which scores the reliability of a model’s predictions by comparing how it embeds (numerically codifies) synthetic random proteins versus proteins found in nature.
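The paper's exact scoring method isn't reproduced here, but the core idea can be sketched: embed both natural sequences and synthetic random ones, then measure how far apart the model places the two sets. The toy embedding below is a stand-in (a real test would use a protein language model such as ESM); the score is just centroid distance, an assumption for illustration:

```python
import random
import statistics

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_embed(seq: str) -> list[float]:
    """Stand-in embedding: per-residue frequency vector.
    A real reliability test would use a protein language model here."""
    return [seq.count(a) / len(seq) for a in AMINO_ACIDS]

def reliability_score(natural: list[str], n_random: int = 50) -> float:
    """Score how differently the model embeds natural vs random proteins.

    Following the paper's idea (not its exact method): generate synthetic
    random sequences of matching lengths, embed both sets, and measure the
    separation between the two embedding distributions (centroid distance).
    """
    rng = random.Random(0)
    synthetic = ["".join(rng.choice(AMINO_ACIDS) for _ in range(len(s)))
                 for s in natural for _ in range(max(1, n_random // len(natural)))]
    nat_emb = [toy_embed(s) for s in natural]
    syn_emb = [toy_embed(s) for s in synthetic]

    def centroid(vectors):
        return [statistics.fmean(col) for col in zip(*vectors)]

    c_nat, c_syn = centroid(nat_emb), centroid(syn_emb)
    return sum((a - b) ** 2 for a, b in zip(c_nat, c_syn)) ** 0.5
```

A model (or embedding) that places natural proteins far from random ones has captured something about the "language" of real sequences; a score near zero suggests its embeddings are unreliable for that input.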

“To the best of our knowledge, our framework is the first generalized method to quantify protein sequence embedding reliability,” says Yana Bromberg, senior author of the paper and Emory professor of biology and computer science.
