
AI could make it easier to create bioweapons that bypass current security protocols

Artificial intelligence is transforming biology and medicine by accelerating the discovery of new drugs and proteins and making it easier to design and manipulate DNA, the building blocks of life. But as with most new technologies, there is a potential downside. The same AI tools could be used to develop dangerous new pathogens and toxins that bypass current security checks. In a new study from Microsoft, scientists employed a hacker-style test to demonstrate that AI-generated sequences could evade security software used by DNA manufacturers.

“We believe that the ongoing advancement of AI-assisted design holds great promise for tackling critical challenges in health and the environment, with the potential to deliver overwhelmingly positive impacts on people and society,” the researchers wrote in their paper published in the journal Science. “As with other emerging technologies, however, it is also crucial to proactively identify and mitigate risks arising from novel capabilities.”
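To make the idea of sequence screening concrete, here is a minimal, purely illustrative sketch of the kind of homology-style check DNA synthesis providers run against databases of sequences of concern. The function names, k-mer size, and threshold are assumptions for illustration, not any vendor's actual software; the Microsoft study's point is precisely that AI-redesigned sequences can slip past checks of this general kind.

```python
# Illustrative sketch (not any real screening product): flag a DNA order when
# enough of its k-mers overlap a database of sequences of concern. Real
# biosecurity screening is far more sophisticated (protein-level homology,
# curated databases, human review); every name and number here is assumed.

def kmers(seq: str, k: int = 20) -> set[str]:
    """Return the set of overlapping k-mers in a DNA sequence."""
    seq = seq.upper()
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str, concern_db: list[str], k: int = 20,
                 threshold: float = 0.1) -> bool:
    """Flag an order if a large fraction of its k-mers match any listed sequence."""
    order_kmers = kmers(order_seq, k)
    if not order_kmers:
        return False
    for concern_seq in concern_db:
        shared = order_kmers & kmers(concern_seq, k)
        if len(shared) / len(order_kmers) >= threshold:
            return True  # order flagged for human review
    return False

# Example with made-up sequences:
db = ["ATGGCTAGCTAGGCTTACGATCGATCGGCTAGCTAAGCTT" * 3]
print(screen_order(db[0][:60], db))   # True: fragment of a listed sequence
print(screen_order("ATGC" * 30, db))  # False: unrelated sequence
```

A screen like this catches close copies of known sequences; an AI-redesigned variant with the same function but little sequence similarity is exactly what such exact-match logic can miss.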

All Analysis and Records Withheld on DoD’s Own Released UAP Footage

The Department of Defense (DoD) has denied a Freedom of Information Act (FOIA) request seeking records connected to the review, redaction, and release of a UAP video published by the All-domain Anomaly Resolution Office (AARO) earlier this year.

The request, filed May 19, 2025, sought internal communications, review logs, classification guidance, legal opinions, and technical documentation tied to the public posting of the video titled “Middle East 2024.” The video, showing more than six minutes of infrared footage from a U.S. military platform, was released in May 2025 and remains unresolved by AARO.


Phantom Taurus: New China-Linked Hacker Group Hits Governments With Stealth Malware

“The group takes an interest in diplomatic communications, defense-related intelligence and the operations of critical governmental ministries,” Palo Alto Networks Unit 42 said. “The timing and scope of the group’s operations frequently coincide with major global events and regional security affairs.”

This aspect is particularly revealing, not least because other Chinese hacking groups have also embraced a similar approach. For instance, a new adversary tracked by Recorded Future as RedNovember is assessed to have targeted entities in Taiwan and Panama in close proximity to “geopolitical and military events of key strategic interest to China.”

Phantom Taurus’ modus operandi also stands out due to the use of custom-developed tools and techniques rarely observed in the threat landscape. This includes a never-before-seen bespoke malware suite dubbed NET-STAR. Developed in .NET, the program is designed to target Internet Information Services (IIS) web servers.

Physics-based algorithm enables nuclear microreactors to autonomously adjust power output

A new physics-based algorithm clears a path toward nuclear microreactors that can autonomously adjust power output based on need, according to a University of Michigan-led study published in Progress in Nuclear Energy.

Easily transportable and able to generate up to 20 megawatts of thermal energy for heat or electricity, nuclear microreactors could be useful in remote areas, disaster zones, or even cargo ships, in addition to other applications.

If integrated into an electric grid, nuclear microreactors could provide stable, carbon-free energy, but they must be able to adjust to match shifting demand, a capability known as load following. In large reactors, staff make these adjustments manually, which would be cost-prohibitive in remote areas, imposing a barrier to adoption.
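For intuition about what load following means, the toy sketch below nudges a reactor's power output toward a changing demand signal under a ramp-rate limit. It is a generic control-loop illustration with assumed numbers, not the physics-based algorithm from the Michigan study, which models the reactor's actual dynamics (neutron kinetics, temperature feedback, and so on).

```python
# Toy load-following loop: move output power toward demand each time step,
# limited by a maximum ramp rate. All gains, limits, and the demand profile
# are illustrative assumptions, not values from the Progress in Nuclear
# Energy paper.

def load_follow(demand_mw: list[float], power0_mw: float = 10.0,
                gain: float = 0.5, max_ramp_mw: float = 1.0) -> list[float]:
    """Track a demand profile with a proportional controller and a ramp limit."""
    power = power0_mw
    history = []
    for demand in demand_mw:
        step = gain * (demand - power)                    # proportional response to mismatch
        step = max(-max_ramp_mw, min(max_ramp_mw, step))  # respect the ramp-rate limit
        power += step
        history.append(round(power, 2))
    return history

# Example: demand rises from 10 MW to 15 MW, then drops to 8 MW.
profile = [10, 12, 15, 15, 15, 8, 8, 8]
print(load_follow(profile))  # output climbs, then descends, 1 MW per step at most
```

The ramp limit stands in for the physical and safety constraints on how quickly a reactor can change power; an autonomous controller has to respect those constraints while still tracking demand without operator intervention.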

Sorry Mr. Yudkowsky, we’ll build it and everything will be fine

Review of “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” (2025), by Eliezer Yudkowsky and Nate Soares, with very critical commentary.

I’ve been reading the book “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All” (2025), by Eliezer Yudkowsky and Nate Soares, published last week.

Yudkowsky and Soares present a stark warning about the dangers of developing artificial superintelligence (ASI), defined as artificial intelligence (AI) that vastly exceeds human intelligence. The authors argue that creating such AI using current techniques would almost certainly lead to human extinction and emphasize that ASI poses an existential threat to humanity. They argue that the race to build smarter-than-human AI is not an arms race but a “suicide race,” driven by competition and optimism that ignores fundamental risks.

US firm’s drone conducts strikes with next-gen loitering munition

A new military test has showcased the potential for large drones to serve as motherships for smaller loitering munitions. The concept could gain momentum following the recent air launch of a Switchblade 600 loitering munition (LM) from a General Atomics Block 5 MQ-9A unmanned aircraft system (UAS).

It marked the first time a Switchblade 600 has ever been launched from an unmanned aircraft.

The flight testing took place from July 22–24 at the U.S. Army Yuma Proving Ground test range.

Mo Gawdat on AI, ethics & machine mastery: How Artificial Intelligence will rule the world

Mo Gawdat warns that AI will soon surpass human intelligence and fundamentally change society, but he also believes that with collective action, ethical development, and altruistic leadership, humans can secure a beneficial future and potentially avoid losing control to AI.

## Questions to inspire discussion.

AI’s Impact on Humanity.

🤖 Q: How soon will AI surpass human intelligence? A: According to Mo Gawdat, AI will reach AGI by 2026, with intelligence measured in the thousands compared to humans, making human intelligence effectively irrelevant within three years.

🌍 Q: What potential benefits could AI bring to global issues? A: 12% of world military spending redirected to AI could solve world hunger, provide universal healthcare, and end extreme poverty, creating a potential utopia.

Preparing for an AI-Driven Future.

ASI Risks: Similar premises, opposite conclusions | Eliezer Yudkowsky vs Mark Miller

A debate/discussion on ASI (artificial superintelligence) between Foresight Senior Fellow Mark S. Miller and MIRI founder Eliezer Yudkowsky. Sharing similar long-term goals, they nevertheless reach opposite conclusions about the best strategy.

“What are the best strategies for addressing risks from artificial superintelligence? In this 4-hour conversation, Eliezer Yudkowsky and Mark Miller discuss their cruxes for disagreement. While Eliezer advocates an international treaty that bans anyone from building it, Mark argues that such a pause would make an ASI singleton more likely – which he sees as the greatest danger.”


In the conversation, decision theorist Yudkowsky and computer scientist Miller examine the future of AI, existential risk, and whether alignment is even possible. Topics include AI risk scenarios, coalition dynamics, secure systems like seL4, hardware exploits like Rowhammer, molecular engineering with AlphaFold, and historical analogies like nuclear arms control. They explore superintelligence governance, multipolar vs. singleton futures, and the philosophical challenges of trust, verification, and control in a post-AGI world.

Moderated by Christine Peterson, the discussion seeks the least risky strategy for reaching a preferred state amid superintelligent AI risks. Yudkowsky warns of catastrophic outcomes if AGI is not controlled, while Miller advocates decentralizing power and preserving human institutions as AI evolves.
