
By 2050 we could get “10,000 years of technological progress”

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan? Today’s guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going.

She thinks there’s a meaningful chance we’ll see as much change in the next 23 years as humanity faced in the last 10,000, thanks to the arrival of artificial general intelligence. Ajeya doesn’t reach this conclusion lightly: she’s had a ringside seat to the growth of all the major AI companies for 10 years — first as a researcher and grantmaker for technical AI safety at Coefficient Giving (formerly known as Open Philanthropy), and now as a member of technical staff at METR.

So host Rob Wiblin asked her: is this plan to use AI to save us from AI a reasonable one?

Ajeya agrees that humanity has repeatedly used technologies that create new problems to help solve those problems. After all:
• Cars enabled carjackings and drive-by shootings, but also faster police pursuits.
• Microbiology enabled bioweapons, but also faster vaccine development.
• The internet let lies spread faster, but it sped up fact-checking just as much.

But she also thinks AI will be a much harder case. In her view, the window between AI automating AI research and the arrival of uncontrollably powerful superintelligence could be quite brief — perhaps a year or less. In that narrow window, we’d need to redirect enormous amounts of AI labour away from making AI smarter and towards alignment research, biodefence, cyberdefence, adapting our political structures, and improving our collective decision-making.

The plan might fail just because the idea is flawed at conception: it does sound a bit crazy to use an AI you don’t trust to make sure that same AI benefits humanity.

Researcher skeptical of ‘Havana syndrome’ tested secret weapon on himself

Working in strict secrecy, a government scientist in Norway built a machine capable of emitting powerful pulses of microwave energy and, in an effort to prove such devices are harmless to humans, in 2024 tested it on himself. He suffered neurological symptoms similar to those of “Havana syndrome,” the unexplained malady that has struck hundreds of U.S. spies and diplomats around the world.

The bizarre story, described by four people familiar with the events, is the latest wrinkle in the decade-long quest to find the causes of Havana syndrome, whose sufferers experience long-lasting effects including cognitive challenges, dizziness and nausea. The U.S. government calls the events Anomalous Health Incidents (AHIs).

The secret test in Norway has not been previously reported. The Norwegian government told the CIA about the results, two of the people said, prompting at least two visits in 2024 to Norway by Pentagon and White House officials.

The CIA investigated a Norwegian government experiment with a pulsed-energy machine, in which a researcher built and tested a “Havana syndrome” device on himself.

Overtime with Bill Maher: Jonathan Haidt, Stephanie Ruhle, H.R. McMaster (HBO)

Artificial intelligence is rapidly advancing to the point where it may be able to write its own code, potentially leading to significant job displacement, societal problems, and concerns about unregulated use in areas like warfare.

## Questions to inspire discussion.

Career Adaptation.

🎯 Q: How should workers prepare for AI’s impact on employment? A: 20% of jobs, including coding, medical, consulting, finance, and accounting roles, will be affected in the next 5 years, requiring workers to actively learn and use large language models to enhance productivity or risk being left behind in the competitive landscape.

Economic Policy.

📊 Q: What systemic response is needed for AI-driven job displacement? A: Government planning is essential to manage massive economic transitions and job losses as AI’s exponential growth reaches a tipping point, extending beyond manufacturing into white-collar professions across multiple sectors.

The Frontier Labs War: Opus 4.6, GPT 5.3 Codex, and the Super Bowl Ads Debacle

## Questions to inspire discussion.

AI Model Performance & Capabilities.

🤖 Q: How does Anthropic’s Opus 4.6 compare to GPT-5.2 in performance?

A: Opus 4.6 outperforms GPT-5.2 by 144 ELO points while handling 1M tokens, and is now in production with recursive self-improvement capabilities that allow it to rewrite its entire tech stack.
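
For context on the size of that gap: under the standard Elo convention (a logistic curve on a 400-point scale, which most LLM leaderboards follow), a 144-point lead implies roughly a 70% expected head-to-head win rate. A minimal sketch, assuming the reported figure uses that standard scale:

```python
# Expected head-to-head win rate implied by an Elo rating gap,
# using the standard logistic formula E = 1 / (1 + 10^(-diff / 400)).
def elo_expected_score(rating_diff: float) -> float:
    """Probability the higher-rated model is preferred (draws count as half)."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

print(f"{elo_expected_score(144):.1%}")  # ~69.6%
```

So a 144-point lead corresponds to roughly a 70/30 split in head-to-head preference, if the leaderboard uses the usual 400-point scale.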

🔧 Q: What real-world task demonstrates Opus 4.6’s agent swarm capabilities?

A: An agent swarm created a C compiler in Rust for multiple architectures in weeks for $20K, a task that would take humans decades, demonstrating AI’s ability to collapse timelines and costs.

🐛 Q: How effective is Opus 4.6 at finding security vulnerabilities?

The Singularity: Everyone’s Certain. Everyone’s Guessing

The Technological Singularity is the most overconfident idea in modern futurism: a prediction about the point where prediction breaks. It’s pitched like a destination, argued like a religion, funded like an arms race, and narrated like a movie trailer — yet the closer the conversation gets to specifics, the more it reveals something awkward and human. Almost nobody is actually arguing about “the Singularity.” They’re arguing about which future deserves fear, which future deserves faith, and who gets to steer the curve when it stops looking like a curve and starts looking like a cliff.

The Singularity begins as a definitional hack: a word borrowed from physics to describe a future boundary condition — an “event horizon” where ordinary forecasting fails. I. J. Good — British mathematician and early AI theorist — framed the mechanism as an “intelligence explosion,” where smarter systems build smarter systems and the loop feeds on itself. Vernor Vinge — computer scientist and science-fiction author — popularized the metaphor that, after superhuman intelligence, the world becomes as unreadable to humans as the post-ice-age world would have been to a trilobite.

In my podcast interviews, the key move is recognizing that “Singularity” isn’t one claim — it’s a bundle. Gennady Stolyarov II — transhumanist writer and philosopher — rejects the cartoon version: “It’s not going to be this sharp delineation between humans and AI that leads to this intelligence explosion.” In his framing, it’s less “humans versus machines” than a long, messy braid of tools, augmentation, and institutions catching up to their own inventions.

DPRK Operatives Impersonate Professionals on LinkedIn to Infiltrate Companies

“Always validate that accounts listed by candidates are controlled by the email they provide,” Security Alliance said. “Simple checks like asking them to connect with you on LinkedIn will verify their ownership and control of the account.”

The disclosure comes as the Norwegian Police Security Service (PST) issued an advisory, stating it’s aware of “several cases” over the past year where Norwegian businesses have been impacted by IT worker schemes.

“The businesses have been tricked into hiring what are likely North Korean IT workers in home-office positions,” PST said last week. “The salary income North Korean employees receive through such positions probably goes to finance the country’s weapons and nuclear weapons programs.”

Germany warns of Signal account hijacking targeting senior figures

Germany’s domestic intelligence agency is warning of suspected state-sponsored threat actors targeting high-ranking individuals in phishing attacks via messaging apps like Signal.

The attacks combine social engineering with legitimate features to steal data from politicians, military officers, diplomats, and investigative journalists in Germany and across Europe.

The security advisory is based on intelligence collected by the Federal Office for the Protection of the Constitution (BfV) and the Federal Office for Information Security (BSI).

Elon Musk — “In 36 months, the cheapest place to put AI will be space”

How Elon plans to launch a terawatt of GPUs into space.

Elon Musk plans to launch a massive 1 terawatt of GPU computing power into space to advance AI and robotics and make humanity multi-planetary, while ensuring responsible use and production.

## Questions to inspire discussion.

Space-Based AI Infrastructure.

Q: When will space-based data centers become economically superior to Earth-based ones? A: Space data centers will be the most economically compelling option in 30–36 months due to 5x more effective solar power (no batteries needed) and regulatory advantages in scaling compared to Earth.

☀️ Q: How much cheaper is space solar compared to ground solar? A: Space solar is 10x cheaper than ground solar because it requires no batteries and is 5x more effective, while Earth-based scaling faces tariffs and land/permit issues (a rough cost sketch follows after this list).

Q: What solar production capacity are SpaceX and Tesla planning? A: SpaceX and Tesla plan to produce 100 GW/year of solar cells for space, manufacturing from raw materials to finished cells in-house.
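
To see how the 5x effectiveness claim and the 10x cost claim can fit together, here is a back-of-the-envelope levelized-cost sketch in Python. Every input (capacity factors, panel and battery capex, system lifetime) is an illustrative assumption chosen for the sketch, not a figure from Musk’s remarks.

```python
# Back-of-the-envelope levelized cost: total capex spread over the
# energy delivered across the system lifetime. All numbers below are
# illustrative assumptions, not figures from the interview.

HOURS_PER_YEAR = 8760

def cost_per_kwh(capex_per_kw: float, capacity_factor: float,
                 battery_capex_per_kw: float = 0.0,
                 lifetime_years: float = 20.0) -> float:
    """Naive $/kWh: (panel + battery capex) / lifetime energy output."""
    lifetime_kwh = capacity_factor * HOURS_PER_YEAR * lifetime_years
    return (capex_per_kw + battery_capex_per_kw) / lifetime_kwh

# Ground solar: ~20% capacity factor (night, weather, latitude), plus
# battery storage assumed necessary for round-the-clock delivery.
ground = cost_per_kwh(capex_per_kw=1000, capacity_factor=0.20,
                      battery_capex_per_kw=1200)

# Space solar: near-continuous sunlight (~5x the capacity factor) and
# no batteries; a launch/hardware premium is folded into the capex.
space = cost_per_kwh(capex_per_kw=1100, capacity_factor=0.99)

print(f"ground: ${ground:.4f}/kWh  space: ${space:.4f}/kWh  "
      f"ratio: {ground / space:.1f}x")  # ratio comes out near 10x
```

With these assumed inputs the ratio lands near 10x, showing how a roughly 5x capacity-factor advantage plus eliminated battery cost could compound to the claimed figure; a real model would also need launch costs, financing, degradation, and downlink, which this sketch ignores.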

SYRIA UAP 2021 : Military-Filmed Footage / Apparent Instantaneous Acceleration

Investigative journalists Jeremy Corbell and George Knapp have obtained and are revealing, for the first time, military-filmed footage of a UAP (Unidentified Anomalous Phenomenon), officially documented and cataloged within United States Intelligence Community investigations and examinations — as demonstrating “instantaneous acceleration” — one of the six signature observables associated with UAP flight performance. The official designation of UAP was established by the United States Intelligence Community and the Department of War, and is currently maintained.

DATE — 2021

LOCATION — Syria / imaged from the border of Jordan (32°05’39.2”N, 36°53’54.4”E)

IMAGING TYPE — Thermographic / Forward Looking Infrared (FLIR)

PLATFORM — MQ-9 Reaper / Multi-Spectral Targeting System (MTS-B)

EVENT DESCRIPTION — Filmed by a platform operating under the direction of the United States Air Force. The UAP was observed and actively tracked — the Reaper established a weapons-quality lock. The UAP appeared to demonstrate abrupt directional changes, instantaneous acceleration, and intelligent control. The absence of traditional propulsion or thermal signatures during these maneuvers, as well as the object’s shape and acceleration, was noted in documentation. Origin, intent, and capabilities remain unknown.
