Colombia’s government on Friday announced an expedition to recover items of “incalculable value” from the wreck of the legendary San Jose galleon, which sank in 1708 laden with gold, silver, and emeralds estimated to be worth billions of dollars. The 316-year-old wreck, often called the “holy grail” of shipwrecks, has been controversial because it is both an archaeological and an economic treasure.

Culture Minister Juan David Correa told AFP that more than eight years after the discovery of the wreck off Colombia’s coast, an underwater robot would be sent to recover some of its bounty.

Superintelligent AI might solve all the world’s problems. It could cure cancer, eliminate human aging, and create a world of abundance for all.

Superintelligent AI might also prove completely uncontrollable and destroy humanity, whether intentionally or as mere collateral damage in the pursuit of other goals.

These clashing viewpoints on the promise and peril of peak AI sit at the heart of the battle between techno-optimists and doomsayers: accelerationists vs. doomers.

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Irene Solaiman began her career in AI as a researcher and public policy manager at OpenAI, where she led a new approach to the release of GPT-2, a predecessor to ChatGPT. After serving as an AI policy manager at Zillow for nearly a year, she joined Hugging Face as the head of global policy. Her responsibilities there range from building and leading company AI policy globally to conducting socio-technical research.

Solaiman also advises the Institute of Electrical and Electronics Engineers (IEEE), the professional association for electrical and electronics engineering, on AI issues, and is a recognized AI expert at the intergovernmental Organization for Economic Co-operation and Development (OECD).

Here’s my latest opinion piece, just out in Newsweek. Check it out! The Lifeboat Foundation gets a mention.

We need to remember the universal distress we all felt when the world started to shut down in March 2020: when not enough ventilators and hospital beds could be found; when food shelves were bare and supplies were scarce; when no COVID-19 vaccines existed. We need to remember because COVID is just one of many existential risks that can appear out of nowhere and halt life as we know it.

Naturally, I’m glad that the world has carried on with its head held high after the pandemic, but I’m also worried that more people didn’t take to heart the longer-term philosophical lesson: human and earthly life is highly tenuous. The best, most practical way to shield ourselves from other existential risks is to prepare for them ahead of time.

That means creating vaccines for diseases even when no dire need is imminent. That means continuing to denuclearize militaries regardless of social conflicts. That means granting astronomers billions of dollars to scan the skies for planet-killer asteroids. That means building safeguards into AI and keeping it far from military munitions.