At a San Diego laboratory, four women do the painstaking work of preserving cells amid a growing extinction crisis.
Star Trek is the most popular and longest-running sci-fi franchise in American history. Created by Gene Roddenberry, Star Trek follows the various crews of Starfleet on their missions across the galaxy.
This presentation showcases the inspiration behind Star Trek, traces how the Federation came to be, examines the post-scarcity economy of its future Earth society, and explores the philosophies of the major alien cultures shown throughout the series.
00:00 — Intro.
09:38 — The Augments.
14:06 — World War III.
15:13 — Post-Atomic Horror.
15:45 — First Contact.
17:47 — The Vulcans.
23:30 — Post-Scarcity Economy.
33:24 — The Federation.
45:22 — The Maquis.
48:26 — The Romulans | The Cardassians | The Klingons.
52:00 — The Ferengi.
1:00:00 — The Dominion.
1:06:08 — The Borg.
1:13:34 — Conclusion.
1:16:01 — Outro.
New research shows that nasal drops of neuropeptide Y trigger extinction of fear memories in an animal model of PTSD.
The probe “performed as predicted” during the first of seven close approaches to the sun on its way to Apophis, NASA said.
Here’s my latest opinion piece for Newsweek, just out. Check it out! The Lifeboat Foundation is mentioned.
We need to remember that universal distress we all felt when the world started to shut down in March 2020: when not enough ventilators and hospital beds could be found, when store shelves were bare and supplies were scarce, when no COVID-19 vaccines existed. We need to remember because COVID is just one of many existential risks that can appear out of nowhere and halt life as we know it.
Naturally, I’m glad that the world has carried on with its head held high after the pandemic, but I’m also worried that more people haven’t taken to heart the longer-term philosophical view that human and earthly life is highly tenuous. The best, most practical way to guard against other existential risks is to prepare for them ahead of time.
That means creating vaccines for diseases even when no dire need is imminent. That means continuing to denuclearize our militaries regardless of social conflicts. That means granting astronomers billions of dollars to scan the skies for planet-killer asteroids. And that means building safeguards into AI and keeping it far from military munitions.
If we don’t take these steps now, whether through government or private action, it could be far too late by the time a global threat emerges. We must treat existential risk as the threat it is: a killer of the human species and the planet, the potential end of everything we know.
There are plenty of life-friendly stellar systems in the Universe today. But at some point in the far future, life’s final extinction will occur.
When the Chicxulub impactor, a six-mile-wide asteroid, struck Earth 66 million years ago, the dinosaurs had no warning.
If an asteroid that size hit Earth today, a shock wave two million times more powerful than a hydrogen bomb would flatten forests and trigger tsunamis. A seismic pulse equal to a magnitude 10 earthquake would crumble cities.
And long after the impact, a cloud of hot dust, ash, and steam would blot out the sun, plunging the Earth into freezing cold.
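For a rough sense of the energy involved, here is a minimal back-of-envelope sketch in Python. Only the six-mile (~10 km) diameter comes from the text above; the density (~2,600 kg/m³), impact speed (~20 km/s), and 50 Mt hydrogen-bomb yield are assumed typical values, so treat the output as an order-of-magnitude estimate only.

```python
# Back-of-envelope estimate of the kinetic energy of a Chicxulub-scale impact.
# Assumptions (not from the article): stony density ~2,600 kg/m^3,
# impact speed ~20 km/s, "hydrogen bomb" taken as a 50 Mt device.

import math

DIAMETER_M = 10_000           # ~six miles, per the blurb above
DENSITY_KG_M3 = 2_600         # assumed stony-asteroid density
SPEED_M_S = 20_000            # assumed impact speed (~20 km/s)
JOULES_PER_MEGATON = 4.184e15 # TNT equivalent
HYDROGEN_BOMB_MT = 50         # assumed large thermonuclear yield

radius = DIAMETER_M / 2
mass = DENSITY_KG_M3 * (4 / 3) * math.pi * radius**3   # kg
energy_j = 0.5 * mass * SPEED_M_S**2                   # kinetic energy, joules
energy_mt = energy_j / JOULES_PER_MEGATON              # megatons of TNT

print(f"Mass:   {mass:.2e} kg")
print(f"Energy: {energy_j:.2e} J  (~{energy_mt:.1e} Mt TNT)")
print(f"~{energy_mt / HYDROGEN_BOMB_MT:.1e} times a {HYDROGEN_BOMB_MT} Mt hydrogen bomb")
```

Under these assumptions the energy comes out to roughly 10^23 joules, tens of millions of megatons of TNT, which is on the order of a million times the yield of the largest hydrogen bombs and so broadly consistent with the comparison above.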
J. Robert Oppenheimer’s grandson is among the star-studded signatories of a new open letter about the dangers artificial intelligence poses to the planet.
The letter, issued by The Elders, the group founded by Nelson Mandela, in conjunction with the Future of Life Institute, calls on global decision-makers to “show long-view leadership on existential threats,” including “ungoverned AI” and nuclear weapons.
Charles Oppenheimer, who founded the Oppenheimer Project to continue his grandfather’s mission of tempering scientific progress with “international cooperation and unity,” was joined by hundreds of other signatories, including British billionaire Richard Branson, AI pioneer Geoffrey Hinton, writer Ann Druyan (Carl Sagan’s widow), and musician Peter Gabriel. In the letter, they warn that the world “is in grave danger” as we face down the perils of AI.
The aliens haven’t contacted us because they have uploaded themselves into digital form, where they live forever and create simulated universes to inhabit, or because they have uploaded themselves into femtotech-level computational substrates that could be all around us.
Is Earth impossible? An exploration of the Impossible Earth Hypothesis and its implications for science and existence.
A Halloween-eve exploration of two of the spookiest solutions to the Fermi Paradox: the Dark Forest Hypothesis and the Berserker Hypothesis.