Archive for the ‘existential risks’ category: Page 23

Jun 11, 2023

Oppenheimer — with Robert J. Sawyer

Posted in categories: cryptocurrencies, existential risks, military, nuclear energy, robotics/AI

Science Fiction author Robert J. Sawyer talks about Oppenheimer and about his Alternate History book: The Oppenheimer Alternative.

Where to find ‘The Oppenheimer Alternative’ book?
Robert J. Sawyer’s website: https://sfwriter.com.

Jun 10, 2023

Europe’s most dangerous ‘supervolcano’ could be creeping toward eruption, scientists warn

Posted in categories: asteroid/comet impacts, existential risks

Italy’s Campi Flegrei is showing some troubling early warning signs, but scientists caution that its eruption is far from certain.

Jun 8, 2023

The Y Chromosome Is Vanishing. A New Sex Gene Could Be The Future of Men

Posted in categories: existential risks, sex

The sex of human and other mammal babies is decided by a male-determining gene on the Y chromosome. But the human Y chromosome is degenerating and may disappear in a few million years, leading to our extinction unless we evolve a new sex gene.

The good news is two branches of rodents have already lost their Y chromosome and have lived to tell the tale.

A recent paper in Proceedings of the National Academy of Sciences shows how the spiny rat has evolved a new male-determining gene.

Jun 5, 2023

The rise of AI: ‘AI doomsday’ or the best thing since sliced bread?

Posted in categories: existential risks, robotics/AI, security

A raft of industry experts have given their views on the likely impact of artificial intelligence on humanity in the future. The responses are unsurprisingly mixed.

The Guardian has released an interesting article on the potential socioeconomic and political impact of the ever-increasing rollout of artificial intelligence (AI) across society. The paper asked various experts in the field, and their responses were, not surprisingly, a mixed bag of doom, gloom, and hope.

Jun 1, 2023

Terrifying New Use Of AI Brings Humanity One Step Closer To Extinction

Posted in categories: existential risks, robotics/AI

May 31, 2023

Geneticists discover hidden ‘whole genome duplication’ that may explain why some species survived mass extinctions

Posted in categories: biotech/medical, evolution, existential risks, genetics

Geneticists have unearthed a major event in the ancient history of sturgeons and paddlefish that has significant implications for the way we understand evolution. They have pinpointed a previously hidden “whole genome duplication” (WGD) in the common ancestor of these species, which seemingly opened the door to genetic variations that may have conferred an advantage around the time of a major mass extinction some 200 million years ago.

The big-picture finding suggests that there may be many more overlooked, shared WGDs in other species before periods of extreme environmental upheaval throughout Earth’s tumultuous history.

The research, led by Professor Aoife McLysaght and Dr. Anthony Redmond from Trinity College Dublin’s School of Genetics and Microbiology, has just been published in Nature Communications.

May 30, 2023

AI could cause human extinction, say tech leaders

Posted in categories: existential risks, robotics/AI

As apocalyptic warnings go, today is right up there. Some of the world’s most influential tech geniuses and entrepreneurs say AI risks the extinction of humanity.

Having lobbed the ball firmly into the court of global leaders and lawmakers, the question is: will they have any idea what to do about it?

May 30, 2023

Top AI researchers and CEOs warn against ‘risk of extinction’ in 22-word statement

Posted in categories: biotech/medical, existential risks, robotics/AI

It’s another high-profile warning about AI risk that will divide experts. Signatories include Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman.

A group of top AI researchers, engineers, and CEOs have issued a new warning about the existential threat they believe that AI poses to humanity.

The 22-word statement, trimmed short to make it as broadly acceptable as possible, reads as follows: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

May 30, 2023

We Are (Probably) Safe From Asteroids For 1,000 Years, Say Scientists

Posted in categories: asteroid/comet impacts, existential risks

When will an asteroid hit Earth and wipe us out? Not for at least 1,000 years, according to a team of astronomers. Probably.

Either way, you should get to know an asteroid called 7482 (1994 PC1), the only one known whose orbital path will consistently cross Earth’s for the next millennium, and thus the one with the largest probability of a “deep close encounter” with us, specifically in 502 years. Possibly.

Published on a preprint archive and accepted for publication in The Astronomical Journal, the paper states that astronomers have found almost all of the kilometer-sized asteroids. There are a little under 1,000 of them.

May 24, 2023

Whole Brain Emulation

Posted in categories: existential risks, mapping, neuroscience, robotics/AI

I had an amazing experience at the Foresight Institute’s Whole-Brain Emulation (WBE) Workshop at a venue near Oxford! For more information and a list of participants, see: https://foresight.org/whole-brain-emulation-workshop-2023/. I had the opportunity to work within a group of some of the most brilliant, ambitious, and visionary people I’ve ever encountered on the quest to recreate the human brain in a computer. We also discussed in depth the existential risks of upcoming artificial superintelligence and how to mitigate those risks, perhaps with the aid of WBE.

My subgroup focused on exploring the challenge of human connectomics (mapping all of the neurons and synapses in the brain).
WBE is a potential technology for generating software intelligence that is human-aligned simply by virtue of being based directly on human brains. Past discussions have generally assumed a fairly long timeline to WBE, while AGI timelines carried broad uncertainty. There were also concerns that the neuroscience behind WBE might boost AGI capability development without helping safety, although no consensus developed. Recently, many people have updated their AGI timelines toward earlier development, raising safety concerns. That has led some to consider whether WBE development could be significantly sped up: a differential-technology-development re-ordering of which technologies arrive first, which might lessen the risk of unaligned AGI through the presence of aligned software intelligence.
