Blog

Archive for the ‘existential risks’ category

Jun 1, 2018

Synthetic molecule super-effective against superbugs

Posted by in categories: biotech/medical, existential risks, robotics/AI

Forget zombies or killer robots – the most likely doomsday scenario in the near future is the threat of superbugs. Bacteria are evolving resistance to our best antibiotics at an alarming rate, so developing new ones is a crucial area of study. Now, inspired by a natural molecule produced by marine microorganisms, researchers at North Carolina State University have synthesized a new compound that shows promising antibacterial properties against resistant bugs.

Decades of overuse and overprescription of antibiotics have led to more and more bacteria becoming resistant to them, and the situation is so dire that a recent report warned they could be killing up to 10 million people a year by 2050. Worse still, the bugs seem to be on schedule, with the ECDC reporting that our last line of defense has already begun to fail in large numbers.

Read more

May 28, 2018

IBM’s head of Watson likes Elon Musk but ‘hates’ A.I. scaremongering

Posted by in categories: Elon Musk, existential risks, robotics/AI

IBM’s David Kenny suggested that artificial intelligence doomsday warnings from the likes of Tesla CEO Elon Musk were overblown.

Read more

May 25, 2018

Blue Origin’s Jeff Bezos advocates a return to the moon and calls for collaborative effort in space

Posted by in categories: existential risks, space travel

Staying on Earth “is not necessarily extinction, but the alternative is stasis,” Bezos said during an onstage discussion Friday night with Geekwire journalist Alan Boyle at the National Space Society’s International Space Development Conference in Los Angeles.

Read more

May 14, 2018

What If an Asteroid Hit the Earth?

Posted by in categories: asteroid/comet impacts, existential risks

Would we suffer the same fate as the dinosaurs?

Read more

May 10, 2018

How Frightened Should We Be of A.I.?

Posted by in categories: existential risks, robotics/AI, transportation

Many people in tech point out that artificial narrow intelligence, or A.N.I., has grown ever safer and more reliable—certainly safer and more reliable than we are. (Self-driving cars and trucks might save hundreds of thousands of lives every year.) For them, the question is whether the risks of creating an omnicompetent Jeeves would exceed the combined risks of the myriad nightmares—pandemics, asteroid strikes, global nuclear war, etc.—that an A.G.I. could sweep aside for us.


Thinking about artificial intelligence can help clarify what makes us human—for better and for worse.

Read more

May 9, 2018

Where are the aliens? Solutions to Fermi Paradox

Posted by in categories: astronomy, cosmology, existential risks, first contact, lifeboat

The Fermi Paradox poses an age-old question: With light and radio waves skipping across the galaxy, why has there never been any convincing evidence of other life in the universe—or at least another sufficiently advanced civilization that uses radio? After all, evidence of intelligent life requires only that some species modulates a beacon (intentionally or unintentionally) in a fashion that is unlikely to be caused by natural phenomena.

The Fermi Paradox has always fascinated me, perhaps because SETI spokesperson Carl Sagan was my astronomy professor at Cornell and—coincidentally—Sagan and Steven Spielberg dedicated a SETI radio telescope at Oak Ridge Observatory around the time that I moved from Ithaca to New England. It’s a five-minute drive from my new home. In effect, two public personalities followed me to Massachusetts.

What is SETI?

In November of 1984, SETI was chartered as a non-profit corporation with a single goal. In seeking to answer the question “Are we alone?”, it fuels the Drake equation by persuading observatories to devote radio telescope time to the search for extraterrestrial life and by establishing an organized, systematic approach to partitioning, prioritizing, gathering and mining signal data.
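The Drake equation mentioned above multiplies a chain of factors to estimate the number of detectable civilizations in the galaxy. A minimal sketch is below; the parameter values plugged in are purely illustrative placeholders, not estimates endorsed by SETI or this post.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L

    R_star: average rate of star formation (stars/year)
    f_p:    fraction of stars with planets
    n_e:    habitable planets per star with planets
    f_l:    fraction of those on which life arises
    f_i:    fraction of those that develop intelligence
    f_c:    fraction of those that emit detectable signals
    L:      years such a civilization remains detectable
    """
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Hypothetical inputs for illustration only.
N = drake(R_star=1.0, f_p=0.5, n_e=2, f_l=0.5, f_i=0.1, f_c=0.1, L=1000)
print(N)  # about 5 detectable civilizations under these assumptions
```

Because the biological and sociological factors are so uncertain, plausible inputs yield answers ranging from far below one to many thousands, which is precisely why the Fermi Paradox remains open.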

Continue reading “Where are the aliens? Solutions to Fermi Paradox” »

Apr 30, 2018

Humanity’s ‘Final Institute’ (Part 1)

Posted by in categories: existential risks, materials

Concern for the future of humanity is becoming more urgent as exponential technology brings us to the brink of the most fragile period in human history. Existential risk must be contemplated proactively rather than reactively, especially if we intend to ensure our continuance into the far future; a sort of insurance policy for humanity. But what is mankind really trying to do? We are commonly advised to begin with the end in mind, yet there doesn’t seem to be a legitimate end goal beyond a desperate cling to survival. Living without a purpose is merely existing, which seems to be the current state of our species. What are we existing for?

If we are referring to the whole of mankind rather than any specific individual, it can be commonly agreed that we have no concrete answer for why we are here, or why anything should exist at all. This is partly because we lack a complete understanding of what the universe actually is and why things behave the way they do. Since that remains unknown, the relevance of everything we do is, by definition, also unknown. The logical progression would therefore begin with acquiring the information necessary to discover the nature that existence seems to abide by. Only then can we assemble the right question about the phenomenon we refer to as the “universe.”

By starting with this end question in mind, we can identify, to the best of our current knowledge, the information we would need before answering it. Whether or not it seems possible, we must treat it as necessary for the time being. This would likely produce a series of questions that push the boundaries of our scientific and philosophical capabilities, and the process would certainly change as new breakthroughs advance our understanding of the universe. The fact remains, however: it would be the most efficient direction relative to our maximum capability.

Continue reading “Humanity’s ‘Final Institute’ (Part 1)” »

Apr 26, 2018

North Korea’s Nuclear Test Site Has Collapsed: Chinese Scientists

Posted by in categories: existential risks, nuclear weapons

Chinese scientists urge the authorities to monitor and prevent potential radioactive leakage.

Read more

Apr 23, 2018

Endless Energy and Black Hole Bombs

Posted by in categories: cosmology, existential risks

A spinning black hole could provide enough energy to power civilization for trillions of years — and create the biggest bomb known to the universe. Using the rotation of a black hole to supercharge electromagnetic waves could create massive amounts of energy or equally massive amounts of destruction. Kurzgesagt explains what it would take to harness a black hole and the potential risks of the process.
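The scale involved can be made concrete with a back-of-the-envelope calculation. For a maximally spinning (extremal Kerr) black hole, the Penrose process can in principle extract up to a fraction (1 − 1/√2), roughly 29%, of the hole's mass-energy. The sketch below applies that standard limit to a hypothetical 10-solar-mass black hole; the mass chosen is illustrative, not from the video.

```python
import math

C = 299_792_458.0   # speed of light, m/s
M_SUN = 1.989e30    # one solar mass, kg

def max_extractable_energy(mass_kg):
    """Upper bound on rotational energy extractable from an extremal
    Kerr black hole: E = (1 - 1/sqrt(2)) * m * c^2."""
    return (1 - 1 / math.sqrt(2)) * mass_kg * C**2

E = max_extractable_energy(10 * M_SUN)  # hypothetical 10-solar-mass hole
print(f"{E:.2e} J")  # on the order of 5e47 joules
```

For comparison, current worldwide energy consumption is around 6e20 J per year, so even this modest stellar-mass example could, in principle, power civilization for far longer than the current age of the universe, which is the point Kurzgesagt dramatizes.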

Read more

Apr 21, 2018

The Geological Record, A Possible Solution For Fermi’s Paradox And The Future Of Humankind

Posted by in categories: alien life, existential risks

Speculating about the geological record of a technologically advanced civilization may help in the search for alien societies, and it poses an important question about our own future on Earth.

Read more
