Archive for the ‘existential risks’ category: Page 22

Jul 26, 2023

Map shows how you would be affected by a nuclear bomb

Posted in categories: existential risks, military

A rather macabre interactive map demonstrates how the area you live in would be affected if a nuclear bomb were to hit it. Nuclear war is as big a talking point these days as it has ever been. With Russia and Ukraine still at war, Russian President Vladimir Putin has made some not-so-veiled threats about using nuclear weapons.

Jul 26, 2023

Meteor which exploded over The Atlantic had force comparable to Hiroshima bomb

Posted in categories: asteroid/comet impacts, existential risks, military

A meteor has exploded over the Atlantic Ocean with the force of the atomic bomb dropped on Hiroshima. An asteroid impact is one of the ways civilisation as we know it could end, sending the human race the way of the dinosaurs. It’s a terrifying prospect, and the film Don’t Look Up, with Jennifer Lawrence and Leonardo DiCaprio, really didn’t help matters with its depiction of the paralysis and greed that could doom humanity.

Jul 24, 2023

Americans prioritize safety over space travel, survey shows

Posted in categories: asteroid/comet impacts, existential risks

Most Americans favor NASA’s focus on deflecting asteroids to protect Earth rather than pursuing lunar and Martian exploration.

In a galaxy not so far away, most Americans are casting their eyes on the skies, but not necessarily on the Moon or Mars. A recent survey by the Pew Research Center has unveiled that most Americans are more concerned about the threat of potential asteroid impacts on Earth, urging NASA to deflect these space intruders rather than diverting its resources to lunar and Martian exploration.

The survey, conducted among over 10,000 individuals, offers an insightful glimpse into the public’s views on space exploration, NASA’s role, private space companies, and the United States’ position as a leader in space.

Jul 24, 2023

‘Bond of trust’ can see humans and robots working together, says AI expert

Posted in categories: existential risks, robotics/AI

A prominent engineer in the AI field believes robots can be designed to support humans, not replace them.

A prominent engineer in AI claims humans and robots can work together peacefully if they can build a “bond of trust.” The claim is a far cry from the doomsday scenarios painted by many experts in the field.

Tariq Iqbal, an assistant professor of systems engineering and computer science in the University of Virginia’s School of Engineering and Applied Science, says he strives for machines to work with people, not replace them.

Jul 22, 2023

I Used Generative AI To Create A Synthetic Self And You Can Too

Posted in categories: business, existential risks, robotics/AI

Generative AI has been front and centre of the news for the last nine months and attention is often on existential risks, copyright claims or suspicions around deepfakes. However, there are a growing number of more positive ways it can be integrated into businesses.

One of those areas is customer service. Samsung’s Neon digital humans were a good example of what could be achieved with embodied AI. Samsung created an impressive suite of customer service agents whose profiles could be matched to those of customers in need of help.


I wanted an avatar that was a bit ‘uncanny’, so that it had some resemblance to my real physical self but looked quite artificial too.

Continue reading “I Used Generative AI To Create A Synthetic Self And You Can Too” »

Jul 16, 2023

How Would the United States Fight a Nuclear War?

Posted in categories: existential risks, nuclear weapons


Today we’re going to explore the unthinkable: how would the United States respond during a nuclear conflict?

Continue reading “How Would the United States Fight a Nuclear War?” »

Jul 16, 2023

From Sci-Fi to Reality: Addressing AI Risks — with David Brin

Posted in categories: cryptocurrencies, existential risks, military, particle physics, robotics/AI

AI has reached its nuclear bomb threshold: perhaps the biggest thing to happen to human technology since the splitting of the atom.

A conversation with science fiction author and NASA consultant David Brin about the existential risks of AI and the approaches we can take to address them.

Continue reading “From Sci-Fi to Reality: Addressing AI Risks — with David Brin” »

Jul 15, 2023

Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED

Posted in categories: business, existential risks, robotics/AI

Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don’t lead to our extinction.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership.

Continue reading “Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED” »

Jul 13, 2023

Zachary Kallenborn — Existential Terrorism

Posted in categories: existential risks, mathematics, policy, security, terrorism

“Some men just want to watch the world burn.” Zachary Kallenborn discusses acts of existential terrorism, such as the Tokyo subway sarin attack by Aum Shinrikyo in 1995, which killed or injured over 1,000 people.

Zachary Kallenborn is a policy fellow at the Center for Security Policy Studies at George Mason University, a research affiliate in unconventional weapons and technology at START, and a senior risk management consultant at ABS Group.

Continue reading “Zachary Kallenborn — Existential Terrorism” »

Jul 8, 2023

AI Singularity realistically by 2029: year-by-year milestones

Posted in categories: existential risks, robotics/AI, singularity

This existential threat could come as early as, say, 2026, or it might even turn out to be a good thing. Whatever the Singularity exactly is, its nature remains uncertain, but its timing is becoming clearer, and it is much closer than most predicted.

AI is nevertheless hard to predict, but many agree with me that with GPT-4 we’re already close to AGI (artificial general intelligence).
