Archive for the ‘existential risks’ category
Jan 7, 2017
Posted by Montie Adkins in categories: biotech/medical, education, existential risks, life extension, lifeboat, nanotechnology, robotics/AI
I figured they would post it themselves but I got too excited and decided to spread it around.
The Lifeboat Foundation is a nonprofit organization devoted to encouraging the advancement of science while helping develop strategies to survive existential risks and the possible abuse of technology. It focuses on biotechnology, nanotechnology, robotics, and AI, and on fostering the safe and responsible use of these powerful new technologies. Its Life Preserver program is aligned with our mission to promote and develop rejuvenation biotechnology capable of combating age-related diseases.
We believe that a bright future awaits mankind and support the ethical and safe use of the new medical technologies being developed today; we therefore consider the goals of the Lifeboat Foundation to be compatible with ours and are pleased to move forward with them in official collaboration. As part of our commitment to the ethical progress of medical science, LEAF promotes scientific research and learning via our crowdfunding website Lifespan.io and our educational hub at the LEAF website. A number of LEAF board members already serve on the Lifeboat Foundation's Scientific Advisory Board, and we look forward to working closely with them in the coming year.
Jan 4, 2017
Posted by Nicola Bagalà in categories: existential risks, life extension
Is the future really going to be so bad that you wouldn’t want to live longer? Hardly!
The future looks grim? That’s quite an interesting claim, and I wonder whether there is any evidence to support it. In fact, I think there’s plenty of evidence to believe the opposite, i.e. that the future will be bright indeed. However, I can’t promise the future will certainly be bright. I am no clairvoyant, but neither are the doomsday prophets. We can all only speculate, no matter how ‘sure’ pessimists may say they are about the horrible dystopian future that allegedly awaits us. I will soon present the evidence for the bright future I believe in, but before I do, I would like to point out a few problems in the reasoning of the professional catastrophists who say that life won’t be worth living and that there’s thus no point in extending it anyway.
Dec 27, 2016
Posted by Steve Fuller in categories: existential risks, security, terrorism
A version of this piece appears on the Sociological Imagination website
Twenty years ago, Theodore Kaczynski, a Harvard-trained maths prodigy obsessed with technology’s destruction of nature, was given eight consecutive life sentences for sending letter bombs through the US post which killed three people and injured 23 others. Generally known as the ‘Unabomber’, he remains in a supermax prison in Colorado to this day.
It is perhaps easy to forget the sway that the Unabomber held over American society in the mid-1990s. Kaczynski managed to get a 35,000-word manifesto called ‘Industrial Society and Its Future’ published in both The New York Times and The Washington Post. It is arguably the most famous and influential statement of neo-Luddite philosophy and politics to this day. Now he is back with a new book, Anti-Tech Revolution: Why and How.
Dec 23, 2016
Posted by Klaus Baldauf in categories: biotech/medical, existential risks, genetics
What humans will look like in 100 years: Expert reveals the genetically modified bodies we’ll need to survive
- A Harvard researcher says that to survive the next extinction we must leave the Earth
- But to live on other planets we will need to genetically modify our organs
- Experts have previously speculated how humanity will look in 1,000 years’ time
- Video describes scenario in which bodies are part-human part-machine
Dec 22, 2016
Posted by Shane Hinshaw in category: existential risks
Dec 22, 2016
Posted by Sean Brazell in categories: existential risks, space
For years, scientists have known that Gliese 710 will come excruciatingly close to our Solar System in about a million years. An updated analysis suggests this star will come considerably closer than we thought, during which time it’s expected to spawn dangerous cometary swarms.
Dec 14, 2016
Posted by Sean Brazell in categories: asteroid/comet impacts, existential risks
In news certain to take the bounce out of your step, a NASA scientist says Earth is due for an “extinction-level” event that we basically would have no way of stopping.
Dr. Joseph Nuth of NASA’s Goddard Space Flight Center sounded the alarm Monday in San Francisco, New York Magazine reports. The comet that spelled disaster for the dinosaurs hit 65 million years ago, and Nuth said the massive asteroids and comets that could wipe out civilization usually strike “50 to 60 million years apart,” making such an event overdue.
In 2014, scientists first spotted a large comet barreling toward Mars just 22 months before it came perilously close to hitting the planet. That wasn’t enough time to do anything, Nuth said — proof that “the biggest problem, basically, is there’s not a hell of a lot we can do about it at the moment.” To prevent a catastrophic event, Nuth suggests NASA build a rocket that can be kept in storage, ready to be used if a huge comet comes our way. “It could mitigate the possibility of a sneaky asteroid coming in from a place that’s hard to observe, like from the sun,” Nuth said. The way 2016 has gone so far, you might want to start scanning the sky. — Catherine Garcia
Nov 7, 2016
Posted by Amnon H. Eden in categories: business, climatology, existential risks, food, habitats, sustainability
The wealth gap worries Forbes, not your usual wide-eyed socialist.
How do we expect to feed that many people while we exhaust the resources that remain?
Human activities are behind the extinction crisis. Commercial agriculture, timber extraction, and infrastructure development are causing habitat loss, and our reliance on fossil fuels is a major contributor to climate change.
Nov 5, 2016
Posted by Klaus Baldauf in categories: existential risks, robotics/AI
Star physicist Stephen Hawking has reiterated his concerns that the rise of powerful artificial intelligence (AI) systems could spell the end for humanity.
Speaking at the launch of the University of Cambridge’s Centre for the Future of Intelligence on 19 October, he did, however, acknowledge that AI could equally prove to be one of the best things that has ever happened to us.
So are we on the cusp of creating super-intelligent machines that could put humanity at existential risk?