Some of Daniel Schmachtenberger’s friends say you can be “Schmachtenberged”. It means realising that we are on our way to self-destruction as a civilisation, on a global level. The American philosopher and strategist often addresses this topic: a world of powerful weapons and technologies paired with a lack of effective governance. But, with the catastrophic script already being written, is there still hope? And how do we start reversing the scenario?

Lightning strike creates a material seen for the first time on Earth
After lightning struck a tree in New Port Richey, Florida, a team of scientists from the University of South Florida (USF) discovered that the strike had formed a new phosphorus material in a rock. This is the first time such a material has been found in solid form on Earth, and it could represent a member of a new mineral group.
“We have never seen this material occur naturally on Earth – minerals similar to it can be found in meteorites and space, but we’ve never seen this exact material anywhere,” said study lead author Matthew Pasek, a geoscientist at USF.
According to the researchers, high-energy events such as lightning strikes can sometimes trigger unique chemical reactions which, in this case, produced a new material that appears to be transitional between minerals found in space and minerals found on Earth.
The intelligence explosion: Nick Bostrom on the future of AI
We may build incredible AI. But can we contain our cruelty? Oxford professor Nick Bostrom explains.
Nick Bostrom, a professor at Oxford University and director of the Future of Humanity Institute, discusses the development of machine superintelligence and its potential impact on humanity. Bostrom believes that in this century, we will create the first general intelligence that will be smarter than humans. He sees this as the most important thing humanity will ever do, but it also comes with an enormous responsibility.
Bostrom notes that there are existential risks associated with the transition to the machine intelligence era, such as the possibility of a misaligned superintelligence that overrides human civilization with its own value structures. In addition, there is the question of how to ensure that conscious digital minds are treated well. However, if we succeed in ensuring the well-being of artificial intelligence, we could have vastly better tools for dealing with everything from disease to poverty.
Ultimately, Bostrom believes that the development of machine superintelligence is crucial for a truly great future.

Doomsday Predictions Around ChatGPT Are Counter-Productive
The last few weeks have been abuzz with news and fears (well, largely fears) about the impact ChatGPT and other generative technologies might have on the workplace. Goldman Sachs predicted that as many as 300 million jobs could be affected, while the likes of Steve Wozniak and Elon Musk called for AI development to be paused (although pointedly not the development of autonomous driving).
Indeed, OpenAI chief Sam Altman recently declared that he was “a little bit scared”, a sentiment shared by OpenAI’s chief scientist Ilya Sutskever, who said that “at some point it will be quite easy, if one wanted, to cause a great deal of harm”.
As fears mount about the jobs supposedly at risk from generative AI technologies like ChatGPT, are these fears likely to prevent people from taking steps to adapt?
Fermi Paradox: The Vulnerable World Hypothesis
An exploration of the Vulnerable World Hypothesis as a solution to the Fermi Paradox, and of the possibility of finding fossils of alien origin right here on the surface of the Earth.