Zachary Kallenborn — Existential Terrorism

“Some men just want to watch the world burn.” Zachary Kallenborn discusses acts of existential terrorism, such as the Tokyo subway sarin attack by Aum Shinrikyo in 1995, which killed or injured over 1,000 people.

Zachary Kallenborn is a Policy Fellow in the Center for Security Policy Studies at George Mason University, a Research Affiliate in Unconventional Weapons and Technology at START, and a Senior Risk Management Consultant at ABS Group.

Zachary has an MA in Nonproliferation and Terrorism Studies from Middlebury Institute of International Studies, and a BS in Mathematics and International Relations from the University of Puget Sound.

His work has been featured in numerous international media outlets including the New York Times, Slate, NPR, Forbes, New Scientist, WIRED, Foreign Policy, the BBC, and many others.

Forbes: New Report Warns Terrorists Could Cause Human Extinction With ‘Spoiler Attacks’
https://www.forbes.com/sites/davidhambling/2023/06/23/new-re…r-attacks/

Schar School Scholar Warns of Existential Threats to Humanity by Terrorists.

AI Singularity realistically by 2029: year-by-year milestones

This existential threat could arrive as early as, say, 2026. It might even be a good thing. Whatever the Singularity turns out to be, its exact nature is uncertain, but its timing is becoming clearer, and it is much closer than most predicted.

AI remains hard to predict, but many agree with me that GPT-4 already puts us close to AGI (artificial general intelligence).

Tesla, Facebook, OpenAI Account For 24.5% Of ‘AI Incidents,’ Security Company Says

The first “AI incident” almost caused global nuclear war. More recent AI-enabled malfunctions, errors, fraud, and scams include deepfakes used to influence politics, bad health information from chatbots, and self-driving vehicles that are endangering pedestrians.

The worst offenders, according to security company Surfshark, are Tesla, Facebook, and OpenAI, with 24.5% of all known AI incidents so far.

In 1983, an automated system in the Soviet Union thought it detected incoming nuclear missiles from the United States, almost leading to global conflict. That’s the first incident in Surfshark’s report (though it’s debatable whether an automated system from the 1980s counts specifically as artificial intelligence). In the most recent incident, the National Eating Disorders Association (NEDA) was forced to shut down Tessa, its chatbot, after Tessa gave dangerous advice to people seeking help for eating disorders. Other recent incidents include a self-driving Tesla failing to notice a pedestrian and then breaking the law by not yielding to a person in a crosswalk, and a Jefferson Parish resident being wrongfully arrested by Louisiana police after a facial recognition system developed by Clearview AI allegedly mistook him for another individual.

Russian Ships Enter Taiwan’s Territory: New Escalation in East Asia? | Vantage with Palki Sharma

Taiwan was on high alert after two Russian warships entered its waters. Taiwan is used to incursions by China, not Russia. It marks a new flare-up in East Asia. Moscow then doubled down by releasing footage of a military drill in the Sea of Japan. East Asia is becoming a powder keg.

The region already deals with tensions between North Korea, South Korea & Japan. And now the US is trying to send a message to Pyongyang by having its largest nuclear submarine visit South Korea.


Vantage is a ground-breaking news, opinions, and current affairs show from Firstpost. Catering to a global audience, Vantage covers the biggest news stories from a 360-degree perspective, giving viewers a chance to assess the impact of world events through a uniquely Indian lens.

How existential risk became the biggest meme in AI

Who’s afraid of the big bad bots? A lot of people, it seems. The number of high-profile names that have now made public pronouncements or signed open letters warning of the catastrophic dangers of artificial intelligence is striking.

Hundreds of scientists, business leaders, and policymakers have spoken up, from deep learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI firms, such as Sam Altman and Demis Hassabis, to the California congressman Ted Lieu and the former president of Estonia Kersti Kaljulaid.

ChatGPT — A Human Upgrade Or Future Malaise?

Elon Musk is exploring the possibility of upgrading the human brain to allow humans to compete with sentient AI through ‘a brain computer interface’ created by his company Neuralink. “I created [Neuralink] specifically to address the AI symbiosis problem, which I think is an existential threat,” says Musk.

While Neuralink has just received FDA approval to begin clinical trials in humans (intended to empower those with paralysis), only time will tell whether this technology will succeed in augmenting human intelligence as Musk first intended. The use of AI to augment human intelligence also raises interesting ethical questions about which tools are acceptable (a subject to be discussed…).


A study with students suggests that ChatGPT may affect critical thinking, and that early adopters may be at an advantage in using GPT.

How North Korea Built the World’s Largest Criminal Empire

Have you ever wondered how North Korea can afford its nuclear program and the luxury goods for its leadership when its economy is effectively cut off from the world? Well… let me tell you a little secret.

If you want to support the channel, check out my Patreon: https://www.patreon.com/ExplainedWithDom

Selected sources and further reading:
https://www.wilsoncenter.org/event/north-koreas-criminal-act…-challenge
https://press.armywarcollege.edu/cgi/viewcontent.cgi?article…monographs
https://www.airuniversity.af.edu/JIPA/Display/Article/328526…ase-study/
https://sgp.fas.org/crs/row/RL33885.pdf
https://www.rusi.org/events/open-to-all/organised-crime-north-korea
https://moneyweek.com/19827/north-koreas-criminal-economy