Sergio Tarrero – Lifeboat News: The Blog https://lifeboat.com/blog Safeguarding Humanity

The Far Out Initiative interviews David Pearce https://lifeboat.com/blog/2024/07/the-far-out-initiative-interviews-david-pearce Tue, 23 Jul 2024 23:35:11 +0000

Some of the questions touch on existential risks (x-risks) and effective altruism (EA). Enjoy. http://faroutinitiative.com

Superbad — CIA Targeter Tracks Down #1 Enemy of Benghazi Attacks | SRS #116 https://lifeboat.com/blog/2024/06/superbad-cia-targeter-tracks-down-1-enemy-of-benghazi-attacks-srs-116 Tue, 11 Jun 2024 11:24:46 +0000

Sarah Adams (call sign: Superbad) is a former CIA Targeting Officer and author of Benghazi: Know Thy Enemy. Adams served as the Senior Advisor for the U.S. House of Representatives Select Committee on Benghazi. She conducted all-source investigations and oversight activities related to the 2012 Libya terrorist attacks and was instrumental in mitigating future security risks to U.S. personnel serving overseas. Adams remains one of the most knowledgeable individuals on active terrorism threats around the world.

Get Sarah’s intel briefing via the newsletter: https://shawnryanshow.com/pages/newsl

Join this channel to get access to perks: https://www.youtube.com/@shawnryanshow

Shawn Ryan Show Links:
SUBSCRIBE: https://www.youtube.com/@shawnryanshow
Sign up: https://shawnryanshow.com/pages/newsl
Support the Show: @vigilanceelite
Download Free Content: https://drive.google.com/drive/folder

Listen on Apple: https://podcasts.apple.com/us/podcast
Listen on Spotify: https://open.spotify.com/show/5eodRZd

Follow on TikTok: @shawnryanshow
Follow on Instagram: @shawnryanshow
Follow on Facebook: @shawnryanshow
Follow on X: @shawnryanshow

Stephen Bassett — UAP Disclosure Politics https://lifeboat.com/blog/2023/12/stephen-bassett-uap-disclosure-politics Mon, 18 Dec 2023 00:25:04 +0000

Beltway insider Stephen Bassett discusses the untold story of the UAP Disclosure Act of 2023, an estimated timeline for official disclosure, the David Grusch whistleblower story, and public activism in the UAP legislative process.

Stephen Bassett is a political activist, disclosure advocate, and the executive director of Paradigm Research Group (PRG), founded in 1996 to end a government-imposed embargo on the truth behind extraterrestrial-related phenomena.

Steve has spoken to audiences around the world about the implications of…

Opinion: Who a ‘stalemate’ in Ukraine really benefits https://lifeboat.com/blog/2023/11/opinion-who-a-stalemate-in-ukraine-really-benefits Sat, 11 Nov 2023 02:22:24 +0000

If recent warnings of a stalemate in the war between Ukraine and Russia come to fruition, and the West lacks the resolve to see Ukraine win, Russian President Vladimir Putin will benefit above all, writes Jade McGlynn.

Regulate AI Now https://lifeboat.com/blog/2023/10/regulate-ai-now Tue, 10 Oct 2023 11:24:08 +0000

In the six months since FLI published its open letter calling for a pause on giant AI experiments, we have seen overwhelming expert and public concern about the out-of-control AI arms race — but no slowdown. In this video, we call on U.S. lawmakers to step in, and we explore the policy solutions necessary to steer this powerful technology to benefit humanity.

Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED https://lifeboat.com/blog/2023/07/will-superintelligent-ai-end-the-world-eliezer-yudkowsky-ted Sat, 15 Jul 2023 19:23:18 +0000

Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don’t lead to our extinction.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership.

Follow TED!
Twitter: https://twitter.com/TEDTalks
Instagram: https://www.instagram.com/ted
Facebook: https://facebook.com/TED
LinkedIn: https://www.linkedin.com/company/ted-conferences
TikTok: https://www.tiktok.com/@tedtoks

The TED Talks channel features talks, performances and original series from the world’s leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more: https://go.ted.com/eliezeryudkowsky

Zachary Kallenborn — Existential Terrorism https://lifeboat.com/blog/2023/07/zachary-kallenborn-existential-terrorism Thu, 13 Jul 2023 15:32:33 +0000

“Some men just want to watch the world burn.” Zachary Kallenborn discusses acts of existential terrorism, such as the Tokyo subway sarin attack by Aum Shinrikyo in 1995, which killed or injured over 1,000 people.

Zachary Kallenborn is a policy fellow at the Center for Security Policy Studies at George Mason University, a research affiliate in Unconventional Weapons and Technology at START, and a senior risk management consultant at ABS Group.

Zachary has an MA in Nonproliferation and Terrorism Studies from the Middlebury Institute of International Studies and a BS in Mathematics and International Relations from the University of Puget Sound.

His work has been featured in numerous international media outlets including the New York Times, Slate, NPR, Forbes, New Scientist, WIRED, Foreign Policy, the BBC, and many others.

Forbes: New Report Warns Terrorists Could Cause Human Extinction With ‘Spoiler Attacks’
https://www.forbes.com/sites/davidhambling/2023/06/23/new-re…r-attacks/

Schar School Scholar Warns of Existential Threats to Humanity by Terrorists.
https://www.gmu.edu/news/2023-07/schar-school-scholar-wa…terrorists.

Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371 https://lifeboat.com/blog/2023/04/max-tegmark-the-case-for-halting-ai-development-lex-fridman-podcast-371 Sat, 15 Apr 2023 11:25:10 +0000

Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. Please support this podcast by checking out our sponsors:
- Notion: https://notion.com
- InsideTracker: https://insidetracker.com/lex to get 20% off.
- Indeed: https://indeed.com/lex to get $75 credit.

EPISODE LINKS:
Max’s Twitter: https://twitter.com/tegmark
Max’s Website: https://space.mit.edu/home/tegmark
Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/pause-giant-ai-experiments
Future of Life Institute: https://futureoflife.org
Books and resources mentioned:
1. Life 3.0 (book): https://amzn.to/3UB9rXB
2. Meditations on Moloch (essay): https://slatestarcodex.com/2014/07/30/meditations-on-moloch
3. Nuclear winter paper: https://nature.com/articles/s43016-022-00573-0

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 — Introduction
1:56 — Intelligent alien civilizations
14:20 — Life 3.0 and superintelligent AI
25:47 — Open letter to pause Giant AI Experiments
50:54 — Maintaining control
1:19:44 — Regulation
1:30:34 — Job automation
1:39:48 — Elon Musk
2:01:31 — Open source
2:08:01 — How AI may kill all humans
2:18:32 — Consciousness
2:27:54 — Nuclear winter
2:38:21 — Questions for AGI

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman

Fmr. Google CEO Eric Schmidt on the Consequences of an A.I. Revolution | Amanpour and Company https://lifeboat.com/blog/2023/04/fmr-google-ceo-eric-schmidt-on-the-consequences-of-an-a-i-revolution-amanpour-and-company Sun, 09 Apr 2023 21:23:17 +0000

Artificial Intelligence is here to stay. How it is applied—and, perhaps more importantly, how it is regulated—are now the crucial questions to ask. Walter Isaacson speaks with former Google CEO Eric Schmidt about A.I.’s impact on life, politics, and warfare, as well as what can be done to keep it under control.

Originally aired on March 23, 2023.

Major support for Amanpour and Company is provided by the Anderson Family Charitable Fund, Sue and Edgar Wachenheim, III, Candace King Weir, Jim Attwood and Leslie Williams, Mark J. Blechner, Bernard and Denise Schwartz, Koo and Patricia Yuen, the Leila and Mickey Straus Family Charitable Trust, Barbara Hope Zuckerberg, Jeffrey Katz and Beth Rogers, the Filomen M. D’Agostino Foundation and Mutual of America.

For more from Amanpour and Company, including full episodes, click here: https://to.pbs.org/2NBFpjf.

Eliezer Yudkowsky — Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality https://lifeboat.com/blog/2023/04/eliezer-yudkowsky-why-ai-will-kill-us-aligning-llms-nature-of-intelligence-scifi-rationality Fri, 07 Apr 2023 11:25:25 +0000

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast-forward to 2:35:00 through 3:43:54, where we debate the main reasons I still think doom is unlikely.

Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky
Apple Podcasts: https://apple.co/3mcPjON
Spotify: https://spoti.fi/3KDFzX9

Follow me on Twitter: https://twitter.com/dwarkesh_sp

Timestamps:
(0:00:00) — TIME article
(0:09:06) — Are humans aligned?
(0:37:35) — Large language models
(1:07:15) — Can AIs help with alignment?
(1:30:17) — Society’s response to AI
(1:44:42) — Predictions (or lack thereof)
(1:56:55) — Being Eliezer
(2:13:06) — Orthogonality
(2:35:00) — Could alignment be easier than we think?
(3:02:15) — What will AIs want?
(3:43:54) — Writing fiction & whether rationality helps you win
