Sergio Tarrero – Lifeboat News: The Blog https://lifeboat.com/blog Safeguarding Humanity Sat, 11 Nov 2023 02:22:24 +0000
Opinion: Who a ‘stalemate’ in Ukraine really benefits https://lifeboat.com/blog/2023/11/opinion-who-a-stalemate-in-ukraine-really-benefits Sat, 11 Nov 2023 02:22:24 +0000

If recent warnings of a stalemate in the war between Ukraine and Russia prove accurate, and the West continues to lack the resolve to see Ukraine win, Russian President Vladimir Putin will benefit above all, writes Jade McGlynn.

Regulate AI Now https://lifeboat.com/blog/2023/10/regulate-ai-now Tue, 10 Oct 2023 11:24:08 +0000

In the six months since FLI published its open letter calling for a pause on giant AI experiments, we have seen overwhelming expert and public concern about the out-of-control AI arms race — but no slowdown. In this video, we call for U.S. lawmakers to step in, and explore the policy solutions necessary to steer this powerful technology to benefit humanity.

Will Superintelligent AI End the World? | Eliezer Yudkowsky | TED https://lifeboat.com/blog/2023/07/will-superintelligent-ai-end-the-world-eliezer-yudkowsky-ted Sat, 15 Jul 2023 19:23:18 +0000

Decision theorist Eliezer Yudkowsky has a simple message: superintelligent AI could probably kill us all. So the question becomes: Is it possible to build powerful artificial minds that are obedient, even benevolent? In a fiery talk, Yudkowsky explores why we need to act immediately to ensure smarter-than-human AI systems don’t lead to our extinction.

If you love watching TED Talks like this one, become a TED Member to support our mission of spreading ideas: https://ted.com/membership.

Follow TED!
Twitter: https://twitter.com/TEDTalks.
Instagram: https://www.instagram.com/ted.
Facebook: https://facebook.com/TED
LinkedIn: https://www.linkedin.com/company/ted-conferences.
TikTok: https://www.tiktok.com/@tedtoks.

The TED Talks channel features talks, performances and original series from the world’s leading thinkers and doers. Subscribe to our channel for videos on Technology, Entertainment and Design — plus science, business, global issues, the arts and more. Visit https://TED.com to get our entire library of TED Talks, transcripts, translations, personalized talk recommendations and more.

Watch more: https://go.ted.com/eliezeryudkowsky.

Zachary Kallenborn — Existential Terrorism https://lifeboat.com/blog/2023/07/zachary-kallenborn-existential-terrorism Thu, 13 Jul 2023 15:32:33 +0000

“Some men just want to watch the world burn.” Zachary Kallenborn discusses acts of existential terrorism, such as the Tokyo subway sarin attack by Aum Shinrikyo in 1995, which killed or injured over 1,000 people.

Zachary Kallenborn is a policy fellow at the Center for Security Policy Studies at George Mason University, a research affiliate in Unconventional Weapons and Technology at START, and a senior risk management consultant at the ABS Group.

Zachary has an MA in Nonproliferation and Terrorism Studies from Middlebury Institute of International Studies, and a BS in Mathematics and International Relations from the University of Puget Sound.

His work has been featured in numerous international media outlets including the New York Times, Slate, NPR, Forbes, New Scientist, WIRED, Foreign Policy, the BBC, and many others.

Forbes: New Report Warns Terrorists Could Cause Human Extinction With ‘Spoiler Attacks’
https://www.forbes.com/sites/davidhambling/2023/06/23/new-re…r-attacks/

Schar School Scholar Warns of Existential Threats to Humanity by Terrorists.
https://www.gmu.edu/news/2023-07/schar-school-scholar-wa…terrorists.

Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371 https://lifeboat.com/blog/2023/04/max-tegmark-the-case-for-halting-ai-development-lex-fridman-podcast-371 Sat, 15 Apr 2023 11:25:10 +0000

Max Tegmark is a physicist and AI researcher at MIT, co-founder of the Future of Life Institute, and author of Life 3.0: Being Human in the Age of Artificial Intelligence. Please support this podcast by checking out our sponsors:
- Notion: https://notion.com.
- InsideTracker: https://insidetracker.com/lex to get 20% off.
- Indeed: https://indeed.com/lex to get $75 credit.

EPISODE LINKS:
Max’s Twitter: https://twitter.com/tegmark.
Max’s Website: https://space.mit.edu/home/tegmark.
Pause Giant AI Experiments (open letter): https://futureoflife.org/open-letter/pause-giant-ai-experiments.
Future of Life Institute: https://futureoflife.org.
Books and resources mentioned:
1. Life 3.0 (book): https://amzn.to/3UB9rXB
2. Meditations on Moloch (essay): https://slatestarcodex.com/2014/07/30/meditations-on-moloch.
3. Nuclear winter paper: https://nature.com/articles/s43016-022-00573-0

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast.
Apple Podcasts: https://apple.co/2lwqZIr.
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

OUTLINE:
0:00 — Introduction.
1:56 — Intelligent alien civilizations.
14:20 — Life 3.0 and superintelligent AI
25:47 — Open letter to pause Giant AI Experiments.
50:54 — Maintaining control.
1:19:44 — Regulation.
1:30:34 — Job automation.
1:39:48 — Elon Musk.
2:01:31 — Open source.
2:08:01 — How AI may kill all humans.
2:18:32 — Consciousness.
2:27:54 — Nuclear winter.
2:38:21 — Questions for AGI

SOCIAL:
- Twitter: https://twitter.com/lexfridman.
- LinkedIn: https://www.linkedin.com/in/lexfridman.
- Facebook: https://www.facebook.com/lexfridman.
- Instagram: https://www.instagram.com/lexfridman.
- Medium: https://medium.com/@lexfridman.
- Reddit: https://reddit.com/r/lexfridman.
- Support on Patreon: https://www.patreon.com/lexfridman

Fmr. Google CEO Eric Schmidt on the Consequences of an A.I. Revolution | Amanpour and Company https://lifeboat.com/blog/2023/04/fmr-google-ceo-eric-schmidt-on-the-consequences-of-an-a-i-revolution-amanpour-and-company Sun, 09 Apr 2023 21:23:17 +0000

Artificial Intelligence is here to stay. How it is being applied—and, perhaps more importantly, regulated—are now the crucial questions to ask. Walter Isaacson speaks with former Google CEO Eric Schmidt about A.I.’s impact on life, politics, and warfare, as well as what can be done to keep it under control.

Originally aired on March 23, 2023.

Major support for Amanpour and Company is provided by the Anderson Family Charitable Fund, Sue and Edgar Wachenheim, III, Candace King Weir, Jim Attwood and Leslie Williams, Mark J. Blechner, Bernard and Denise Schwartz, Koo and Patricia Yuen, the Leila and Mickey Straus Family Charitable Trust, Barbara Hope Zuckerberg, Jeffrey Katz and Beth Rogers, the Filomen M. D’Agostino Foundation and Mutual of America.

For more from Amanpour and Company, including full episodes, click here: https://to.pbs.org/2NBFpjf.

Eliezer Yudkowsky — Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality https://lifeboat.com/blog/2023/04/eliezer-yudkowsky-why-ai-will-kill-us-aligning-llms-nature-of-intelligence-scifi-rationality Fri, 07 Apr 2023 11:25:25 +0000

For four hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Transcript: https://dwarkeshpatel.com/p/eliezer-yudkowsky.
Apple Podcasts: https://apple.co/3mcPjON
Spotify: https://spoti.fi/3KDFzX9

Follow me on Twitter: https://twitter.com/dwarkesh_sp.

Timestamps:
(0:00:00) — TIME article.
(0:09:06) — Are humans aligned?
(0:37:35) — Large language models.
(1:07:15) — Can AIs help with alignment?
(1:30:17) — Society’s response to AI
(1:44:42) — Predictions (or lack thereof)
(1:56:55) — Being Eliezer.
(2:13:06) — Orthogonality.
(2:35:00) — Could alignment be easier than we think?
(3:02:15) — What will AIs want?
(3:43:54) — Writing fiction & whether rationality helps you win.

SPACE FORCE: The Secret Orbit — Arms Race in Space | SpaceTime — WELT Documentary https://lifeboat.com/blog/2023/03/space-force-the-secret-orbit-arms-race-in-space-spacetime-welt-documentary Tue, 07 Mar 2023 13:31:06 +0000

In December 2019, the United States established a new military branch: the United States Space Force. It was a logical step in a globalized and digitized world whose infrastructure depends on satellites in space. That infrastructure is under threat, not least from a resurgence of conflict between East and West. This episode of Spacetime describes how the military conquered space and why the world is in a new arms race in Earth orbit.

#documentary #spacetime #usa.

📺 Watch more documentaries https://www.youtube.com/playlist?list=PL-5sURDcN_Zl8hBqkvZ6uXFpP3t55HU9s.

🔔 Subscribe to our full documentary channel.

Humanity Officially Has a Viable Defence Against Killer Asteroids, NASA Confirms https://lifeboat.com/blog/2023/03/humanity-officially-has-a-viable-defence-against-killer-asteroids-nasa-confirms Thu, 02 Mar 2023 17:31:45 +0000

“This means that we could change an asteroid’s path with less warning time,” one scientist said of the DART test’s successful result.

We’re All Gonna Die with Eliezer Yudkowsky https://lifeboat.com/blog/2023/02/were-all-gonna-die-with-eliezer-yudkowsky Sun, 26 Feb 2023 11:22:41 +0000

Eliezer Yudkowsky is an author, founder, and leading thinker in the AI space.

✨ DEBRIEF | Unpacking the episode:
https://shows.banklesshq.com/p/debrief-eliezer.

✨ COLLECTIBLES | Collect this episode:
https://collectibles.bankless.com/mint.

We wanted to do an episode on AI… and we went deep down the rabbit hole. As we went down, we discussed ChatGPT and the new generation of AI, digital superintelligence, the end of humanity, and if there’s anything we can do to survive.

This conversation with Eliezer Yudkowsky sent us into an existential crisis, with the primary claim that we are on the cusp of developing AI that will destroy humanity.

Be warned before diving into this episode, dear listener. Once you dive in, there’s no going back.
