
The AI Dilemma — Tristan Harris & Aza Raskin

“The AI Dilemma”
Tristan Harris & Aza Raskin.
Center for Humane Technology.
Your Undivided Attention Podcast.

00:00 AI responsibility
03:58 First contact: Social media
05:32 Second contact: AI
07:05 ChatGPT and Large Language Models (LLMs)
10:10 Language models
14:38 Emergence
17:40 Double exponential
20:47 Democratization
24:22 Snapchat
26:35 AI safety gap
31:10 The Day After
35:32 China
36:55 Next steps

https://www.humanetech.com/podcast/the-ai-dilemma

The AI Prisoner’s Dilemma: Why Pausing AI Development Isn’t the Answer

A recent open letter signed by tech leaders, including Elon Musk, has called for a pause in AI development, citing “profound risks to society and humanity.” But could this pause lead to a more dangerous outcome? The AI landscape resembles the classic Prisoner’s Dilemma, where cooperation yields the best collective result, but betrayal tempts each player to seek personal gain.

If OpenAI pauses work on ChatGPT, will others follow, or will they capitalize on the opportunity to surpass OpenAI? This is particularly worrisome given the strategic importance of AI in global affairs and the potential for less transparent actors to monopolize AI advancements.
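To make the analogy concrete, here is a minimal payoff-matrix sketch in Python. The numbers, the “pause”/“race” labels, and the best_response helper are illustrative assumptions for this post, not figures from the open letter or from any lab; they simply encode the standard Prisoner’s Dilemma structure described above.

```python
# A two-player "pause or race" game with illustrative payoffs (higher is better).
# "pause" plays the role of cooperate, "race" the role of defect.
PAYOFFS = {
    ("pause", "pause"): (3, 3),  # both pause: shared safety benefit
    ("pause", "race"):  (0, 5),  # the pauser falls behind, the racer gains
    ("race",  "pause"): (5, 0),
    ("race",  "race"):  (1, 1),  # everyone races: worst collective outcome
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes the first player's payoff
    against a fixed opponent move."""
    return max(("pause", "race"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

for opponent in ("pause", "race"):
    print(f"If the other lab will {opponent}, the best reply is to {best_response(opponent)}.")
# Both lines print "race": racing is the dominant strategy, even though
# (pause, pause) leaves both players better off than (race, race).
```

This is exactly the tension a voluntary pause runs into: each lab is individually tempted to keep racing no matter what the others do, which is why unilateral restraint is unlikely to hold.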

Instead of halting development, OpenAI should continue its work while advocating for responsible and ethical AI practices. By acting as a role model, implementing safety measures, and collaborating with the global AI community to establish ethical guidelines, OpenAI can help ensure that AI technology benefits humanity rather than becoming a tool for exploitation and harm.

Don’t worry about AI breaking out of its box—worry about us breaking in

Shocking output from Bing’s new chatbot has been lighting up social media and the tech press. Testy, giddy, defensive, scolding, confident, neurotic, charming, pompous—the bot has been screenshotted and transcribed in all these modes. And, at least once, it proclaimed eternal love in a storm of emojis.

What makes all this so newsworthy and tweetworthy is how human the dialog can seem. The bot recalls and discusses prior conversations with other people, just like we do. It gets annoyed at things that would bug anyone, like people demanding to learn secrets or prying into subjects that have been clearly flagged as off-limits. It also sometimes self-identifies as “Sydney” (the project’s internal codename at Microsoft). Sydney can swing from surly to gloomy to effusive in a few swift sentences—but we’ve all known people who are at least as moody.

No AI researcher of substance has suggested that Sydney is within light years of being sentient. But transcripts like this unabridged readout of a two-hour interaction with Kevin Roose of The New York Times, or multiple quotes in this haunting Stratechery piece, show Sydney spouting forth with the fluency, nuance, tone, and apparent emotional presence of a clever, sensitive person.

Humans to attain immortality by 2029? Ex-Google scientist makes striking claim

“You won’t live forever” is a catchphrase that has so far held true for humans and almost every other living being on planet Earth. But it may soon become a truth of the past, as humanity steps closer to attaining immortality.

A former Google scientist has made a prediction that, if proven right, may redefine human civilisation as we know it. Ray Kurzweil, more than 85 per cent of whose 147 predictions have proven correct, predicts that humans will become immortal by 2029.

The revelation came when the 75-year-old computer scientist discussed genetics, nanotechnology, robotics and more in a YouTube video posted by the channel Adagio.

OpenAI CEO responds to Jordan Peterson criticism | Sam Altman and Lex Fridman

Lex Fridman Podcast full episode: https://www.youtube.com/watch?v=L_Guz73e6fw
Please support this podcast by checking out our sponsors:
- NetSuite: http://netsuite.com/lex to get a free product tour.
- SimpliSafe: https://simplisafe.com/lex.
- ExpressVPN: https://expressvpn.com/lexpod to get 3 months free.

GUEST BIO:
Sam Altman is the CEO of OpenAI, the company behind GPT-4, ChatGPT, DALL-E, Codex, and many other state-of-the-art AI technologies.

PODCAST INFO:
Podcast website: https://lexfridman.com/podcast
Apple Podcasts: https://apple.co/2lwqZIr
Spotify: https://spoti.fi/2nEwCF8
RSS: https://lexfridman.com/feed/podcast/
Full episodes playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4
Clips playlist: https://www.youtube.com/playlist?list=PLrAXtmErZgOeciFP3CBCIEElOJeitOr41

SOCIAL:
- Twitter: https://twitter.com/lexfridman
- LinkedIn: https://www.linkedin.com/in/lexfridman
- Facebook: https://www.facebook.com/lexfridman
- Instagram: https://www.instagram.com/lexfridman
- Medium: https://medium.com/@lexfridman
- Reddit: https://reddit.com/r/lexfridman
- Support on Patreon: https://www.patreon.com/lexfridman