Dwarkesh Podcast
Carl Shulman (Pt 1) — Intelligence explosion, primate evolution, robot doublings, & alignment

Dwarkesh Patel · 2h 44m · 34 months ago
Deeply researched interviews. https://www.dwarkesh.com?utm_medium=podcast

Show Notes

In terms of the depth and range of topics, this episode is the best I’ve done.
No part of my worldview is the same after talking with Carl Shulman. He's the most interesting intellectual you've never heard of.
We ended up talking for 8 hours, so I'm splitting this episode into 2 parts.
This part is about Carl’s model of an intelligence explosion, which integrates everything from:
* how fast algorithmic progress & hardware improvements in AI are happening,
* what primate evolution suggests about the scaling hypothesis,
* how soon AIs could do large parts of AI research themselves, and whether that would lead to successively faster doublings of AI researchers,
* how quickly robots produced from existing factories could take over the economy.
We also discuss the odds of a takeover based on whether the AI is aligned before the intelligence explosion happens, and Carl explains why he’s more optimistic than Eliezer.
The next part, which I’ll release next week, is about all the specific mechanisms of an AI takeover, plus a whole bunch of other galaxy brain stuff.
Maybe 3 people in the world have thought as rigorously as Carl about so many interesting topics. This was a huge pleasure.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
- Intro
- Intelligence Explosion
- Can AIs do AI research?
- Primate evolution
- Forecasting AI progress
- After human-level AGI
- AI takeover scenarios
Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

