
The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: NVIDIA vs Groq: The Future of Training vs Inference | Meta, Google, and Microsoft's Data Center Investments: Who Wins | Data, Compute, Models: The Core Bottlenecks in AI & Where Value Will Distribute with Jonathan Ross, Founder @ Groq

Harry Stebbings · 1h 20m · 14 months ago
The Twenty Minute VC (20VC) interviews the world's greatest venture capitalists, with prior guests including Sequoia's Doug Leone and Benchmark's Bill Gurley. Once per week, 20VC host Harry Stebbings is also joined by one of the great founders of our time, with prior founder episodes from Spotify's Daniel Ek, LinkedIn's Reid Hoffman, and Snowflake's Frank Slootman. If you would like to see more of The Twenty Minute VC (20VC), head to www.20vc.com for more information on the podcast, show notes, resources, and more.
Show Notes

Jonathan Ross is the Founder & CEO of Groq, the creator of the world's first Language Processing Unit (LPU™). Prior to Groq, Jonathan began what became Google's Tensor Processing Unit (TPU) as a 20% project, designing and implementing the core elements of the first-generation TPU chip. Jonathan next joined Google X's Rapid Eval Team, the initial stage of the famed "Moonshot Factory", where he devised and incubated new Bets (Units) for Google's parent company, Alphabet.
In Today's Episode We Discuss:
Interview with Jonathan Ross Begins
Scaling Laws and AI Model Training
Synthetic Data and Model Efficiency
Inference vs. Training Costs: Why NVIDIA Loses Inference
The Future of AI Inference: Efficiency and Cost
Chip Supply and Scaling Concerns
Energy Efficiency in AI Computation
Why Most Dollars Into Datacenters Will Be Lost
Meta, Google, and Microsoft's Data Center Investments
Distribution of Value in the AI Economy
Stages of Startup Success
The AI Investment Bubble
The Keynesian Beauty Contest in VC
NVIDIA's Role in the AI Ecosystem
China's AI Strategy and Global Implications
Europe's Potential in the AI Revolution
Future Predictions and AI's Impact on Society
