No Priors: Artificial Intelligence | Technology | Startups
The evolution and promise of RAG architecture with Tengyu Ma from Voyage AI
At this moment of inflection in technology, co-hosts Elad Gil and Sarah Guo talk to the world's leading AI engineers, researchers, and founders about the biggest questions: How far away is AGI? What markets are at risk of disruption? How will commerce, culture, and society change? What's happening at the state of the art in research? "No Priors" is your guide to the AI revolution. Email feedback to show@no-priors.com.
Sarah Guo is a startup investor and the founder of Conviction, an investment firm purpose-built to serve intelligent software, or "Software 3.0" companies. She spent nearly a decade incubating and investing at venture firm Greylock Partners.
Elad Gil is a serial entrepreneur and a startup investor. He was a co-founder of Color Health and Mixer Labs (which was acquired by Twitter). He has invested in over 40 companies now worth $1B or more each and is the author of the High Growth Handbook.
Show Notes
After years at Stanford researching AI optimization, embedding models, and transformers, Tengyu Ma took a break from academia to start Voyage AI, which gives enterprise customers the most accurate retrieval possible through the most useful foundational data. Tengyu joins Sarah on this week's episode of No Priors to discuss why RAG systems have emerged as the dominant architecture in the enterprise and the evolution of the foundational data that has allowed RAG to flourish. While fine-tuning is still in the conversation, Tengyu argues that RAG will continue to evolve as the cheapest, quickest, and most accurate system for data retrieval.
They also discuss methods for growing context windows and managing latency budgets, how Tengyu’s research has informed his work at Voyage, and the role academia should play as AI grows as an industry.
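The RAG pattern discussed in the episode can be illustrated with a minimal sketch: embed a query and a set of documents into vectors, rank documents by cosine similarity, and pass the top matches to an LLM as context. The toy `embed` function below is a hashing stand-in, not a real embedding model (a production system would call a model such as one of Voyage AI's); the document strings and function names are hypothetical, chosen only for illustration.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real embedding model: hash each word into a
    fixed-size vector and L2-normalize. Illustrative only."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word.strip(".,;:!?")) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query; return the top k.
    The retrieved text would then be inserted into the LLM prompt."""
    q = embed(query)
    scored = sorted(((float(q @ embed(d)), d) for d in docs), reverse=True)
    return [d for _, d in scored[:k]]

# Hypothetical mini-corpus standing in for an enterprise document store.
docs = [
    "Voyage AI builds embedding models for retrieval.",
    "RAG pairs a retriever with a language model.",
    "The High Growth Handbook covers scaling startups.",
]
print(retrieve("embedding models for retrieval", docs, k=1))
```

Because only the retrieved snippets enter the prompt, this is why RAG stays cheap relative to fine-tuning and why the retrieval step, rather than the context window, often dominates the latency budget.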
Show Links:
Voyage AI
Stanford Assistant Professor of Computer Science
Tengyu Ma Key Research Papers:
Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training
Non-convex optimization for machine learning: design, analysis, and understanding
Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss
Larger Language Models Do In-Context Learning Differently
Why Do Pretrained Language Models Help in Downstream Tasks? An Analysis of Head and Prompt Tuning
On the Optimization Landscape of Tensor Decompositions
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @tengyuma
Show Notes:
Introduction
Key points of Tengyu's research
Academia compared to industry
Voyage AI overview
Enterprise RAG use cases
LLM long-term memory and token limitations
Agent chaining and data management
Improving enterprise RAG
Latency budgets
Advice for building RAG systems
Learnings as an AI founder
The role of academia in AI