No Priors: Artificial Intelligence | Technology | Startups
Personalizing AI Models with Kelvin Guu, Senior Staff Research Scientist, Google Brain
Conviction · 40m
At this moment of inflection in technology, co-hosts Elad Gil and Sarah Guo talk to the world's leading AI engineers, researchers, and founders about the biggest questions: How far away is AGI? Which markets are at risk of disruption? How will commerce, culture, and society change? What's happening at the state of the art in research? "No Priors" is your guide to the AI revolution. Email feedback to show@no-priors.com. Sarah Guo is a startup investor and the founder of Conviction, an investment firm purpose-built to serve intelligent software, or "Software 3.0," companies. She spent nearly a decade incubating and investing at the venture firm Greylock Partners. Elad Gil is a serial entrepreneur and a startup investor. He was a co-founder of Color Health and Mixer Labs (acquired by Twitter). He has invested in over 40 companies now each worth $1B or more, and is the author of the High Growth Handbook.

Show Notes

How do you personalize AI models? A popular school of thought in AI is to simply dump all the data you need into pre-training or fine-tuning. But that may be less efficient and less controllable than the alternative: using AI models as reasoning engines over external data sources.
Kelvin Guu, Senior Staff Research Scientist at Google, joins Sarah and Elad this week to talk about retrieval, memory, training data attribution, and model orchestration. At Google, he led some of the first efforts to leverage pre-trained LMs and neural retrievers, with more than 30 launches across multiple products. He has done some of the earliest work on retrieval-augmented language models (REALM) and on training LLMs to follow instructions (FLAN).
No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.
Show Links:
Kelvin Guu Website
Google Scholar
FLAN: Finetuned Language Models Are Zero-Shot Learners
Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs
ROME: Locating and Editing Factual Associations in GPT
Branch-Train-Merge: Scaling Expert Language Models with Unsupervised Domain Discovery
Large Language Models Struggle to Learn Long-Tail Knowledge 
Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Kelvin_Guu
Show Notes:
[] - Kelvin’s background in math, statistics and natural language processing at Stanford
[] - The questions driving the REALM Paper
[] - Frameworks around retrieval augmentation & expert models
[] - Why is modularity important
[] - FLAN Paper and instruction following
[] - Updating model weights in real time and other continuous learning methods
[] - Simfluence Paper & explainability with large language models
[] - ROME paper, “Model Surgery” exciting research areas
[] - Personal opinions and thoughts on AI agents & research
[] - How the human brain compares to AGI regarding memory and emotions
[] - How models become more contextually available
[] - Accessibility of models
[] - Advice to future researchers
