Uncapped #46 | Brad Lightcap from OpenAI
Show Notes
Transcript
99% of people get to use bad tools or don't have any tools at all. The quality of experience of the people that exist as their customers and users is not very good. Everyone has, like, lived the bad experience of going through modern life and dealing with the things that we have to deal with. I think if you're kind of sitting there lamenting the idea that, you know, there's no more good ideas and no more new ideas, like, it's just kinda lazy. Alright. Did you, like, film an intro? Do I film an intro? Or do you just go? No. I just kinda go hard in. I just started. Yeah. This probably is the intro. Alright. So, Brad, thanks for doing this with us. I'm excited. Yeah. Me too. Do you have enough drinks? Would you like one more? Well, yeah. I'll take whatever I can get. We can load up. Well, I really appreciate you making time for this. I've been really looking forward to it. What I wanted to start with, actually, was I was just thinking about this last night. You joined OpenAI in 2018. And then for like four years, you know, it was like a research lab. You guys are like beating Dota. And then like four years in, ChatGPT launches, and then it's this whirlwind that's been, I guess, like three years, but I'm sure it feels like a lot more. I was just curious if you could share your narrative or recollection of what the journey has been like, and what are the chapters? Like, what's your experience been like as you look back on this so far? Yeah. Chapters is the right word.
It's the kind of journey of OpenAI, which I think tracks the journey of AI as a field, as an industry, has kind of been broken up into these weird periods. Like, when I joined, no one had really heard of OpenAI. Our work was, you know, relegated mostly to kind of small niches of San Francisco tech culture that followed such things as, you know, us beating the best Dota players in the world and things like that. Really, it was kind of, you know, I didn't really have anyone to talk to about it. Everyone was kinda like, what are you doing there? And what do you do there? And you were the CFO when you joined, right? I was our CFO. What got you? Like, what were you thinking when you joined? Like, what did you expect it was gonna be? Well, I didn't know. I was 27. And so I was just kind of, you know, and maybe I'd back up a minute. I was at Y Combinator prior, working with Sam. And I was starting to spend a lot more time with what I call our hard tech portfolio in YC, so all the companies that are building everything that wasn't pure kind of SaaS and Internet, you know, consumer Internet. So spending a lot of time with, you know, everything from nuclear fusion to satellites to biotech to, you know, anything that would kind of fit outside that. And OpenAI was kind of in that camp. Like, AI was kind of one of those things. It was promised as this, like, future technology, but, you know, I wasn't really sure who was actually building this. OpenAI started, as you know, as like a YC research project. And so it was kind of in the family. And Sam had called me and was like, hey, I need someone to help basically do everything that isn't just the research at this company. Do you know anyone that would be good? And I tried to help them find someone, couldn't find anyone. And so I was like, I'll just help you, you know, myself on the side.
But I started spending a lot of time with Greg and Ilya and the team that was there at the time. I kind of realized that there are these crazy properties that apply to AI, which now we understand to be basically the scaling laws. And so consistently, the field was starting to discover that when you make things bigger, the results just get predictably, consistently better. At that point then, it's like, okay, really this is just a compute problem actually. And intelligence basically can just be bootstrapped from scaling up very basic general architectures that can turn into a more general intelligence. And I was like, well, I don't know if this is true. And I don't know if this will hold. I'm certainly not qualified to judge that. But if it does, and these guys seem convinced that it is true, it's gonna be the most important thing ever. Yeah. And at 27, I was like, I don't know. That just seems more interesting than investing in tech. Yeah. So you started doing that. And then what happened in those early years? Like, obviously, people were building things that were working, like beating the game and, you know, a lot of other projects. But what were you seeing on the inside from, let's say, like 2018 to 2022?
Obviously, it was much more of a research-centric culture. OpenAI is still highly research-centric. I feel like people kind of think post-ChatGPT, it became much more of this product-centric culture. But research really drives everything. And I think that started because of how much it was cemented in that period as, call it, the kind of cultural foundation of the company. So I spent a lot of my time really just trying to figure out what researchers needed to be successful. And that spanned from, you know, the capital that we needed to invest in supercomputers, to working with partners to do the supercomputer design and buildout, to things as kind of trivial and pedestrian as, like, our robots keep breaking and, you know, it takes too long to drop ship parts from, you know, this one supplier that sits in some small town in England or something like that. How do we tighten that loop and go faster? So it was this very kind of diverse set of problems early on that were really just about pure research acceleration. Obviously now, you know, it's kind of both research and deployment in our business. But it gave me an early appreciation, because I just spent all my time with researchers. And so it gave me a firsthand understanding of kind of what was happening before I think anyone else really appreciated it. So then there was ChatGPT in 2022.
Did you guys on the inside feel like, oh, this is gonna be something? Like, when you were playing with it before it got released, was the vibe inside like, this is like another cool thing, it's like a playground? Yeah. Or were people like, this is something? There's a word that people sometimes use in AI to describe when there's an indication of something happening, but it hasn't quite happened yet. You kind of get these little sparks. And that was kind of how I would describe the pre-ChatGPT period: there were a lot of sparks. You could see that the models were now starting to get good enough that they could kind of emulate, you know, humans in a conversational format. You could see that there was an interest that people had in directly prompting the model. People forget that this was not the way that we originally engaged with language models. We thought of language models as completions engines. So you start a text string, and then it basically takes that as an input, and then it continues the text string on. This kind of more conversational, you know, dialogue-based format is not the original invention of language models. But what we were seeing is we had an API that was a completions API, and we had an interface that basically let people put text into an interface that would then, you know, show a preview of what the model would actually produce as an output. But people were trying to use that interface in a more kind of dialogue, kind of conversational, turn-based format. And so you could see it. If you kind of paid attention, if you listened, you could see that people wanted to talk to the model, and that was the natural, intuitive way that people wanted to engage with it. But it wasn't actually quite built that way. The other thing that we saw ahead of time was we trained an early version of DALL·E. It was our first image model.
It wasn't very good, but it was really a breakthrough at the time. And so for the first time, you could now generate images. And we had seen some adoption of that model in a more kind of consumer prompt based format. And so we had guesses leading up to ChatGPT that it was going to be something important, but we didn't appreciate the scale. I think my guess at the time, we all took guesses because we had to do the compute planning, was that at peak there'd be a million concurrent users.
And, you know, obviously we were very wrong. So what are the chapters since? Like, if you look back at the last three years, what are the phases? Like, if you were sort of describing to a friend, here's the phases of my journey post-ChatGPT, how would you bucket it? There's, you know, there's phases of the company's life, and then I think there's phases of the industry and technology. And on the technology side, I would say there's obviously this kind of proto period of research just starting to work. And I call that kind of the scaling period, where we just realized that you actually could go from something that was unusable to something that was kind of usable across, you know, basically most model formats. That was kind of before mass consumer adoption, kind of 2018 to 2022. I think 2022 to kind of 2024 was really the period of chatbots, where all of a sudden now it was, okay, you know, it was generative AI. It was people realizing that, you know, you actually could have something that was useful, but it was not totally clear exactly what it was useful for. You know, it was new and novel, and I think people had an appreciation for that. But, you know, the utility was still not totally there. Like, it was kind of like a slightly better version of search. And then the next chapter, and I think the one that we're in now, is this kind of period of agents, which is AIs that actually can go do things for you. They run asynchronously. You can give them instructions, and they can take an arbitrary amount of time and tokens to go off and think and figure it out. They can use tools. And I think we're in the middle of that period. I think that started, for me, in December 2024 with the release of o1, and then kind of through 2025 and into 2026. And you said we're like in the middle of that now? Yeah. I think so.
I think weirdly in each of these things, because the kind of utility quotient on the models goes up by some enormous factor, there's almost more time it takes in each of these eras to explore the kind of full potential of the model. What I say to our customers and partners all the time is: you could stop progress right now, and I still think there's kind of a ten- or twenty-year diffusion and innovation cycle that just comes into the economy. Just to get it into the economy and for people to realize what these things are capable of. With chatbots, that maybe would have been five years or something like that. Yeah. But, you know, with agents, it's probably some multiple. And then the question is, obviously, the technology will progress much faster than that. And so that dissonance of the diffusion period being kind of much longer than the actual kind of innovation cycle is going to be something interesting to watch. How far away are we from the, like, completion of what agents can do? Like, is it the beginning of a thing that will never end? Are we halfway up an S-curve? What is the current sentiment for, like, what the endpoint of, you know, agents' capabilities will be?
Personally, I feel totally unmoored here. I don't know. And, you know, the kind of historian and, you know, kind of technological economist in me kind of wants to think that everything has to fit into these very nice kind of S-curve-shaped paradigms, and that, you know, the innovation cycle will kind of look exactly as it has. And that there's this curve that we can see. Yeah. That kind of Carlota Perez, like, you know, okay, this will all be the way that it has been. But, you know, there's a lot of meta levels to this. I think we don't quite understand that when you've got systems that now have, in some sense, their own agency, there's almost kind of infinite levels of things that can happen, right? They can now start directing other agents. They can work together. You have the temporal aspect of, they can just, you know, think and work for longer, as long as they can kind of cohere the context basically through that period, which, you know, is something that I think will get solved. You know, even basic primitives like memory and other things that are core to very long horizon work, and work that you would do kind of over multiple sessions, all of those things haven't even yet been sorted out but are starting to get figured out. Yeah. I mean, I've always thought, at least in the last year, I've been like, why are we not gonna get to a place where you can just prompt, you know, build me a business, make no mistakes? Exactly. Yes. Yeah. No. But I can see it, why couldn't you be like, hey, can you go make me a million dollars, please? Right. And you play it out in the limit, you're like, I don't know, maybe that's possible. And I think that's kind of why, even, you know, maybe if you go back and say, even if you pause progress right now, maybe it's longer. Maybe it's forty years or something, or fifty years of progress that will come from this, just on the basis of this step of the cycle.
One of the interesting things that I've experienced is, right before and right after ChatGPT, I think a lot of the conversation around AI was, like, living in sci-fi land of, are we gonna have, like, the next species take over? Are there Dyson spheres? Like, it was very, like, big. Yep. And then what I've experienced over the last few years is it's been extremely commercial.
In a good way, but in a very down-to-earth way. Like, in the economy, operated by humans, it doesn't feel scary. It just feels like insanely sick software. But still, there's this lingering thing in the background that I think gets talked about a little bit less of, like, is there sentience? Does it go to this other place? Is that still a conversation that matters? Is it something that's still thought about? Or is it just like, hey, we actually feel now like this is just really good software, there's nothing to be worried about, it's just an insane technical revolution? Yeah. This is a really interesting question. I think in some sense, the better the technology gets and the more it pushes toward that sci-fi future, the more we actually end up having the conversation about it, diminishing it almost to just being a tool. And it's a weird paradox. And I've noticed the same thing. Because I used to sit at the OpenAI that was very much having the conversation about Dyson spheres. Because in 2018, that was kind of all you could talk about. You basically had something that was kind of barely working at the beginning, and then you could try and see. You think about the whole thing. Because once you're in the middle of it, you have the steps right in front of you. Yeah. There's a local linearity that starts to set in where you're a little bit like, okay, I appreciate that this thing is a gazillion times better than what it was, you know, in 2018, and the capabilities are multitudes more than what they were even two years ago. Like, as an example, you know, you talked about DALL·E. Yeah. When that came out, I was like, oh, that's cute. Yeah. But now, not so much, you know, just a few years later. I can't tell if the video is fake or real half the time. Yes. You know, it's like that's gonna get all the way there, where you'll have no idea. No. Yeah. And I think that, like, in some sense, there will be these kind of parallel conversations that happen.
Like, there will be the kind of enterprise productivity conversation, because that is something that people are actually thinking about when we talk about it. Everyone's gonna kind of glob on to, you know, whatever the narrative there is, which is just sort of funny. Like, are we waking up a new god? Or are we helping lawyers be more productive? I think we're doing both. And I think, you know, there's the kind of parallel track of this insane level of empowerment of an individual person to do things that, like, would have been inconceivable even a couple years ago. You're already seeing examples of it. And that to me is, like, the weird sci-fi future. There was the story over the weekend of a guy in Australia who is, like, curing his dog's cancer, who has no background, as I understand it, in biology, but basically just had GPT-5 effectively try and come up with some sort of RNA-based treatment, you know, that could treat his dog.
And then he worked with a lab to do the design of the treatment, and, you know, they kind of sent it back, and it seems to be working. And it happened for, like, $3,000 and in a matter of, you know, a few weeks. And, like, it's kind of a crazy thing. Right? You know, that to me would qualify as, like, a spark of a sci-fi outcome. It's crazy how fast we adjust to anything. It's like, you know, we could learn that there are, like, aliens tomorrow, and, like, we would next week be like, yeah, of course there are. You know? It's just... Yeah. One of my takeaways with this whole thing is people just adjust to any new surrounding. It's a 100% normal in, like, no time. That's been my experience. It's like, things are novel for about three seconds. And the next day, it's like, okay, what have you done for me lately? Yeah. On this sort of topic of, like, what is the thing? I'm sort of watching it all. And I'm from St. Louis. Now I'm living in Silicon Valley. There's a very different perception of AI in, like, the St. Louises of the world and in, like, Silicon Valley. And, like, I think here, the general sentiment is, like, this is amazing, thank goodness this happened. And I think around the country, maybe the world, there's, like, real skepticism and anxiety and fear. And I think people here have that, too. But, like, it's this interesting reckoning for people where you're grappling simultaneously with, like, oh my God, that's amazing, and that's awesome, versus, like, oh my God, that's amazing, that's kind of a threat. How do you think about, like, what the right way to interpret this is? Like, what are, like, the genuine concerns and fears that, like, we're gonna need to work through? And, like, what are the things that you think are misunderstandings that will actually just be really positive? Yeah. And look, no one knows the future exactly. So I think everything here is speculation on all sides.
I come at this from more of a, like, economics, kind of history-of-markets background, which was more where I spent my time in college, and I still spend a lot of my time trying to understand the world through that lens. So first of all, I think it is really a bummer that the world's view of AI is what it is. And I think I blame no one other than the industry basically for that. I think we, as an industry, have done a horrible job of being able to paint for people a picture of a future that is way better than the world we live in today. And the crazy thing is, I actually think that that is the reality. I think, you know, stories like the one of the guy who is curing his dog's cancer are going to become much more commonplace. And I tend to find a lot of comfort in the idea of, like, coming back to individual empowerment: anyone anywhere on Earth can have an idea, and the time to value from conception of idea to thing that exists in the world starts to collapse to zero. You know, not only from a time-to-value perspective, but also a cost-of-creation perspective. And I just think amazing things are gonna happen when you reduce that friction and you increase that access. Like, people are incredibly innovative. They are incredibly creative. Everyone is motivated by their own set of circumstances and the problems that are in front of them to wanna improve the world they live in. And, like, I think 99% of it is a tools problem, which is they've historically had no means to be able to do that. And when you give people something that now enables them to start a business, do research, create a new thing, build a new service, serve customers more efficiently or cheaply, like, only good things can happen in my mind.
Now, obviously, there are things that come with that, and we have to be thoughtful about what the technology presents in terms of the flip side, because it's as capable of, in some cases, doing harm as it is of doing good. But I tend to think that we will figure that out. Like, we are resilient, and I would say also equally creative as a species. And I tend to think that whenever we've been confronted with the opportunity to create something that has potential for greatness, we also have been really thoughtful about how we build institutions that protect against the downsides. So I have a more optimistic view. I think that the industry has more of a duty to help people appreciate and understand what's happening, and to help people also, like, live the experience of it, like to use these tools to do the types of things we're talking about. An interesting instance of this sort of conundrum is in coding. And, like, I feel this is something that's easy for us to talk about because we're very familiar with it. And it's one of the best applications of AI so far.
And so, you know, now, obviously, like, AI is really good at coding. So then you could bump that up into the real world and say, are we gonna have more developers? Are there gonna be more people doing more things? Is it gonna replace... I think the data I've seen so far is actually that there's more engineering jobs being posted every month than, like, ever before. Yep. But I'm curious how you think about this with, like, coding as an example of what's going to happen when it bumps up into the real world of people doing stuff. This is where I come back, as rational as I can, to this kind of economics-based, kind of markets-based view of how things have worked in the past, where you have, you know, distortions in kind of supply, demand, and cost that create these weird inflection points in human productivity. If you reduce the cost of software engineering, for example, to virtually zero on the margin, then the simple thing you'd think would be, okay, well, engineers won't exist anymore. The thing we're seeing in reality with tools like Codex and other things is, actually, when you reduce the cost of something to zero, the demand for it goes up significantly. And the job of the people who were previously described as software engineers, who were kind of hand-typing every character of code... Who are now guiding agents. ...are now just doing a slightly different version of the job. Well, I think, you know, some of this is that the cost is lower, but it's not zero.
So... That's true. Which is a good thing, I think. Because between two companies that are competing for a new market, let's say they're doing, you know, AI for construction. If you have two companies, even if engineering got much cheaper, if one just still decides to spend 10 times more than the other, presumably those people are not going to do nothing to improve the product. And so I think it should just be better software rather than fewer people working on it. Software is wildly underpenetrated in the world. I think if you actually zoomed out and looked at all the places where software, and good software, not just software, should exist. Yeah. And by the way, there's still so much bad software. Like, everywhere. If you, like, go to a hotel and you, like, look behind their screen, you're like, what are you typing on? You know, there's a lot of work to do. It's crazy. And that to me is also, by the way, if you want to talk about risks, that's actually where I think the risk surface exists. It's the software systems that hospitals use, that our power grid uses, that, you know, store, like, customer information through a hotel or retail. Like, these are all fairly archaic systems for, you know, institutions that actually span meaningful percentages of the world's kind of GDP. And so I would kind of look at this as, like, in some sense, this is almost the greatest thing to ever happen, is that you've now got systems that can help update all of that software, that can bring software into places where there's 0% penetration of software where there should be, that can help reinforce and harden systems that are exploitable or vulnerable. And in some sense, like, you know, you kind of look at where we were in terms of how much we actually needed software relative to kind of how much we'd penetrated. I think if you actually could measure that, I think we'd be at 1% today.
And so I have maybe a slightly different view of this, and it's a personal view, of course, which is if you have AI that can write really, really good and obviously safe software, I think that is going to be one of the greatest gifts to the world. And I think the speculation around, you know, will there be software engineers in the future or not, is kind of the wrong question. There are going to have to be people who oversee the design, implementation, and maintenance of what could be 10,000x the amount of software and the amount of code that gets written in the world. And that is going to create a unique demand cycle that may not look exactly like what we do today in software engineering, but it's going to be important. Yeah, absolutely. What was the breakthrough that happened for you all recently with Codex? It seems like some step function thing changed in the last few months in the industry and for Codex in particular. Well, it's a few things. So I think one is just the focus of the team at OpenAI building Codex. I've been at OpenAI a while, as you said, and the work that that team is doing to drive that product, with the amount of focus and intensity that they're doing it with, is kind of a singular and unique effort, I think, in the history of the company. They are obsessive about the quality of the product. They're obsessive about the quality of the model. And because of where we are in terms of how models are trained, the cycle time on how fast we can kind of drive improvement is starting to collapse. And so that's why you're seeing these jumps from 5.1 to 5.2 to 5.3, and now it's not surprising that you get a model like GPT-5.4 that, as of today, you know, here we are in mid-March, the model's a few days old and is doing a billion-dollar revenue run rate. It's doing five trillion tokens a day. That's crazy.
It is, you know, now far and away our kind of most dominant model of our set of API models, and is also driving, you know, Codex growth at the rate it's going. And I think that's only going to increase this year. And so by the end of the year, I think we'll look at the models that power Codex and our APIs today and we'll laugh. We'll think they're kind of pedestrian. Obviously, like, OpenAI started, you know, in chat and then moved into all these different things. And over time, I think it has become probably one of the most unique companies in general. But included in that uniqueness is, like, you guys have done a lot of things. How are you thinking about that now? Obviously, like, the market is starting to somewhat mature. You guys have had new companies come out, spin out of OpenAI and focus on areas that have turned out to be really productive. I'm sure that's, like, changing the way you guys are thinking. So I'm just curious about, like, the state of the union. You know, in early 2026, when you, like, look at, you know, here's where we are, here's what's around us, what matters now, like, what do you care about? Like, what do you say, this got us here, this is what's going to get us there? What's the focus? One of the cool things about OpenAI is it has a very wide aperture on, I think, how it looks at what its kind of ultimate mission is. These lines that people drew, I think, maybe in the world prior, of, you know, you're B2B or you're B2C, or you're hard tech or you're software.
You know, all of the ways that kind of the VC ecosystem segments itself. Got a little lean. Yes. We don't see those walls. We kind of see AI as being this enabling technology that is gonna drive innovation cycles across all of the above. And that could be, you know, in the enterprise. It could be in consumer. It could be in, you know, in creativity. It could be in robotics. It could be in hardware. And I think what we want to understand is what each of those bets looks like. And OpenAI has an operating model that has been kind of tried and true for us really since the company started, which is being able to be experimental, being able to kind of try and iterate, being able to be very kind of model-forward, I think, in how we think about a problem, and not really feeling like we have the incumbency of the kind of last generation, and then trying to kind of see if we can build the thing that we think is possible. And if it works, you kind of build an effort around it. And if it doesn't work, then you kind of shut it down and you recycle those people back into a new thing. Yep. And that was really the way that OpenAI operated early on. It still somewhat is. It's this kind of expansion-contraction model in research, where you've got, okay, maybe there's 20 projects that are kind of all trying different things and going on at the same time. Maybe two or three of them will really work. You scale those up. You consolidate people kind of back into those projects to scale them up. And then over time, as you kind of shift into a next paradigm, you start to, you know, spread back out again and see if you can take more bets. And I think that's gonna be how this goes. Everything is, in my mind, downstream of research. And so if that's the kind of cycle of how research is working, in some sense, I think the product and deployment cycle should look similar.
I also feel like I can just tell from the way that it's a unified model, the way the product's feeling, it's going to all just be a unified thing at some point here soon. Like, it's already kind of going that direction. And that thing will just be used by people, whether they're at home or at work. And, you know, it's like people use Google at home and at work, it's just, like, you know, it becomes the tool. Yeah. We need the models to start doing more work for users is what I would say. I think if there's been one really big gap in my mind in kind of the user consumer experience in AI so far, it's been that users have to do too much work. You're kind of promised this future of these really smart models, and, you know, they can kind of solve all your problems very dynamically. And yet here we are, like, with 18 things in a model picker. And, you know, do you want, like, thinking fast mode, and, you know, or do you want pro thinking hard mode? It's crazy. It's time to move on. Yeah. It's time to move on. That to me feels like the direction you're describing, of this more consolidated, like, I just don't want to think about it. I just want intelligence, and I'm gonna let the model kind of decide how to allocate that, you know, on a token level most efficiently. Okay. I want to move the conversation to a selfish place now. Okay. You've been an investor before.
My question is, what should I invest in? And, you know, maybe to put a little framing around it, there's a frequent worry among founders of OpenAI releasing something and I'm going to get my face blown off. And, you know, what's safe from AI? And what will or won't the models do? Where can a startup predictably add value? You know, Sam talked about how you should build your company such that you're planning for the models to get smarter. And if them getting smarter is good for you, that's a good thing. If them getting smarter is bad for you, that's going to be really tough. But maybe can you unpack it a little bit more now, just as months and years have gone on? What are, like, the safe places for a startup to try to do work that they can expect to still be available to them in three years? I mean, I'll go back to the Or should they just all join OpenAI? I don't think they should all join OpenAI. First of all, the level of energy in the ecosystem right now is like nothing I've ever seen. Like, the quality of founders and the And the effort. The effort. There's this intensity and there's this, like, urgency that Do you remember the startup ecosystem, like, right before ChatGPT?
Like, you know, after ZIRP, like, you know, we had come down from, like, the SaaS, you know, glory moment. That was tough. I don't know where we'd be right now without, you know, it would be not fun. I was at YC in kind of 2016 to mid-2018. That was good. The front end of that was a fun time to invest at growth. You know, we were fortunate enough to invest in, I think, the growth rounds of a lot of the companies that had been built in, call it, the last five years prior to that. And then weirdly, it just got less fun Yeah. I think kind of in 2017, 2018. And I don't know what it was. It just felt like the ecosystem was kind of tired. Like, it didn't feel like there were a lot of new ideas. I think a lot of the obvious stuff had happened at that point. And I think without, like, a new technology shift, like, at some point, you know, there's always more to do. But at some point, the 80 of the eighty-twenty gets done, and now you're, you know, rooting around in the 20. I think that's right. But it feels firmly now like there is this entirely new cycle, and that the urgency and the excitement is very much there. And also just the ambition of the companies that we engage with. It's, like, stunning to me sometimes. I'm like, you're gonna do what? Then you realize also there's an enablement factor of, like, as soon as you get models, for example, that are good enough at software engineering that they can start to, you know, themselves design and write in new programming languages. Or that they can speed the time from being able to take old code bases, refactor them, and then rewrite them into new and modern frameworks that enable another company to exist and serve an area that was historically underserved.
You realize that, like, oh, there's an entire industry here that didn't exist that's about to get built. And then you've got a founder who sees that and they're like, I'm gonna go after that. You know, that's partly the answer to the first question. If you kind of think of model capability as dropping successively larger rocks in the pond, the ripples from those rocks kind of, you know, reverberate wider and further, and it creates more and more surface area around the circumference. And I think the way I would look at it is, you don't want to be right under the rock dropping. You're going to drown. That's a very hard place to be. But you want to be right out on that outer edge, on that surface: what is the thing now that is enabled by this advancement in the capability that wasn't workable before, in a very specific and opinionated area, on a very hard problem that has historically been underserved. I guess, to stick with your metaphor, I feel like some of the fear is that the next rock you drop is going to be bigger than the circumference of the ripple of the last rock.
And so things that, you know, were at the edge before are now squarely in the center of the model. Yeah. I think there's no substitute, though, for being familiar with a user, a problem, you know, like how the existing industry serves that problem or doesn't serve that problem, and just being very, very close. You know, YC always had this thing. It was basically, effectively: just talk to users. It's kind of this simple advice. Sounds trivial, but not enough people do it. And when you actually get into it, you realize, like, oh, the world is gigantic. You know, 99% of people get to use bad tools or don't have any tools at all. You know, the quality of experience of the people that exist as their customers and users is not very good. Yeah. Everyone's lived that in some capacity. Everyone has, like, lived the bad experience of going through modern life Yes. And dealing with the things that we have to deal with. I don't know. I just think if you're kind of sitting there lamenting the idea that, you know, there's no more good ideas and no more new ideas, like, it's just kind of lazy. I feel like there are at least two other things that can just, like, give you comfort as a founder. One is that I don't think any company, no matter how great it is, can do everything. There might be 10,000 people working at the labs, but there are millions of people other places, and you just can't do everything. Yeah. The other, that I've been surprised by, is that some of these markets are just so ridiculously big that there are, like, eight things that are all doing well around, let's say, codegen and website building and sort of, like, internal tool creation and whatever. You could do that probably straight out of Codex, but you can also use other products that are great, that are, you know, based on Codex and things like that. So I think some of it is just that these markets are hard to appreciate how big they are. Yeah.
And everyone's got, like Like, again, there's no substitute for being able to talk to users and being able to identify, like, what people really want. Like, OpenAI's focus is really on trying to improve the models and do the best research we can possibly do. But, you know, for someone in a very specific area of the world who has a very specific set of needs, who wants to do one thing and wants to do it really well, there's probably some alpha there. I do think it changes the way you need to build a company versus in the past, though. I agree. Like, what I've noticed is a lot of the great founders today seem very willing to just rip everything out that they've done up till this point and keep only, like, their team knowledge, customer relationships. But if the product we built so far is wrong, we're gonna just trash it, in a way that I think people were much more precious about before. But I think some of this goes to there's, like, a new ephemerality to a lot of these things, which is: software is super easy to build. I can make a UI that works for me today, but I'll throw it away because I can just make a new one tomorrow. I think that's, like, an interesting trend, too. Yeah. I have seen a handful of times now companies that were built in that period between, call it, 2008 and 2016 or something like that, you know, kind of the canonical darlings of software from the last decade or so, whose founders are still running the company, who have basically decided, like, I'm effectively restarting the company. Yeah. And they have taken it on themselves to fork off of the mainline effort to basically go figure out, like, what does the second chapter of this company look like in a world where the primitives and the tools and the assumptions have changed. Which is a hard thing to do, you know, there's just so much sunk cost to it all. Yes.
But I think for the people who are able to adapt to that, it's a huge advantage, it seems like. Totally. Like, in my opinion, you can iterate so fast now. Like, you can explore the action space so quickly. Yeah. And you have the benefit of, like, you know, legacy customer relationships. You've got the benefit of existing teams. So in some sense, you almost are starting with a head start. Yeah. The way I see it is, like, you can learn faster. Yeah. Versus, you know, if I were to start a new company tomorrow, I'm starting with no customers, I'm starting with no funding, I'm starting with no product and no team. I guess related to this, how do you feel about the sort of sell-off in public markets? Like, obviously, outside of, you know, the big companies, which have done great, but sort of, you know, public software companies have taken a pretty bad beating. When you think about the work that you've been doing with them and what you've been seeing, are you watching that and you're like, this makes sense? Or are you like, actually this is sort of a misunderstanding, and you're feeling bullish about those companies?
Hard to comment on specifically. Like, the market is a very frenetic thing, as you know. Here's what I kind of live day to day: we work with basically every company that, you know, sits in the Nasdaq that you could imagine. And A is, like, all of these companies are kind of as motivated and moving as quickly as any startup. B is they've got amazing customer relationships. They've got amazing depth of understanding of the problems they're trying to solve, the areas that they serve. Obviously, they've got years and years of perspective that have been built. And I think, like, now in some sense, they're, you know, able to leverage and benefit from the same tools that anyone else is. And so the conversations we're having with them are really about them starting to rethink, you know, end to end, their entire customer experience, their product, starting to think about, you know, how do they serve adjacent markets, starting to think about ways that they can pass capability through to their users. So, like, creating entirely new experiences that weren't possible before.
So I think you could take the other side, actually. I think you could basically take a very long view here, which is that Yeah. In some ways, the software itself is, like, the easiest thing at this point. Having all the relationships, the team, the trust with all the customers, that's actually the hardest pole of the tent to have now. You know, if that segment was asleep, I would say, okay, maybe that concern is more warranted. But they're not. No. And it's happening at the CEO level and the founder level in some cases, where everyone is as motivated to figure this out, and figure out, you know, how to create value for their customers and their business, as anyone else. And so I think, you know, it's the beginning of a new cycle is my guess. You're always gonna get new companies that form that are trying to take a fresh and new approach. Often, the benefit that those new companies have is that the incumbents don't realize what's going on and are too slow to move. Here, you actually don't have that dynamic. You've got everyone trying to run at the same speed. And so I think that's exciting. And I would say if you're kind of long AI and long, you know, startups, then it might even make sense, maybe the contrarian opinion, to be long legacy software too. I don't know if you're experiencing it one way or another, but, like, what do you think it takes for more experienced people? It doesn't have to be founders, but just, like, even people joining OpenAI from some old company, you know, that had not been AI native. Like, how do you help people reset? Like, what does it take for people who have lived in the pre-AI era to, like, you know, work the new way? I think you got to, like, see it firsthand.
And if you're not, like, playing with Codex every day, like, I think it's hard to intuitively grok just, like, how disruptive and crazy it is. Mhmm. Like, Codex for me has replaced ChatGPT on a kind of daily driver basis. And I'm not even technical. Like, I don't write software for a living, but it has a general capability, and I'm specific enough about the set of things that I want, and I've kind of developed enough, like, familiarity with it. So what are the, like, not what are you doing with it? Like, what's, like, a daily quick use case? My life is basically a kind of daily struggle of, like, thing that I would like to see get done. There's a quote. My life is a daily struggle. Well, that too. Of, you know, thing that I would like to see get done, and then kinda how fast can our team mobilize and operationalize to get it done. And at a busy, fast-growing company, like, sometimes those timelines drag. And then when those timelines drag, it means, like, the thing that I kind of want to see us do starts to drag. Yeah. And everything kind of elongates into this kind of, like, okay, something that really should take, if everyone were 100% focused on this thing, something that should take two days, you know, now takes basically kind of a month. And so one of the things I've started using it for is basically supplementing that thing. It gives me, like, a first version of everything. So for example, we're building a fairly substantial forward deployed engineering org, which we can talk about. But recruiting for that has been, like, challenging. Like, recruiting's hard. Plus you're using it to recruit. Well, I'm using it actually to basically go through, you know, lists of people that we're thinking about recruiting. How do you navigate and stack rank among that list before you start getting into, you know, the candidate engagement?
And it's crazy because, like, everyone today kind of has this, like, online presence, and a lot of people have blogs and X accounts and all that. And so I just told Codex, I was like, here, take this list and basically go figure out, like, what public presence any of these people have, and, you know, come back to me and effectively, like, read their online thing and score it against how you think about some of the kind of technical elements of our work and what, you know, the job descriptions are of the things that we're doing. It works for even what is kind of a nontechnical task like that. It basically writes a program, and it will figure out how to, like, go efficiently look at each of these profiles and come back and give me kind of these scores on how good, you know, it thinks each of these candidates' kind of online writing has been. Yeah. And it's cool because it actually surfaced for me, you know, three or four candidates who I couldn't have picked off the list, staring at a list of 200 names. But where I was like, okay, like, let me go double-click on this. And now it gives me an opportunity to go, like, really look into that candidate's, you know, profile and their blog and whatever and start to just get to know them better.
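The shape of what Brad describes Codex generating — take a list of names, pull whatever public writing each person has, and score it against a job description — can be sketched roughly like this. This is a hedged illustration, not the actual program Codex wrote: all names, data, and the naive keyword-overlap scoring are hypothetical, and a real version would fetch live blog and profile content rather than use inline strings.

```python
# Hypothetical sketch of the candidate-triage workflow described above.
# A real run would scrape each candidate's public writing; here we use
# inline stand-in text and a simple keyword-overlap score.

def score_writing(writing: str, job_keywords: set[str]) -> float:
    """Fraction of job-description keywords that appear in the writing."""
    if not job_keywords:
        return 0.0
    words = set(writing.lower().split())
    return len(words & job_keywords) / len(job_keywords)

def rank_candidates(candidates: list[dict], job_description: str, top_n: int = 3):
    """Score every candidate's public writing and return the top N."""
    keywords = set(job_description.lower().split())
    scored = [
        (c["name"], score_writing(c.get("public_writing", ""), keywords))
        for c in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]

# Stand-in data for what would really be scraped blogs and posts.
candidates = [
    {"name": "A", "public_writing": "posts about deploying ml agents in production"},
    {"name": "B", "public_writing": "travel photos and recipes"},
    {"name": "C", "public_writing": "notes on production infrastructure for agents"},
]
print(rank_candidates(candidates, "deploying production ml agents", top_n=2))
# → [('A', 1.0), ('C', 0.5)]
```

The point is not the scoring heuristic (a model-generated version would likely use the LLM itself to judge writing quality against the job description) but the structure: list in, per-candidate evidence gathered, ranked shortlist out.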
And that process would have taken, you know, a kind of normal busy recruiter probably a couple weeks. Right? It's a lot of names. Yeah. And here it just collapses down to 20. By the way, I bet a lot of this is, like, what is going to be needed for people to just broadly be excited about AI, not, like, frustrated about it: using it and realizing that it's, like, super empowering. Very much. Yeah. I think, like Versus thinking, like, oh, all these other people are using it to be empowered. It's like, no, just start using it. And I guess a lot of that is, you know, you getting the tools to a place where it can be adopted super easily by everybody. For sure. And I think, like, almost in some sense, one of the things that I feel like is kind of the story that hasn't, like, yet really diffused into more mainstream conversation on this is just, like, how general these tools are. Like, you don't have to be a software engineer to use Codex. It's just fascinating that you prefer Codex over Chat for a lot of your work. It's cool. Yeah. I mean, the Codex app is amazing if you haven't used it. I have. I check it out. But it is, you know, so, like, the terminal-based use is maybe a little more intimidating if you're not technical. But, you know, in an app interface, it kind of just looks like Chat. And, you know, it's got much more general agent capabilities. Yeah. On the topic of, like, the forward deployed stuff and Yeah. Private equity, like, what's the thinking there? The thinking is very much what I was talking about earlier, which is, if you think about kind of, like, the way that software is gonna get built in the future: in some sense, now, any specific problem within any company, in any part of their process, historically, it would not have made sense economically to have spent a lot of time thinking about how to solve that one corner of a problem.
It's too expensive to hire a bunch of people, to build a bunch of, you know, software, and, you know, for that software to then have to be maintained. And obviously, for the most important problems in most large enterprises, you could hire people to do that type of thing, and there are entire industries that have gotten built around that. But for, you know, 99% of problems, for kind of 99% of businesses, that's totally out of reach. You'd have to either decide that you wanted to hire a couple people to try and build something on their own that maybe didn't work super well, or you look to see if the market offers a solution. But the problem is that solution doesn't necessarily fit exactly what your shape of problem is. So now you've got people kind of contorting themselves trying to figure out how to adopt the thing off the shelf that wasn't really built for their company. It was just built as a kind of general-purpose tool. And I think that entire era is over. I think, like, now you actually can reason about how almost every problem inside of a business can have solutions that are kind of custom built for it. And it goes back to this kind of weird paradox of what do you think is gonna happen with jobs, where, you know, we wouldn't be wanting to hire FTEs as aggressively as we are if it felt like software engineering jobs were going away. The jobs of those FTEs are different. You know, if you'd hired an FTE five years ago, they'd be doing something different than what they're gonna do in the future. But the amount of demand and the amount of opportunity that we see to be able to go address surgically every area in a business that could benefit from solution design, and not solution design that happens on the order of eighteen months, as is the kind of industry norm, but solution design that happens on the order of maybe eighteen days, if not faster.
That to me is an incredibly large opportunity that I think will be the story somewhat of how the next few years goes. And so the FTEs we're hiring are really to help address that. Last question I have is just sort of your reflections working with Sam. It's kind of funny. You know, I've known him as a brother. You know him as someone you've worked with for a long time now. I'm curious sort of what the evolution you've seen has been, like, now that he's obviously gotten to a different place in, like, the public sphere and there's this whole public persona.
Then you obviously work with him on a daily basis. Just, like, what's the whole experience like for you with him? Yeah. You know, I think Well, so we worked together for ten years. Ten years in January. And the first year or two was YC? Yeah. First two and a half years was YC. And then I got to OpenAI before he did. I recruited him to OpenAI. I love that. But, you know, he's a remarkable individual. You know that. And I wish more people could spend more time with him kind of off the record. I think he's not innately, I think, someone that enjoys being kind of a public face of things. I think, certainly, it feels like an unnatural Yeah. Thing for him. He is someone who much prefers spending his time Yeah. Sitting in a huddle of, like, five people talking about the future and having a deeply technical conversation about some niche topic. That's kind of who he is internally at OpenAI. It's what I've always known him to be. And I think that if more people could spend more time with him, they'd realize he's, like, an infinite optimist. That's crazy because the way I experience it, it's almost like this, like, sacrifice he's made to put himself out so publicly, which is a requirement, I think, to make all of this happen and, like, show the world that accumulating talent, compute, and all these ideas in one place, like, that's what made all of this possible, so everybody can see it. But, like, that's such an uncomfortable thing to have done.
Yeah. Well, you know, it's interesting because, like, he thinks on a timescale that's, like, more like a decade plus. And I think the world kind of struggles to think beyond, like, a quarter forward. Yeah. I've always felt like there's this kind of mismatch in There's a total mismatch. And so it's like he'll say something, and everybody's like, that's crazy. Yes. And then three years later, it's exactly where we are. Yes. Sometimes sooner than that. And then, you know, there's no, like, reconciliation backwards. It's just like now he's saying a new crazy thing. And people are like, oh, you've been crazy all along. And that's, like, a weird thing to watch. And there's no sort of way to tie that together, really. No. Everyone's trying to figure out what's happening right now. Because I think in some sense, the whiplash is so real. And I have, like, a lot of empathy for that, as, you know, I spend a lot of time with, like, our customers, you know, friends, family, like, that are kind of looking at me and calling me, being like, what is going on? Like, what is happening? What is this Codex thing? Like, why is everyone And I think in Sam's head, we're already so far beyond that point in terms of what's coming that he's trying to kind of bridge for people where we're going relative to where we are. And I think it's disorienting. It's really an insane thing that you all have done and continue to do, to pull all these pieces together. Like, I think this has got to be, like, the most hard mode company of all time. It's very, very impressive. I'm sure you, like, are just used to it all, but hopefully, you appreciate what a ridiculous feat you guys are pulling off. Well, I appreciate that.
I very much feel like it is far from complete. Far from complete. It's highly incomplete. And I feel like, you know, it's interesting. When we formed the company early on, the mission orientation of the company was very strong. But I always kind of tell people, like, in a very literal sense. Like, I think a lot of companies have these kind of high-level, lofty missions that you can't really actualize. Like, okay, no shade on anyone specifically, but it's like, don't be evil. Okay? Like, that seems like a good thing. Or it's like, make the world more connected. Seems good. It's also like, okay, so if the plan is don't be evil, like, then what? It's very debatable from there. Well, how do you actualize that? What do you do? Right? And I think one of the kind of interesting things about OpenAI is the mission from day one is this very actualizable mission. We try and run everything that we do somewhat through the lens of, okay, is this consistent with the outcome that we are trying to create? And I always used to joke at OpenAI, like, there was a world where we talked about, like, okay, we do the thing we say we're gonna do, and then we, like, go home and we're done. Like, it's like, okay, you know, that's the end of the story, and we all go back. And, you know, in practice, is it gonna work that way? I don't know. I don't think so. But maybe. But it is a company that has a very specific orientation toward a very specific goal. And I think amid all the craziness of all the things that are happening, like, it's very focusing to be like, okay, guys, there's still this one thing that we're really trying to deliver. It's very easy to come back to that mission and say, is this something that drives toward that outcome or not? And if it's not, we're just not gonna do it. Love it. Well, this was really fun. Brad, thanks for taking the time to do it. Yeah. Good to see you.