But we got more out of Rory than he got out of us. Exceptional man. But I wanted to start, and we were just chatting, I was running around the park listening to the Dwarkesh and Jensen episode,
and I was like, I don't think Jensen came out very well. Do you agree with me that Jensen didn't come out very well from that episode? I think this is like the greatest Rorschach test of all time of where somebody is mentally on AI. So I happened to see a bunch of the tweets before I watched it, and so I was a little bit obviously biased in advance. But if I hadn't seen any of the commentary and I had just watched it, I would have been very confused by the commentary post-interview.
And to be clear, I kind of jumped to the more salacious part of China and that topic. But I'm almost probably 80% with Jensen. My sort of way of thinking through the logic actually works much closer to Jensen.
The idea that we're in some kind of existential race where a month or two of advantage
is going to change the total outcome of AI progress and what everybody does between us and China, I just don't agree with. I think what we are in is a commercial and economic race. Obviously, with safety built into that, there's no question. And I think we actually have a lot more power globally
if it's our technology stack that's powering AI. And so I kind of am more in the camp of Jensen on his lines of logic. You know, Dwarkesh kind of oversimplified a few components.
You know, he said with Methos, if we get early access to that, then we can go and upgrade all of our systems. And, you know, with, again, great respect to Dwarkesh, it's like upgrading software is a multi-year effort. So unless they somehow keep Methos closed for the next decade, there's not some magical moment where you can just secure everything. This is ongoing, endless, till the end of time. You're always in this sort of leapfrogging
between the defensive side and the offensive side. So I just don't think these things are as binary, and so I'm actually more inclined to Jensen's view of that. And then Jensen had a really key point that didn't go viral yet, so maybe you could kick it off. But he had this little small vignette, about ninety seconds in the whole conversation, where he said, you know, we're going to do ourselves a disservice if we scare people out of engineering, if we scare people out of radiology, if we scare people out of health care because they think all these jobs are going to get eliminated with AI. That is not helping us. It's doing a disservice to the next generation. It's doing a disservice to society as a whole. Like, we don't yet know any way to use AI in a capacity other than augmenting our work, where we still eventually have to go and review the work in some
form. Maybe you don't have to review the tiny little parts of it anymore. You can review a bigger part of the work product that happens. But we haven't removed humans from the loop. We've just changed where they enter the loop. And I think that Jensen has a more pragmatic view of the technology.
We should be very thoughtful about how we make these systems safe, but I much more land in Jensen's camp on the overall kind of contours of the debate. Okay, first, the disservice of discouraging people from going into categories like radiology or engineering.
Do you think you will have more engineers at Box in five years' time? We will. Everybody is so myopic about this, I want to just, like, shake the industry. We are so myopic and self-interested, and we think that the entire industry is the tech industry. And when you go around the country or the world and you go and talk to a tractor company and a bank and a pharma company, and you ask them, do you think you have enough engineers to go and automate what is going to happen in your industry going forward? They absolutely, unequivocally,
universally always say no. And so what the breakthroughs of Claude Code or Codex or others are doing is it's making it so those companies now can actually do the same kind of engineering that Silicon Valley has been able to do. And so we are myopic because we think that tech is the only use of engineers.
And tech is only, I don't know what the right number is, 8, 10, 15% of GDP in the economy. What happens when the other 85% of the economy now gets access to the engineering that tech has always had? That is what will happen. And so, yes, maybe if you're graduating,
you know, name your computer science school today, you don't go immediately to Google. You go to literally John Deere or Caterpillar or Eli Lilly. But the skills that you have are going to be just as relevant in just a different domain. You're not going to be building a little app with little buttons.
You're going to be automating pharmaceutical research. You're going to be doing AI for the future of farming and industrial equipment. So we're just too myopic about how this works.
And you can already start to see this sort of playing out. There was a really funny FT article, which is lawyers are being inundated
by all of these kind of AI responses that they're now getting from their clients saying, hey, can you review this contract, or can you review this memo, or can you look at this case? Well, guess what happens when everybody thinks that they're a lawyer? Do you know what the ultimate constraint is? The ultimate constraint is the actual number of lawyers that are actually able to go and review all of this stuff being produced. So I would take the other side. I'd argue there are going to be more lawyers in the next five years than we have today, because we've made it easy to generate legal content. But it has not gotten any easier to actually get any of that approved by any court system, or file a patent, or any of the things that law actually ends up relating to. So these are, again, this is where I just differ from the rest of the industry. Do you really think so? With the greatest of respect, we are seeing the eradication of kind of lower-ranking legal positions. And that is a different issue, which is how do you do the next generation of mentorship and apprenticeship
when AI does automate maybe the traditional tasks that those workers are doing? A big question facing every bank in the world, every law firm in the world, anybody who had a sort of apprenticeship model. I don't doubt that that's a real issue, but that's different from the constraints that all of this work ends up running into, the things you still have not been able to automate. We had a customer conversation
two weeks ago, and this is just going to sit with me forever. I'm going to always have this example. They've automated, or they're working on automating, patient referrals, you know, when you want to go and see the radiologist or the high-end doctor for whatever issue you have. They're automating that, which is awesome. So now you don't have to be on the phone for, you know, a week or whatever. Well, guess what? You can automate anything, but if it still is eighteen months before an appointment is available, then your ultimate constraint is still the health care institution and the amount of doctors we have, and actually the amount of real labor we have across those organizations. So yes, maybe you don't want to, you know, stake your career on being a frontline,
you know, customer service rep in health care right now. But first of all, that same person will have a lot of other types of jobs that they'll have access to. But you still will end up having all of these other constraints that eventually we will need to produce more and more jobs to go and resolve. So automation is going to actually just force us to see the next set of bottlenecks in all of these industries that we didn't perceive we had before, because everything was so slow and manual. What job title does not exist today that will be incredibly prominent in five years' time? So I'm workshopping, and a bunch of people are doing this, so this is, like, not my invention, but I'm workshopping it. Aaron, you've got to take attribution.