How Capital is Powering the AI Infrastructure Buildout with Magnetar Capital Managing Director Neil Tiwari
Transcript
Hi, listeners. Welcome back to No Priors. Today, I'm here with Neil Tiwari of Magnetar Capital. This is a $22 billion alternative asset manager at the center of the AI compute buildout. We talk about the financial innovation, depreciation of GPUs, and what's next in AI compute. Welcome. Thanks so much for doing this, Neil. Absolutely. You know, really happy to be here. So you are leading AI infrastructure at Magnetar. You're at the center of the buildout, enabling it, financing it. For any of our listeners who haven't heard, can you just explain a little bit what Magnetar is? Sure. So Magnetar has been around for, actually, this is our 20th year. We're an alternative asset manager, and that can mean a lot of different things. But we have three primary strategies. The first one is private credit. The second one is a venture strategy. And the third is more of a systematic or quantitative-focused public strategy as well. And so I think, you know, when people look at us and why we're here in this moment, especially on building out AI infrastructure, I think a lot of it has to do with the fact that we're... I remember hearing about you and Magnetar initially. I was like, who's this big owner of CoreWeave, and also, you know, helping OpenAI with some of their early buildouts? When did you guys first start looking at the problem and thinking about how to solve it? Yeah, so we actually, you know, stumbled across the compute problem before it was compute. We met CoreWeave back in 2021, and that was when they were actually transitioning from mining Ethereum into high performance compute.
And at that time, it was using the GPU as, you know, an instrument to mine cryptocurrencies. And interestingly, that same instrument could be used for high performance computing applications. And the first one was visual effects. So think of things like movies, Marvel movies and things like that. And so they were transitioning at that point from crypto mining into the first kind of high performance compute use case. And this was all before AI. And so we made our first investment before the AI trade started. But we saw a lot of optionality, where, you know, we could envision a world where the GPU could be used for a lot of different high performance computing applications. I think, you know, AI was on the radar, machine learning was on the radar for us. But I wouldn't say that we could foresee everything that happened. We just happened to be, you know, at the right place at the right time. And we continued to double down as the company progressed and started, you know, shifting into a little bit more of a machine learning and AI training base. Did you have, like, an existing significant data center investing footprint? No, I mean, I think, you know, interestingly, at Magnetar we have invested across asset classes. So we've done a lot of property investing, real estate investing as an example, investing in energy. We had an energy business historically. And so for a lot of the elements that constitute a data center, power, energy, land, real estate, we had a lot of background in those spaces. I think we were new to compute, right? Like, that was a new sector for us. And so with those two worlds merging, we obviously came up on the curve on the compute side, and we had a lot of background on the elements that constitute what it means to build a cloud.
So you were in this company, you saw the demand, and you said, like, it's going to grow and we're going to make this a big part of our business.
Exactly. I think, you know, what was interesting was we made our first investment in 2021. And then about a year later, we continued to see expansion of use cases for what, at that time, was called high performance compute. And then toward the end of '22, the whole AI discussion started. And as we entered 2023, CoreWeave started to train models for OpenAI. And that's when things really started growing, because the sheer amount of compute that was needed to train an LLM was like nothing that had come before. They had these elements with them, energy among them, they obviously brought on a lot of talent on the cloud side, and you put all these together, and at that moment it allowed them to, you know, build very large scale, reliable clusters for OpenAI, and obviously many other customers since then. And I think the last comment I'd make is, what really allowed them to win the biggest market early on was a focus on two things: scale and reliability. And I think those are the two things that have been really difficult for a lot of the new entrants since then. Because scale has to do with your access to capital, your access to energy, power, data centers. And reliability really had to do with their ability to manage a giant fleet of GPUs, which is actually quite complicated. You know, whether it's reliability from GPU failures or software challenges, building a fleet that can healthily be online all the time at 99.9% reliability is incredibly difficult. And that's something they had been building since the 2017, 2018 timeframe. And so they were at the right place at the right moment with the right technology stack to really build the optimal cloud for that moment.
I've definitely experienced that with our portfolio companies that are building large training clusters. It has a reputation for reliability that not everyone has reached. Can you just help characterize, if you fast forward two and a half, three years now, what is the scale of the problem today? Yeah, so if you look at kind of CapEx, right, let's start with that. So CapEx for AI compute and infrastructure in 2026, you know, at least from the hyperscalers, is projected to be between 660 and 690 billion dollars. And over the next several years, you know, that scales to trillions of dollars, right? And so the scale of the problem is, how do you build that size of CapEx efficiently? And I think a lot of that has to do with not only your ability to have access to those core elements, energy, power, data center space, etc. But one of the things that's not talked about as much is capital, access to capital, and how capital is structured. And what I mean by that is, this is billions to trillions of dollars of CapEx, and just using equity dollars alone is not an efficient way to scale this. That's obviously a massive dilution. You know, it's not an easy problem to solve. Yeah. When we first met, I had, like, slowly come to this realization. I was like, I don't think we should take the dilution for the cluster. Right, exactly. And so that's where I think, you know, when you and I have talked about, like, structuring, I can give a couple examples if that's helpful.
I think the first one was DDTL structures, or SPV debt structures. Think of it as like an SPV: inside of the SPV are the CapEx, the collateral, which is the GPUs, and the contracts themselves. And so in this example, the actual asset or collateral was not really just the GPUs themselves. It was really the contracted cash flows, in this case from investment-grade counterparties. A lot of the skepticism was, you're lending against GPUs, and that's like putting a used car up as collateral, which is obviously just going to depreciate incredibly fast. You know, that's a very risky kind of structure. And I think what got missed was that the GPUs themselves were actually, like, the secondary or tertiary level of collateral in those instruments. The primary collateral was the contracted cash flows from investment-grade counterparties. It's Microsoft or Nvidia or somebody like that saying, I'm committed to pay you. Exactly. I know you can pay me. Take-or-pay contracts, and they're, like, five years in length. So I think that was one feature that's unique to talk about. And then the second one really has to do with the debt itself and how it amortizes. And so, in simple terms, you know, when you have debt, you have principal and interest, and you have to pay it off over time. And in these structures, typically the payback period on the CapEx was roughly two to three years, and the structures themselves, the debt, was, you know, four to five years in length,
where the entire debt amortized during the period the debt was outstanding. And so at the end, you ended up with a zero balance on the debt, and there was no balloon payment or anything really due on the back end. And so the question that often comes up is, isn't that a very risky type of structure, because these things are depreciating incredibly quickly? So I think there's two comments here. First, on that depreciation question: in these kinds of debt structures, it doesn't really matter, because the debt's fully paid off by the end of the debt term against committed contracts from investment-grade counterparties. And at the very end, the actual upside or residual value, and I know there's a lot of questions on residual value, is held by the cloud player in this example, right? All of this CapEx is paid off incredibly quickly. And there's an opportunity to redeploy it without having to pay any additional debt, obviously, against that redeployment. How have the instruments changed? They've changed in several ways. The first is, when you look at these SPVs, I think you're starting to see ways to change the portfolio construction of who can go inside of one of these debt structures. And so, you know, in the early days, these were all only investment-grade counterparties, because the space was so nascent and the operators had no experience. And I think now what you're starting to see is a blend of investment grade and non-investment grade. So what does that actually mean? What that means is, you know, you're seeing these structures with investment-grade counterparties, like your hyperscalers and your other corporates that are IG, mixed alongside some of the AI-native companies. And so think of the AI model companies, the labs.
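To make those amortization mechanics concrete, here is a rough sketch in Python. The two-to-three-year payback and four-to-five-year term come from the discussion; the interest rate, contract size, debt share, and the `amortization_schedule` helper are made-up illustrative assumptions, not an actual Magnetar structure.

```python
# Sketch: a fully amortizing SPV debt structure against a take-or-pay contract.
# Illustrative numbers only; rate, sizes, and helper are assumptions.

def amortization_schedule(principal, annual_rate, years):
    """Level annual payment that fully amortizes the debt: no balloon at maturity."""
    r = annual_rate
    payment = principal * r / (1 - (1 + r) ** -years)
    balance = principal
    schedule = []
    for year in range(1, years + 1):
        interest = balance * r
        balance = balance + interest - payment
        schedule.append((year, payment, max(balance, 0.0)))
    return schedule

capex = 1_000.0            # say, $1B of GPUs and data center fit-out
contract_revenue = 450.0   # annual take-or-pay revenue from an IG counterparty
payback_years = capex / contract_revenue
print(f"CapEx payback: ~{payback_years:.1f} years")   # roughly 2-3 years

debt = 0.8 * capex         # the debt-financed portion of the CapEx
for year, payment, balance in amortization_schedule(debt, 0.08, 5):
    print(f"year {year}: payment {payment:.1f}, remaining balance {balance:.1f}")
# The balance reaches ~0 by the end of the term, so GPU residual value only
# matters to the lender if the contracted cash flows fail: it is second-level
# collateral, exactly the point made above.
```

Because the contract pays back the CapEx faster than the debt amortizes, the schedule stays covered by contracted cash flows the whole way down.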
You're seeing those companies get mixed in alongside the IG companies to build a portfolio. Because now you have, you know, the history that you can do this. Yeah. And now you have structures where you can kind of balance the risk with IG and non-IG. And we're continuing to see that move, to be able to help finance, you know, really the model companies and a lot of these startups. Obviously, that was difficult to do three or four years ago; it's starting to become easier as these companies have more history. All our portfolio companies that buy compute tell me it's a supply-constrained market today. One, is that true? And two, when you think about, like, continuing to grow your business or grow this ecosystem, what's going to stop it? What could slow down a build-out? Yeah, I mean, I think what's interesting is, if you look at 2023, 2024, we were very supply constrained, and the supply constraint was chips. No one could get access to chips. Yes, we bought chips. We bought chips, right? Yeah. And, you know, there was this thought that, okay, there's going to be an overbuild of chips and then the supply constraints will go away. Well, fast forward to 2026, and what we see is, there is obviously more availability of chips, but to build and operate these data centers requires people, power, infrastructure, a lot of things that have a lot of bottlenecks. And so actually taking these chips and then making them into useful revenue-generating assets is really the bottleneck.
It's also not clear that there is supply of chips at the latest generation, at scale. That's true. And soon, which is how everybody wants them. Exactly. And I think, you know, it's not only the high-end players that want access to the latest chips; obviously startups want access to those too. And I think it has to do with efficiency. You know, one of our friends, or one of your friends as well, Dylan Patel over at SemiAnalysis, posted this interesting article last week on inference, inference spend, and inference performance. And, you know, there were a lot of jokes made about Jensen math. And it was interesting, because... He seems pretty good at math, honestly. He's actually great at math. And so going from the Hoppers, the H100 or H200 series of GPUs, into the Blackwells, there was a claim made that it could be 30 times more efficient. And I think the data from that analysis showed it was 90 to 100 times more efficient in terms of inference performance. And so part of the need to go to these new chips is, yes, more computing power, but it actually can be cheaper to operate. It's price performance. Price performance, exactly. Yes, my favorite Jensenism is: the more you buy, the more you save. Exactly. It's actually true. Yeah. Crazy. Help me address, like, this criticism around circular financing. Yeah, I know. It's obviously a topic du jour. And I think the way we see it and frame it really has to do with the demand signals, and who are the eventual buyers, and how is this being used. And so, at least from our perspective, we continue to see insatiable demand. And if you go back to, you know, the previous big tech build-out back in the early 2000s, there was obviously a lot of fiber being built, and you had dark fiber, you know, an overbuild happening.
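As a back-of-envelope on that price-performance point, a chip that costs more per hour can still be much cheaper per token if its throughput is higher. All of the numbers below are hypothetical, purely to illustrate the arithmetic; they are not actual Hopper or Blackwell prices or throughputs.

```python
# Back-of-envelope: price-performance for inference chips.
# Hypothetical numbers for illustration only.

def cost_per_million_tokens(hourly_cost, tokens_per_second):
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost / tokens_per_hour * 1_000_000

older = cost_per_million_tokens(hourly_cost=2.0, tokens_per_second=500)
newer = cost_per_million_tokens(hourly_cost=6.0, tokens_per_second=5_000)
print(f"older chip: ${older:.2f}/M tokens, newer chip: ${newer:.2f}/M tokens")
# 3x the hourly cost but 10x the throughput means cost per token falls 70%:
# price-performance, not sticker price, is what decides.
```

That is the sense in which "the more you buy, the more you save" can be literally true: at a fixed token budget, the more expensive chip is the cheaper one.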
And I think what you see here is, you know, you don't see any dark GPUs.
No, I've been looking. Exactly. All the GPUs get used. Yeah. And then number two, you're starting to see actual economic value. So I think last year, enterprise AI had about 37 billion of total TAM, and it's continued to grow like crazy. And at least personally, and I'm sure you see this too, I use these tools all the time, and I find them incredibly valuable, right? The actual tokenomics of positive ROI is actually here now, I think, from our perspective. And so the circularity comment, I think, applies when you're building, you know, speculative compute and capacity, or if you're purely doing vendor financing and you're trying to do some type of unique, you know, RevRec-type item related to that. And that's not what we see. What we see is financing to support the build-out of demand against use cases that are very positive in their ROI. And so our perspective is that that's not a real concern that we have. And it really has to do with who are the ultimate buyers here. The ultimate buyers are the ones that are going to be the most... I'm up against my max limit all the time in a way that was not true initially. How are the inference workloads actually growing? I mean, it's a good demand signal that there is value, but how does that change your business? Yeah. So one thing that's interesting that we're seeing is, obviously, there's been the shift from training to inference over the last few years, and that split continues to grow on the inference side as usable and ROI-positive applications get developed. I think the things I see on the inference side now are:
First, inference is a lot more complex than initially thought. And what I mean by that is, it's not as simple as, you train a model and then it's easy to inference it. In certain cases you can do that on similar infrastructure, but there are issues around latency, fungibility, and really optimizing the cost of your compute on the inference side. How do you manage peaks of inference demand? It's obviously not linear, whereas with training, your GPUs are on all the time, you know, a hundred percent of the time. And so with inference you have a lot more variability, and so there's a lot more nuance in optimizing inference. I think the second thing I've observed is that inference is definitely a memory problem, a memory-throughput problem. On the inference side, you know, you have these phases called prefill and decode, right? And decode in particular is really a memory-bandwidth problem. And then the third, I would say, is distribution. You know, a lot of times training infrastructure is quite centralized. What you're seeing with inference is, in many use cases, as this becomes more ubiquitous, you're going to have more and more decentralized inference clusters. And actually, one of my favorite companies is one of your companies, Baseten, which is really, you know, optimizing distributed inference at scale. And I think one thing that's interesting when you look at companies like that and other inference clouds is, how do you optimize the compute and build out these clusters? They could actually look very different from a
training cluster, where a training cluster might be 50, 100, 150 megawatts in one set of four walls. I think you're starting to see distributed inference, which could be, you know, four or five megawatts across five separate data centers, stitched together in different areas. Right. And that looks very different from a power perspective, and the software matters a lot more when you're doing distributed inference. And then in terms of your question of how it impacts us: now that you have this new crop of inference clouds and application-layer companies, I think the key question we're really focused on is, how can we finance the next build, which is distributed inference. And maybe the last one or two takeaways: one thing I'm seeing is that for every application-layer company out there, the highest line item in COGS is compute. And then the inference companies and inference clouds out there, most of them are purchasing compute from either other clouds or unused capacity. And when you look at the margins for that, you've got layered margins. And so there's a push to own your own infrastructure, to really drive and increase, you know, profit margins, but also the ability to have control of your own destiny. And I think a lot of folks at the application layer are starting to see that. I am too, and I think one of the things that is going to make a big difference in this ecosystem is, can the inference clouds, like Baseten, deliver the reliability that you would expect from a traditional cloud?
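A quick back-of-envelope on the memory-throughput point above: during decode, each new token at small batch sizes re-reads essentially all of the model weights, so memory bandwidth, not FLOPs, sets the ceiling. The hardware and model numbers here are hypothetical round figures, not any specific GPU's spec sheet.

```python
# Why decode is memory-bandwidth-bound: tokens/sec ceilings from bandwidth vs compute.
# Hypothetical round numbers for illustration only.

model_params = 70e9        # a 70B-parameter model
bytes_per_param = 2        # fp16/bf16 weights
weight_bytes = model_params * bytes_per_param

mem_bandwidth = 3.0e12     # say, 3 TB/s of HBM bandwidth
compute = 1.0e15           # say, 1 PFLOP/s of usable compute

# Decode at batch size 1: every new token re-reads all the weights once.
tokens_per_sec_bandwidth = mem_bandwidth / weight_bytes
# A forward pass costs roughly 2 FLOPs per parameter per token.
tokens_per_sec_compute = compute / (2 * model_params)

print(f"bandwidth-limited ceiling: ~{tokens_per_sec_bandwidth:.0f} tok/s")
print(f"compute-limited ceiling:   ~{tokens_per_sec_compute:.0f} tok/s")
# The bandwidth ceiling is orders of magnitude lower, which is why batching,
# KV-cache handling, and memory dominate inference optimization.
```

Prefill, by contrast, processes the whole prompt in one pass and sits much closer to the compute ceiling, which is part of why the two phases get optimized so differently.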
I'm not sure if you're familiar with Silicon Data. They put together a lot of data on spot pricing and price-per-token performance. This is Carmen Lee's company. And one thing that I thought was really interesting in an article she published last week had to do with how two pieces of compute that look identical on paper have wildly different performance, everything from reliability to cost to speed. And I think as you have distributed inference, how do you mesh together very different types of compute and try to optimize reliability, I think, is super interesting. And that gets to one thing I find really interesting that NVIDIA is doing: this concept of AI factories, and building AI factories, you know, behind corporates and AI companies. And maybe the way I unpack that is, you've got the large, monolithic cloud players, the hyperscalers, and the neoclouds. They're building large-scale cloud environments. And a lot of where I think NVIDIA and others see this going is, yes, those are going to be important components, and those are going to be huge markets. But corporates, the Fortune 500, AI companies, companies that use a ton of compute, will want dedicated AI factories associated with the workloads that they run and that they have control over. And so I think you're starting to see the early indications of how you finance and build out, almost literally, AI factories that sit with a company that can operate their workloads. You're talking about my Mac mini farm.
Exactly. No, but all joking aside, I think one thing that is another supporting factor for use of all the compute we have, and can create over the coming years, is that power is clearly the limiting factor, and it's easier to get more power in smaller units. I think that as inference demand grows, anyone who has usable compute for inference is going to find a lot of partners for offtake. Exactly. Okay, let's look at the future a little bit while we have 10 minutes. Let's talk about the macro. People talk about energy, they talk about natural gas, the grid, the slowness of nuclear. What do you think about over the next six or 12 months? Over the last year, I've been spending a ton of time in the power and energy markets, looking at interesting solutions that can help scale power for the gap that we see. A few observations. The first is, we do have a power problem, but I think it's a bit more nuanced than a lot of the reporting out there, where... We just... We can't generate. We can't generate. Yeah. I think there's actually quite a bit of stranded power across the grid, across the country. And what I mean by that is, a lot of the utilities are built in a way where they're focused on peak power, right? So they've got natural gas peakers, and they're focused on providing peak power for those moments where demand is off the charts. And that's obviously only for a few days out of the year. So there's lots of generating assets out there. The question is, they're a bit stranded, right? And so I look at the power problem as being multifold. The first part is, how can you take the power we have on the grid and actually make it usable? And a lot of that has to do with flexibility and storage. And so we've been spending a lot of time looking at the energy storage business and distribution. How can you store unused capacity?
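To put rough numbers on the stranded-power idea: a grid sized for peak demand runs well below its potential output most of the year. These figures are hypothetical, just to illustrate the capacity-factor arithmetic, not any actual utility's numbers.

```python
# Back-of-envelope: "stranded" potential on a grid built for peak demand.
# Hypothetical numbers, purely to illustrate capacity factor.

installed_capacity_mw = 10_000   # generation built to cover the peak days
average_load_mw = 5_500          # typical load across the year
hours_per_year = 8_760

potential_mwh = installed_capacity_mw * hours_per_year
consumed_mwh = average_load_mw * hours_per_year
stranded_mwh = potential_mwh - consumed_mwh
capacity_factor = consumed_mwh / potential_mwh

print(f"capacity factor: {capacity_factor:.0%}")
print(f"unused potential: {stranded_mwh / 1e6:.1f} TWh per year")
# Storage and flexible loads (like interruptible compute) can soak up some of
# this headroom without building new generation.
```

The gap between installed capacity and average load is the headroom that flexibility and storage are trying to unlock.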
It's more on the distribution and storage side. And then the other piece I would say is, you know, the true bottleneck, at least in the short term, the next six to 12 months, is, it's incredibly, I don't want to use the word simplistic, but it's things like structural steel. It's finding electricians that can, you know... Sorry, you can't get enough steel? You can't get enough steel. You can't... This is not something I was aware of. Yeah, like, you can't get steel. You can't find enough electricians to build out, you know, the power infrastructure: substations, transformers, air chillers. These are very specific pieces of power infrastructure needed just to get to a point where you can start to build a powered shell on a piece of land. And so the bottlenecks in the short term really are people and equipment. And then the other interesting thing is that on the generation side, what you're seeing is, regulatory is obviously a big challenge. And so there's a combination of bring-your-own-capacity. There's a lot of that that's interesting right now. And so a site that can potentially grow to 50 or 100 megawatts might start with only 10 megawatts of grid interconnect. But can you add solar, natural gas turbines, put these various bring-your-own-capacity pieces of technology together to make that site usable? And so a lot of what's being looked at, and a lot of what I'm looking at right now, is really bring-your-own-capacity, at least in the short term.
Yeah, I think if people don't know the origin story of Crusoe and flare gas, it's actually really interesting as an example of, you know, there is actually lots of energy out there, and you can make much more of it consumable. Yep, exactly. A couple topics to hit before we lose you. New players: how do you think about the sovereigns and what they're doing in their buildouts? Yeah, I think... They seem to be able to fund themselves to some degree. Exactly, right? You know, you saw the news from India last week. Obviously, a lot of the news in the Mideast, Southeast Asia. I think we're continuing to see that sovereigns view compute and AI, as even we do here in the United States, as a matter of national security. And obviously the funding of those clusters is very different than funding a private cluster. And so you've got, you know, government capital that can be used for that. So I think there's two things that I find interesting in that space. One is, who are the partners that are going to build that capacity? And what are the cybersecurity implications and environments for that? And so those are the two nuances, I think, with sovereigns. They need to find players that can rapidly scale compute in their countries, and oftentimes they don't necessarily have these players that know how to build and scale GPU compute. I think that's a great place for the United States to lean in and help build sovereign ecosystems around the world. And then there's the matter of cybersecurity, and how do you make it into a truly safe ecosystem for those sovereigns. And so I think there's a lot of work to do still on the cyber side, especially as you look at scaling sovereign AI. What is your thinking on physical AI? It's another, you know, if it works, CapEx-intensive build. Absolutely.
And, you know, maybe I'll just take a second to say, one of the things that we observed from 2010 to, like, the early 2020s was we were in a very capital-light, asset-light mode of building. Like SaaS. You never heard Magnetar and SaaS, right? No. Because it was just purely asset light. Compute, and everything we saw starting in 2021, is asset heavy. That's when you started hearing about us.
I think physical AI is actually an extension of that. And, I think we all have scars from the 2010s, of hardware companies that did not make a lot of money for us. Part of the scars was it was so difficult to scale hardware companies: you needed to spend so much money building the hardware, and the software was an afterthought. What you're seeing now is, now that you have more general-purpose software via AI, it can make the hardware easier to scale, because you have software that can interact with more hardware. And so I think the natural extension of what we see is what happened in the compute markets, where you really needed flexible capital, where it wasn't just equity, it was debt and a variety of project finance to really scale CapEx. You're going to see that same kind of need in physical AI. And it simply has to do with capital intensity, right? On the compute side, for CoreWeave as an example, they needed billions of capital to scale that cloud. And whether it's a robotics company, or a manufacturing-focused company, drones, defense, all of these areas are incredibly capital intensive. And now that you add AI into them, I think it can help them scale faster, quite frankly, but the capital intensity is still there. And so there's a moment in time now where you have to really look at optimizing balance sheets, I think, to your point of how the early AI compute contracts were structured. I went from learning to be an investor in an era and an environment where robotics
was a great way to lose a lot of money for a long period of time. You remember that too? Now I sit on the board of two robotics companies, so let's hope it's not true anymore. But I'd say it's just a question of capability to me. Whether it's in the home or industrial settings, I think the products will support investment-grade buyers who are going to have contracts that say, we want it, and you can raise debt against it. Exactly. Right. And so I think that feels of a very similar shape. Last question for you, because it is so timely. What do you make of the general cap rotation out of software, the end of software, and it's all infrastructure, labs, and AI natives, I guess? It's interesting to see that every day there's another industry that kind of tanks. Whether it's, you know, you saw the wealth advisors tank for a few days, you saw the consulting companies, you saw payments, real estate, right? I mean, what I saw was, toward the tail end of 2025 and into 2026, at least in my view, there was a big step up in the performance of usable AI. And I think what Anthropic was doing with Claude, and, like, we use all the models, but there was a definite step up in performance in making AI usable and seeing that it can truly disrupt these non-AI-native industries. I think the reaction and rotation out of each of these names is a bit much, because there's two factors I look at. One is, when you look at valuations, as an example, I think from a free cash flow perspective, SaaS companies are valued at the lowest they've been in years, you know, and there's a huge difference between what those rev multiples are today and what they've been in the past.
And meanwhile, free cash flow margins have steadily increased.
And there are a number of applications that, you know, on paper sound really interesting. Like, oh yeah, I could just rebuild Slack, or you could rebuild Salesforce, or rebuild, you know, X, Y, and Z. But I think it's not just the product. It's the way it's integrated across multiple services and systems across the enterprise that is a lot more difficult to just replicate than I think some of the public markets are reacting to. I do think there's a fundamental question, in addition to what you said, which I agree with, of, like, does anybody want to rebuild it and own it? And, you know, to your point, within the software sector in particular, there are companies that are structurally more protected and there are companies that are at more risk. Right. And I think it's as simple as, like, you've got to go select. Yeah, exactly. This has been so fun. Thanks so much, Neil. Yeah, I really appreciate it. Congratulations on all the innovation and on building out all the compute. Awesome. Thank you. Good to be here. Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.