NVDA: Microsoft's Q2 FY2025 Call: Satya Nadella - "New Models Coming Soon!" - We can't go into the future without increased model capabilities - BULLISH!

On the Q2 call, as an NVDA and MSFT shareholder, that was the single most important thing said.

If the models don't improve by a large factor, the slowdown will begin for NVDA. Software, on the other hand, will rise heavily, because workloads will permeate further through the development phase, the POC phase, and ultimately the production phase.

The only other notable news on the MSFT call relating to NVDA was a question from Karl Keirstead of UBS.

UBS: The Stargate news and the announced changes in the OAI relationship last week. Investors interpreted this as MSFT taking more of a backseat while remaining very committed to OAI's success. I was hoping you would frame your strategic decisions around Stargate and CapEx needs over the next several years.

Satya: We remain very committed to OAI. Their success is our success, commensurate with that announcement. We are building a pretty fungible fleet of AI servers with the right balance between training and inference. Software optimizations, not just from what DeepSeek has done; we have done a lot of work with OAI over the years to reduce the price of GPT models. You can't just launch the frontier model - if it's too expensive to serve, it's no good. You have to have that optimization so that inferencing costs come down and the models can be consumed broadly.

So that's the fleet physics we're managing. And remember, you don't want to buy too much of anything at one time, because Moore's Law (GPUs) is going to give you 2x every year and optimizations are going to give you 10x. You want to continuously upgrade the fleet, modernize the fleet, age the fleet, and at the end of the day have the right ratio of monetization to what you think of as the training expense. I feel very good about the investment we're making; it's fungible, and it allows us to scale a more long-term business.
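To make that fleet math concrete, here is a minimal back-of-envelope sketch. It is my own toy model, not anything from the call; it just takes Nadella's round numbers (2x from hardware, 10x from optimizations per year) at face value:

```python
# Toy model of Nadella's "fleet physics": how much cheaper does the same
# inference workload get to serve if hardware perf/$ improves 2x per year
# and software optimizations improve 10x per year? Illustrative only.

HW_GAIN_PER_YEAR = 2.0   # Nadella's "Moore's Law (GPU)" round number
SW_GAIN_PER_YEAR = 10.0  # his round number for software optimizations

def relative_cost(years: int) -> float:
    """Cost of serving a fixed workload, relative to today (today = 1.0)."""
    return 1.0 / ((HW_GAIN_PER_YEAR * SW_GAIN_PER_YEAR) ** years)

for y in range(4):
    print(f"year {y}: {relative_cost(y):.4f}x today's cost")
```

Under those assumptions the same workload costs roughly 1/20th as much to serve one year out, which is the whole argument for staggering purchases instead of front-loading the fleet.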

My interpretation, and a caveat. I'll start with the caveat: OpenAI is still the king, but a hard convergence of potential competition is gaining a full head of steam. The caveat, very directly, is this: OpenAI has to launch the next damn models. The models need to become better and more accurate. PERIOD. There's still heavy value in that, and it's something nobody talks about but is extremely important.

If you stopped creating new models today, AI would eventually fail. There would still be many more workloads built on the AI that exists right now, but if you never created another model, the entire AI industry would stall. It would freeze, and we would go through another long AI winter.

The issue for me is that we haven't seen much progress in models beyond GPT-4. That's just a fact. There is 4o, a GPT-4 derivative, and there is o1, which to me is still a 4 derivative. Then there are Anthropic (Claude), Meta (Llama), and DeepSeek (V3/R1). All of these models are of the GPT-4 class. People can parse benchmarks: this model scored 91%, that one 90%, another 86.5689%. It doesn't matter; the entire space is currently stalled at GPT-4.

For an NVDA shareholder, this is the thing that matters. Gaining efficiencies on a model that is merely good enough, as it has been for the past year and a half, is not some great accomplishment.

I'll give you a direct example of what I mean. DeepSeek, as I said, is a pretty good o1 clone. However they got there, who cares at this point. That said, it's incredibly slow compared to OpenAI's o1, so you can't make a strong argument that hardware doesn't matter when the DeepSeek model can barely handle any load. o1, while faster, is very limited in its usage: 50 messages per week is an extreme limitation. If DeepSeek's supposed optimizations were even 50% true, applying them to an o1-type model would be a huge improvement. So absolutely, that type of optimization would be very useful.
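As a rough illustration of why that matters, here's a toy calculation. The 50 messages/week baseline is the o1 limit cited above; the linear scaling and the cost-reduction percentages are my own hypothetical assumptions, not published OpenAI or DeepSeek figures:

```python
# Back-of-envelope: how a serving-cost reduction maps to a usage quota,
# assuming a fixed compute budget per user. Hypothetical inputs only.

BASELINE_QUOTA = 50  # o1-style messages per user per week today

def new_quota(cost_reduction: float) -> float:
    """Messages/week if per-message serving cost drops by cost_reduction (0..1)."""
    return BASELINE_QUOTA / (1.0 - cost_reduction)

for claimed in (0.25, 0.50, 0.90):
    print(f"{claimed:.0%} cheaper to serve -> ~{new_quota(claimed):.0f} messages/week")
```

Even if only half of the claimed efficiency holds up, the quota math alone makes those optimizations worth chasing.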

BUT, to me, that takes a back seat to actually improving the models' function, accuracy, and capabilities. Right? Ask yourself: if it's slow but better, does anybody care? I know you can eventually speed things up with Moore's Law (GPUs) and optimizations. I know you can do that. What I don't know is whether you can make the models better. Can you drastically improve the models?

I believe the answer to that is still YES. I don't believe we have stalled. I just don't believe that. However, I do believe that compute is very constrained, and to unlock the next large models we desperately need optimizations and compute.

Regardless of whether DeepSeek's claims are real or truthful, we will now be on a mission of optimizations and increased model capabilities from here on out. The race for AI supremacy has truly begun. For the first time, OpenAI has its back against the wall. They have to put up or risk no longer being #1. I still feel they have things in their back pocket and that they're #1, but that is under threat. Again, DeepSeek didn't produce a more accurate model, because it's all derived from GPT-4, but they may have produced a much more efficient one, and that benefits the entire AI industry.

For NVDA, you and I are hoping/praying/wishing that OpenAI comes out with a very powerful and far better new AI model. That is what will drive server GPU sales. Efficiencies are beyond welcome; capabilities are what is desired.

We need new models that are much better than DALL-E 3. Better than Sora. Better than SearchGPT. Better than o1 and o3. Better than DeepSeek R1. Better than Llama 4. Better than Claude 4. We need vision capabilities that perform at human-eye levels of accuracy so we can truly usher in things like self-driving cars and robotics. Military applications and capabilities will increasingly need AI and AI platforms like PLTR. Medical research and discovery will need more and better AI than we can even imagine.

All of these things will become easier to build and create with increased model capabilities and emerging intelligence. We still have so far to go; it will be ten years before we can even imagine any of this slowing down.

Because I know this to be true, I am still very bullish on Nvidia. Yes, optimizations are necessary, but the commodity of GPU servers and Moore's Law matters more than ever. Bluntly, the data scientists have to put more intelligence into more compute and more server builds. That build-out is still years in the making.

We are just getting started, and frankly, the kick in the ASS China just gave will accelerate all of this faster and further than we could have imagined.