Where AI Value Accumulates
I was at a dinner and someone asked whether it would be better to invest in the foundation models or the applications built on top of them. I believe the answer is both, but the value will be lopsided toward the application layer.
I view Large Language Models (LLMs) as a new layer in the modern application stack. There's a (pejorative) description of AI-native apps as just being "ChatGPT wrappers". But couldn't many of the largest companies like Salesforce and Meta be described as MySQL wrappers?
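To make the "layer" framing concrete, here's a minimal sketch of what an AI-native feature tends to look like (all names here - `call_llm`, `summarize_recent_orders`, the toy SQLite table - are hypothetical, and the model call is a placeholder for whichever API you'd actually use). The point is structural: the model sits in the stack the same way the database does, and the app's value comes from what's wrapped around both.

```python
import sqlite3

# Placeholder for whatever foundation model API the app uses
# (OpenAI, Anthropic, a self-hosted model, etc.).
def call_llm(prompt: str) -> str:
    # In a real app this would be an API call to a foundation model.
    return f"[model response to: {prompt[:40]}...]"

def summarize_recent_orders(customer_id: int) -> str:
    """An 'AI-native' feature: storage layer + reasoning layer + app logic."""
    # Storage layer: the database holds the facts.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (customer_id INT, item TEXT, total REAL)")
    db.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        [(42, "laptop stand", 79.0), (42, "USB-C hub", 39.0)],
    )
    rows = db.execute(
        "SELECT item, total FROM orders WHERE customer_id = ?", (customer_id,)
    ).fetchall()

    # Reasoning layer: the model turns raw rows into something useful.
    prompt = f"Summarize these orders for a support agent: {rows}"
    return call_llm(prompt)

print(summarize_recent_orders(42))
```

Nobody would describe this app as a "SQLite wrapper"; the same will end up being true of the model call.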
This perspective - viewing foundation models as reasoning layers in the application stack - gives us a lens to evaluate potential value distribution. Let's look at some concrete examples from the database world:
MongoDB, a leading NoSQL database, has a public market cap just shy of $20B, and Snowflake is similarly in the tens of billions. The companies built on top of databases like these (and open-source ones like PostgreSQL and MySQL) - Salesforce, Meta, and the rest - are worth multiple orders of magnitude more. We're talking trillions in market cap.
So when we look at how the value of AI accrues, I think we'll see a similar split.
There will be gobs of money made at the foundation model layer. Tens of billions is a TON of money. But there will be orders of magnitude more value created and captured by the applications built on top of these models. It's why OpenAI, whose ChatGPT was the fastest consumer product to reach 100 million users, is still investing so much in building up the app layer on top of its models. Why? Because they understand this distribution: there's great money to be made offering cutting-edge AI models, but oodles more to be made with a compelling app on top of them.