
My Monaco F1 Theory of Product Development

Published at 08:13 PM

[Image: first-gen image generation]

While discussing the merits of LLMs and generative artificial intelligence with some coworkers, I came up with a theory of software product development I want to call the Monaco F1 Theory of Product Development. This is a bit of behavioral-economics theorizing about the motivations and decision-making of executives like Sam Altman and Sundar Pichai. To summarize the theory:

Leaders in technology organizations are individuals seeking to maximize their own utility. When presented with a choice between “invest in things that will provide long-term payoff with positive externalities” and “productize research to get me a yacht parked at the F1 race in Monaco”, they will choose the latter.

My unsubstantiated understanding of the major players in the generative AI space is that in the pre-ChatGPT era there were lots of teams working hard to build toward Artificial General Intelligence. LLMs and chatbots were among the leading results of that research, and some of them, like Microsoft’s Tay, made a big splash in the media.

Prisoner’s Dilemma

Interestingly, not every leader in the pre-ChatGPT era chose to bail out of the AGI race and productize the existing technology in order to maximize their own wealth. Google appears to have had a lead in generative AI for some time before ChatGPT was released. Once Sam Altman made the choice to release ChatGPT, it forced something of a prisoner’s dilemma among the leaders of companies developing similar technology. Once it became clear that the market and the media were willing to believe LLMs were good enough, it became a necessity for every company with similar technology to release whatever it had as a ChatGPT competitor. Furthermore, it aligned the incentives for these leaders to maximize their personal wealth (via company share price) by pursuing the same technology.
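To make the dilemma concrete, here is a toy payoff sketch in Python. The numbers are my own illustrative assumptions, chosen only to encode the ordering described above (shipping first wins big, being scooped loses big, mutual restraint is collectively best); nothing here is measured data.

```python
# Toy payoff matrix for the "ship vs. keep researching" dilemma.
# Payoffs are illustrative assumptions, not data.
payoffs = {
    # (our_choice, rival_choice): (our_payoff, rival_payoff)
    ("research", "research"): (3, 3),  # both keep working toward AGI
    ("research", "ship"):     (0, 5),  # rival productizes first; we're scooped
    ("ship",     "research"): (5, 0),  # we productize first (the ChatGPT move)
    ("ship",     "ship"):     (1, 1),  # everyone races to market
}

for ours in ("research", "ship"):
    worst = min(payoffs[(ours, theirs)][0] for theirs in ("research", "ship"))
    print(f"{ours}: worst-case payoff {worst}")

# "ship" strictly dominates: whatever the rival does, shipping pays at least
# as much as researching, which is why one release forced everyone's hand.
```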

Predatory Productization

In support of my theory: I believe there is currently no “killer app” for gen AI (as of this writing), and no consumer demand for legitimate gen AI products as offered by the current class of tools. Every current offering is superficial or even harmful. If you accept that, then the productization of this tooling by OpenAI is at best naive and at worst predatory. The incredible valuations of AI companies, and the inflated expectations around them, can only be anticipating the wholesale replacement of huge swathes of human labor by AGI.

It seems to me that OpenAI was built to research AGI, and that Sam Altman saw an opportunity to effectively pull the plug on that research in favor of maximizing short-term revenue for himself and his company. Presented with the chance to become wealthy enough to afford a yacht parked at the Monaco Grand Prix, I don’t blame him. I would have made the same choice.

Strawberry Problem

I am not an expert, but from what I have personally experienced of human consciousness, I don’t believe AGI is possible. Further, I doubt it would be very useful in the long run. I believe that human consciousness is a unique phenomenon arising from having a human brain in a human body. We might be able to come up with a facsimile even more sophisticated than LLMs, but it is a category error to assume that something can be built that is the same. The famous strawberry problem, where models asked how many times the letter “r” appears in “strawberry” confidently answer two, is a great example of how LLMs are not thinking and are not AGI. The example is brilliant because it can show even a layperson how LLMs tokenize input, and it helps dispel the illusion that there’s “someone there”.
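To see the tokenization point for yourself, here is a minimal sketch using OpenAI’s open-source tiktoken tokenizer. The exact sub-word split depends on the encoding, so treat the pieces shown in the comments as illustrative, not guaranteed.

```python
# Minimal sketch, assuming `pip install tiktoken` (OpenAI's open-source BPE tokenizer).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-3.5/GPT-4-era models

tokens = enc.encode("strawberry")
pieces = [enc.decode([t]) for t in tokens]
print(pieces)  # sub-word pieces, e.g. ['str', 'aw', 'berry'] (varies by encoding)

# The model never sees individual letters, only opaque token ids like these,
# which is why counting the r's in "strawberry" is surprisingly hard for it.
```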

What would real AGI look like?

This might seem like doom and gloom, a skeptical viewpoint in an industry that (again, as of this writing) sees no ceiling on what it can achieve. But I think that if AGI is possible, it won’t look like the current class of tools, and it may even be something we cannot comprehend when it arrives, much less now. William Gibson’s Neuromancer comes to mind. The book is usually remembered as the thickly poetic novel that invented cyberpunk, but its real point was to hypothesize about the self-invention of actual AGI (hint: it wasn’t a neural network). Like so much that Gibson has theorized, it is, or probably will be, true in its own way, someday. Spicy autocomplete ain’t it.

