A Primer On The AI Economy

Written by gcuofano | Published 2022/12/30

TL;DR: Since the release of ChatGPT at the end of November, one thing is clear: AI commercial viability is accelerating, giving us a glimpse into how the AI ecosystem is building up. The AI foundational layer (still based on centralized cloud infrastructures) might power up the next wave of consumer applications, while the AI built into the plethora of tools on the web will be able to read, classify, and learn patterns.

Each time a new business ecosystem forms, we have to ask a simple question: where's value created?

And once we are able to classify the ecosystem based on where value is created, we can ask: how is value captured?

From there, we can understand the business models being built on top of that ecosystem.

Since the release of ChatGPT at the end of November, one thing is clear: AI commercial viability is accelerating, giving us a glimpse into how the AI ecosystem is building up.

Let me explain.

The foundational layer

That might comprise general-purpose engines like GPT-3, DALL-E, Stable Diffusion, and so on.

This layer might have the following key features:

General Purpose: it will be built to provide generalized solutions that can be adapted to any specific need.

On the one hand, this layer might be mostly a B2B/enterprise layer, powering up a plethora of businesses.

Just like AWS in the 2010s, which powered the applications of Web 2.0 (Netflix, Slack, Uber, and many others).

On the other hand, the AI foundational layer (still based on centralized cloud infrastructures) might power up the next wave of consumer applications.

That will be a commercial Cambrian explosion…

Multimodal: these general-purpose engines will be multi-modal.

Meaning they might be able to handle any sort of interaction, be it text-to-text, text-to-image, text-to-video, and the other way around.

Thus, this layer might move in two directions.

On the one hand, the UX might be primarily driven by natural language instructions.

On the other hand, the AI built into the plethora of tools on the web will be able to read, classify, and learn patterns from all the formats available on the web.

This two-way system might drive the next evolution of foundational models, turning them into general-purpose engines able to do many things.

Natural Language Interface: the main interface for those general-purpose engines might be natural language.

Today, this is expressed in the form of a prompt (or a natural language instruction).

Prompting, though, might remain a key feature of the foundational layer; it might instead disappear in the apps' layer, where those AI engines might primarily work as push-based discovery engines (the AI will serve what it thinks is relevant to users).

Real-time: these engines might be able to adapt in real-time, with the ability to read patterns as we navigate the real world.

This - I argue - will be a key feature to enable these general-purpose interfaces to be integrated into augmented reality!

A middle layer

That might comprise vertical engines (imagine here you find your AI Lawyer, AI Accountant, AI HR Assistant, or AI Marketer).

This middle layer might be built on top of the foundational layer, combining it with other "middle layer" engines that become great at very specific tasks.

This middle layer might:

Replicate corporate functions: the first step in this direction might be an AI able to replicate each of the relevant corporate functions.

From accounting to HR, marketing and sales.

This middle layer will enhance a company, making it possible to run departments that are a combination of humans and machines.

Data moats: here, differentiation might be built on top of data moats.

Meaning that by continuously fine-tuning foundational-layer engines to be adapted to middle-layer functions, these specialized AIs will become relevant for specific tasks.

AI engines: these middle-layer players might also have the ability to add other engines on top of the existing foundational layers, creating specific data pipelines to train the models for specific tasks.

And they might have the ability to adapt those models to make them more and more relevant to the specialized functions.

And the apps' layer

That might see the rise of a plethora of smaller and much more specialized applications built on top of the middle layer.

These will evolve based on the following:

Network effects: here, scaling up the user base will be critical to building network effects.

Feedback loops: users' feedback loops might become critical to enforce network effects.

What business models will we see?

In my opinion, the Foundational Layer might be, at the same time, the new App Store and the new AWS!

On the one hand, it will work as the underlying infrastructure to build new apps.

On the other hand, it might be the marketplace where these apps are built!

The Middle Layer might initially primarily work as an Enterprise Business Model.

Thus, it will provide organizations with highly customized solutions that fit the company's goals.

Companies might have those AI engines on their payroll, almost as if they were a new workforce.

The Apps' Layer might follow three main kinds of business models: Ad-based, Subscription-based, and Consumption-based.

If we all build tools on top of ChatGPT or similar models, how can we actually build a competitive moat?

In other words, how can we build a company on top of AI that has a long-term advantage and that can't get easily commoditized?

That's a critical question to answer, and I've been thinking about it a lot!

So let me address a few points.

In the three layers of AI above, I explained what the AI business ecosystem might look like.

Now, once you've understood that, let's see what - I argue - can create a competitive moat in AI.

Remixing foundational models (GPT-3, DALL-E, Stable Diffusion, Midjourney, and so forth)

Right now, there are still some arbitrage opportunities.

Meaning that foundational models (general-purpose engines like GPT-3 or Stable Diffusion) are still only good at handling specific modalities.

For instance, GPT-3 is incredible with text-to-text, but it sucks with images.

DALL-E is incredible with text-to-image, but it sucks when it comes to generating text that makes sense on top of these images.

So, for instance, if you're building an AI product, you can remix these models to:

  • Improve their ability to handle more modalities. So, for instance, you can combine the capabilities of GPT-3 and DALL-E to enable the tool to handle both text and images properly for the user (see the sketch after this list).

  • In addition, you can fine-tune your model based on more foundational models. For instance, imagine you're building an AI marketplace; rather than relying on just DALL-E, Stable Diffusion, or Midjourney, you can remix these three models to make the images much more interesting for the users.

  • A third element is that, as we'll see, you can add things on top of these foundational layers to make the final output much more polished.
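To make the remixing idea concrete, here is a minimal sketch of chaining two foundational models behind a single product feature: GPT-3 drafts the copy, DALL-E renders a matching visual. It assumes the OpenAI Python client as it existed in late 2022; the ad-copy use case, model names, and placeholder API key are illustrative assumptions, not a definitive implementation.

```python
# A minimal sketch of "remixing" two foundational models in one product:
# GPT-3 handles text-to-text, DALL-E handles text-to-image.
# Assumes the OpenAI Python client circa 2022; model names and the ad-copy
# use case are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def generate_ad(product_description: str) -> dict:
    # Text-to-text: GPT-3 drafts a short headline.
    headline = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Write a short ad headline for: {product_description}",
        max_tokens=30,
    )["choices"][0]["text"].strip()

    # Text-to-image: DALL-E illustrates the same concept.
    image_url = openai.Image.create(
        prompt=f"Product photo, studio lighting: {product_description}",
        n=1,
        size="512x512",
    )["data"][0]["url"]

    return {"headline": headline, "image_url": image_url}

print(generate_ad("a reusable smart water bottle that tracks hydration"))
```

The point is not the specific calls, but that the product's value comes from orchestrating several single-modality engines into one experience the user perceives as multimodal.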

Of course, chances are that this market might become an oligopoly, where a few players might control most of it and thus capture a good chunk of the value, creating an incredible moat for them.

That's because, if you're OpenAI, you can carry a large generative model like GPT-3 forward.

If you're a small startup, trying to do it from scratch might be much harder.

And the more these foundational models evolve, the harder the barriers to entry will be to break, thus generating a leap ahead for foundational-layer players like OpenAI and the rest.

OpenAI and other foundational-layer organizations might capture value in the form of open APIs, as they do today.

Or those might really become a sort of App Store for AI applications, where they will be able to tax each of the AI tools developed on top of each ecosystem, thus capturing value from that, similar to what happens today with Apple's App Store.

Data moats

AI models right now have become extremely good at many tasks.

However, to make them relevant for companies at the enterprise level, or for users within specific applications, data becomes critical: it enables the model to be customized to the tech stack (for enterprise companies) and the context (for users) it sits on.

For instance, imagine the case of an organization that wants to leverage AI to deliver custom experiences to its users.

In order to do that, it'll need to integrate its "first-party data" into these models.

For instance, let's say the company wants to build a very specialized chatbot for support.

Of course, it can do that in many ways (see the sketch after this list):

  • Train it on the content the company has built over the years.

  • Build a Q&A dataset based on the most frequent requests of users.

  • Tackle much more accurate questions tied to conversion, drawing on CRM data.
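As a sketch of how the first two options could look in practice, here is a minimal retrieval-style support bot grounded in a company's own content. It assumes the OpenAI Python client circa 2022; the embedding and completion model names, the tiny in-memory document store, and the cosine_similarity helper are illustrative assumptions.

```python
# A minimal sketch of a support chatbot grounded in first-party data:
# embed the company's own content, retrieve the most relevant piece,
# and let a general-purpose model answer from it.
# Assumes the OpenAI Python client circa 2022; everything else is illustrative.
import numpy as np
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# First-party content the company has built over the years (help docs, FAQs).
documents = [
    "To reset your password, open Settings > Security and click 'Reset'.",
    "Refunds are processed within 5 business days of the request.",
]

def embed(text: str) -> list:
    return openai.Embedding.create(
        input=text, model="text-embedding-ada-002"
    )["data"][0]["embedding"]

doc_embeddings = [embed(doc) for doc in documents]

def cosine_similarity(a, b) -> float:
    a, b = np.array(a), np.array(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def answer(question: str) -> str:
    # Retrieve the most relevant piece of first-party content.
    q_emb = embed(question)
    best_doc = max(
        zip(documents, doc_embeddings),
        key=lambda pair: cosine_similarity(q_emb, pair[1]),
    )[0]

    # Ground the general-purpose model in that content.
    prompt = (
        "Answer the customer question using only the context below.\n"
        f"Context: {best_doc}\n"
        f"Question: {question}\n"
        "Answer:"
    )
    return openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=150
    )["choices"][0]["text"].strip()

print(answer("How long do refunds take?"))
```

The CRM-driven option works the same way in spirit: the differentiation is not the model, but the proprietary data that gets integrated into it.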

In short, the enterprise organization will use this first-party data to integrate it into an AI model to make it as relevant as possible.

That is how that AI application becomes valuable.

For that, it becomes critical:

Data Integration: to understand what data is really relevant for the AI to become way better at specific tasks.

Data Curation: to understand how to clean the relevant first-party proprietary data which can be used to train the model.

Fine-tuning: foundational models are very powerful. However, they have been trained to perform many tasks. You can fine-tune the foundational model (by feeding contextual data and by tweaking these models) to generate much better outputs. Fine-tuning becomes, therefore, critical to making sure you can build a valuable AI product on top of existing foundational layers.

Middle-layer AI engines: another interesting element is the fact that, as a middle-layer AI company, you can still build refined engines on top of existing foundational models. For instance, take the case of a company that builds an AI tool for resume generation. You can still add an AI engine on top of it, which does rephrasing, grammar checking, plagiarism detection, and more, which will be a value-added layer on top of the foundational layer! That is how you can transform a standardized output from the foundational, general-purpose engine into something way more specific.
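As an illustration of that last point, here is a minimal sketch of a middle-layer refinement pipeline: the foundational model's raw draft goes through extra passes (rephrasing, then grammar polishing) before the user ever sees it. The resume use case, model name, and prompt wording are hypothetical; it assumes the OpenAI Python client circa 2022.

```python
# A minimal sketch of a "middle-layer" engine: a generic draft from the
# foundational model is refined by additional passes before reaching the user.
# Assumes the OpenAI Python client circa 2022; prompts and use case are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

def _complete(prompt: str) -> str:
    return openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=200
    )["choices"][0]["text"].strip()

def generate_resume_bullet(raw_experience: str) -> str:
    # Pass 1: foundational layer produces a generic draft.
    draft = _complete(
        f"Turn this work experience into a resume bullet point: {raw_experience}"
    )
    # Pass 2: middle-layer refinement, rephrasing for impact.
    rephrased = _complete(
        f"Rewrite this resume bullet to be more concise and impact-driven: {draft}"
    )
    # Pass 3: middle-layer refinement, grammar and style polish.
    return _complete(
        "Fix any grammar or style issues in this sentence and return only "
        f"the corrected text: {rephrased}"
    )

print(generate_resume_bullet("I managed a team of 5 and we shipped the new billing system"))
```

Each pass is trivial on its own; the value-added layer is the curated sequence of passes (and any fine-tuned models behind them) that turns a standardized output into a specialized one.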

Prompt hacking (the new coding paradigm hidden in the backend)

For years, neural nets had been stuck; then they started to do incredible things.

And the most interesting part?

Most of these interesting things were the result of scaling these networks.

In other words, once a new architecture was employed (the transformer-based architecture), the rest of the work was achieved through scaling.

Now, there is an unpredictable component to scaling.

Just like, when you scale a company, after a certain threshold, you don't know how that company might change and what properties might emerge.

When scaling neural networks based on the same architectures, various properties emerge.

In biology, emergence (or how a complex system shows completely different behaviors from its parts, as the overall system depends upon the interactions between its parts) is extremely powerful.

Indeed, even in a real world that often looks fractal (the smaller resembles the larger), in reality, the much larger shows properties that are completely different!

This is one of the topics that fascinates me the most in business.

And this is also what makes AI so interesting to me right now.

By scaling AI systems, we get - unpredictable - emergent properties that, for better or for worse, might affect the evolution of AI.

For instance, prompting, or the ability to change the output of AI models through natural language, has been an emergent property.

No one coded it into the system; it just emerged from scaling these AI models.

And another exciting aspect - I argue - is that prompting does look more like coding than searching or querying.

Indeed, those who compare prompting to search are getting it backward, in my opinion.

Prompting is way more powerful, and over time it might become something hidden in the user interface rather than shown to final users.

In this context, prompt hacking, or the process of tweaking the natural language instruction to have the AI model completely revamp its output, can be extremely powerful.

That is why I'm adding prompt hacking within the key elements to build an AI moat.

My main argument is that, in a codebase that becomes much more commoditized (today, ChatGPT can generate code and also fix bugs in the code), prompt hacking might be the core value of the software, as it will enable an AI model to slightly improve its output!

Of course, for enterprise-level AI applications, prompting might still be part of the interface, as it gives the enterprise client a chance to customize the input highly.

Yet, there will be a piece of prompting (prompt hacking) which might be hidden in the UI.

While for consumer applications, prompt hacking might be hidden entirely in the interface, giving users standardized options to customize their outputs.

The remaining part of the customization will happen based on the context and interests of the user.
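To make the idea of prompt hacking hidden in the backend more tangible, here is a minimal sketch of a consumer app where the user only picks standardized options, while the tuned instruction lives server-side and is never shown. The template wording, parameter names, and model are illustrative assumptions; again, this uses the OpenAI Python client circa 2022.

```python
# A minimal sketch of prompt hacking hidden behind the UI: the user picks
# standardized options; the tuned prompt is assembled server-side.
# Assumes the OpenAI Python client circa 2022; the template is illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

# The hidden, product-specific template is where much of the "moat" would live.
HIDDEN_TEMPLATE = (
    "You are a senior {role} writing for a {audience} audience. "
    "Use a {tone} tone, keep it under 120 words, and end with a call to action.\n\n"
    "Task: {task}"
)

def run(task: str, role: str = "marketer", audience: str = "B2B", tone: str = "confident") -> str:
    # The user never sees this prompt; they only see dropdowns for role/audience/tone.
    prompt = HIDDEN_TEMPLATE.format(role=role, audience=audience, tone=tone, task=task)
    return openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=200
    )["choices"][0]["text"].strip()

print(run("Announce our new analytics dashboard"))
```

Tweaking that hidden template is cheap to iterate on, yet it can change the output as much as a code change would, which is why it can behave like proprietary source code.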

Network effects and fast iteration loops

The Internet Business Playbook has taught us that the value of a web app might be built not only on its tech but on its ability to get better at scale.

You might be using, every day, apps like Netflix, Uber, Airbnb, LinkedIn, YouTube, TikTok, and so forth, whose value stands in their ability to become better and better the more users join in.

These are known as network effects.

Would you still jump on YouTube if it didn't have such a vast library of content and a discovery engine that keeps recommending interesting and engaging stuff to you?

So just like digital businesses can build their moats via network effects, AI companies can do the same.

There is nothing new here, as companies like Meta, Google, Netflix, TikTok, and many others have been using human interactions combined with AI algorithms to improve their products at scale.

For instance, in 2019, I argued that TikTok was so interesting, not because it was a new social media app.

Quite the opposite.

It was so interesting because it moved beyond the social network, employing AI algorithms to make users discover content beyond their connections!

That is what made TikTok so sticky...

Workflow

How a company combines all the above to create fast iterative loops to develop, launch, iterate, maintain and grow AI applications will become the critical moat for the company!

Each AI company that works at scale will have its own workflow, which will work as a barrier to entry, rather than relying on economies of scale.

It will be the equivalent of economies of scope (with a more effective workflow, AI companies will be able to build more and more features and bundle various products to create a more comprehensive experience).

Thus making it harder and harder for other companies to replicate!

Brand and Distribution

Where technology might get commoditized over time, branding creates a strong differentiation.

This has been true for tech companies of the web era, and it'll be true for AI companies, which will be an amplified version of tech companies.

In addition to that, just like distribution played a key role for early tech players (I covered at great length deals like Google-AOL or Apple-AT&T), the same will be true for AI players.

Indeed, we've already seen how some key partnerships have developed:

  • OpenAI/Microsoft

  • DeepMind/Google

  • Stability AI/Apple

  • Amazon AWS leveraging its own stack.

  • And so on...

The way those partnerships will form will not only be important from a technological standpoint.

They will matter from a distribution standpoint.

Indeed, the paradox of these AI models right now is they work extremely well as general-purpose engines.

Yet, suppose we were to limit them by adding too many guardrails.

In that case, they might well end up losing relevance at specific tasks too (for instance, limiting ChatGPT's ability to give answers on topics where it can produce misleading and factually wrong answers might actually hamper its capabilities!).

Thus, this implies that with these AI companies, we might see a different kind of distribution model, where for those models to become great at specific tasks, they will need to be employed first at much broader tasks.

This is a paradigm shift from a distribution standpoint.

Where in the past, we've seen tech players start as a niche and then scale from there (Amazon was an online bookstore, Facebook was a social network for Harvard undergraduates), we might see these AI companies go broad right away, and then narrow down their spectrum of applications!

For instance, right now, ChatGPT might be a general-purpose engine.

Yet, over time, once they figure out which applications it is well-suited for, it might also be released for specific verticals.

This sort of broad distribution requires strong partnerships with other large tech players, which can take that burden!

Capital Deployment

This whole new field might require substantial capital to scale at a foundational level, though not to kick off.

Indeed, the foundational layer will make it much cheaper to build basic (initially) and more advanced (later on) applications.

Thus substantially lowering the cost of doing business.

Yet, building powerful foundational AI engines might require massive capital.


Written by gcuofano | Gennaro is the founder of FourWeekMBA, a leading source on business model innovation.
Published by HackerNoon on 2022/12/30