Sous les briques, le soleil

AI and Intelligence, a new layer for the OSI model?

February 14, 2025 12:00 PM

The OSI model has been widely discussed. It is a simple way of explaining how networking works and how responsibilities are divided across a layered stack.

I studied Computer Science and Telecommunications. I loved it, because that is where I understood how systems interoperate and provide value to one another, each layer handling a little more abstraction as you move from bottom to top.

As an example, during my time as a student researcher at LIP6, FTTH deployment was just starting. But we knew that better lower layers (more reliable, with less jitter, lower latency, more bandwidth, from physical up to transport) would let better products emerge.

Remember when high-speed internet was a 512 kbit/s DSL line? Good luck growing Twitch or YouTube to what they became.

I am seeing the same parallels with intelligence.

With OpenAI and Anthropic’s new models, one developer can use a simple API and, for a few cents, access a model that can do what would take a team of engineers months to encode in rules—and still fail at edge cases. A rules-based system, however sophisticated, can only handle what a human anticipated. An LLM reasons across what it has never seen. That’s a different category of tool.

In the pre-2025 world, intelligence was embedded in applications as a set of static rules (algorithms and code). This allowed observability and repeatability: automated workflows, office tasks, robot operations, and so on. Output was tightly defined by input, with little enrichment.
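A minimal sketch of what that pre-2025 world looks like in code. This is a hypothetical rules-based classifier (the function name, rules, and categories are all invented for illustration): every behaviour was written in advance by a human, and anything unanticipated falls through to a default.

```python
# A static rules engine: output is entirely determined by rules a human
# wrote in advance. Any input the author did not anticipate falls through.

def route_ticket(subject: str) -> str:
    """Classify a support ticket with hand-written rules (illustrative only)."""
    subject = subject.lower()
    if "refund" in subject or "charge" in subject:
        return "billing"
    if "crash" in subject or "error" in subject:
        return "engineering"
    return "triage"  # everything unforeseen ends up here

print(route_ticket("Unexpected charge on my card"))  # billing
print(route_ticket("My order arrived damaged"))      # triage: no rule matches
```

The second ticket is clearly a fulfilment problem, but no rule anticipated it, so the system cannot do better than a catch-all bucket. That is the ceiling of rules-based intelligence.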

Now, we are starting to see applications, especially in software engineering, where data processing is not defined by a specific algorithm.

With LLMs, data is passed to a model as natural language (prompts). The model returns an output enriched by its intelligence.
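Concretely, "passing data through natural language" means the application wraps its input in a prompt and sends it to a model endpoint. A sketch of such a request payload, under the assumption of a chat-style API (the model name and message structure below are placeholders, not a specific vendor's contract):

```python
import json

# Sketch: the application's data travels inside natural language.
# "frontier-model" and the message roles are illustrative placeholders.

def build_completion_request(user_input: str) -> dict:
    return {
        "model": "frontier-model",  # placeholder model name
        "messages": [
            {"role": "system", "content": "Classify this support ticket."},
            {"role": "user", "content": user_input},  # the prompt carries the data
        ],
    }

payload = build_completion_request("My order arrived damaged")
print(json.dumps(payload, indent=2))
```

No classification algorithm exists anywhere in this code; the "algorithm" is the model on the other side of the wire, which can handle the damaged-order ticket a rules engine never anticipated.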

And I now understand that there is a new layer in the OSI model: the intelligence layer.

It is now part of the infrastructure. Why?

  1. Because all software will rely on it in one way or another. It will be ubiquitous.
  2. Because frontier models can be used in a wide array of scenarios, from a baking app to a health app. Like an OSI layer, it does not care what the application above it is for.
  3. Because it is a true abstraction layer: it takes input, produces output, communicates with other models, and calls external services like web search. Applications are built on top of it, not around it. Just as IP standardises how machines communicate, this layer standardises how applications access reasoning—without needing to know how the model works internally.
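The abstraction argument can be made concrete in code. A sketch, with entirely hypothetical names: the application programs against a narrow interface and never learns which model, or which vendor, sits behind it, just as an application using TCP never learns whether the link below is fiber or DSL.

```python
from typing import Protocol

# Hypothetical "intelligence layer" boundary: the app depends only on
# this interface, never on a specific model or provider.

class IntelligenceLayer(Protocol):
    def reason(self, prompt: str) -> str: ...

class EchoModel:
    """Stand-in so the sketch runs offline; a real app would wire in an API client."""
    def reason(self, prompt: str) -> str:
        return f"[model answer to: {prompt}]"

def summarise_ticket(layer: IntelligenceLayer, ticket: str) -> str:
    # Application logic: blind to whatever implements the layer beneath it.
    return layer.reason(f"Summarise this support ticket: {ticket}")

print(summarise_ticket(EchoModel(), "My order arrived damaged"))
```

Swapping the model means swapping the object passed in, not rewriting the application, which is exactly what makes it a layer rather than a feature.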

Every layer in the OSI model changed what was possible above it. Fiber made Twitch possible. Nobody predicted Twitch. We’re at the same inflection point—except the new layer doesn’t move bits faster. It thinks.