Accelerating AI: The Power of Interoperability
We all have that drawer full of chargers at home. It took USB-C a decade to clean that up. Typically, vendors only embrace interoperability when market forces demand it. But AI flips that script.

CRM software tools like Salesforce and HubSpot have been around for roughly 25 years. Anyone who has ever tried moving from one to the other knows how painful that can be. The reason? By design, they are not meant to be interoperable, because that would weaken the vendor lock-in. Different data structures, workflows, configurations and integrations. The same basically holds for almost any key piece of IT technology, from SaaS to analytics to the cloud.
Initially, vendors will not focus on making it easy for you to switch to an alternative. Rather, they will try to onboard you into their ecosystem one step at a time. Typically, vendors only enable easier interoperability when there is a strong market push for it. It took USB-C ten years to become the standard and clean up all of our drawers full of different chargers.
But one piece of technology takes a very different approach here, and it has been this way from the moment it got out of the gate. Today's AI models are designed with interoperability at their core, accelerating innovation at a pace we have not witnessed before.
Equal input and output
The center of AI innovation today is Large Language Models, which thanks to their multimodal capabilities understand more of the world around us with every new release. Their basic input? Tokens. The output? Tokens. Even if you send images or video through an LLM, you still direct the tasks you want it to execute using tokens.
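To make that concrete, here is a minimal sketch using the open-source tiktoken tokenizer: text goes in as a list of integer token IDs, and model output comes back the same way. The encoding name below is just one published example; every vendor has its own tokenizer, but the shape of the interface is the same.

```python
# Minimal sketch of the tokens-in, tokens-out principle using tiktoken.
# "cl100k_base" is one published encoding; other vendors use their own
# tokenizers, but the interface is the same: text -> token IDs -> text.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

prompt = "Summarize our Q3 churn numbers in two sentences."
token_ids = enc.encode(prompt)      # text in -> integer token IDs (the model's real input)
print(token_ids[:8], "...", f"{len(token_ids)} tokens")

round_trip = enc.decode(token_ids)  # tokens out -> text (how responses come back to you)
assert round_trip == prompt
```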
This means we suddenly have a new piece of technology where input and output are standardized across vendors, languages, regions and even closed versus open source. All of these LLMs are interoperable.
Why is that such a big deal?
Near-zero switching costs
Because all LLMs run on the same basic input and output, a development team with a good setup and infrastructure can switch models, vendors or versions very easily, without breaking any of their working code and pipelines. The cost of switching is near zero. This means teams can easily try out new releases or new players and validate whether they perform better, faster or cheaper for their use-case. And if they do, the switch can be made in minutes.
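As a rough illustration, here is a minimal sketch of what near-zero switching can look like, assuming providers that expose an OpenAI-compatible Chat Completions endpoint (many do). The base URLs and model names are placeholders, not recommendations.

```python
# Minimal sketch: swapping vendor or model is a configuration change,
# not a code change. Assumes OpenAI-compatible endpoints; the URLs and
# model names are placeholders.
from openai import OpenAI

PROVIDERS = {
    "vendor_a": {"base_url": "https://api.vendor-a.example/v1", "model": "model-a-large"},
    "vendor_b": {"base_url": "https://api.vendor-b.example/v1", "model": "model-b-turbo"},
}

def ask(provider: str, prompt: str) -> str:
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg["base_url"], api_key="...")  # key from your secret store
    response = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The calling code never changes when you switch providers:
print(ask("vendor_a", "Classify this support ticket: 'My invoice is wrong.'"))
```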
Very short development cycles
Releases with new capabilities or entirely new LLMs take over the tech news on a daily basis. Players like OpenAI, Anthropic, Google Gemini, Meta Llama, Mistral and, more recently, DeepSeek are competing aggressively, all trying to gain as large a footing as possible among developers. And because the switching costs are so low, the competition keeps heating up. You can see in the releases that these players are all working on the same kinds of capabilities at the same time. Developers have the tools to accurately measure accuracy and fit to their use-case and decide which model is best for them at any given time, so it's not only key for LLM providers to get new models out first, but also to do so at a high quality level and an attractive price.
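To give an idea of what measuring fit can look like, here is a minimal sketch of a use-case-specific benchmark: run every candidate model over a small labeled set and compare accuracy and latency. The test cases and model names are placeholders, and the ask() helper is stubbed so the snippet runs on its own (in practice it would be a real provider call, like the one sketched above).

```python
# Minimal sketch of a use-case-specific benchmark. Test cases and model
# names are placeholders; ask() is a stub standing in for a real call.
import time

def ask(model: str, prompt: str) -> str:
    return "refund"  # stand-in for a real provider call

TEST_SET = [
    {"prompt": "Classify this ticket: 'Please cancel and refund my order.'", "expected": "refund"},
    {"prompt": "Classify this ticket: 'The app crashes on startup.'", "expected": "bug"},
]

def benchmark(model: str) -> None:
    correct, start = 0, time.perf_counter()
    for case in TEST_SET:
        answer = ask(model, case["prompt"])
        correct += int(case["expected"].lower() in answer.lower())
    elapsed = time.perf_counter() - start
    print(f"{model}: {correct}/{len(TEST_SET)} correct, {elapsed:.1f}s total")

for model in ["model-a-large", "model-b-turbo", "brand-new-release"]:
    benchmark(model)
```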
Vendor chains
Because LLMs share the same kind of input and output, you can chain them together, using one model to continue or improve the work of the previous one in the chain. This allows you to combine not only different models from one vendor, but also models from very different vendors.
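As an illustration, here is a minimal sketch of a two-step chain in which a model from one vendor drafts and a model from another vendor reviews. It assumes the OpenAI and Anthropic Python SDKs with API keys set in the environment; the model names are examples and will change over time.

```python
# Minimal sketch of a vendor chain: vendor A drafts, vendor B improves the
# draft. Because both sides speak plain text (tokens underneath), the output
# of one model is directly usable as input for the next.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

source_text = "(paste the document you want summarized here)"

# Step 1: a model from vendor A produces a first draft.
draft = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"Summarize in 3 bullets:\n{source_text}"}],
).choices[0].message.content

# Step 2: a model from vendor B continues the work of the previous one.
review = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=500,
    messages=[{"role": "user", "content": f"Tighten and fact-check this summary:\n{draft}"}],
).content[0].text

print(review)
```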
We have customer use-cases where LLMs from 4 different vendors are used together to solve a single complex challenge. And in a few weeks that might be 2 vendors, or 6. It just depends on performance, updates and the best fit.
Low barrier for new entrants
This also means that experimentation can pay off quickly. A prime example is DeepSeek, which was barely known outside of the hardcore AI community before it received an avalanche of mainstream media attention last week. Besides the fact that they launched something remarkable that challenged the leading AI players, the more interesting aspect here is the speed of developer adoption.
Within days, benchmarks, examples and real use-cases were flying across the internet. Even more significant: you could start using DeepSeek's R1 on AWS or Azure within days. A blazing-fast speed of adoption, with a very low barrier to test and use new entrants in existing AI applications.
How to take full advantage of AI’s interoperability
Every company can take advantage of this interoperability. Having worked on numerous infrastructure setups, we think there are a few engineering steps to consider:
- Create a crystal clear definition of quality, and the tools to measure it. Otherwise, deciding which model works best for your use-case is guesswork.
- Break your LLM pipelines down into very granular steps, so you can measure quality, speed and cost at each of them.
- Write your code with the intent of swapping each LLM for a totally different one tomorrow. Even at runtime, the option to swap models in case of downtime (or slow response times) can be a great user-experience driver; see the sketch after this list.
- Set up access and billing relationships with both the key and the new AI players early. That way you're not at the back of the line when a DeepSeek moment occurs. This also means legally vetting where your data flows.
- Experimentation is your friend. No big cycles: single-day experiments often deliver the best answer to whether you should start using one LLM or another. They also force you to focus on simple metrics and decision drivers.
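As a rough sketch of the runtime-swapping idea mentioned in the list: try the primary model and, if it errors out or is too slow, fall back to a different vendor. It again assumes OpenAI-compatible endpoints; the URLs, model names and timeout value are placeholders.

```python
# Minimal sketch of runtime fallback: try the primary model, and if it is
# down or too slow, move on to a different vendor instead of failing the user.
from openai import OpenAI

CANDIDATES = [
    {"name": "primary",  "base_url": "https://api.vendor-a.example/v1", "model": "model-a-large"},
    {"name": "fallback", "base_url": "https://api.vendor-b.example/v1", "model": "model-b-turbo"},
]

def ask_with_fallback(prompt: str) -> str:
    last_error = None
    for cfg in CANDIDATES:
        try:
            client = OpenAI(base_url=cfg["base_url"], api_key="...", timeout=10.0)
            response = client.chat.completions.create(
                model=cfg["model"],
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except Exception as error:  # timeouts, rate limits, outages
            last_error = error      # try the next vendor in the list
    raise RuntimeError("all providers failed") from last_error

print(ask_with_fallback("Draft a two-line status update for the incident channel."))
```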
AI is a very special field of technology, on many fronts. The built-in interoperability of the most important models in the field makes it even more exciting. It's a great time to be building.
We’re hosting quarterly private roundtable sessions where we bring together customers and contacts to discuss topics like exactly this. If you're interested, have a look here and we'd love to hear from you.