Beyond the AI Giants
Why the Future Belongs to Many Models
OpenAI commands enormous attention—and billions in investment—for building large language models (LLMs). But the AI landscape extends far beyond any single player. Anthropic makes Claude; Google makes Gemini; Meta develops the Llama family of models. Mistral leads European AI development, while China has DeepSeek.
The cloud computing giants are in the game too. Amazon Web Services offers customers a variety of models at different scales. Microsoft makes its own models—and others’—available through Azure. Alibaba’s flagship is Qwen3. Enterprise software vendors like Salesforce and SAP have AI models tailored to their own applications.
Big companies aren’t the only ones developing AI models. On Hugging Face—a platform where people share, discover, and use pre-trained AI models—more than two million models are available. Some come from the companies listed above. Many others were developed by entities you’ve never heard of. The most popular have been downloaded millions of times.
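To get a sense of how low the barrier to entry is, here is a minimal sketch of pulling one of those community models from Hugging Face and running it locally with the transformers library; the model named below is just one popular example.

```python
# A minimal sketch: download a community model from Hugging Face and run it.
# The model named here is one widely downloaded example; any hosted model
# can be referenced the same way, and the first call fetches the weights.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Open model ecosystems move fast."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```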
Mixing and Matching
The AI landscape is big, complex, and dynamic. While it may consolidate over time, many companies will shape AI’s future. This benefits companies developing AI applications: competition drives faster innovation and lower prices.
But it’s not just about having multiple vendors to choose from. Companies building AI applications actively use more than one model. According to a recent survey of chief information officers at major companies, over half use at least four different AI models. Why such promiscuity?
Each LLM has distinct strengths and weaknesses. Companies use the best model for each task. Venture capital firm Andreessen Horowitz observes a clear trend toward multi-model strategies:
Enterprises are also becoming more sophisticated in matching specific use cases with the right model. For highly visible or performance-critical applications, companies typically prefer leading-edge models with strong brand recognition. In contrast, for simpler or internal tasks, model choice often comes down purely to cost.
This creates opportunities for AI developers to differentiate themselves. Consider AI-assisted software development, now a major application of generative AI. Anthropic currently has the most widely used model for automated coding—with double OpenAI’s usage. Coding is one of the most common uses of both Anthropic’s chatbot and API.
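In practice, this kind of task-to-model matching often comes down to a thin routing layer in application code. Here is a minimal sketch; the tier rules and model names are illustrative assumptions, not recommendations for any particular vendor.

```python
# A hedged sketch of task-to-model routing: visible, performance-critical
# work goes to a frontier model, specialized coding work to a coding-tuned
# model, and routine internal work to whatever is cheapest. All model
# names below are placeholders.

ROUTES = {
    "customer_facing": "frontier-model-large",  # brand-name, leading-edge model
    "code_generation": "coding-tuned-model",    # chosen for a specific strength
    "internal_batch": "small-cheap-model",      # cost is the deciding factor
}

DEFAULT_MODEL = "small-cheap-model"

def pick_model(task_type: str) -> str:
    """Return the configured model for a task, defaulting to the cheap tier."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

print(pick_model("customer_facing"))  # -> frontier-model-large
print(pick_model("weekly_report"))    # -> small-cheap-model
```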
Open Source Has a Foothold
The big commercial models aren’t the only option. For some purposes, companies prefer open source models, whose code, model weights, and sometimes training data are freely available, allowing anyone to use, modify, and distribute them. Companies can customize and deploy open source models on their own infrastructure—a major advantage for those concerned about data privacy. One venture capital firm estimates that 13% of AI workloads run on open source models, while OpenAI, Anthropic, and Google handle most of the rest.
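Self-hosting one of these models can be as simple as loading the weights on your own hardware, so prompts and data never leave your infrastructure. A minimal sketch with the Hugging Face transformers library follows; the model identifier is a placeholder for whichever open-weight checkpoint you are licensed to use.

```python
# A minimal sketch of running an open-weight model on your own hardware.
# "your-org/open-weight-model" is a placeholder identifier, not a real
# checkpoint; substitute any open-weight model you are licensed to use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/open-weight-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Nothing here calls an external API: inference happens locally.
inputs = tokenizer("Summarize our data-retention policy:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```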
Small Is Beautiful
With AI models, bigger isn’t always better—another reason companies use multiple models. LLMs excel at a wide variety of tasks and contain vast amounts of embedded knowledge. But smaller models optimized for specific tasks can be dramatically cheaper to develop and run. By some estimates, a small language model (SLM) can be up to 100x cheaper to run than an LLM. Some SLMs are compact enough to run on your mobile phone. The vast majority of models on Hugging Face are small models.
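A back-of-the-envelope comparison shows why that gap matters. The workload size and per-token prices below are assumptions chosen purely for illustration.

```python
# Illustrative cost comparison; the volume and prices are assumptions,
# not quoted rates from any provider.
tokens_per_month = 500_000_000             # assumed workload
llm_price_per_1k = 0.01                    # assumed LLM price, USD per 1K tokens
slm_price_per_1k = llm_price_per_1k / 100  # the "up to 100x cheaper" case

llm_cost = tokens_per_month / 1_000 * llm_price_per_1k
slm_cost = tokens_per_month / 1_000 * slm_price_per_1k
print(f"LLM: ${llm_cost:,.0f}/month  SLM: ${slm_cost:,.0f}/month")
# LLM: $5,000/month  SLM: $50/month
```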
Better Together
The proliferation of large and small AI models is leading companies to design applications using ensembles—groups of models, each serving a distinct purpose.1 An email management system, for example, might use the following (a code sketch follows the list):
A classifier model to sort incoming messages by type (invitation, information request, newsletter, etc.)
A summarizer model to condense long messages
A decision helper to make recommendations, such as suggesting meeting times in response to scheduling requests
A drafting model to suggest first-draft replies
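Here is the promised sketch of such an ensemble. The call_model helper and every model name are placeholders for whatever mix of hosted APIs and self-hosted models a team actually uses; the point is that each step can be served by a different, appropriately sized model.

```python
# A hedged sketch of an ensemble email assistant. Each step uses a
# different model chosen for that step; call_model is a stand-in for a
# real client (hosted API or local model) and just echoes a placeholder.

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real model call; replace with your provider's client."""
    return f"[{model} output for: {prompt[:40]}...]"

def handle_email(message: str) -> dict:
    # 1. Small classifier sorts the message by type.
    category = call_model("small-classifier", f"Classify this email: {message}")

    # 2. Summarizer condenses long messages.
    summary = call_model("summarizer", f"Summarize briefly: {message}")

    # 3. Decision helper recommends an action, e.g., a meeting time.
    recommendation = call_model(
        "decision-helper",
        f"Category: {category}\nSummary: {summary}\nRecommend a next step.",
    )

    # 4. Drafting model writes a first-draft reply for human review.
    draft = call_model("drafting-model", f"Draft a reply. Context: {recommendation}")

    return {
        "category": category,
        "summary": summary,
        "recommendation": recommendation,
        "draft": draft,
    }

print(handle_email("Could we meet next week to review the Q3 numbers?"))
```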
New Risks Emerge
AI will increasingly become part of our society’s ongoing digitalization. Along with benefits, this brings safety, security, and fairness challenges unique to AI. Systems orchestrating multiple AI models could exacerbate these challenges.
When several models work together, tracing responsibility when something goes wrong becomes nearly impossible. A small model that classifies data, another that drafts text, and a third that approves output may each behave appropriately in isolation—but their combination might yield biased, unsafe, or deceptive results. No single model “decided” to cause harm, yet the system as a whole did. This diffusion of accountability makes it harder to regulate or audit AI systems for safety and fairness.
Multi-model systems may also amplify security risks. Each model introduces potential vulnerabilities—through prompt injection, data leakage, or malicious fine-tuning. When models pass data between them, these weaknesses can compound into chain reactions where compromised models silently manipulate others. As organizations weave AI into critical infrastructure and business operations, securing multi-model systems will become one of AI governance’s thorniest challenges.
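Tooling alone will not solve these problems, but even simple measures help. Below is a hedged sketch of audit-logging every inter-model handoff so that a harmful outcome can at least be traced back through the chain; the record format is an illustration, not an established standard.

```python
# A hedged sketch of audit logging for a multi-model pipeline: every
# handoff records which model produced what, so a bad outcome can be
# traced back through the chain. The schema is illustrative only.
import hashlib
import json
import time

audit_log: list[dict] = []

def record_handoff(model: str, prompt: str, output: str) -> None:
    """Append a record of one model call, hashing payloads for later comparison."""
    audit_log.append({
        "timestamp": time.time(),
        "model": model,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    })

# Example: two chained calls, each logged before its output feeds the next model.
first = "Classified as: scheduling request"        # pretend classifier output
record_handoff("small-classifier", "raw email...", first)
second = "Proposed reply: Tuesday at 10am works."  # pretend drafting output
record_handoff("drafting-model", first, second)

print(json.dumps(audit_log, indent=2))
```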
The White House’s AI Action Plan, released this summer, acknowledges the need to invest in science and technology for measuring, evaluating, interpreting, controlling, and testing AI systems. The proliferation of multi-model systems makes these recommendations more urgent than ever.
A Diverse, Dynamic Landscape
The competition among frontier AI model providers is absorbing enormous investment capital and media attention. But AI’s future will be more textured than a battle of titans. This doesn’t look like a winner-takes-all game.
I believe diversity in our AI ecosystem—like diversity in natural ecosystems—will prove more dynamic and resilient than an AI monoculture. The key is not neglecting the risks that come with this complexity.
1. As collective nouns go, “ensemble of models” has nothing on “parliament of owls.”


