Be careful with your context
April 13, 2026
In AI agents, the real lock-in risk is not only the model you choose. It is the system that becomes the home of your working context.
People still talk about AI agents mostly through the lens of models. Which one is smartest, which one is fastest, which one writes the nicest code. That matters, of course. But the more important shift is happening somewhere else. The real battle is increasingly about context, because context is what makes an AI agent useful in practice. A strong model without context is often just impressive. A strong model with the right context starts to become part of how a team actually works.
That is also why it helps to be precise. Not all context creates the same kind of lock-in. Some context is relatively portable. Claude Code is a good example of that direction. Anthropic documents project memory through CLAUDE.md, project and global configuration through settings files, and tool connections through MCP. That does not remove switching costs, but it does mean that an important part of the setup can live in files and interfaces you control, rather than only inside a vendor-managed product surface.
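To make "portable" concrete: here is roughly what that file-based setup looks like. A CLAUDE.md at the repository root holds project memory in plain markdown, and a checked-in .mcp.json declares tool connections. The server below is just an illustration (the filesystem server from the MCP reference implementations), but the shape follows Anthropic's documented format:

```json
{
  "mcpServers": {
    "project-docs": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"]
    }
  }
}
```

Because all of this lives in version control, moving to another MCP-capable client mostly means pointing it at the same files. The setup survives the tool.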
The stronger lock-in risk appears when context stops living mostly in files you control and starts being absorbed into a managed layer that retrieves, ranks, permission-gates, interprets, and operationalizes it for you. OpenAI's company knowledge brings context from connected apps into ChatGPT. Microsoft says Copilot uses Microsoft Graph to access the user's unique context, and Atlassian is even more explicit: Teamwork Graph unifies data across Atlassian and many external apps and then "learns your context."
That distinction matters because switching away from a model is one thing, but switching away from a context layer is something else entirely. Once repositories, chats, tickets, docs, permissions, search behavior, and workflow logic have been wired into one system, the migration problem changes character. You are not only replacing the engine. You are reconstructing the environment that made the engine useful in the first place. Vendors do not need to say this explicitly for it to be true. It follows naturally from products that centralize retrieval, indexing, grounding, and permissions inside their own platform layer.
There is a second layer to this as well, and I think it is the more important one. The next moat is not only that vendors hold your context. It is that they can observe how that context is used. OpenAI now documents workspace analytics for ChatGPT Enterprise and Edu around adoption, engagement, and how teams use ChatGPT. Anthropic documents usage and cost reporting, as well as a Claude Code Analytics API that exposes usage analytics and productivity metrics. That means the platform does not just become the place where context lives. It also becomes the place where workflow patterns become visible.
That is why I do not think the strongest version of this argument is that vendors are simply training on your company data and that this is the lock-in. That claim is weaker than it sounds. OpenAI states that, by default, it does not train on inputs or outputs from business products such as ChatGPT Business, ChatGPT Enterprise, and the API. Anthropic says the same for its commercial products by default. So I would not anchor the argument on default enterprise model training.
The stronger point is that vendors can still gain strategic advantage from behavioral patterns even without training on your business content by default. Once a platform can see which workflows are repeated, which tools get connected, which teams adopt fastest, and which actions create value often enough to become habits, it is in a very good position to turn those patterns into product. That does not require attributing some secret plan to the vendors. It is simply the natural next step for products that already combine context, analytics, and execution surfaces.
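None of this requires exotic machinery either. As a toy sketch, and emphatically not any vendor's actual pipeline, here is how repeated workflow patterns fall out of an event stream once a platform sits in the execution path. Every name and event below is invented:

```python
from collections import Counter
from itertools import islice

# Hypothetical usage events of the kind an execution layer can observe:
# (team, tool, action), in the order they happened.
events = [
    ("payments", "jira", "create_ticket"),
    ("payments", "github", "open_pr"),
    ("payments", "slack", "post_summary"),
    ("payments", "jira", "create_ticket"),
    ("payments", "github", "open_pr"),
    ("payments", "slack", "post_summary"),
    ("search", "confluence", "draft_doc"),
]

def windows(seq, n=3):
    """Yield every run of n consecutive events."""
    return zip(*(islice(seq, i, None) for i in range(n)))

# Count repeated n-step sequences within a single team: the raw material
# for "this workflow recurs often enough to become a product feature".
patterns = Counter()
for window in windows(events):
    teams = {team for team, _, _ in window}
    if len(teams) == 1:  # only count sequences that stay within one team
        key = (teams.pop(), tuple(f"{tool}.{action}" for _, tool, action in window))
        patterns[key] += 1

for (team, steps), count in patterns.most_common(3):
    print(f"{team}: {' > '.join(steps)} (seen {count}x)")
```

A few dozen lines of counting are enough to surface that the payments team keeps running the same ticket-to-PR-to-summary loop. The asymmetry is that you generate these patterns, but the platform is the one positioned to aggregate them, across your teams and across every other customer's.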
You can already see the direction. Anthropic is positioning Claude Cowork as a system that executes multi-step knowledge work on a user's behalf across files, folders, and applications. Microsoft is explicitly building agents on top of Microsoft 365 Copilot for organization-specific workflows. OpenAI is expanding ChatGPT's connected app model so the product can not only retrieve from external systems, but also work through them. In other words, the stack is moving from context retrieval toward autonomous or semi-autonomous execution.
That is why companies should be careful with their context. The issue is not that these tools are bad. Quite the opposite. They are useful precisely because they gather and apply context well. But companies should stop treating context as a harmless convenience layer. It is increasingly becoming infrastructure. The more context you place into portable files, explicit conventions, and open protocols, the more room you keep to move later. The more context you allow to be absorbed into proprietary graphs, managed memory layers, connector ecosystems, analytics surfaces, and workflow engines, the more you should assume that future switching costs are quietly increasing in the background.
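If you want a rule of thumb for what the portable end of that spectrum looks like, it is something like this layout. The first three paths follow Claude Code's documented conventions; the rest is just one illustrative way to keep the "why" in files:

```
repo/
├── CLAUDE.md            # project memory: conventions, architecture notes
├── .claude/
│   └── settings.json    # agent configuration, reviewed like any other code
├── .mcp.json            # tool connections over the open MCP protocol
└── docs/
    └── decisions/       # written-down reasoning any agent (or human) can read
```

The test is blunt but useful: if the vendor disappeared tomorrow, how much of the context that makes your agents useful would still be sitting in your own repositories?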
That is the version of vendor lock-in in AI agents that seems real to me. Not that one model is hard to replace, but that the system holding, observing, and interpreting your working context becomes hard to replace. And once that system starts working well, the most valuable thing vendors may be collecting is not your prompts, but the shape of the work that happens once your context lives inside their system.