A Pragmatic Approach to Context Engineering
March 25, 2026
In the following, I will share my thoughts on context engineering by outlining a practical blueprint for solving the context problem many teams are facing.
Right now, engineers often run into situations where AI agents make suboptimal decisions. This is usually brushed off as the agent simply not being "smart" enough yet. In my experience, that is rarely the real issue. More often, the problem is that the agent was not given the right context.
Providing the right context is cumbersome, especially when little documentation exists in the first place. I wrote about what kind of context is necessary in a previous article.
A useful way to think about this is to imagine that you have hired a new colleague. That colleague is probably quite skilled, but they have no context about the product, the organization, or the way things are done. An AI agent is similar, just more extreme, because the AI agent starts from scratch every time.
A practical approach that has worked well for me is to make sure all relevant context is tracked explicitly. I have found Markdown files to be highly effective for this.
To begin with, there are two simple things to do:
- Create a repository for the context the agent needs when starting a task.
- Ensure that the repository the agent will work on contains at least an AGENTS.md file.
Once you have that, you can start structuring the context the agent needs. A good way to do this is to think of it as a simple tree structure, with AGENTS.md acting as the entry point and linking to more detailed information.
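As a sketch, such an entry point might look like the following. The exact wording and the linked files are illustrative; the point is that AGENTS.md stays short and routes the agent to more detailed documents:

```markdown
# AGENTS.md

You are working in this organization's context repository.

1. Read product.md for an overview of the product.
2. Read workflow.md and follow the order of work described there.
3. Check conventions/ before writing or reviewing any code.
4. Take on a role from roles/ as instructed, and record new findings
   in the appropriate folder as you go.
```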
The new agent repository can contain the following:
- AGENTS.md - the entry point that tells the AI agent what to do and where to look
- decisions/ - decisions that have been made and may become relevant later
- product.md - an explanation of the product
- reviews.md - review notes and findings
- routing.md - instructions for when agents should do what, and where they should gather information, such as repo locations
- specs/ - specifications for different areas of the codebase, repositories, and teams
- workflow.md - the order of work and the documentation steps the AI agent should follow
- conventions/ - code conventions and organizational conventions
- roles/ - different roles such as Developer, Product Owner, Reviewer, Architect, and AI Agent Workflow Improver
- tasks/ - task specifications
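A minimal sketch for scaffolding this structure, assuming the repository directory is called context-repo (an arbitrary example name):

```python
from pathlib import Path

# Create the context repository skeleton described above.
# "context-repo" is an arbitrary example name.
root = Path("context-repo")
for folder in ["decisions", "specs", "conventions", "roles", "tasks"]:
    (root / folder).mkdir(parents=True, exist_ok=True)
for name in ["AGENTS.md", "product.md", "reviews.md", "routing.md", "workflow.md"]:
    (root / name).touch()
```

From there, the files can be filled in gradually; the structure matters more than having everything written up front.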
With that setup, you can begin in the context repository by asking the agent to read AGENTS.md and take on the role of Product Owner. From there, the Product Owner can start specifying tasks and placing them in the tasks/ folder. The Developer role can then pick them up and execute them.
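A task specification in tasks/ can be a small Markdown file. The fields and the task itself below are hypothetical, just one possible shape:

```markdown
# Task: Add input validation to the signup form

Role: Developer
Status: Open

## Goal
Reject empty and malformed email addresses before submission.

## Acceptance criteria
- Invalid input shows an inline error message.
- Conventions in conventions/ are followed.
```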
The roles need to be instructed to use the different folders and files, and also to write down findings as they go. This is important. When you notice an agent doing something against a convention, for example, you should ask it to record that in the conventions/ folder so the same mistake does not happen again.
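Such a recorded convention can be a short note; for example (contents hypothetical):

```markdown
# conventions/naming.md

- Repository names use kebab-case.
- Recorded after an agent created a repository in camelCase.
```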
I will not go into detail on every role here, but this approach has proven fairly effective for managing context across several small and large projects I am working on.
If the Markdown files eventually become too large to remain effective, you can introduce a PostgreSQL database with vector embeddings, wrapped in an MCP server, to enable more efficient knowledge retrieval.
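To make the retrieval idea concrete, here is a toy sketch in plain Python. In a real setup the embeddings would come from an embedding model and the similarity search would run inside PostgreSQL (for example via the pgvector extension); the documents and vectors below are made up:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_embedding, documents, k=3):
    # documents: list of (text, embedding) pairs. Returns the k texts
    # most similar to the query; a vector database performs the same
    # ranking at scale, with an index instead of a full sort.
    ranked = sorted(
        documents,
        key=lambda doc: cosine_similarity(query_embedding, doc[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]
```

The agent then queries this store instead of reading every Markdown file, pulling in only the most relevant context for the task at hand.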