The context of your AI coding agent is critical to its performance. It is likely one of the most significant factors determining how many tasks you can complete with a coding agent, and your success rate in doing so.
In this article, I’ll discuss the specific techniques I use to improve the context of my AI agents, explaining both how I do it and why. Understanding why these techniques work will help you develop your own in the future and really optimize your agentic coding.

Why optimize agentic context
The context you provide your coding agent is all the information it has to complete a task. Thus, properly managing your context is incredibly important if you want your coding agent to work well.
Improving your context by even a few percent will have a massive impact on your efficiency as an engineer if you spend many hours each day programming. I therefore spend a lot of time trying to optimize how I program with my coding agent.
The four techniques I’ll present in the next section are the result of testing a wide variety of approaches. In this article, I’ll only cover four of the most important techniques and why they work so well. In the future, I might also cover some failed techniques and reflect on why they didn’t work.
4 specific techniques
In this section, I’ll cover four specific techniques I use to optimize the context for my coding agents. They’re written in no particular order, and I consider them all important in my quest to be as efficient an engineer as possible.
Always update AGENTS.md
Probably the most important technique I use is to constantly update the AGENTS.md file. Continual learning is still an unsolved problem for LLMs, so we need to come up with our own solutions to make coding agents remember our preferences.
I’ve written a rules file for my coding agent, which specifies some preferences I have:
- Always write Python 3.13 syntax if using Python
- Never use the Any type
- Always use types and docstrings for functions
These are preferences I have across all the repositories I touch, and I therefore always want my agent to follow them. I recommend spending time reflecting on your own coding rules and spelling them out for your agent.
Furthermore, whenever my coding agent makes an error, I help it correct the error and tell the agent to remember the fix in AGENTS.md. This ensures the agent avoids the same error in the future and simply makes it faster and more efficient.
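As a minimal sketch, an AGENTS.md that combines standing preferences with fixes the agent has been told to remember might look something like the following. The project-specific entries under “Remembered fixes” are hypothetical examples, not commands from any real project:

```markdown
# AGENTS.md

## Coding preferences
- Always write Python 3.13 syntax if using Python
- Never use the `Any` type
- Always use type hints and docstrings for functions

## Remembered fixes (hypothetical examples)
- Run the test suite with `make test`, not by calling pytest directly
- The local dev database must be started before running integration tests
```

The point is not the exact layout but that every correction you make gets written down once, so the agent never has to rediscover it.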
If you continue doing this over time, you’ll notice the agent becoming significantly more proficient at the tasks you ask it to perform. This could be:
- Implementing new features
- Fixing bugs
- Checking production logs
This works so well because you’re providing your coding agent with necessary context that you possess but had never written down. By recording it in AGENTS.md, you give the model critical context for problem solving.
Note that you can use any Markdown files that you prefer. Claude Code uses CLAUDE.md, Warp uses WARP.md, and Cursor uses .cursorrules. However, I find that most coding agents always read AGENTS.md, which makes it a good file name to store agentic memory in.
Provide documentation links
Another tip is to provide relevant documentation links to the model, or to explicitly tell the model to find documentation online through a web search.
I sometimes find that my coding agent is using outdated syntax, for example, when interacting with the OpenAI API. In these instances, I provide the model a link to the latest OpenAI documentation and tell it to base its code on this.
Coding agents use outdated code because LLMs have a knowledge cutoff, which necessarily falls before the model finished training. For any given model, the cutoff could be over a year in the past, a period in which a lot of API documentation has changed. It’s therefore very important to make sure the model uses the latest available documentation by providing it with links to those docs.
Coding agents often use outdated code because of the model’s knowledge cutoff. The fix is to provide the agent with the latest API documentation.
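To make the kind of drift concrete, here is a sketch of what this looks like with the OpenAI Python SDK: agents with an old cutoff tend to produce the pre-1.0 call style, while the current docs use a client object. The model names are just examples, and the SDK may of course have evolved further by the time you read this.

```python
# What an agent with an old knowledge cutoff often writes (pre-1.0 SDK style,
# no longer supported in openai>=1.0):
#
#     import openai
#     response = openai.ChatCompletion.create(
#         model="gpt-3.5-turbo",
#         messages=[{"role": "user", "content": "Hello"}],
#     )

# What it writes after being pointed at the latest documentation (openai>=1.0):
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o",  # example model name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```

Without the link to the current docs, the agent has no way of knowing the first form was ever deprecated.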
Provide IaC stack as context
Another technique I utilize is to provide information about my infrastructure as code (IaC) stack as context to my coding agent. This is incredibly useful when using an agent to check out production logs (which you should do).
I started using this technique after noticing my agent was spending a lot of time finding basic information, such as the names of my database tables. For example, if the agent wanted to read from a table, it first had to list all tables, guess which one was relevant, and try it. If that failed, it had to try a different table.
This takes a lot of time and tokens, costing you both efficiency and money, and is thus something you need to avoid.
To provide my agent with all the IaC context, I had an agent go through all of the relevant IaC repositories and create a single Markdown file containing all relevant context, for example, the names of all my database tables. I then provide this file as context to my coding agent whenever it’s relevant.
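As a sketch of what such a generated context file might contain (every name below is hypothetical), it’s essentially a compact inventory the agent can read instead of discovering everything at runtime:

```markdown
# infrastructure-context.md (hypothetical example)

## Database tables
- `users_prod` — user profiles
- `orders_prod` — order records

## Services and log groups
- `checkout-service-prod`
- `email-worker-prod`

## Misc
- Cloud region: `eu-west-1`
```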
New threads for new contexts
Another simple technique I use is to start a new thread whenever I’m dealing with a new context. For example, if I’ve just finished implementing a new feature and now want to fix a bug, I almost always start a new thread in Cursor.
The reason is that while implementing the new feature, the model accumulates a lot of context that is completely irrelevant to fixing the bug. This not only fills up the context window but can also act as noise, distracting the model from more relevant information.
Thus, whenever you change contexts, make sure to start a new thread. This could be after you’ve implemented a new feature and want to fix a bug, or after you’ve fixed a bug and want to check out production logs with your agent.
This works well because the important context that should be stored across threads is stored in AGENTS.md, as I discussed in an earlier section.
Conclusion
In this article, I’ve covered four specific techniques I use to optimize the context of my coding agents. These techniques make me a significantly more efficient engineer, because my coding agents can work much more effectively. I recommend trying them out to see whether they work well for you, and experimenting with new techniques and approaches of your own. Whenever you notice your coding agents are unable to do something, start ideating immediately about how to make them capable of performing such tasks.
👉 My Free Resources
🚀 10x Your Engineering with LLMs (Free 3-Day Email Course)
📚 Get my free Vision Language Models ebook
💻 My webinar on Vision Language Models
👉 Find me on socials:
🧑💻 Get in touch
✍️ Medium
























