How to give your memory to your agents
Use progress.md


Before you jump in, here’s something you’ll love.
I’ve put together a curated list of MCP servers and MCP registries to support your development workflows. When you subscribe to the newsletter, you’ll get instant access to this resource in your welcome email.
Bonus: I’m also working on a collection of free educational resources for building AI agents at every skill level. Coming soon for subscribers. Stay tuned!

We've all been there. You give your AI agent a whole codebase, and it starts refactoring with confidence. Then, over time, it starts to create AI slop. And more AI slop. It suggests changes to files you fixed a while ago. The context window says that 180K tokens have been used, but the agent has somehow forgotten what you're really building.
Putting whole codebases, chat histories, and debug logs into your agent's context doesn't make it smarter, though. Modern LLMs are basically pure functions: garbage in, garbage out. Give them laser-focused inputs, and you get precision back.
Use progress.md as memory
Instead of managing sprawling chat transcripts, it’s better to maintain one file that acts as the agent's working memory. Here’s a template you can use as your progress.md file:
```markdown
# Project Overview
- Purpose: Refactor auth system to support OAuth2
- Tech Stack: Next.js 14, Prisma, PostgreSQL
- Architecture: /docs/auth-design.md

# Progress / Status
- [x] Done: OAuth provider integration
- [ ] Current: Implementing token refresh logic
- [ ] Next: Add refresh token rotation

# Task List
- [Update refresh logic] — auth/tokens.ts:L45-89 — add rotation — expect passing tests
- [Add middleware] — middleware/auth.ts:L12-34 — validate tokens — expect 401 on expired

# Style Guide
- Use async/await over .then() chains
- Prefer early returns over nested conditionals

# Workflow Instructions
- Build: npm run build — expect 0 exit
- Test: npm test auth — expect all green
- Type check: tsc --noEmit — expect clean

# References
- OAuth flow diagram: /docs/oauth-flow.png
- Token rotation RFC: /docs/rfc-token-rotation.md

# Notes
- Auth service expects Bearer prefix (breaks without it)
- PostgreSQL requires explicit UTC timestamps
```
How to use progress.md

Keep context usage under 40%. When you approach that threshold, update progress.md before compacting, then restart the session so the next session loads from this file instead of starting fresh.
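If you script your agent harness, the 40% rule can be automated. Here's a minimal sketch, assuming a rough 4-characters-per-token heuristic and a hypothetical 200K-token context window; both numbers are assumptions you should tune for your model:

```python
# Rough sketch: decide when to rewrite progress.md before compacting.
# Assumes ~4 characters per token (a crude English-text heuristic)
# and a hypothetical 200K-token context window.

CONTEXT_WINDOW_TOKENS = 200_000
REFRESH_THRESHOLD = 0.40  # rewrite progress.md once usage crosses 40%


def estimate_tokens(text: str) -> int:
    """Crude token estimate: ~4 characters per token."""
    return len(text) // 4


def should_refresh(conversation: str) -> bool:
    """True once the conversation fills 40% of the context window."""
    usage = estimate_tokens(conversation) / CONTEXT_WINDOW_TOKENS
    return usage >= REFRESH_THRESHOLD
```

A real harness would use the model's own token counter instead of a character heuristic, but the shape of the check stays the same.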
Don't update the file after every agent action. Rewrite progress.md only before compacting or starting a new session.
Declare in AGENTS.md that this file must always be loaded at the start of a session, so every IDE and agent runner behaves the same way.
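For example, a minimal AGENTS.md entry might look like this (the wording is illustrative; adapt it to whatever convention your tools follow):

```markdown
## Session startup
- Always read progress.md before doing anything else.
- Treat progress.md as the single source of truth for project state.
- Before compacting or ending a session, rewrite progress.md with the current state.
```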
For larger projects, break knowledge into modular markdown files (feature_todo.md, sprint_todo.md, prd.md, etc.) and have progress.md reference or link them. Update or rewrite individual files as needed to keep context targeted, then inject only the relevant sections into the agent for the task at hand.
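Injecting only the relevant section is easy to script. Here's a minimal Python sketch: `extract_section` is a hypothetical helper (not a standard API) that pulls the body of one `# Heading` section out of a markdown file like the template above:

```python
import re


def extract_section(markdown: str, heading: str) -> str:
    """Return the body of one `# Heading` section, up to the next `# ` heading.

    Returns an empty string if the heading is not found.
    """
    pattern = rf"^# {re.escape(heading)}\n(.*?)(?=^# |\Z)"
    match = re.search(pattern, markdown, flags=re.M | re.S)
    return match.group(1).strip() if match else ""


# Usage: inject only the Notes section into the agent's prompt.
progress = "# Project Overview\n- Purpose: demo\n# Notes\n- Bearer prefix required\n"
notes = extract_section(progress, "Notes")
```

This keeps each prompt focused on one slice of project state instead of the whole knowledge base.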
Be mindful of context. In longer sessions, progress.md itself can grow too large for the context window, causing context overflow and degraded model performance. Periodically prune stale history, irrelevant context, and completed progress items.
Next week, I'll show you why you should keep your context below 40%. Until then, try replacing your next massive context dump with a tight progress.md file. Your agent (and your sanity) will thank you.
If you’re not a subscriber, here’s what you missed this month
Subscribe to get posts like this in your inbox every week.


Want to get the most out of ChatGPT?
ChatGPT is a superpower if you know how to use it correctly.
Discover how HubSpot's guide to AI can elevate both your productivity and creativity to get more things done.
Learn to automate tasks, enhance decision-making, and foster innovation with the power of AI.

🚢 What’s shipping this week?
Google has built Stax, an experimental developer tool to streamline the LLM evaluation lifecycle.
xAI has released Grok Code Fast 1, an agentic coding model that prioritizes blazing speed and low cost over perfect accuracy (an intentional trade-off, aimed at tackling smaller tasks iteratively).

⭐️ Trending open-source repos

👓️ Worth reading for Devs


My Recommendations
Techpresso gives you a daily rundown of what's happening in tech and is read by 300,000+ professionals.
The Deep View: The go-to daily newsletter for 250k+ founders and knowledge workers who want to stay up to date with artificial intelligence.
Looking for more such updates in your inbox? Discover other newsletters that our audience loves to read here.
