Context is a critical factor in getting the output you want from AI. Striking the right balance of context is hard, especially when you're trying to manage your usage. AI rarely objects to unnecessarily compute-intensive operations or ambiguous prompts, but that doesn't mean there isn't a sharper way to prompt and get the result you want.
That’s context engineering. And once you understand it, you stop tweaking prompts and start designing something much more durable.
This article highlights several current methods for managing context, and how I'm applying these lessons to optimize my own workflows. It kicks off a series I'm writing as I learn: following my article on Karpathy's local wiki setup, I've built up a wealth of wiki articles worth turning into blog posts.
This article skews toward Claude Cowork and Claude Code, my current tools of choice, but I look forward to trying the same principles with Codex and ChatGPT (or Antigravity) as time allows.
What Context Engineering Actually Is
Context engineering is the discipline of controlling what information your AI has access to, when it sees it, and how it’s structured.
It’s not about writing clever prompts. It’s about building the environment your AI operates in — so that by the time you type anything, Claude already knows your voice, your standards, your project, and your goals. You’re not asking it to figure those things out every time. You’ve already built them in.
The difference in output quality is dramatic. Not marginal — dramatic.
Here’s where most people are leaving that quality on the table.
The Projects vs Local Folder Distinction
If you use Claude’s Projects feature and think you’ve solved the memory problem, I’ve got some bad news.
Projects let you upload files — your brand guidelines, your writing samples, your company information — and Claude references them across conversations. That part works. But here’s what nobody tells you: Claude can never update those files. They’re read-only. If you have a great conversation about strategy and want to save what you decided, you have to manually export it. Projects give you the illusion of memory without the reality of it.
Compare that to a local folder — which is how Claude Code and Claude Cowork work. Here, Claude reads and writes to files on your machine. You finish a session, Claude logs what was decided, what patterns emerged, what to remember next time. You open a new session, Claude reads that file, and picks up exactly where you left off. The next session is smarter than the last one. That’s real persistent memory. It compounds.
Practical rule: if the output of a conversation should matter beyond that session — if you’re building something, refining something, running a recurring workflow — use a local folder. Not a project.
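As a concrete sketch of what that persistent memory can look like, here's a hypothetical session log that Claude appends to at the end of each working session. The filename and structure are my own convention, not anything built into Claude:

```markdown
<!-- decisions.md — Claude appends a section here at the end of each session -->

## Session 2026-01-12
- Decided: launch email goes out Tuesday, not Friday
- Pattern noticed: drafts land better when the CTA is in paragraph two
- Next time: start from the v3 outline, not v2
```

Next session, Claude reads this file first and picks up with those decisions already in hand.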
The CLAUDE.md File: Your AI’s Frontal Cortex
If you take one thing from this article, let it be this.
A CLAUDE.md file is a plain Markdown file that sits in the root of your project folder. Claude reads it before doing anything else in that folder. Every session. Without being asked. (If you use Codex, this file is called AGENTS.md instead, but everything else applies.)
Think of it as the standing brief you’d hand a new team member. Not a full manual — the key stuff. What this project is for. The rules that matter. The mistakes that have already been made so they don’t get made again.
What to put in it:
- What this project is and what you’re trying to achieve
- Your tech stack or workflow (whatever’s relevant to the task)
- Formatting and output preferences
- Common mistakes Claude makes in this context — and the corrections
What to leave out: everything else. Keep it under 100 lines. The goal is signal density, not completeness. A bloated CLAUDE.md is as useless as no CLAUDE.md — the AI loses the thread in a sea of rules.
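To make that concrete, here's a hypothetical CLAUDE.md for a content project. The specific rules are illustrative, not a template — yours should record your own project's lessons:

```markdown
# CLAUDE.md

## What this is
Weekly newsletter for B2B SaaS founders. Goal: one actionable idea per issue.

## Workflow
Drafts live in /drafts, final copy in /published. One file per issue, named YYYY-MM-DD.md.

## Output preferences
- Short paragraphs, no bullet-point walls
- Subject lines under 45 characters
- Never open with a rhetorical question

## Known mistakes (and corrections)
- Don't invent statistics; flag any claim that needs a source
- Stop writing "in today's fast-paced world" — ever
```

Notice the whole thing fits on one screen. That's the signal density the section above is asking for.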
One of the most practical things I've found: whenever Claude does something wrong, I tell it to update the CLAUDE.md. Claude writes surprisingly good rules for itself. Over time the file becomes a record of every lesson learned in that project, automatically. That's the compounding effect in action. For a hands-on setup guide, this article on making Claude actually remember walks through the full CLAUDE.md structure from scratch.
Progressive Disclosure: Only Show What’s Needed, When It’s Needed
Once you’re past the basics, here’s where context engineering gets genuinely interesting.
The instinct when building AI workflows is to frontload everything — dump all your instructions, all your reference material, all your context into one place so Claude always has it. This feels thorough. In practice, it degrades quality. An overloaded context window means the AI is working harder to find what’s relevant and more likely to drift or miss things.
The better approach is progressive disclosure — loading context in layers, only when it’s relevant:
Layer 1 (always on): Your role, your communication style preferences, maybe your industry. Ultra-minimal. Under 5 lines. This lives in your global settings.
Layer 2 (per-project): The CLAUDE.md in your project folder. Covers the specific context for this area of work. Loaded every time you open that folder.
Layer 3 (per-task): Reference files or skill instructions that only load when you’re doing a specific type of task. Your brand guidelines when writing copy. Your technical spec when building a feature.
The result: Claude isn’t reading your entire brand bible every time it writes a one-line reply. It’s reading exactly what it needs, and nothing more.
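One way to picture the three layers is as a simple assembly step. This is a sketch of the idea only — the paths, filenames, and function are my own invention, not how Claude actually loads context internally:

```python
from pathlib import Path

def build_context(project_dir, task_files=()):
    """Assemble context in layers: global -> project -> task."""
    layers = []

    # Layer 1 (always on): ultra-minimal global preferences.
    global_prefs = Path.home() / ".claude" / "CLAUDE.md"
    if global_prefs.exists():
        layers.append(global_prefs.read_text())

    # Layer 2 (per-project): the standing brief for this folder.
    project_brief = Path(project_dir) / "CLAUDE.md"
    if project_brief.exists():
        layers.append(project_brief.read_text())

    # Layer 3 (per-task): reference files loaded only for this task.
    for f in task_files:
        layers.append(Path(f).read_text())

    return "\n\n---\n\n".join(layers)
```

The point of the sketch is the ordering: each layer is broader and more stable than the one after it, and the per-task layer is empty unless the task actually needs it.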
Reference Documents Over Prompts
Here’s something I’ve seen multiple AI practitioners converge on independently: the secret to good AI output isn’t better prompting, it’s better reference documents.
The principle is simple. Instead of trying to describe your voice, your audience, your standards in the prompt itself — write it down properly, once, in a reference file, and let the AI read it.
This sounds obvious but the implication is significant. If you’ve already written a thorough explanation of your process for humans — a course module, a training document, a workflow SOP — you already have most of a high-quality context file. The thinking is done. You just need to add the framing.
Reference documents that consistently improve output:
- Voice and tone guide — examples of writing that sounds like you, examples that don’t. If you’re not sure where to start with a tone guide, this article on why AI sounds fake covers exactly how to fix it.
- Ideal customer profile — who you’re writing for, what they care about, what they struggle with
- Brand guidelines — visual and written standards (even for text tasks, having the full picture helps)
- Examples of “good” — 3-5 diverse examples of outputs you want Claude to emulate
The bigger these files get, the better the outputs tend to get — up to a point. A 5,000-word reference document often produces noticeably better outputs than a 200-word version of the same material. The AI is working with the same richness a human would have from proper onboarding.
When AI output is consistently off, the fix is almost never a better prompt. It’s a more complete reference document.
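One way to keep those reference documents organized so the per-task layer can find them — this layout is my own convention, not a requirement:

```
project/
├── CLAUDE.md               # standing brief; points at the files below
└── reference/
    ├── voice-and-tone.md   # examples of writing that sounds like you, and that doesn't
    ├── customer-profile.md # who you're writing for and what they struggle with
    ├── brand-guidelines.md # visual and written standards
    └── good-examples/      # 3-5 diverse outputs to emulate
```

The CLAUDE.md then only needs one line per file ("read reference/voice-and-tone.md before writing copy") instead of carrying all that material itself.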
Instruction Rot: The Problem That Sneaks Up On You
The last piece most people miss — and the one that causes the most mysterious quality degradation.
As you build your context files over time, you add rules. Claude formatted something wrong — add a rule. Claude missed a tone thing — add a rule. This feels like improvement. Past a certain point, it isn’t.
Instruction rot is what happens when accumulated instructions actively degrade output quality. There are three types:
Stale instructions — your process changed but your instructions didn’t. Claude’s still following rules for a workflow you abandoned six months ago.
Contradictory instructions — rules added at different times that cancel each other out. “Be concise” and “be thorough” in the same file. When Claude hits a contradiction, it picks one at random. This is the root cause of chaotic, inconsistent outputs.
Redundant instructions — rules that were necessary for older models but aren’t needed anymore. State-of-the-art Claude understands “warm and professional tone” without a list of 8 sub-rules explaining what that means.
The fix is a monthly review. Read your instructions yourself first — you’ll spot the obvious problems. Then paste them into a fresh chat and ask Claude to identify stale, contradictory, and redundant rules. In practice, 30–50% of rules can typically be removed without any quality loss. Often quality improves.
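For the Claude half of that review, a prompt along these lines works — the wording is mine, so adapt it to your project:

```
Here are my standing instructions for this project. Review them and flag:
1. Stale rules — they reference a workflow or format I may have abandoned
2. Contradictory rules — pairs that pull in opposite directions
3. Redundant rules — things a current model does well without being told
For each flagged rule, recommend keep, merge, or delete, with one line of reasoning.
```

Running this in a fresh chat matters: you want the audit done without the project's own context biasing it.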
Source: Dylan Davis, YouTube
Putting It Together
Context engineering isn’t a single technique — it’s a way of thinking about AI work.
Most people approach AI as a conversation: ask, get an answer, refine the ask. Context engineering shifts that to a different frame: build the environment, then operate inside it. The conversation becomes the easy part because the foundation is already there.
The practical stack looks like this. A local folder (not just a project) for anything that matters long-term. A CLAUDE.md in every project folder with the standing brief. Reference documents for the recurring context — voice, audience, standards. Progressive disclosure so the AI isn’t reading everything at once. And a monthly review to strip out the rot before it accumulates.
Start with the CLAUDE.md. Get that right for one project. The rest follows naturally.
The prompt isn’t the problem. It never was.
