27 Jan 2026
4 MIN READ
Context Rot: When Your AI Gets Brain Fog (And What To Do About It)
WRITTEN BY
Adrian Griffith
Context is critical when working with AI. The more relevant information you give it, the better it performs. Simple, right?
Except there's a limit. Give it too much context (particularly over long, complex conversations), and something weird happens. Your AI starts to struggle. It forgets things you told it earlier. It loses sight of the original goal. It becomes less accurate, more confused, more... useless.
This is 'context rot'. Or as I prefer to call it: AI brain fog.
Who needs to worry about this?
Honestly? Not everyone.
If you use AI in quick bursts – dipping in, getting an answer, getting out – you'll probably never encounter it. Especially if you start fresh conversations each time.
Context rot becomes a problem when you're working on longer, more complex tasks with your AI. Think:
Extensive data analysis spanning multiple hours
Ongoing problem-solving sessions over days
Complex coding projects with lots of back-and-forth
Any conversation where you're building on previous responses continuously
Rule of thumb: if your conversation wouldn't fit on a single printed page, you're entering rot territory.
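If you want something more concrete than 'a single printed page', a rough word count gets you close enough. Here's a quick Python sketch; the 500-words-per-page figure is an approximation for illustration, not an official threshold:

```python
# Rough heuristic: has this conversation outgrown 'a single printed page'?
# Assumes ~500 words per printed page, which is only an approximation.

def in_rot_territory(conversation_text: str, words_per_page: int = 500) -> bool:
    """Return True once the conversation is longer than one printed page."""
    return len(conversation_text.split()) > words_per_page

print(in_rot_territory("blah " * 300))   # False - you're fine
print(in_rot_territory("blah " * 2000))  # True - start thinking about a fresh chat
```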
What causes it?
Your AI re-reads the entire conversation history every time you send a message. Early on, that's fine. But as the thread grows, it's like asking someone to recap 'A Brief History of Time' before answering a simple question. Eventually, the sheer volume overwhelms the model's ability to track what's actually important.
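To make that concrete, here's a minimal sketch of what a chat interface is doing behind the scenes. The 'call_model' function below is a stand-in for whatever model or API you use, not a real library call; the point is simply that the full history gets re-sent, and re-read, on every single turn:

```python
# Illustrative only: each new message is appended to the history, and the
# *entire* history goes back to the model every turn.

def call_model(messages: list[dict]) -> str:
    """Stand-in for a real LLM call - here it just reports how much it had to read."""
    total_chars = sum(len(m["content"]) for m in messages)
    return f"(reply after re-reading {total_chars} characters of history)"

history: list[dict] = []
questions = ["Analyse this dataset", "Now chart it by region", "Remind me of the original goal?"]

for question in questions:
    history.append({"role": "user", "content": question})
    reply = call_model(history)  # the model re-reads everything so far
    history.append({"role": "assistant", "content": reply})
    print(f"History is now {len(history)} messages long; latest reply: {reply}")
```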
How to avoid (or fix) context rot
I've read plenty about context rot's existence, but not much about how to actually deal with it. So here's what works for me:
1. Start with a clean, sharp 'killer prompt'
Yes, I bang on about giving AI detailed context. But there's a difference between 'detailed' and 'bloated'.
Give it everything it needs. Nothing it doesn't.
The tip: Use AI to write and optimise your prompts. (I know, inception. But it works.)
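If you'd rather see what that looks like in practice, here's a minimal sketch. The wording of the meta-prompt is just one way to phrase it, and the draft prompt is an invented example:

```python
# Sketch of a prompt-optimising meta-prompt. Send `meta_prompt` to whatever
# model you use and take its answer as your real, tightened-up prompt.

draft_prompt = """
Analyse the attached sales figures, flag any month where revenue fell more than
10% versus the previous month, and suggest two or three likely causes for each.
Keep the write-up under 300 words.
"""

meta_prompt = (
    "Rewrite the prompt below so it is as clear and compact as possible. "
    "Keep every requirement, cut everything that isn't needed, and return "
    "only the rewritten prompt.\n\n" + draft_prompt
)

print(meta_prompt)  # paste this into your AI (or send it via an API call)
```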
2. Use specialist tools for specialist tasks
General-purpose LLMs are brilliant, but they're also a touch 'jack of all trades'. If you're doing something specific (for example, coding), use a tool built for that purpose.
For coding: Try Claude Code or Cursor instead of a general chat interface. These tools have system prompts optimised for development work and they do the 'do one thing well' thing, erm… well.
For data analysis: Use tools that pair built-in analytics with AI, rather than trying to do everything through a standard LLM.
The right tool means less extraneous context to manage in the first place, plus 'mission-focused' system prompts.
3. Start new topics in fresh chats
This sounds obvious, but some folks don't do it. They just keep chatting away in the same thread about completely different topics.
Every time you ask something new, your AI is trudging through the entire history trying to understand what's relevant. That's exhausting (for it) and inefficient (for you).
The tip: New topic? New chat. And make sure you're logged in so your AI can access your persistent knowledge and preferences; that way you don't lose genuinely useful context.
4. The 'pass the baton' technique
This is my most effective method for dealing with long threads, and it's criminally underused.
When I sense the AI is getting brain fog, I ask it to pack its own bags before we move to a new thread. Here's how:
"Hey [AI name]. I want to tackle the next part of this in a new thread, but I don't want to start from scratch. Can you summarise the key points of this conversation and create a prompt for your future self? Stick to what's essential. Leave out anything superfluous. Write it in the best way for you to understand."
Then copy that response, start a new chat, paste it in, and continue.
It sounds a bit meta, but it works brilliantly. You get all the important context without the baggage. And remember: writing good prompts is the AI's job, not yours.
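And if you work with models through an API rather than a chat window, the baton pass is easy to script. Here's a minimal sketch, assuming the Anthropic Python SDK and an ANTHROPIC_API_KEY in your environment; the model name and conversation content are placeholders, so swap in your own:

```python
# 'Pass the baton' via the API: ask the model to summarise the long thread,
# then seed a brand-new thread with only that summary.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
MODEL = "claude-sonnet-4-5"     # placeholder - use whichever model you normally use

long_thread = [
    {"role": "user", "content": "Help me analyse this quarter's churn data..."},
    {"role": "assistant", "content": "...many turns of analysis later..."},
    {"role": "user", "content": (
        "I want to tackle the next part of this in a new thread, but I don't want to "
        "start from scratch. Summarise the key points of this conversation and write "
        "a prompt for your future self. Stick to what's essential; leave out anything "
        "superfluous."
    )},
]

# 1. The old thread packs its own bags.
baton = client.messages.create(model=MODEL, max_tokens=1024, messages=long_thread)
handover = baton.content[0].text

# 2. A fresh thread starts with nothing but the handover plus the next task.
fresh_thread = [{"role": "user", "content": handover + "\n\nNext task: ..."}]
reply = client.messages.create(model=MODEL, max_tokens=1024, messages=fresh_thread)
print(reply.content[0].text)
```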
The bottom line
Context rot is real, but it's manageable. The key is being intentional about how you structure your conversations with AI.
Think of it like having a meeting. You wouldn't invite someone to a two-hour meeting, recap everything from the last five meetings, then ask them one simple question. You'd give them the relevant brief and get to work.
Treat your AI the same way. Clear context. New chats for new topics. The right tools for the job. And when things start getting foggy, hit refresh.
Your AI will thank you. Well, it won't actually thank you because it's not sentient. But it will praise you because it's been taught to, and it will perform better, which is basically the same thing.
