
What AI Mistakes Reveal About Your Project's Documentation

Patrick Hammond
CTO
January 7, 2026 · 3 min read

Errors Are Clues

Last year we replatformed our own website. Working closely with AI, we were making quick progress. Then the same questions started surfacing. Why does this page sound different from that one? Why doesn’t this sound like us?

Then came the mistake that made it click: our AI tooling had rebuilt a card component that already existed. Twice. Slightly different padding, font sizes, borders. Two versions of something that shouldn’t exist.

Our AI tools couldn’t find a centralized component library because we hadn’t told them about one. The inconsistent copy? Our brand voice guidelines were scattered across files, some outdated, some contradictory, some only in people’s heads.

These weren’t AI failures. They were project documentation gaps we’d never closed.

In software development, context engineering starts with your project documentation.

As a team that builds custom software for clients, we’ve seen this pattern on projects where AI-assisted development is new. The first few weeks expose undocumented team knowledge. And catching that sooner means you can start improving the entire system, not just the AI output.

The teams that grow with AI aren’t the ones chasing perfect output. They’re the ones treating every gap AI exposes as a chance to strengthen the system for everyone, human and machine alike.

Ask “What Was Missing?” Not “What Went Wrong?”

When AI gets something wrong, it’s tempting to fix the mistake and move on. Fast and practical. And when you’re under deadline pressure (which is always), stopping to investigate feels like overhead you can’t afford. The fix works. Ship it.

But that reflex quietly kills the chance to improve.

That duplicate component wasn’t random. When we dug in, we found the same root cause behind most of our AI hiccups: knowledge that existed but wasn’t accessible.

  • Undocumented architecture. Decisions that were never written down.
  • Tribal knowledge. Standards that exist only in people’s heads.
  • Implicit rules. Domain knowledge everyone “just knows.”
  • Unspoken expectations. Requirements implied, not stated.

If someone new to the project would struggle with it, your docs don’t cover it. And if AI didn’t have access to that information, the next team member won’t either.

Asking “what was missing?” instead of “what did it get wrong?” shifts the conversation from blame to improvement.

Put It Where the Next Person Will Find It

Knowledge isn’t useful if no one can find it. Most teams already have documentation. It’s just scattered across wikis, Slack threads, and the heads of previous team members. The information exists. The problem is discoverability.

When you capture something, ask: where will the next person actually look? Not where it “should” live by some org chart logic, but where someone will find it.

For us, that meant consolidating our documentation into a single source of truth at the project root, where both humans and AI would encounter it naturally. A few principles that helped:

  • Put decisions near the code they affect. Architecture decisions belong in the repo, not a wiki three clicks away (see the layout sketch after this list).
  • Optimize for the newcomer. If someone new to the project wouldn’t find it in their first hour, it’s not discoverable enough.
  • Update where you work. If updating docs requires switching contexts, it won’t happen. Keep docs in the same tools you already use.
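
To make that concrete, here’s a rough sketch of the kind of layout these principles point toward. The file and folder names are illustrative, not our actual repo:

    project-root/
      CLAUDE.md                  entry point both humans and AI tooling read first
      docs/
        architecture/            one short ADR per significant decision
        brand-voice.md           the copy guidelines that used to live in people’s heads
      src/
        components/
          Card/
            Card.tsx
            README.md            usage notes live next to the component they describe

The exact structure matters less than the property it buys you: someone (or something) starting at the root can follow a short path to every decision that matters.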

We already had a CLAUDE.md/AGENTS.md file at the project root. We improved it by adding pointers to our newly consolidated documentation, then used AI tooling to refine the structure until it worked well for both humans and machines.
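
As a rough, hypothetical sketch (the headings and paths below are illustrative, not our actual file), the pointer section of such a file might look something like:

    # Project guide

    ## Where things live
    - Architecture decisions: docs/architecture/ (one ADR per decision)
    - Brand voice and copy guidelines: docs/brand-voice.md
    - Shared UI components: src/components/ (check here before building anything new)

    ## Ground rules
    - Reuse existing components rather than recreating them
    - Match the tone in docs/brand-voice.md for all user-facing copy

Short, specific pointers like these are what let the tooling answer “does a card component already exist?” before it writes one.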

We went further, building subagents and workflows that automatically reference the appropriate docs (a future post!). But that tooling only works because the docs are in the right place first.

Old Discipline, New Speed

Strong teams have always done this: learned from mistakes, asked why before fixing, strengthened systems instead of blaming individuals. It’s continuous improvement. Nothing new.

What’s new is the speed, on both ends. AI exposes gaps in your team knowledge while you’re still in the middle of the work, not months later when everyone’s moved on. And it can help you close them just as fast. You get the feedback while the context is fresh, while the fix is cheap, while the learning can actually stick.

Teams with the discipline to pause and capture those insights get better every cycle.

With tighter feedback cycles, documentation stops being a chore you do once and forget. It’s a living system that directly shapes how effective your tools are. Treat your documentation with the same care as production code.

If your team keeps hitting the same challenges with AI adoption, the fix probably isn’t better prompting or tooling. It’s better documentation: context engineering in practice. We partner with teams to build these systems. Start a conversation.


Photo by Shamin Haky on Unsplash
