Everybody's celebrating AI coding speed, but nobody's asking what we're losing. Now, a client called me recently. Their production was down, the authentication system was broken, and users couldn't log in. The code was clean, well structured, and tests had been passing for months. But when the team tried to fix it, they couldn't. Not because the code was bad, but because nobody on the team understood how it worked. The AI wrote it six months earlier: a PKCE authentication flow, the token generation,
the code challenges, the redirect handling. The developer who prompted it had moved on. Nobody else had ever read it line by line. A team of engineers staring at their own codebase, and they can't explain their own authentication system. Now, that's not technical debt. That's something new, something worse, and nobody is talking about it yet. I've been in this industry for 15 years, working at companies like Shopify, Brex, Motorola, and Pfizer. I've managed engineering teams, and I've seen every form of technical debt there is, but
I've never seen anything like this. It has a name now: cognitive debt. Today I'm going to show you what it is, why it's more dangerous than technical debt, and the three practices that prevent it. Margaret-Anne Storey, a computer science professor at the University of Victoria, introduced this concept in early 2026. She's been studying developer productivity for over two decades. When she named cognitive debt, it spread through the engineering community in days.
Simon Willison, one of the most respected voices in AI development, amplified it within a week. The moment I read it, everything clicked. Here's the definition: cognitive debt accumulates when AI writes code that humans lose shared understanding of. Not bad code, not messy code. The code can be perfect, beautiful even, but if nobody on the team can explain what it does, how it works, or why it was designed that way, you have cognitive debt.
The most dangerous code in your codebase is the code that nobody understands, even if it works. You might be thinking, knowledge silos aren't new. They've existed since the first senior engineer left the company, and you'd be right. What's new is the scale. AI generates code at a rate that makes this systemic, not occasional. It's not one person who left; it's every module your team prompted into existence. And here's what makes it insidious. Technical debt shows up in your codebase.
You can see it: your linter flags it, your CI pipeline catches it. Cognitive debt is invisible. It lives in your team's heads, or rather, it lives in the gap where understanding should be. Storey called it a silent loss of shared theory. Shared theory: the collective understanding your team has of how the system works, why it was built that way, and where the boundaries are. The data backs her up. GitHub's own research shows developers using Copilot write code about 55%
faster. But the study didn't measure whether anyone could maintain that code six months later. Speed was measured, but understanding wasn't. Now let me go back to that PKCE auth flow. Six months after the AI wrote it, the auth server pushed a breaking change. The on-call engineer opened the code but couldn't follow the flow. They couldn't trace the error handling, and I spent two days not fixing the bug, but explaining the system to the team so they could fix it themselves. That's cognitive debt.
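For context on what that team was staring at: the core of a PKCE flow is genuinely small. Here's a minimal sketch of the code-verifier and code-challenge step, using the S256 method from RFC 7636; the function names are my own, not the client's actual code:

```python
import base64
import hashlib
import secrets

def make_code_verifier() -> str:
    # 32 random bytes -> 43-character base64url string
    # (RFC 7636 allows verifiers of 43-128 characters)
    return base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

def make_code_challenge(verifier: str) -> str:
    # S256 method: base64url-encoded SHA-256 hash of the verifier,
    # with the trailing "=" padding stripped
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# The client sends the challenge with the authorization request,
# then proves possession later by sending the verifier with the token request.
```

A few lines of real logic, and still nobody on the team could trace it six months later. That's the gap between working code and understood code.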
The interest payment is time spent relearning your own system. And here's where it gets interesting. Everyone's measuring AI adoption by velocity. We shipped 40% more features this quarter. Our cycle time dropped by half. The dashboards look great, but those numbers are hiding something. When nobody understands the code, every modification requires going back to the AI. Hey, Claude, explain this function.
Hey, Copilot, what does this service do? You haven't built a faster team. You've built a team that's dependent on the tools to understand their own system. If your team needs AI to understand your codebase, you don't have a tool. You have a dependency. Every team depends on tools, I get it. You depend on your IDE, you depend on documentation, you depend on Stack Overflow, maybe not so much anymore. But here's the difference: your IDE doesn't forget your project between sessions.
Stack Overflow doesn't lose its context. But AI does. Every conversation starts from scratch, but your codebase doesn't. Now run this experiment. That 40% velocity increase? What you want to do is measure incident resolution time. I've seen teams with heavy AI code generation where incidents take three to four times longer to resolve on AI-written modules versus human-written ones. That's the hidden cost nobody puts on the dashboard. Stack Overflow's latest developer survey shows only 29% of developers say they trust
AI-generated code. That's down 11 points from 2024. Developers are seeing this. They didn't have a name for it. Now they do. And let's talk about onboarding. A new engineer joins. Day one, they're supposed to ramp up on the codebase. In the old world, they would read PRs, they would see the history, they'd see the discussions in code review, and the knowledge is embedded in the process. But in the new world, they open a file the AI wrote. There's no PR discussion, because the review was just "looks good to me."
There's no author to ask, because the AI doesn't have office hours. The new engineer is reverse engineering their own team's system, like reading a book in a language nobody speaks. At Shopify, code review wasn't about catching bugs. It was the primary mechanism for knowledge transfer. Junior engineers learned architecture by reading senior engineers' PRs, seniors maintained context by reviewing everything, and code review was the learning loop.
It was how shared theory was built and maintained across the entire team. But AI bypasses that entire loop. The code shows up, it works, it gets merged, but nobody learned anything. Nobody built context. Nobody can debug it at 3 AM without reprompting the AI. And code review was never just about catching bugs. It was always about building shared understanding. AI skips the most important step. Now let me put this in perspective.
Technical debt is like financial debt. You're paying interest whether you see the invoice or not. Every slow feature, every recurring bug, every developer who quits because the codebase is a nightmare, that's interest. But cognitive debt is worse. With financial debt, at least you know you owe money. With cognitive debt, you don't even know where the debt exists. There's no dashboard for team understanding. Your velocity metrics look great right up until the moment they collapse. And I've rebuilt 12 disaster projects in the last couple of years.
And every single one had code nobody understood. Not because the developers were bad, but because nobody treated understanding as a requirement. Technical debt is a slow leak. Cognitive debt is a time bomb. Let that land. A slow leak you can see. You can mop it up, you can fix the pipe. But a time bomb, you don't know it's there. Not until it goes off. And by then you're not debugging code. You're rebuilding trust in the entire system.
And here's the principle: speed without understanding is not velocity. It's just luck. A team's speed with AI is only real if they can debug, extend, and refactor that code without going back to the AI. If they can't, the speed is borrowed, and borrowed speed always comes due. Now, there are three practices. None of them slow you down, and all of them help you prevent cognitive debt. So, practice one.
You want to review like a junior. When AI generates code, don't review it like a senior checking for bugs. Review it like a junior trying to learn. Ask three questions. Can I explain every line to a teammate? Do I understand why this approach was chosen? And could I modify this code confidently without reprompting? If the answer to any of these is no, you don't merge it. You sit with it. You understand it, or you rewrite the parts that you don't. That doesn't mean you spend a week studying every function.
It means you spend 10 to 15 minutes asking, could I debug this at 3 AM? If yes, ship it. If no, take the time now, and compare it to the six-hour incident when nobody understands the code. The math is obvious. Don't review AI code only for correctness. Review it for understanding. If you can't explain it, you don't own it. Now, practice two: explain to ship. Before any AI-generated code ships, the author writes a one-paragraph explanation: what
it does, why it was designed this way, and what the known edge cases are. Not a comment in the code, not a commit message: a decision record. Something like: this service handles token refresh using PKCE, here's why we chose it, and here are the edge cases to watch for. It takes about five minutes to write. Now you might be thinking, we tried documentation before and nobody kept it up. Here's the difference. You're writing one paragraph at the moment you understand the code.
Not retroactively, not as a separate task, but the moment you merge. Make it part of the PR template, automate the reminder. In fact, if you can, put it right next to the code: in a README file, even in the JSDoc. Those five minutes of documentation today save two days of archaeology tomorrow. And those first two practices catch 80% of cognitive debt. But the third one is the one that makes your team bulletproof. Practice three:
rotate the context. Regularly rotate engineers through AI-generated modules. Not to rebuild them, but to review them, understand them, and update the decision records. This isn't about making everyone an expert in everything. It's about baseline understanding of the critical paths. You want to start with your authentication, your payments, and your data pipeline: the systems where failure at 3 AM means revenue loss. These are the core critical flows.
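A decision record, the artifact practice two produces and practice three keeps fresh, can be a few lines in a README next to the module. Here's one possible shape; the service name, dates, and details below are all hypothetical:

```markdown
## Decision record: token-refresh service

**What it does:** Refreshes access tokens using the PKCE flow;
retries once on network failure, then surfaces the error to the caller.

**Why this design:** PKCE lets us avoid storing a client secret in the
mobile app. We rejected a shared-secret flow for that reason.

**Known edge cases:** Clock skew over 30 seconds causes premature token
expiry; the refresh endpoint rate-limits after 5 attempts per minute.

**Last reviewed:** 2026-03-12, by the engineer rotating through this module.
```

The "last reviewed" line is what ties the two practices together: each rotation updates it, so you can see at a glance which critical paths nobody has understood recently.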
At Brex, we handled billions in transactions. We rotated on-call across every service, not because we expected everyone to be an expert, but because we wanted baseline understanding. When something broke, nobody was starting from scratch. Every engineer should be able to explain any critical path in the system. Not write it from memory, explain it. That's the bar. Here's what happens when you do this. One engineering team I advise started these three practices six months ago, and their mean time
to resolution, one of their DORA metrics, dropped from four hours to 45 minutes. Same codebase, same team; the only difference was that they understood it. Onboarding dropped from months to weeks. New engineers had decision records, not just code, and they understood the why, not just the what. The team stopped being dependent on AI for their own system. They still used AI to write code faster, but they understood what was written. And that's the difference between a tool and a crutch.
The fastest team isn't the one that writes the most code; it's the one that understands all of it. Now here's what I want you to do. Go to your codebase and find the last five PRs where AI generated the majority of the code. Ask each engineer to explain that code to you, without opening the file, without reprompting, from memory. Three questions. What does it do? Why was it built this way? And what breaks if you change line 47?
If they can answer all three, there's no cognitive debt, and you can keep shipping. If they stumble on any one, you have a problem, and now you know where to start. Review like a junior, explain to ship, and rotate the context. Margaret-Anne Storey named this problem in early 2026. Most engineering teams haven't heard of it yet, and the teams that fix this now, before it compounds, are the teams that win in the next five years of AI-assisted development. The teams that ignore it will find out when something breaks and nobody can fix it.
Cognitive debt is invisible until it isn't, and these practices aren't new. They're engineering fundamentals that most teams forgot when AI made them feel slow. Every team that succeeds with AI will look back and realize the speed wasn't the advantage; the understanding was. If you're building something with AI and you want a team that operates with discipline, not just speed, that's what we do at Mo. Our link is in the description. You can fix it now, or explain it later.
End of transcript