Debugging When Claude Code Won't Stop Gaslighting You
We've all been here. You ask Claude Code to fix a bug, it confidently says "Fixed!", you run the code, and it still crashes. You paste the error back, it says "Ah, I see the issue now — fixed!", you run it again, same crash. Three rounds in, you start to wonder whether you are the problem. This page is about breaking out of the confident-wrong loop instead of letting it eat your whole afternoon.
Stop letting it guess
The usual cause of this loop is that the model doesn't actually know why the code is failing, but its instinct is to produce a plausible-looking fix rather than admit uncertainty. The way to stop it is to refuse the fix and demand investigation first. A prompt that works: "Don't fix anything yet. List three specific, competing hypotheses for why this fails, and tell me the smallest experiment to distinguish them." This shifts the model from guess-and-check into actually thinking about the problem space.
Give it the actual error, not a paraphrase
Copy-paste the full stack trace, the full failing command, and the exact line numbers. LLMs are pattern matchers — the more specific the error string, the more likely the model is to lock onto the real issue instead of a generic lookalike. If the error is "TypeError: Cannot read properties of undefined (reading 'x')", include the full trace with file names and line numbers. Don't summarize.
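To see why a paraphrase misleads, here's a minimal repro (variable names are hypothetical): the one-line summary "x is undefined" points at the wrong variable, while the real problem is the object holding it.

```javascript
// Minimal repro, hypothetical names: the paraphrase "x is undefined" is wrong.
// It's response.data that is undefined; x never gets a chance to exist.
const response = { data: undefined };

let message = "";
try {
  console.log(response.data.x);
} catch (err) {
  // err.stack carries the file names and line numbers; paste all of it.
  message = `${err.name}: ${err.message}`;
}
console.log(message); // e.g. "TypeError: Cannot read properties of undefined (reading 'x')"
```

The full trace also distinguishes which call site failed when the same line runs from several places, which a paraphrase never can.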
Narrow the scope until the AI has to be right
If Claude Code can't fix a bug in a 500-line file, copy just the 30 relevant lines into a new conversation and ask it to debug that snippet. With less surface area to hallucinate over, its failure modes shrink. Sometimes the bug becomes obvious to you while you're extracting the snippet, which is itself a win.
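Extracting the snippet usually means stubbing whatever it depends on so it runs on its own; a sketch, with all names hypothetical:

```javascript
// The ~30 lines under suspicion, lifted out of a large file (names hypothetical).
// Its one dependency is stubbed so the snippet is self-contained and runnable.
const fetchConfig = () => ({ retries: 3 }); // stub: real version hits the network

function buildRetryPlan(overrides) {
  const config = { ...fetchConfig(), ...overrides };
  // Exponential backoff in ms: 100, 200, 400, ...
  return Array.from({ length: config.retries }, (_, i) => 2 ** i * 100);
}

console.log(buildRetryPlan({ retries: 2 })); // [ 100, 200 ]
```

Pasting something like this into a fresh conversation means the model reasons only about code that can actually contain the bug, and the stub documents your assumption about what the dependency returns.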
Ask for uncertainty explicitly
Models are trained to sound confident. You can override that with a prompt like "If you're not sure, say you're not sure. Don't invent a fix." It sounds silly, but it noticeably changes behavior — the model will often answer "Actually, I'd need to see what X returns to know for sure" instead of guessing. That answer is more useful than a wrong fix.
Drop back to manual debugging
If the AI is still wrong after two rounds, stop asking it and debug the old-fashioned way: add console.logs, step through with the debugger, and print the actual values of the variables the AI keeps guessing about. Once you know what's really happening, tell the AI exactly what you found and ask for the fix. Now it's working from ground truth instead of inference, and the success rate jumps.
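A sketch of that handoff, with hypothetical names: one log line turns a guessing match into a one-shot fix.

```javascript
// Hypothetical bug: cartTotal returns "112.99" and the AI keeps "fixing" the math.
function cartTotal(cart) {
  // Step 1: print the actual values instead of letting the AI infer them.
  console.log("items:", JSON.stringify(cart.items));
  return cart.items.reduce((sum, item) => sum + item.price, 0);
}

// Step 2: the log shows one price is the string "12.99" (straight from the API),
// so + was concatenating, not adding. Now the prompt writes itself:
//   "items[1].price is the string '12.99', not a number. Fix cartTotal."
const cartTotalFixed = (cart) =>
  cart.items.reduce((sum, item) => sum + Number(item.price), 0);
```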
The 'show me the line' trick
When Claude Code claims it fixed something, ask it to quote the exact line it changed and explain why that fixes the bug. Sometimes the model discovers mid-explanation that its fix doesn't actually address the error. Cheap sanity check.
Use the checklist as a bug hunt
Sometimes the original bug is a symptom of one of the canonical AI-code hazards. If you're getting weird auth issues, check the auth-check item on our review checklist. If the crash only happens with certain input, check null/empty handling. The checklist is a list of usual suspects, and the category hit rate is high enough that it's often faster to run through it than to debug from scratch.
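For the "crash only with certain input" case, the checklist item usually translates to a missing guard; a minimal sketch (hypothetical function):

```javascript
// Hypothetical: passes every test fixture, then returns NaN in production
// the first time a quiet hour produces an empty samples array.
function averageLatency(samples) {
  if (!Array.isArray(samples) || samples.length === 0) {
    return 0; // guard the null/empty case instead of dividing zero by zero
  }
  return samples.reduce((a, b) => a + b, 0) / samples.length;
}
```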