Don't Let LLMs Think for You
LLMs are incredible tools. They can draft emails, debug code, summarize papers, and brainstorm ideas faster than most people can. But there's a cost that nobody talks about enough: the more you outsource your thinking, the worse you get at thinking.
This isn't a hypothetical concern. It's something I've noticed in myself and in people around me.
Key Takeaways
- Relying on LLMs to do your thinking weakens critical reasoning skills over time, similar to how GPS navigation erodes spatial awareness.
- The core risk is "fluent nonsense": AI outputs that read convincingly but may contain errors you miss because the language sounds authoritative.
- Effective AI use means thinking first, then prompting. Draft your own ideas before asking an LLM to refine them, and use AI for critique rather than generation.
Why is over-reliance on LLMs a problem?
When you use an LLM to write your first draft, you skip the hardest part of writing - organizing your thoughts. When you ask it to solve a coding problem, you skip the part where you build mental models of the system. When you let it summarize an article, you skip the part where you engage critically with the content.
Each shortcut is small on its own. But shortcuts compound.
Over time, you start losing the ability to:
- Structure an argument from scratch without a prompt
- Debug systematically without asking an AI to "find the bug"
- Form original opinions on topics you've only read AI summaries of
- Sit with ambiguity instead of reaching for an instant answer
The irony is brutal: the people who rely most on AI for thinking are the ones who most need the practice - every outsourced task weakens the skill a little further.
What is the trap of fluent nonsense?
LLMs produce text that sounds authoritative and well-reasoned. This makes it dangerously easy to accept outputs without scrutiny. When an answer reads well, your brain treats it as correct - even when it's subtly wrong, biased, or missing critical context.
This is especially true for topics you're not deeply familiar with. If you ask an LLM about a field you're learning, you have no baseline to evaluate its output. You end up building your understanding on foundations you never verified.
How can you use AI without losing critical thinking skills?
I'm not anti-AI. I use LLMs daily. But I've developed rules for myself:
1. Think First, Then Prompt
Before I ask an LLM anything, I spend time thinking about it myself. Even if it's rough and incomplete, I want my own perspective on the table before I see the AI's. This way, the AI's output becomes a comparison point - not my starting point.
2. Use AI for Proofreading, Not Drafting
I write my own first drafts. Then I'll use an LLM to catch typos, improve clarity, or spot logical gaps. The thinking is mine; the polish is shared.
3. Use AI to Spark Ideas, Not Replace Them
LLMs are excellent brainstorming partners. I'll ask for alternative approaches, edge cases I haven't considered, or counterarguments to my position. But I decide which ideas have merit - the LLM doesn't.
4. Use AI for Negative Feedback
One of the best uses of LLMs: asking them to critique your work. "What's wrong with this argument?" or "What am I missing?" forces you to engage with your own blind spots. The AI becomes a sparring partner, not a ghostwriter.
5. Never Skip the Struggle
If I'm learning something new, I resist the urge to ask an LLM to explain it until I've wrestled with it myself. The struggle is where learning happens. Skipping it with an AI shortcut means you understood the explanation - but you didn't learn the concept.
The Principle
Don't use AI to do your work for you. Use it as a partner.
A good partner challenges you, fills in your gaps, and makes you better. A bad partner does everything while you watch. The difference isn't in the tool - it's in how you use it.
AI will keep getting more capable. The question is whether you will too.