🧠 Rescuing my critical thinking skills
Using AI correctly is not enough
Using AI gives an illusion of learning. Here's my plan to stay sharp.

AI has made me more productive than ever.
But if you're sometimes feeling dumber from using AI, you're not alone.
Some people want you to think it's a skill issue: that if you just use it the right way, there are no negative effects.
They seem to ignore that the medium itself has effects beyond how you use it.
My last email (me, realising I'm overusing AI) struck a chord with many of you. I received an unexpected number of DMs from people saying they experience the same phenomenon, 'I needed to hear this', and so on.
In this email:
What we're missing out on when using AI
Why keeping our critical thinking sharp is more important than ever
Why using AI correctly is not enough
What I'm doing to balance out the downsides of using AI
My brain's natural superpowers

A simplified mental map of how my brain tackles a problem
As a first step in identifying what we're missing out on when using AI, I find it helpful to remind myself of how my brain works without AI:
It has a clear sense of why I have a problem, why I want to solve it and what the end goal looks like
It explores solutions, specific directions and makes a thesis about which solution is right
There's a feedback loop in which trial and error from attempting to solve the problem refines my understanding of the world, how I observe, and my point of view. Solutions inform judgement.
It enjoys the process of learning new things, exploring ideas, directions and solutions. And I've noticed that in that process, I often also get some great random ideas beyond the answer I set out to get.
AI is cool, but it impacts our critical thinking
Emerging research suggests that AI use incurs a cognitive debt.
My intuition is that using AI all the time (what some CEOs call 'reflexive' use of AI) subtly encourages cutting off our cognitive process at mere problem awareness. We skip much of the effort we would normally put in to get the result, which means less experience goes back into our memory networks, and that, paradoxically, leads over time to a weaker starting point for using AI.
If that explanation went over your head, worry not: this meme illustrates it perfectly (I personally choked on my coffee at 'what are my needs?').
I looked into the research, and here's why we still need a sharp head in an AI-first world:
Effortful learning builds metacognition; when it's underdeveloped, we're prone to automation bias, and AI can more easily mislead us and inflate our confidence.
LLMs aren't really capable of advanced reasoning, though they give off an illusion of thinking.
Because they're trained to produce likely-sounding text, LLMs can be very persuasive, even when they're wrong.
Autonomous systems will likely need human supervisors for years to come.
And without solid judgement and decision-making skills (our starting point for using AI), LLMs won't create value and they won't drive innovation.
By defaulting to AI for every problem, we avoid the kind of effortful learning that builds experience, and lose joy and serendipity in the process. We also become less able to make AI helpful and more prone to errors and deception.
Armed with that understanding, I'm now actually motivated not to take the AI shortcut all the time.
IN PARTNERSHIP WITH RUBRIK
With 82% of cyberattacks targeting cloud environments where your critical data lives, every second after a hit counts. Join top IT and security leaders on December 10 to learn strategies for turning worst-case scenarios into minor incidents. Discover how to bounce back in hours, not weeks.
How you use AI isn't everything
Influential people, like Nvidia's CEO Jensen Huang, want you to believe that it's only about how you use it.
He makes the argument that if you just ask AI to teach you things you don't know or solve problems you couldn't otherwise solve, you're actually enhancing your cognitive skills.
This makes it sound like we're dealing with a skill issue.
It's a seductive argument, but it ignores that learning itself requires effort.

By using AI, you're still bypassing a large area of brain processing which normally requires high effort on your end.
Understanding something new quickly can be extremely useful, but it also gives you an illusion of learning.
And as you know, these models aren't actually thinking for you; they're juggling probabilities.
While you think you're learning, the opposite might be happening: our critical thinking skills atrophy when we rely on AI.
The irony is complete: we're under an illusion of learning, produced by models that give us an illusion of thinking, and the critical thinking needed to steer them is vanishing in the process.
The three steps I'm taking
I personally notice a low-key reluctance to think when I know AI can just give me the answer (ref. my keyboard story).
I want to heal that.
Recognising that AI, as a medium, subtly conditions me to outsource my thinking feels empowering and seems like the obvious first step to improving my relationship with it.
Here's my recovery plan:
Keeping score. From now on, I'll use AI with the awareness that I'm taking a shortcut. To illustrate this, I'll use the metaphor of going somewhere and having the choice between walking and taking the car. If I take the car all the time, I need to balance that choice with enough exercise elsewhere. Likewise, I'll use AI knowing that I'm building up a cognitive debt. No harm in that, as long as I pay it off with enough real, non-AI-assisted learning elsewhere.
Showing up with a hypothesis. When I use AI on a subject I actually seek to understand, I'll do my homework first. I'll reason through it as far as I can, exploring solutions and forming a hypothesis in my head, before I start prompting. It's easy to stop my thought process as soon as I feel I have a 'good enough' prompt to get the job done; the idea is to discipline myself to go one step further before asking AI. This way, I'm deliberately leaning into the territory that AI would otherwise handle for me (the purple quadrant of my illustration above). The idea is that I'll still get some of the mental exercise while using AI, comparable to riding an e-bike versus taking the car.
Going offline. I'll do a no-tech day every so often, e.g. every two weeks. No computers. And especially no LLMs. This is the measure I'm least excited about, which probably means it's also the most effective one.
These measures recognise that AI, as a medium, conditions certain behaviours: outsourcing the effort and cognitive processes that build our understanding.
Ironically, creating something valuable with AI requires us to maintain the very processes we're so often outsourcing.
You can't fix that with better prompts.
These measures make sense to me personally, at this point in time. Everyone should approach this individually.
The bigger picture is this: stay willing to contemplate how AI affects you.

THAT'S ALL FOR THIS WEEK
I'm curious:
Have you noticed any downsides from using AI? If yes, what's one thing you're doing about it?
If you haven't noticed anything, I want to hear that too.
Your POV is valuable to me! Hit reply or leave a comment in the poll below.
Was this email forwarded to you? Sign up here. Want to get in front of 21,000+ AI builders and enthusiasts? Work with me. This newsletter is written & shipped by Dario Chincha.
What's your verdict on today's email?
Affiliate disclosure: To cover the cost of my email software and the time I spend writing this newsletter, I sometimes link to products and other newsletters. Please assume these are affiliate links. If you choose to subscribe to a newsletter or buy a product through any of my links then THANK YOU: it will make it possible for me to continue to do this.



