A CHI 2025 paper recently went viral; my summary is below:
Researchers asked 319 knowledge workers to describe real tasks they had completed with a GenAI tool and how the tool affected their critical thinking. As expected, workers’ confidence in the AI’s ability to do the task correlated negatively with critical thinking, while their confidence in their own ability to do the task and to evaluate the AI’s responses correlated positively with it.
Based on the workers’ free-text responses, the researchers found that the main motivators for critical thinking were work quality, potential negative outcomes (e.g., getting fired), and skill development. The main inhibitors of critical thinking were workers’ lack of awareness of AI limitations, low motivation, and inability to improve AI-generated output.
Accordingly, the authors suggest that AI tools should include features that encourage critical thinking, such as “allow[ing] the user to explicitly request critical thinking assistance,” “providing explanations of AI reasoning,” and “offering guided critiques.”
In this post, instead of summarizing the paper in detail (as I’ve done in previous posts), I’ll simply list a few highlights and comment on them.

The cost of overreliance
Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved. As Bainbridge [7] noted, a key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.
Similarly,
Without regular practice in common and/or low-stakes scenarios, cognitive abilities can deteriorate over time [5], and thus create risks if high-stakes scenarios are the only opportunities available for exercising such abilities.
I think schools are a natural environment for providing students with “routine opportunities to practice” (e.g., without AI), “low-stakes scenarios,” and constructive feedback that they might not receive outside of school. At the same time, educators need to strike a balance between pedagogical purity and pragmatism — they cannot pretend that students will always use GenAI exactly as instructed.
Potential benefits of AI
Speaking of feedback,
Although human feedback has traditionally been necessary for effective self-improvement, the integration of AI into tools like Microsoft Word could democratise access to writing skill development by providing consistent, low-cost feedback [2, 123].
While I’m sure that AI can help people (including me!) improve their writing, I feel obligated to note that six of the paper’s seven co-authors work at Microsoft. (The seventh is a PhD student at CMU.) On a similar note,
GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques. The tool could help develop specific critical thinking skills, such as analysing arguments [72], or cross-referencing facts against authoritative sources. This would align with the motivation-enhancing approach of positioning AI as a partner in skill development.
A friendly AI teacher sounds nice, but unfortunately, I share many people’s skepticism of technological solutions to educational problems.
“Saving time”
For example, participants discussed a lack of time (44/319) for critical thinking at work. For instance, a sales development representative (P295) noted that “[t]he reason I use AI is because in sales, I must reach a certain quota daily or risk losing my job. Ergo, I use AI to save time and don’t have much room to ponder over the result.”
Technologies are often marketed as time-savers, and sure, I’m glad that washing machines exist. But as the quote above suggests, when a technology increases productivity, do people actually save time? Or do expectations simply rise while quality declines?
Building confidence
The paper’s main finding is the negative correlation between confidence in GenAI and critical thinking:
High task confidence is associated with users’ ability to delegate tasks effectively, fostering better stewardship while maintaining accountability. Conversely, lower self-confidence may lead users to rely more on AI, potentially diminishing their critical engagement and independent problem-solving skills.
Upon reading this, I think it’s natural to ask, “So how can users build self-confidence?” I don’t think the authors answer this explicitly, but they do say the following:
Training programs should emphasise the importance of cross-referencing AI outputs, assessing the relevance and applicability of AI-generated content, and continuously refining and guiding AI processes. Additionally, a focus on maintaining foundational skills in information gathering and problem-solving would help workers avoid becoming overreliant on AI [102].
Perhaps one day, AI will be so reliable that we’ll all be highly confident in whatever it says, and nobody will worry about overreliance, reduced critical thinking, or anything like that. But until that day arrives, maybe we should keep trying to improve our own capabilities.