opinion · 5 min read · 29 march 2026

Deskilling Is Real, But It's Not Inevitable: How to Use AI Without Losing Your Edge

AI deskilling is real and measurable, but the teams building skills faster with AI share one trait: they treat it as a thinking partner, not an answer machine.


tl;dr

AI can erode skills, and the research confirms it, but the outcome depends almost entirely on how you use the tool. Teams that build skills faster with AI share one habit: they stay in the reasoning loop. Use AI to check your thinking, not replace it.

The trap isn't that AI does the work badly. It's that it does the work well enough that you stop practising how to do it yourself.

This is the core mechanism behind AI-driven deskilling, and it's more subtle than most people expect. You don't notice the erosion while it's happening. The output looks fine. The deadline is met. The feedback is positive. It's only months later, when you face a genuinely hard problem without the tool, that you realise the mental muscle has weakened from disuse.

There's a sharper version of this risk for early-career professionals. Researchers writing in a 2025 review on AI and clinical training coined the term "never-skilling" to describe what happens when AI arrives before foundational skills are built: the trainee never acquires the reasoning ability in the first place. They don't lose a skill. They simply skip the part where they would have gained it. That's a different, harder problem.

The risk for novices isn't skill erosion. It's that they never build the skill at all.

The employment data supports the concern. Research cited in a 2025 arXiv paper reviewing AI labour market effects documents a 16% relative employment decline among workers aged 22-25 in AI-automatable roles. That's the cohort who would normally be accumulating foundational experience on the job. If the entry-level work disappears before they've done enough of it to internalise the underlying skills, the pipeline for expert practitioners narrows.

16%

Employment drop in AI-automatable roles, ages 22-25

Brynjolfsson et al. 2025 via arXiv

The sequencing matters enormously, which is what the teams building skills faster have figured out.

The Rebound Problem in Practice

Software developers are the clearest case study here. Early adoption data showed real speed gains, particularly for junior developers handling boilerplate and routine implementations. The productivity numbers were encouraging enough that many teams restructured workflows around AI code generation by default. Then came the wall.

Developers working on complex legacy codebases, or on problems that required deep architectural reasoning, found that their AI-assisted habits didn't transfer. Worse, some found their own reasoning had atrophied. They'd been accepting AI suggestions for long enough that they'd lost the habit of working through problems from first principles. The speed gain on shallow tasks had come at a cost to depth on hard ones.

This is the AI rebound effect in practice: short-term throughput gains that mask a growing deficit in the kind of understanding you need when things get complicated. The theoretical model from Xu (2025) captures the organisational version of this dynamic. As generative AI improves and hallucination rates fall, firms rationally reduce the knowledge requirements for roles, hiring less experienced workers and compensating with AI tooling. That works until it doesn't, and the signal that it's stopped working often arrives at the worst possible moment.

What the Faster-Learning Teams Do Differently

Active note-taking while using AI tools to maintain engagement

The teams that come out ahead aren't the ones who use AI least. They're the ones who treat it as a sparring partner rather than a ghostwriter.

The distinction shows up in how they structure the work. Instead of asking AI to produce a solution, they produce a rough version themselves first, then use AI to stress-test it. Instead of accepting AI-generated explanations, they ask the tool to explain its reasoning, then argue with it. The output might end up similar, but the cognitive process is entirely different. One builds the mental model. The other outsources it.

The Kabir et al. (2025) hospital study from Bangladesh shows what this looks like in a high-stakes setting. Clinicians used AI to synthesise fragmented patient records into medication reconciliation lists, cutting completion time by 24% and reducing severe potential adverse drug events. The design kept clinicians in the adjudication role. AI organised the information; humans made the judgement call. The time savings came from removing grunt work, not from removing reasoning. That's the right division.

in practice · Clinical team at a tertiary academic hospital in Bangladesh

what they did

Used generative AI to synthesise unstructured patient records into medication reconciliation lists, with clinicians reviewing and approving every final decision rather than delegating judgement to the tool

outcome

24% reduction in completion time; odds ratio of 0.69 for severe potential adverse drug events, with no reported erosion of clinician judgement

The Canadian Journal of Administrative Sciences (2026) review adds a structural point: high professional engagement during AI-assisted job transformation reduces deskilling risk. Professionals who stay actively involved in the work, who interrogate outputs rather than just accepting them, preserve the skills that passive acceptance erodes.

AI handles the repetitive surface. You stay in the reasoning. That division protects the skill.

A Practical Standard to Apply Now

Here's the test worth running on your own workflow: for each task where you use AI, ask whether you could explain the output's reasoning to a sceptical colleague without referring back to the tool. If you can, the skill is intact. If you're not sure, that's the warning sign.

For teams managing early-career staff, the never-skilling risk makes sequencing urgent. Junior professionals need enough unassisted reps on foundational work to build genuine competence before AI assistance becomes the default. That might mean deliberately withholding AI access on certain task types for the first six months of a role. Not as a punishment, but as an investment in the mental model they'll need when things get hard.

For individual contributors, the practical shift is to change when in the process you bring in AI. If you reach for it first, before you've formed your own view, you're training yourself to be dependent. If you reach for it second, to pressure-test what you've already worked out, you're using it to get better.

Pick one task you've been doing on AI autopilot. Do the next version yourself first. See what you've forgotten, what you still have, and what questions you couldn't answer. That gap is the skill work worth doing.

verdict

Deskilling from AI is a genuine risk, but it's a design choice, not a side effect. Teams that structure AI use around human reasoning, rather than as a replacement for it, are building faster and retaining depth. The teams that don't will hit the wall, and they won't see it coming until the complex problem lands.


Alec Chambers

Founder, ToolsForHumans

I've been building things online since I was 12: 18 years of shipping products, picking tools, and finding out what actually works after the launch noise dies down. ToolsForHumans started as the research I kept needing: what practitioners are still recommending months after launch, and whether the search data backs it up. Since 2022 it's helped 600,000+ people find software that actually fits how they work.