ToolsForHumans · data & research · 6 min read · 24 March 2026

AI Job Postings Up 340%—But Here's What Jobs Are Actually Being Created

AI job postings are rising fast, but the roles being created look nothing like traditional tech jobs — here's what the data actually shows and what to do about it.


tl;dr

The headline growth in AI job postings is real, but the roles driving it are shifting away from pure engineering toward human oversight, judgement, and collaboration with AI systems. The talent supply is nowhere near keeping pace. If you're making a career move, the opportunity is in the gap between what AI can do and what humans still have to own.

The 340% jump in AI-related job postings is striking, but the number that should get your attention is the supply gap. According to LinkedIn data aggregated by Second Talent, there are currently 1.6 million open AI-related positions globally against roughly 518,000 qualified candidates — a 3.2:1 demand-to-supply ratio. That's a structural shortage, and it's reshaping which skills actually command a premium.

3.2:1

AI job demand vs. qualified candidates globally

LinkedIn Global Talent Insights 2026

The more important question is what those 1.6 million roles actually are, because the answer has changed significantly in the past 18 months.

The shift away from pure engineering

The conventional picture of an AI job is a machine learning engineer or data scientist writing model code. That picture is becoming outdated. What's growing fastest is a different category: roles that sit at the boundary between AI systems and human decision-making.

The Anthropic Economic Index (March 2026) found that almost 49% of jobs now have at least a quarter of their tasks performed using AI. That figure covers a huge range of occupations, not just tech roles. AI augmentation has moved well beyond the engineering department and into the operational core of most organisations. The new jobs being created reflect that reality.

The fastest-growing AI roles aren't about building models — they're about supervising, directing, and correcting them.

Prompt engineering, AI product management, LLM evaluation, and AI safety operations are all roles that require domain knowledge plus a working understanding of how AI systems fail. They require people who understand the subject matter well enough to know when the AI is wrong, and who can communicate that clearly enough to fix the process. That combination is rare, which is why the supply gap is so severe.

This pattern extends to fields far outside tech. Marketing is one concrete example of how roles are being augmented by AI: the marketer who knows how to brief, evaluate, and iterate on AI-generated content has a fundamentally different job from one who either ignores AI or delegates to it entirely.

What "human-agent collaboration" actually means for job titles

[Image: Worker switching between AI tool and manual notes during a single task]

The category that labour economists are starting to call "human-agent collaboration" covers a cluster of emerging roles: AI workflow designers, model oversight specialists, AI trainers, and what some job descriptions now call "AI delegation managers." These roles don't require a computer science degree. They require sharp judgement, domain expertise, and the ability to think clearly about where AI adds value and where it introduces risk.

The European Central Bank's analysis of firm-level hiring data found that AI-intensive firms are 4% more likely to hire additional staff, while only 15% of AI users cite labour cost reduction as a primary motivation. That runs against the popular narrative that AI adoption is purely a headcount reduction play. Companies that are deepening AI use are, on average, adding people, specifically people who can make those systems work reliably within their specific context.

The World Economic Forum's Future of Jobs Report 2025 projects a net gain of 78 million jobs by 2030 (170 million created, 92 million eliminated). The net positive sounds reassuring, but the distribution matters more than the total. The jobs being created cluster around AI oversight, data curation, and human-facing roles that AI handles poorly. The ones being eliminated cluster around routine information processing and certain categories of software development that are now automatable.

A net gain of 78 million jobs by 2030 is only good news if you're positioned in the category that's growing, not the one that's shrinking.

The practical implications for career decisions

If you're a traditional software engineer, the picture is mixed. Coding tasks are migrating toward API automation and AI-assisted development, which the Anthropic Economic Index explicitly identifies as a high-exposure category. That doesn't mean software engineering jobs disappear. It means the value shifts from writing code toward systems thinking, architecture decisions, and knowing which problems are worth solving. Engineers who treat AI as a collaborator are extending their value. Those who compete with it on raw output will lose.

If you're in a non-technical domain, the opportunity is more straightforward. The 49% task-automation figure means your job already includes AI whether you've opted in or not. The question is whether you're directing that AI work deliberately or just reacting to it. People who develop a structured approach to AI delegation in their own role, knowing what to assign, how to review it, and where to intervene, are accumulating the exact skills employers are now paying a premium for.

in practice · Financial services firm (archetypal mid-size team)

what they did

Restructured a compliance review team to include two dedicated "AI output reviewers" whose job was to audit LLM-assisted document summaries against source material, flag errors, and feed corrections back into prompt templates. The team reduced review cycle time by roughly 40% while maintaining audit trail requirements.

outcome

Headcount stayed flat, but two existing analysts were retitled and upskilled into oversight roles, commanding higher salaries within 12 months.
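The feedback loop the firm used (review AI summaries against source material, flag errors, fold corrections back into the prompt template) can be sketched in a few lines. This is an illustrative sketch only, not the firm's actual system; the `Review`, `PromptTemplate`, and `feed_back` names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    """One reviewer verdict on an AI-generated document summary."""
    doc_id: str
    summary: str
    approved: bool
    correction: str = ""  # reviewer's note when the summary misses something

@dataclass
class PromptTemplate:
    base: str
    guidance: list[str] = field(default_factory=list)  # accumulated corrections

    def render(self) -> str:
        # Append reviewer guidance so the next batch of summaries improves.
        notes = "\n".join(f"- {g}" for g in self.guidance)
        return f"{self.base}\nReviewer guidance:\n{notes}" if notes else self.base

def feed_back(template: PromptTemplate, reviews: list[Review]) -> PromptTemplate:
    """Fold corrections from rejected summaries into the template."""
    for r in reviews:
        if not r.approved and r.correction:
            template.guidance.append(r.correction)
    return template

template = PromptTemplate("Summarise the attached filing for compliance review.")
batch = [
    Review("doc-001", "summary text", approved=True),
    Review("doc-002", "summary text", approved=False,
           correction="Always quote exact figures; do not round monetary amounts."),
]
feed_back(template, batch)
print(template.render())
```

The point of the sketch is the shape of the role: the reviewer's output is not just an approve/reject decision but a correction that permanently improves the process, which is what distinguishes an oversight role from a QA checkbox.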

The supply gap is the career opportunity. There are three times as many open AI roles as qualified people to fill them, and the roles that are hardest to fill combine domain knowledge with AI fluency, not pure ML engineering. If you're in finance, law, healthcare, marketing, or operations, you already have the domain side. Adding structured AI fluency on top of that puts you in a more defensible position than that of a new computer science graduate who understands models but not the underlying business problems.

verdict

The 340% growth in AI job postings is real, but it's being driven by a category of work that most people haven't prepared for: human oversight, judgement-intensive AI collaboration, and domain-specific AI application. The engineers who are seeing demand soften are the ones whose work AI can now approximate. The professionals who are seeing demand surge are the ones AI still can't replace: people who know enough about a domain to catch AI mistakes and build better processes around them. Bet on that combination.

What to do this week

Audit your current role against the Anthropic 49% finding. List the tasks you do repeatedly and ask, for each one, whether an AI system could do a first draft, a summary, or a recommendation. If the answer is yes for more than a quarter of your workload, you're in a high-exposure category, and that's an advantage if you act on it now.

Pick one of those tasks, build a documented AI-assisted process for it, and treat that documentation as a portfolio asset. That's the kind of specific, demonstrable AI fluency that's showing up in job descriptions and salary conversations right now. It's also what separates the 518,000 qualified candidates from the 1.6 million unfilled roles.
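The audit step can be made concrete with a simple tally: list your recurring tasks, mark the ones an AI could plausibly first-draft, and compute the share of your weekly hours they represent against the quarter-of-tasks threshold from the Anthropic figure. A minimal illustrative sketch (the `Task` structure and the sample task list are hypothetical, not from any source in this article):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float
    ai_draftable: bool  # could an AI produce a usable first draft or summary?

def exposure(tasks: list[Task]) -> float:
    """Fraction of weekly hours spent on tasks an AI could first-draft."""
    total = sum(t.hours_per_week for t in tasks)
    draftable = sum(t.hours_per_week for t in tasks if t.ai_draftable)
    return draftable / total if total else 0.0

# Hypothetical weekly task list, for illustration only.
my_week = [
    Task("status reports", 4, True),
    Task("data cleanup", 6, True),
    Task("client calls", 8, False),
    Task("vendor negotiation", 5, False),
]

score = exposure(my_week)
print(f"exposure: {score:.0%}")  # prints "exposure: 43%" for this sample
print("high exposure" if score >= 0.25 else "low exposure")  # 25% threshold
```

Weighting by hours rather than counting tasks keeps the audit honest: a single automatable task that eats half your week matters more than five small ones.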


Alec Chambers

Founder, ToolsForHumans

I've been building things online since I was 12 — 18 years of shipping products, picking tools, and finding out what actually works after the launch noise dies down. ToolsForHumans started as the research I kept needing: what practitioners are still recommending months after launch, and whether the search data backs it up. Since 2022 it's helped 600,000+ people find software that actually fits how they work.