When Meta announced significant layoffs in early 2025 and framed some of them around performance and the capabilities of its AI tools, it added something new to an old script. It wasn’t just “we’re restructuring” or “economic headwinds.” It was, in effect: some of what you do, our models can now do better.
That logic — cutting people while pointing at AI — is becoming a recognisable pattern across large technology companies. And it raises a question that doesn’t get asked directly enough: when AI becomes the benchmark for what “good enough” looks like at work, what happens to the humans who are just average?
The traditional justification for a layoff is economic. Revenue is down, the market shifted, the product failed. These are reasons employees can process even when they’re painful. But the AI-era layoff has a different texture. It implies that the problem wasn’t the business — it was the person’s output relative to what a machine can now produce.
This is worth taking seriously, not because it’s always accurate, but because it changes the psychological contract between employer and employee. The employment relationship has always been transactional, but the terms were largely implicit. AI makes them explicit in a way that’s harder to ignore.
Performance reviews have always carried subjectivity and risk. But when a company can plausibly argue that AI can generate first drafts, summarise meetings, write basic code, and handle routine queries, the bar for human contribution shifts. The question is no longer "are you useful?" but "are you useful enough to justify the cost of employing you over expanding our AI investment?"
The uncomfortable answer is: for some roles, and in some companies, that calculus is starting to tip.
What this means practically for workers isn’t panic — it’s clarity. The roles that survive this transition won’t be the ones that produce the most output. They’ll be the ones where output alone isn’t the point. Judgment, stakeholder management, creative decisions with real accountability, novel problem-solving in ambiguous contexts — these are harder to automate not because AI can’t approximate them, but because the cost of getting them wrong is too high to trust to a model.
The phrase "staying useful" is already circulating in professional advice circles. But there's a more precise version: staying irreplaceable for reasons that are visible and understood by the people who make hiring and retention decisions. That means making your judgment, not just your output, legible.
Meta’s layoff framing may or may not accurately represent what drove those specific cuts. But the logic it signals — that AI has become a comparison point in performance evaluations — is real, and it’s accelerating. Workers who treat that as background noise rather than a genuine signal are likely to be caught off guard.
The new workplace question isn’t whether AI will replace your job. It’s whether your contribution is clear enough to survive the comparison.