
![](https://substackcdn.com/image/fetch/$s_!o567!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05be6e17-7d21-4103-a171-581da5f17b3c_1536x1024.png)
The Economist recently celebrated AI creating new jobs. They buried the real story: the entire employment floor just collapsed.
The article cheerfully notes that data annotators now need expertise in finance, law, or medicine and earn $90/hour. That’s not job creation; that’s the extinction of entry-level work. You can’t just tag images anymore. The bar moved up.
Two years ago, you could be a mediocre React programmer or a competent button-pusher in some enterprise app and earn $80K-120K. Those jobs are vanishing. AI can generate boilerplate code, follow standard procedures, and execute routine tasks faster and cheaper than humans. The people who were “good enough” at technical execution are now competing with algorithms that never sleep, never ask for raises, and cost pennies per task.
Here’s what actually happened: AI didn’t steal jobs. It revealed which “jobs” were never really work; they were just expensive pattern matching.
What AI Actually Does
Large language models are turbocharged associative memory engines. They retrieve and recombine patterns from training data with superhuman speed and accuracy. They’re not perfect; they hallucinate, make mistakes, and miss context. But they’re *very good* at recall and pattern matching.
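To make the “associative memory” framing concrete, here’s a toy caricature (my construction, not anything from the article): recall as similarity search over stored patterns. Real models are vastly more sophisticated, but the economic point, fast retrieval of a stored procedure, survives the simplification.

```typescript
// Toy caricature of associative recall: return the stored pattern most
// similar to the query. Illustrative only; real LLMs don't work this way.
function tokenize(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
}

// Jaccard similarity between two token sets.
function similarity(a: Set<string>, b: Set<string>): number {
  const common = [...a].filter((t) => b.has(t)).length;
  return common / (a.size + b.size - common);
}

function recall(memory: string[], query: string): string {
  const q = tokenize(query);
  return memory.reduce((best, m) =>
    similarity(tokenize(m), q) > similarity(tokenize(best), q) ? m : best
  );
}

const memory = [
  "to center a div, use flexbox with justify-content and align-items",
  "a binary search halves the interval until the target is found",
];
console.log(recall(memory, "how do I center a div?"));
// -> the flexbox pattern: instant recall of a stored procedure
```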
Here’s the uncomfortable truth: **humans also have imperfect recall.** That was never our advantage. We just forgot that because for decades, being slightly better at memorization and procedure-following than other humans was economically valuable.
Now that advantage is gone. Permanently.
AI can:
Recall facts and procedures instantly
Apply standard frameworks consistently
Generate code from established patterns
Follow documented processes without deviation
Reproduce correct answers to well-defined problems
If your job was primarily about any of those capabilities, you’re now competing in an arena where you will always lose.
What Humans Are Actually Good At
But there’s something humans are naturally equipped to do that creates durable economic value: **thinking**.
Not mystical. Not hand-wavy. Specific cognitive operations that humans evolved to do:
**Critical thinking:** evaluating evidence quality, detecting logical fallacies and incentive distortions, distinguishing correlation from causation, updating beliefs under new information. This isn’t a “soft skill”; it’s cognitive labor that resists automation because it requires meta-cognition and handles non-deterministic problems.
**Judgment in ambiguous situations:** making decisions when requirements conflict, information is incomplete, and there’s no clear right answer. This is model selection under uncertainty: choosing which framework applies when multiple could work.
**Understanding stakeholder politics and incentives:** reading between the lines of what people say to understand what they actually need. This is error correction when incentives distort signals: recognizing when stated goals conflict with actual interests.
**Contextual reasoning:** knowing which rules to break, when standards don’t apply, and how to navigate exceptions. This is value tradeoff resolution: deciding which principle takes precedence when principles conflict.
**Problem reframing:** recognizing that the stated problem isn’t the real problem and seeing that the obvious solution won’t actually work. This is goal clarification when objectives conflict: understanding what you’re really trying to accomplish.
These capabilities share a critical property: **AI can simulate them by recognizing patterns, but it cannot own the consequences of judgment or resolve value conflicts without external authority.**
When requirements are ambiguous, stakeholders disagree, or objectives conflict, someone must decide which risks are acceptable, which values take precedence, which rules can be bent. AI can suggest options based on training patterns. It cannot bear responsibility for choosing among them.
**The React programmer example proves this:** AI can write competent boilerplate code. What it can’t do is figure out that the feature request doesn’t make sense, decide it solves the wrong problem, determine that the stakeholder actually needs something different, and conclude that the right solution is to not build anything at all, then own that call when stakeholders push back. That requires judgment with accountability.
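To ground “boilerplate,” here’s a hypothetical sketch of the kind of component an LLM produces reliably from a one-line prompt (the names `SignupForm` and `onSubmit` are invented for illustration). Everything in it is a documented pattern; the question that actually matters, whether this signup flow should be built at all, never appears in the code.

```tsx
import React, { useState } from "react";

// Hypothetical boilerplate: the pattern-following code an LLM generates
// from a one-line prompt. Every piece is a documented convention.
interface SignupFormProps {
  onSubmit: (email: string) => Promise<void>;
}

export function SignupForm({ onSubmit }: SignupFormProps) {
  const [email, setEmail] = useState("");
  const [status, setStatus] = useState<"idle" | "saving" | "error">("idle");

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    setStatus("saving");
    try {
      await onSubmit(email);
      setStatus("idle");
    } catch {
      setStatus("error");
    }
  };

  // Validation, loading state, error display: all standard patterns.
  // The judgment call ("should this feature exist?") lives outside this file.
  return (
    <form onSubmit={handleSubmit}>
      <input
        type="email"
        value={email}
        onChange={(e) => setEmail(e.target.value)}
        required
      />
      <button disabled={status === "saving"}>Sign up</button>
      {status === "error" && <p>Something went wrong. Try again.</p>}
    </form>
  );
}
```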
The Real Distinction: Procedural vs. Judgmental Skills
The useful axis isn’t “hard skills” vs. “soft skills.” It’s procedural vs. judgmental:
Procedural skills:
Can be specified in steps
Can be automated
Have clear right answers
Scale cheaply
Judgmental skills:
Require interpretation
Resist full automation
Have defensible answers
Scale slowly
AI excels at procedural skills. Humans earn their keep with judgmental ones.
What schools call “soft skills” is actually the hardest economic constraint in modern work: sound judgment under ambiguity.
The Emerging Middle Class
There’s also a middle tier forming: people who don’t just think abstractly but direct AI systems, evaluate outputs, catch failures, and translate ambiguity into constraints. Call them cognitive foremen. They’re not pure philosophers and they’re not just executing procedures; they’re orchestrating AI tools while maintaining responsibility for results.
These people blend procedural competence with judgment: they know how to use AI effectively, but more importantly, they know when AI outputs are wrong, incomplete, or solving the wrong problem. They’re the quality control layer between AI execution and human consequences.
The Education Crisis
Here’s where this gets dark: **A small minority of students learn to think in school. The vast majority are trained to be replaced by AI.**
Most education operates on a simple model:
Memorize facts and procedures
Regurgitate them accurately on tests
Apply standard formulas to familiar problems
Reproduce correct answers
That’s essentially the training process for a large language model. We’ve been training humans the same way we train AI systems. And now students are competing against machines that do memorization and recall infinitely better.
The educational infrastructure optimized for producing exactly the skill set that AI just commoditized:
Recall information accurately
Follow established procedures
Apply standard frameworks
Execute routine tasks without error
Meanwhile, a small minority get an education that actually builds thinking:
Research programs where problems aren’t well-defined
Project-based work where you encounter messy requirements
Creative disciplines where you must make judgment calls
Apprenticeships where you learn through doing
Those students are developing critical thinking, judgment, contextual reasoning, and problem-solving. The others are learning to be outperformed by ChatGPT.
**This isn’t just a labor shift. It’s a legitimacy crisis.** Organizations, schools, and credentialing systems are built to certify procedural competence. AI just made that cheap and abundant. We have no scalable way to certify judgment, sense-making, or problem framing, so institutions will lag reality, badly.
What Judgment-Based Evaluation Actually Looks Like
Here’s a concrete example of how this plays out in education. In courses that permit AI use (because in the real world, everyone has access to the same tools):
75-80% of grades measure procedural competence:
Following constraints
Meeting technical requirements
Using tools correctly (including AI)
Producing complete, coherent work on time
Everyone can use AI here. There’s no advantage in just having access to tools.
The remaining 20-25% measures judgment:
Did you understand the actual problem, not just the stated one?
What decisions did you make under ambiguity?
How did you handle tradeoffs when constraints conflicted?
Would this work survive contact with real stakeholders?
Would I trust you with greater responsibility?
This portion cannot be fully reduced to rubrics. It requires expert evaluation using the same lens applied in professional settings: promotions, performance reviews, design approvals, go/no-go decisions.
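As a sketch of how such a split might be wired up (the weights and check names below are hypothetical, taken from the 75/25 band above): the procedural portion can be scored mechanically, while the judgment portion enters as a single number that only an expert reviewer can supply.

```typescript
// Hypothetical grading sketch for the split described above.
// Procedural checks are mechanically scorable; judgment is not.
interface ProceduralChecks {
  metConstraints: boolean;       // followed the stated constraints
  metTechRequirements: boolean;  // technical requirements satisfied
  usedToolsCorrectly: boolean;   // including AI tools
  completeAndOnTime: boolean;    // coherent work, delivered on time
}

const PROCEDURAL_WEIGHT = 0.75; // per the 75-80% band above
const JUDGMENT_WEIGHT = 0.25;   // per the 20-25% band above

// `judgment` is 0..1, assigned by an expert reviewer; it cannot be
// computed from the artifact itself, which is exactly the point.
function finalGrade(checks: ProceduralChecks, judgment: number): number {
  const values = Object.values(checks);
  const procedural = values.filter(Boolean).length / values.length;
  return 100 * (PROCEDURAL_WEIGHT * procedural + JUDGMENT_WEIGHT * judgment);
}

// Perfect procedural compliance (anyone with AI can get here) plus
// middling judgment: 75 + 12.5 = 87.5. Grades separate on the second term.
console.log(finalGrade(
  { metConstraints: true, metTechRequirements: true, usedToolsCorrectly: true, completeAndOnTime: true },
  0.5,
));
```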
The discomfort students feel here is the same discomfort workers feel when they realize following instructions is no longer sufficient. AI stripped away the value of procedural compliance. What remains is judgment.
Following the rules keeps you in the room. Thinking well earns trust.
What This Means for Work
The Economist noted that Forward-Deployed Engineers need to be “developer, consultant, and salesman” who understand “human-facing domains.” They quoted an executive saying “your personality is where your premium is.”
That’s not a feel-good story about new jobs. That’s confirmation that pure technical execution is no longer valuable. You need to bring judgment:
Understanding what stakeholders actually need (not what they say they need)
Making architectural decisions in ambiguous situations
Knowing when to deviate from best practices
Recognizing that the problem is different than stated
The same pattern appears everywhere AI touches work:
**Waymo’s “guy in the sky”** doesn’t just troubleshoot technology; they make judgment calls about safety while managing frazzled passengers
**AI risk specialists** aren’t implementing compliance checklists; they’re making contextual decisions about acceptable risk
**Chief AI Officers** aren’t managing AI models; they’re making strategic bets with incomplete information across competing vendor claims
Every role the article celebrates requires human judgment, not just knowledge.
The Brutal Implication
The employment crisis isn’t that AI is eliminating jobs. It’s that AI revealed how many “jobs” never required thinking at all. They required reliable execution of learned patterns, and humans are no longer competitive at that.
The people who could get by on decent technical skills, solid procedure-following, and consistent execution are discovering those capabilities have zero marginal value. The premium is entirely on judgment.
This isn’t a temporary disruption. The floor isn’t coming back. Fast, accurate recall and pattern-matching will only get cheaper. The only durable economic value is the ability to navigate ambiguity, make sound decisions, understand humans, and see problems clearly, and then own the consequences of those calls.
If you’re not building those capabilities (in yourself, in your students, in your organization), you’re training people to compete in an arena they’ve already lost.
The job apocalypse isn’t coming. It’s here. It just looks different than we expected. It’s not robots taking jobs. It’s algorithms revealing which work never required human thought at all.
The question isn’t whether you can use AI. Everyone can.
The question is: can you decide what matters when the rules are unclear, own that decision, and defend it when challenged?
That’s not a soft skill. That’s the hardest work there is.