The Ladder They Pulled Up: Tacit Knowledge and the Collapse of the Entry-Level Job
When automation eliminates the apprenticeship before the apprentice arrives, we must ask what education is actually preparing people for.
There is a theory of learning so old it predates the university: you learn by doing, and the doing happens alongside someone who already knows. The medieval guild called it apprenticeship. The law firm called it associate work. The tech company called it the junior engineer role. The names changed; the structure didn’t. You began at the bottom, performing tasks that were beneath the senior people but essential to the organization, and in doing so you acquired something no curriculum could teach. You acquired judgment. You acquired the particular, embodied knowledge of what it actually feels like when a system breaks at 2 AM, when a client changes their mind three hours before a deadline, when the elegant solution from the textbook fails on contact with the messy reality of production code.
Tacit knowledge, the philosophers call it. The kind you can’t look up.
Block, Inc. eliminated 4,000 jobs last Thursday. Entry-level developers, QA engineers, junior analysts, HR coordinators—the roles most directly in the path of their Goose framework, which can now search codebases, write and test code, manage tickets, and diagnose bugs without continuous human supervision. The company’s stock surged 24 percent. Jack Dorsey called it honesty. He said most companies would follow within the year.
He is probably right. And if he is, we have a problem that no certification program is equipped to solve.
What the Entry-Level Job Actually Was
The entry-level job was never really about the work. Or rather, it was about the work only in the way an apprenticeship is about the specific cabinet being built—the cabinet matters, but it’s not the point. The point is the ten thousand micro-decisions embedded in the process: when to ask for help, how to read the room in a client meeting, what “done” means in a context where the definition shifts weekly, how to fail without catastrophizing and succeed without becoming complacent.
Economists studying the AI-labor intersection have begun distinguishing between two categories of knowledge. Codifiable knowledge is the kind that lives in textbooks, documentation, and training data—structured, transferable, learnable at scale. Tacit knowledge is the residue left after you subtract everything that can be written down. It accumulates through experience and cannot be efficiently transmitted, only lived. The problem AI creates for education can be stated precisely: it has automated codifiable tasks with extraordinary speed, but it has not touched the process by which tacit knowledge is acquired. It has simply eliminated the jobs where that acquisition happened.
The data on what this means for new graduates is stark. Entry-level tech job postings in the US have fallen to barely 2.5 percent of all job listings. UK graduate roles in tech dropped 46 percent in 2024 and are projected to fall further. The Big Four accounting firms reduced graduate intake by roughly 29 percent between 2022 and 2024. These are not marginal adjustments. They are the structural collapse of the pipeline.
A generation of students is now overqualified for the tasks that have been automated and under-experienced for the roles that remain. They have been prepared, with increasing sophistication, for a first rung that is no longer there.
The Paradox the Classroom Cannot Solve
Here is what makes this genuinely hard, as opposed to merely difficult: the solution that seems obvious—teach students to use AI tools—is also the solution most likely to deepen the problem.
When students use AI to complete the work, they bypass the struggle. And the struggle is the point. Not as hazing, not as arbitrary gatekeeping, but as the mechanism by which the brain builds the neural infrastructure for genuine expertise. The “productive struggle” in learning science refers to the specific cognitive work of attempting a problem at the edge of your competence, failing, revising, and attempting again. This is how tacit knowledge is formed in academic settings—not through content absorption but through repeated cycles of applied judgment under uncertainty.
Researchers studying what they call “cognitive offloading” have found that students who rely on AI to bypass this struggle do not merely skip a difficult experience. They fail to develop the high-order thinking required to oversee AI in professional contexts. The very competency that the new labor market demands—the ability to govern agentic systems, to catch the confident wrong answer, to know when the output is plausible but not true—is precisely the competency eroded by AI-dependent learning.
This is the paradox. The tools students need to be competitive in a Goose-era labor market are the same tools that, improperly used, will make them incapable of competing in it. Mastery of AI requires deep domain knowledge, because evaluating AI output is impossible without it. Deep domain knowledge requires the kind of struggle that AI, by design, eliminates. The student who graduates fluent in prompt engineering but innocent of the underlying domain is not a junior AI orchestrator. They are a very sophisticated autocomplete operator who cannot tell when the autocomplete is wrong.
Universities have not yet faced this honestly.
What Needs to Change, and Why It Won’t Happen Easily
The standard institutional response to labor market disruption is curriculum revision. Add a course on AI tools. Create a certificate program in machine learning. Launch a “future of work” initiative. These responses are not wrong exactly, but they mistake the nature of the problem. The crisis is not that students lack technical skills. It is that they lack a venue in which to develop judgment, and no course can substitute for a venue.
What education actually needs to do is recreate the conditions of the entry-level job without the entry-level job. This is harder than it sounds and more expensive than most institutions are willing to admit.
It requires what researchers are calling apprenticeship-intensive pedagogy: extended, project-based engagements with real organizations facing real problems, where students make consequential decisions and experience the consequences. Not case studies, which are cleaned-up retrospectives with predetermined lessons. Not simulations, which lack the irreversibility that makes experience formative. Actual work, with actual stakes, supervised by people who have enough domain expertise to help students understand not just what went wrong but why, and how it should reshape their judgment going forward.
Elite institutions are quietly doubling down on exactly this model. The residential, mentorship-intensive university—where the education happens as much in office hours and lab meetings and informal conversations as in formal instruction—is preserving its value precisely because it cannot be automated. The “Little Ivy” model, built on small seminars and close faculty relationships, is producing graduates with the tacit knowledge that agentic AI cannot replicate, because the transmission of that knowledge requires human proximity and genuine intellectual relationship.
The problem is that this model is expensive. It scales poorly. It is available to a narrow slice of the students who need it. Meanwhile, the mass-market educational institutions—the ones serving the students with the least margin for error—are increasingly delegating instruction to automated tutors and AI-graded assessments, producing graduates who are fluent in codifiable knowledge and strangers to the tacit kind.
This is not a technical problem. It is a distribution problem, and it will compound.
The Question Education Must Answer
The $5.5 trillion figure cited by IDC for global losses from the IT skills shortage is useful not as a precise forecast but as a signal about the shape of the mismatch. There are roughly 250,000 people globally capable of governing advanced AI applications at senior levels. The demand is vastly larger. The pipeline to produce more such people has been severed at its base.
Every year that passes without entry-level jobs is a year in which a cohort of graduates does not acquire the tacit knowledge required to fill the senior roles that will be open in five years. The math is not complicated. It is just uncomfortable.
Universities must ask themselves a question they have generally avoided: what are we actually for, now that information is free and codifiable tasks are automated? The answer, I think, is tacit knowledge transmission at scale—which is to say, the thing they were always for, but which they must now do more deliberately, more expensively, and for more people than the current model can support.
The students arriving on campuses today will spend their careers governing systems they did not build, making judgment calls in domains where the AI is confident and occasionally wrong. They will need to know the difference. The only way to know the difference is to have been wrong yourself, to have felt the specific texture of a mistake in a real context with real consequences, and to have had someone nearby who could help you understand what it meant.
That is what the entry-level job provided. It is what the university must now figure out how to replace.
The ladder was pulled up. We did not ask for permission to pull it up. We did not offer a replacement on the way down.
That is the thing we must now reckon with.
Tags: tacit knowledge automation, entry-level job collapse AI, higher education workforce pipeline, AI apprenticeship pedagogy, cognitive offloading students
