The Slow Theft

On Van Damme & Fadel's Technologies Smarter, Humans "Dumber"? (2026)

8 min read

There is a king in Plato’s Phaedrus who refuses a gift. Theuth, the Egyptian god of invention, arrives bearing writing—the technology of memory, the externalization of thought—and presents it to King Thamus as a cure for forgetfulness. Thamus declines. Writing, he says, will produce not memory but its simulacrum. People will trust the written sign rather than the remembered substance. They will have the appearance of wisdom without its reality.

Van Damme and Fadel, in their 2026 paper for the Center for Curriculum Redesign, cite this myth as evidence that cognitive anxiety about new technologies is ancient, perennial, and therefore—the inference is quiet but present—probably manageable. Socrates worried about writing. We survived writing. Perhaps we will survive AI.

I find myself returning to Thamus. Not because the parallel is wrong, but because it is more right than the paper’s argument requires. Writing did weaken internal memory. The paper’s own historical analysis documents the loss: “oral virtuosity,” the traditions of recitation and embodied recall that entire civilizations organized their knowledge around. These weren’t preserved. They were displaced. The fact that we adapted doesn’t mean nothing was lost. It means the loss was eventually absorbed—at cost, over generations, with educational institutions that had to be rebuilt around the new epistemic order.

This is what it means for a historical analogy to do work it isn’t entitled to.


The Distinction That Carries the Argument

The paper’s genuine intellectual contribution arrives not in its historical sweep but in a conceptual distinction buried in Section 3b, attributed to Dutch educational psychologist Paul Kirschner in a January 2026 blog post. The distinction is between cognitive offloading and cognitive outsourcing. “With offloading, you still think, and the artefact supports you. With outsourcing, the system thinks, and you consume the result.”

That the distinction derives from a blog post rather than peer review matters: it bears more argumentative weight than its sourcing can support. But the distinction itself is precise and empirically tractable. It names a real threshold: the point at which technological delegation stops extending human cognition and begins replacing it.

What makes the threshold difficult to locate is explained by Risko and Gilbert’s (2016) metacognitive cost-benefit model. People offload when the environment can store information more efficiently than their minds. The decision is rational in each local instance. But if the environment continuously improves at storage, retrieval, and now synthesis—and if the human is never given occasion to exercise internal equivalents—the cost-benefit calculation shifts permanently. Not by choice. By default. What begins as delegation becomes, over time, atrophy. The person has not decided to outsource their thinking. They have simply stopped being asked to internalize.
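The dynamic described above can be made concrete with a toy simulation. This is not Risko and Gilbert's model itself, only a sketch of its logic under invented parameters: each round, an agent offloads whenever the external tool is "cheaper" than internal recall; offloading skips practice, so internal cost drifts up while the tool's cost drifts down. The function name and all numbers are hypothetical.

```python
def simulate(rounds=30, internal_cost=1.0, external_cost=1.2,
             atrophy=0.05, improvement=0.03):
    """Return the round at which offloading locks in permanently, or None.

    All parameters are illustrative, not empirical estimates.
    """
    first_permanent = None
    for t in range(rounds):
        offload = external_cost < internal_cost  # locally rational each time
        if offload:
            internal_cost += atrophy             # no practice: skill decays
            if first_permanent is None:
                first_permanent = t
        else:
            # practice maintains the skill (floor keeps the toy model bounded)
            internal_cost = max(0.5, internal_cost - atrophy)
            first_permanent = None               # still exercising internally
        # the environment improves regardless of what the agent does
        external_cost = max(0.1, external_cost - improvement)
    return first_permanent

print(simulate())  # with these defaults, offloading becomes permanent mid-run
```

The point of the sketch is the asymmetry: no single offload decision is irrational, but because the environment improves monotonically and the agent's unexercised capacity decays, the crossover, once reached, is never reversed. That is the "not by choice, by default" shift in miniature.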

This is the paper’s most important implication, more consequential than the historical synthesis and more urgent than the framework. The cognitive risk of AI is not dramatic displacement. It is slow, invisible recalibration—the gradual narrowing of what we mean when we say we thought something through.


Five Layers and Their Internal Tension

The five-layer framework is the paper’s primary intellectual architecture. It is worth naming precisely, because the layers do real work and deserve more than a passing summary.

Layer 1 covers foundational capacities: attention regulation, memory formation, spatial and causal reasoning. These are biologically grounded, developmentally acquired, and the most vulnerable to technological displacement. They are also non-delegable in the deepest sense—not because technology can’t substitute for them in specific tasks, but because they underpin error detection, judgment, and autonomous action when systems fail.

Layer 2 is procedural: the routinized techniques of calculation, transcription, classification. These are the historical targets of automation, and the paper’s most reassuring argument concerns them. Delegation is appropriate here—but only when conceptual understanding is already in place. The risk is premature offloading, surrendering the procedure before the concept is secured.

Layer 3 is conceptual understanding: grasping underlying principles, causal mechanisms, systems, models. This is where the paper places its greatest confidence. Technologies historically expand conceptual competence when education systems adapt—the printing press forcing literacy, the calculator forcing algebraic thinking over arithmetic drill.

Layer 4 is integrative: cross-domain transfer, recognition of deep patterns across disciplines, the capacity to catch category errors that domain-specific AI cannot. Generative systems produce plausible outputs within domains. They do not integrate across them with anything like human judgment.

Layer 5 is epistemic meta-competence: evaluating knowledge claims, assessing sources, detecting error and bias, calibrating trust in AI outputs. The paper calls this the “governor” of adaptation—and in the AI age, a core civic competence.

The framework’s central thesis is that technological change systematically displaces activity in lower layers while increasing the importance of higher ones, and that education must respond by shifting emphasis accordingly. The prescription is correct as far as it goes. But there is a structural tension the paper acknowledges only partially.

Shifting emphasis upward assumes that higher-layer capacities can be developed without robust lower-layer foundations. This assumption is contested by cognitive load theory—which the paper itself cites approvingly. Kirschner and de Bruyckere (2017), listed in the footnotes, argue explicitly that learners cannot engage in higher-order reasoning before procedural fluency is established. You cannot exercise Layer 5 epistemic judgment about a domain you don’t understand at Layers 2 and 3. The layers are not independent. They are scaffolded.

The paper threads this carefully: lower-layer practice should be retained where it builds conceptual understanding, delegated where it doesn’t. “Solving equations fluently before using a calculator” is offered as an example of intelligent retention. This is pedagogically sensible. But it generates a curriculum design problem the paper doesn’t solve: how do you identify, in advance and across domains, which lower-layer procedures are necessary scaffolding and which are safely automatable? The answer is domain-specific, contested among experts, and changes as AI capabilities improve. The framework provides vocabulary for the question. It does not provide criteria for the answer.


The Admission That Deserves a Spotlight

The paper’s most important sentence appears in the AI section of its Annex, rendered without fanfare: “It is still very early to assess the impact of AI on human cognitive behavior and competencies. Research on this topic is only beginning, so what we can learn from the research literature on gains and losses remains preliminary.”

Read this carefully. The five-layer framework’s prescriptions for the AI age, the warning about epistemic outsourcing, the call to cultivate Layer 5 meta-competence as a civic necessity—these rest on a research base the paper itself describes as preliminary. The confidence of the prescriptions exceeds the confidence of the evidence.

This is not grounds for dismissal. The precautionary logic is sound: historical patterns show consistently that foundational capacities erode unless education deliberately compensates. The precautionary case for protecting Layer 1 attention and Layer 2 procedural foundations stands on historical grounds alone.

But “here is what the evidence shows about AI” overstates what twenty qualitative case studies of fire, agriculture, writing, and printing can establish about generative AI. The paper cites a 2025 arXiv preprint—Kosmyna et al., “Your Brain on ChatGPT”—suggesting that AI assistance during essay writing accumulates as cognitive debt rather than benefit. If that finding survives peer review, it is genuinely significant. As of this writing, it has not. The paper’s citation practices do not distinguish between peer-reviewed literature and preprints. This matters when strong claims about cognitive harm rest on recent, unpublished work.

The paper is intellectually honest in ways that most policy-facing literature is not. It explicitly repudiates the “23 minutes to recover attention after an interruption” statistic—a number so widely cited it has achieved folkloric status—noting it cannot be traced to published research. This act of self-correction is rarer than it should be. But the same epistemic hygiene that catches the 23-minute myth should be applied, with equal force, to the AI claims the paper is more invested in.


What Thamus Actually Knew

There is a paradox at the heart of this paper that neither confirms nor undermines its value. A document arguing for the cognitive costs of AI offloading was produced, by the authors’ own admission, with the assistance of AI tools—ChatGPT 5.2, Elicit, Grammarly. The authors claim they retained “cognitive effort” throughout. They may be right. But the paper provides no way to verify this, and the claim itself illustrates exactly the threshold problem the offloading/outsourcing distinction is meant to identify. Where was the line? They don’t say. They may not know.

I don’t raise this to discredit the paper. I raise it because it is the paper’s most honest moment, hiding in a methodological footnote. The authors are inside the phenomenon they are studying. So are we all.

The five-layer framework, whatever its evidential limitations, offers something rare: categories precise enough to generate testable hypotheses and productive enough to guide institutional deliberation. The distinction between what education should protect at Layer 1, retain strategically at Layer 2, deepen at Layer 3, broaden at Layer 4, and teach as explicit judgment at Layer 5 gives curriculum designers language for decisions they are already being forced to make without vocabulary adequate to make them well.

That is not a small contribution. It is not a settled theory. It is a working vocabulary for an unsettled problem.

Thamus rejected the gift. We accepted it. Now, 2,500 years later, we are being offered another gift—one that externalizes not memory but explanation, not storage but synthesis. We are trying to decide, with incomplete evidence and real urgency, whether we can accept it without becoming what we are afraid of becoming. The question is not whether to use the tool. The question is whether using it changes who does the using.

That question the paper asks clearly. It cannot answer it.

Neither can we. Not yet.


Tags: Van Damme Fadel cognitive framework 2026, epistemic offloading outsourcing distinction, five-layer curriculum design AI, cognitive cost AI tools education, Baldwin essay technology cognition


Source: Van Damme, J. & Fadel, C. (2026). Technologies Smarter, Humans “Dumber”? Center for Curriculum Redesign.

Nik Bear Brown, Poet and Songwriter