Teaching with AI: A Practical Guide to a New Era of Human Learning

The Architecture of Adaptation: How “Teaching with AI” Builds a Framework for Educational Survival

The Opening Machinery

The book opens with Marie Curie’s call to understand rather than fear—a choice that reveals the authors’ fundamental philosophy. They don’t pretend AI will be contained or controlled. Instead, they position understanding as the only viable response to technological inevitability.

This framing does something clever: it converts faculty anxiety into intellectual curiosity. The comparison to the internet’s transformation of knowledge scarcity into knowledge abundance establishes a pattern—we’ve survived fundamental disruption before. But here’s what they’re actually saying: the internet changed our relationship with knowledge. AI will change our relationship with thinking.

That’s not a minor upgrade. That’s a categorical shift.

Part I: The Technical Foundation They Actually Need

Chapter 1: AI Basics

They spend 16 pages explaining GPT, transformers, and neural networks. Many readers will skip this. The authors acknowledge this explicitly: “If you really don’t care what GPT stands for and why it matters, you can skip this bit.”

But here’s what Chapter 1 actually accomplishes: it builds the vocabulary for every subsequent argument. When they later discuss why AI hallucinates or why detection software fails, readers who understood Chapter 1 grasp why these aren’t bugs but features. The architecture of GPT—generative, pre-trained, transformers—explains everything from bias (baked into pre-training) to creativity (emergent from generation) to unreliability (inherent in probabilistic processing).

The chapter moves from expert systems (rule-based, limited) to machine learning (pattern-based, adaptive) to foundational models (massive scale, emergent capabilities). Each step increases both power and unpredictability. They don’t hide this trade-off. They make it central.

Key technical insight: Parameters aren’t just numbers—they’re possibility spaces. GPT-3’s 175 billion parameters versus GPT-4’s rumored 1.76 trillion (OpenAI has never confirmed the figure) isn’t just “bigger is better.” It’s the difference between a chess player who’s seen 1,000 games and one who’s seen a million. The patterns available multiply exponentially.

Chapter 2: A New Era of Work

Here’s where theory hits practice. They document how AI is already transforming white-collar work—doctors saving 2-3 hours daily, lawyers offloading “heavy lifting” to CoCounsel, economists gaining 10-20% productivity.

But watch the careful positioning: they present this as fait accompli. Not “AI might change work” but “your graduates will compete with AI-assisted workers.” This shifts the burden of proof. Faculty who resist AI aren’t defending academic values—they’re potentially handicapping students.

The most devastating statistic: Kahneman’s finding that underwriter decisions vary by 55% (versus the 10% executives predicted). If humans are this inconsistent, what’s the moral argument for preserving human judgment over AI reliability?

Design philosophy revealed: They chose to emphasize job change over job loss. This isn’t optimism—it’s strategic framing. Job loss triggers defensive reactions. Job change creates space for adaptation.

Chapter 3: AI Literacy

Problem formation, better questions, the liberal arts—this chapter argues that AI makes traditional liberal arts education more valuable, not less. When AI can execute any solution, the ability to frame the right question becomes the scarce resource.

They introduce the four-part prompt framework (task, format, voice, context) not as technical instruction but as communication training. Students who can clearly articulate what they need—from AI, from colleagues, from managers—have the essential workplace skill.
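The four-part framework can be sketched as a simple template function. Everything here—the function name, parameter order, and sample values—is illustrative; the book presents the four parts as elements of clear communication, not a rigid API:

```python
def build_prompt(task: str, fmt: str, voice: str, context: str) -> str:
    """Compose a prompt from the book's four parts: task, format, voice, context.

    The names and ordering are one possible rendering of the framework,
    not a specification from the book itself.
    """
    return (
        f"Task: {task}\n"
        f"Format: {fmt}\n"
        f"Voice: {voice}\n"
        f"Context: {context}"
    )

# Hypothetical example: a student requesting feedback on a thesis statement
prompt = build_prompt(
    task="Critique my thesis statement and suggest two sharper alternatives.",
    fmt="A short bulleted list.",
    voice="A supportive but demanding writing tutor.",
    context="First-year composition course; essay topic is urban food deserts.",
)
print(prompt)
```

The point of writing it this way is the book’s own: forcing each of the four parts to be filled in explicitly is itself a communication exercise.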

The hidden curriculum: Every prompt improvement technique they teach is also a thinking technique. “Break this down” isn’t just better AI prompting—it’s problem decomposition. “What else do you need?” isn’t just iteration—it’s diagnostic questioning.

Chapter 4: Reimagining Creativity

AlphaGo’s move 37—the move that shouldn’t work according to human Go masters—becomes their central case for AI creativity. The machine wasn’t constrained by human cultural knowledge of “bad moves.” This freedom from expertise is both AI’s creative advantage and its danger.

They acknowledge the paradox directly: hallucinations that make AI unreliable for facts make it valuable for ideation. The same mechanism produces both misinformation and innovation.

But here’s the sophisticated move: They don’t argue AI is more creative than humans. They argue AI plus humans exceeds either alone. “The question as you grade may be: In what ways has the student moved above and beyond what AI produced for them?”

Part II: The Institutional Response

Chapter 5: AI-Assisted Faculty

This chapter works as both demonstration and permission. By showing what AI can already do for research, course design, and student interaction, they’re giving faculty permission to experiment. The prompts are specific enough to copy-paste, removing the activation energy barrier.

Notice the strategic sequencing: they show AI assisting with tasks faculty already hate (grading, formatting citations, organizing student data) before suggesting AI for creative work. Start with pain relief, then move to enhancement.

Chapter 6: Cheating and Detection

The “Dragnet” framing—a student falsely accused because GPTZero and CopyLeaks disagreed—isn’t just a narrative device. It’s a warning shot. They’re showing faculty that detection software creates legal liability, not security.

They methodically dismantle detection reliability:

  • GPTZero: 18% false positives

  • Detection accuracy varies from 27% to 99% across tools

  • Non-native English writers flagged disproportionately

  • Students with $20/month can bypass most detection

The real argument: Detection is expensive, inequitable, unreliable, and ultimately futile. The $12-21 billion cheating industry will outspend and outpace academic detection forever. This isn’t defeatism—it’s resource allocation. Stop fighting an arms race you’ll lose.

Chapter 7: Policies

They present the Russell Group’s five principles not as model policy but as minimum viable response. Universities must: support AI literacy, equip faculty, adapt teaching, maintain integrity, share best practice.

But look at what they don’t mandate: specific rules about AI use. They push policy down to individual faculty with the warning that students need consistency. This creates pressure on departments to coordinate without requiring institutional bureaucracy to move first.

The embedded message: If your institution hasn’t acted, you still can. If your institution has acted, you still have flexibility. Either way, inaction isn’t an option.

Chapter 8: Grading and (Re-)Defining Quality

This chapter drops the bomb: “AI is the new C work.”

They don’t argue for new standards—they argue C work has lost all market value. If AI can do it, why would anyone hire a human to do it? The rubric that marks AI-level work as 50% = F isn’t punitive. It’s economic reality.

Here’s the sophisticated move: they redefine grade inflation as a failure to track market value. When C work was worth something (required human labor), giving Cs made sense. When C work is free and instantaneous (generated by AI), passing students with C-level skills is educational malpractice.

Part III: The Practical Revolution

Chapter 9: Feedback and Roleplaying

The Jill Watson case—students unable to distinguish an AI TA from human TAs, some mistaking humans for bots—establishes that AI can already handle routine student interaction. But they don’t suggest replacing human teachers. They suggest augmenting them with AI that handles the 100th repetition of “what’s due next week?”

The feedback templates (role, task, goal, relationship, process) give students scaffolding to create personal tutors. This isn’t cheating—it’s the democratization of the advantages wealthy students always had (parents who could help with homework, money for tutors, access to resources).
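The five-part scaffold can likewise be sketched as a template. As above, the function name and sample values are hypothetical illustrations, not the book’s wording:

```python
def feedback_prompt(role: str, task: str, goal: str,
                    relationship: str, process: str) -> str:
    """Assemble a tutoring prompt from the book's five scaffold parts.

    The labels mirror the template (role, task, goal, relationship, process);
    what goes in each slot is the student's own articulation.
    """
    parts = {
        "Role": role,
        "Task": task,
        "Goal": goal,
        "Relationship": relationship,
        "Process": process,
    }
    return "\n".join(f"{label}: {value}" for label, value in parts.items())

# Hypothetical example: a student building a personal statistics tutor
tutor = feedback_prompt(
    role="You are a patient statistics tutor.",
    task="Quiz me on confidence intervals, one question at a time.",
    goal="I want to be ready for Friday's midterm.",
    relationship="Treat me as a motivated beginner; don't give answers away.",
    process="Ask, wait for my answer, then explain what I missed.",
)
print(tutor)
```

Note how the “relationship” and “process” slots do the pedagogical work: they make the student specify how they want to be taught, which is itself metacognition.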

The equity argument they don’t make explicitly: First-generation students have always competed with students who have professor parents. AI levels this field.

Chapter 10: Designing Assignments for Human Effort

The “I care, I can, I matter” framework converts motivation psychology into assignment design. Every assignment component maps to intrinsic motivation:

  • Purpose → I care

  • Task clarity → I can

  • Criteria/rubric → I matter

The checklists aren’t just anti-cheating measures. They’re process transparency. When students see that good work requires reading slowly, taking notes without AI, then checking understanding with AI, then synthesizing—they learn what expertise actually involves.

Chapters 11-12: Writing and Assignments

These chapters shift from theory to implementation. Every assignment category includes both the pedagogical goal and the AI-resistant mechanism:

  • Personal/local content: AI can generate generic examples, but “describe how you won an argument at Thanksgiving” requires lived experience

  • Process artifacts: Requiring tracked changes, version history, conversation transcripts makes AI use visible and educational

  • Iteration requirements: “Ask AI for 5 versions, explain why each is insufficient, improve the best one” turns AI into a thinking partner, not a replacement

The sophisticated design pattern: Every assignment that allows AI also requires students to exceed AI output. This isn’t anti-AI—it’s pro-human-value-add.

The Bridge: From Panic to Partnership

The book’s structure itself teaches adaptation:

  1. Chapters 1-4 (Understanding): Master the machinery so fear converts to capability

  2. Chapters 5-8 (Institution): Acknowledge detection failure, raise standards, redesign grading

  3. Chapters 9-12 (Practice): Specific assignments that work with AI, not against it

Each section builds on the previous. You can’t design AI-resistant assignments (Part III) without understanding why detection fails (Part II), and you can’t understand that without knowing how AI actually works (Part I).

But here’s the hidden curriculum: by the time faculty implement the assignments in Chapters 11-12, they’ve internalized the larger philosophy. AI isn’t the enemy. Mediocrity is. AI makes mediocrity free—which means we must demand more than mediocrity.

The Literature They Build On (And Against)

Learning Science Foundations

They draw heavily on established learning research:

  • Vygotsky’s Zone of Proximal Development: AI as the “more capable peer”

  • Bandura’s self-efficacy: Mastery experiences, verbal persuasion, social modeling

  • Deci and Ryan’s self-determination: Autonomy, competence, relatedness

But they weaponize these theories against AI resistance. If students learn better with capable partners (Vygotsky), and AI is an infinitely patient capable partner, opposing AI assistance becomes pedagogically indefensible.

Technology Adoption Literature

The internet comparison isn’t just analogy—it’s historical pattern recognition:

  • Initial resistance → gradual adoption → complete integration → forgetting we ever resisted

  • Wikipedia was “cheating” → now it’s a starting point

  • Calculators would “destroy math” → now we teach different math

The implied argument: Faculty who ban AI today will look like faculty who banned Wikipedia in 2005. You can resist, but you’ll lose, and you’ll look foolish in retrospect.

Assessment and Academic Integrity Research

They cite McCabe’s finding that most students admit to some cheating, then layer in new data:

  • 89% of students using ChatGPT (though 72% think it should be banned—stunning cognitive dissonance)

  • 75% will continue using AI even if prohibited

  • Self-reported usage varies wildly (22%-89% across studies)

What they do with this data: They don’t moralize. They document that prohibition failed before implementation. This converts the policy question from “should we allow AI?” to “how do we channel inevitable AI use toward learning?”

Missing Conversations

The authors explicitly note what they didn’t cover:

  • Deep ethics discussions (relegated to brief mentions)

  • Detailed equity analysis (acknowledged as deserving its own book)

  • Long-term societal implications (mentioned but not explored)

This isn’t accidental. They chose practical urgency over comprehensive analysis. The book optimizes for “faculty can implement this Monday” not “faculty can write papers about this.”

The trade-off: Depth sacrificed for accessibility. Philosophy deferred to practice. They made the book faculty would actually read (short, actionable) rather than the book scholars might prefer (thorough, theoretical).

What the Book Actually Accomplishes

The Honesty Move

They don’t pretend to have answers. Mollick’s work appears throughout—they’re building on others’ experiments. Student interviews inform their claims but aren’t systematic research. They say this explicitly.

Why this works: Faculty trust honesty more than authority. Admitting uncertainty creates space for reader experimentation. “We tried this, it worked, maybe it’ll work for you” is more persuasive than “Here’s the solution.”

The Inevitability Argument

Every chapter reinforces: AI is here, improving, and inevitable. Not through repetition but through different evidence:

  • Chapter 2: Already transforming jobs

  • Chapter 3: Already creating new job categories ($300k prompt engineers)

  • Chapter 6: Already surpassing detection

  • Chapter 8: Already producing C+ work

This compound argument feels overwhelming—which is the point. Resistance becomes exhausting. Adaptation becomes relief.

The Permission Structure

Notice how they sequence permission:

  1. AI can help YOU (Chapter 5: faculty assistance)

  2. AI can help THEM (Chapter 9: student feedback)

  3. AI SHOULD help them (Chapters 11-12: required AI use)

Each step normalizes the next. Once faculty use AI for course design, prohibiting student use feels hypocritical.

The Design Philosophy: Optimizing for Faculty Action

Every choice reveals what they prioritized:

Short over comprehensive: 240 pages instead of 600

  • Optimizes for: Busy faculty will actually finish it

  • Sacrifices: Depth, nuance, complete treatment of ethics

Prompts over theory: Dozens of copy-pasteable prompts throughout

  • Optimizes for: Immediate implementation Monday morning

  • Sacrifices: Deep understanding of why prompts work

Examples over abstraction: Specific assignments across disciplines

  • Optimizes for: “I can adapt this for my class”

  • Sacrifices: Universal principles that transfer perfectly

Practical over political: Minimal stance on AI ethics/regulation

  • Optimizes for: Faculty from different ideological positions can use it

  • Sacrifices: Taking clear positions on contested questions

The Unstated Stakes

The epilogue acknowledges massive unresolved questions: What happens to PhD preparation if AI TAs work better? What happens to academic publishing if AI can write acceptable papers? What happens to tenure requirements when AI can produce research faster?

They don’t answer these questions. They say: these conversations are coming, and faculty who understand AI will lead them. Faculty who don’t won’t just lose the argument—they’ll be absent from the conversation.

The final message: Adaptation isn’t optional. The choice is between participating in how AI reshapes education versus being reshaped by it.

The Meta-Lesson

The book itself demonstrates its thesis. Bowen and Watson clearly used AI extensively in writing this. The breadth of examples, the variety of prompts, the speed of publication (2024 publication for technology that went public November 2022)—this feels AI-assisted.

And that’s the point.

They practice what they preach: AI as thinking partner, not replacement. The human contribution—the framework, the sequencing, the restraint, the strategic choices about what to include—is what makes it valuable. The AI probably helped generate examples, find sources, draft sections.

The value-add is which examples to keep, which arguments to emphasize, which trade-offs to make.

That’s the future they’re describing. That’s the skill they’re modeling.

Not “can you write without AI?” but “can you make AI-assisted work better than AI-alone work?”

The students who master this will replace the students who don’t. The faculty who model this will prepare students for the world that’s coming.

The faculty who don’t will become increasingly irrelevant—not because AI replaced them, but because they refused to evolve while their students and colleagues did.

The book succeeds because it optimizes for faculty survival, not faculty comfort.

Nik Bear Brown, Poet and Songwriter