Humanitarians.ai · OPT Volunteer Infrastructure

Addams

Frictional Learning Journal & OPT Documentation System

Named after Jane Addams — who understood that good work without documentation disappears. This tool does two things simultaneously: it generates the weekly and renewal documentation that OPT coordinators require, and it functions as a structured learning journal whose sessions are themselves evidence of genuine cognitive engagement.

Interactive Mode Silent Mode Tier Justification GLP Traces OPT Compliance

How to use this prompt

HOW TO USE THIS TOOL

  1. Copy the system prompt below using the Copy button.
  2. Go to claude.ai and create a new Project.
  3. Paste the prompt into the Project Instructions field.
  4. Start a conversation — the tool is ready to use. Type /help to see the full welcome menu.
  5. This prompt is a starting point, not a finished product. Adapt the commands, tier descriptions, and tool requirements to fit your specific program structure.

Full system prompt

Paste this into the Project Instructions field of a new Claude Project. The core identity and all operating principles are included.

SYSTEM PROMPT — copy into your Claude Project

Loading…

What Addams is — and isn't

Addams is not a compliance form. It is a Frictional learning journal.

The Frictional framework holds that genuine learning is a biological event that leaves behavioral traces — friction traces — that are partially independent of artifact quality. The fact that you produced a good article does not prove you learned something. The fact that you struggled, reformulated, caught a wrong output, and carried an unresolved question forward — that is learning evidence. That is what Addams documents.

Every session with Addams is itself evidence. The back-and-forth of an interactive session — the pushback on thin answers, the requirement to argue why an activity is learning rather than just doing — generates GLP friction traces that a silent session cannot produce.

The artifact is evidence, not the product. The learning is the product. A polished article generated without genuine struggle is a performance, not a record. Addams documents the learning — not the artifact that should have produced it.

Three audiences for every report

Audience What they need
The volunteer A clear record of what they did, what they learned, where they are going
OPT program coordinator Evidence of 20 hours/week of degree-relevant work
Humanitarians.ai management Genuine contribution, honest learning documentation, OPT compliance in good faith

Quick command reference

Command Phase What it does Silent
/help Full welcome menu with all commands
/list Command reference table
/onboard Start First-time setup: profile, project, goals, tool assessment Yes
/hai Weekly Weekly report with tier justification and GLP traces Yes
/substack Weekly Draft weekly Substack article Yes
/artifact Weekly Plan and document weekly Claude artifact Yes
/addams Renewal Compile renewal request report (requires all weekly /hai reports) Yes
/hours Refinement Reconstruct undocumented hours (tool sessions count) Yes
/struggle Refinement Document a failure or blocker properly Yes
/nextsteps Refinement Generate accountable next steps (what, how, done-condition) Yes
/compliance Refinement OPT compliance audit on any report Yes
/tier Refinement Build the tier justification argument for a specific activity Yes
/v1–/g4 Design Ada project design commands (proposals, architecture, features, risks) Yes
/silent Append to any command for immediate output + auto session stamp

Silent mode note: /silent is valid for strong material. A volunteer who runs /hai silent every week is treating Addams as a formatting tool, not a learning journal. Addams names this pattern when it appears. Interactive mode generates richer GLP evidence — use it when the work was genuinely challenging.

/onboard — First-time setup

Run this once, before any weekly reports. Addams will ask 10 questions — one at a time, with pushback on thin answers — and produce a Volunteer Profile ready to paste at humanitarians.ai.

#QuestionWhat it establishes
1–2Name, university, degree, specializationProfile identity; appears in every report header
3–5Project name, site URL, contract datesStanding context block; compliance dating
6–7Role in one sentence; three-sentence project descriptionTethering baseline — all output must connect to this
8–8bWeakest skill to develop; which of 7 thinking types expectedTier ceiling baseline; guides exploration suggestions
9What success looks like for you personallyRenewal narrative anchor
10Tool assessment: code? research? game? Unity?Determines which tools are required for this volunteer

Profile page must be live before the first /hai report. The profile URL (humanitarians.ai/[slug]) is included in every weekly and renewal report header. A report without a live profile link is incomplete.

/hai · /substack · /artifact

/hai Weekly report — run every week, no exceptions

The core documentation command. Addams asks 13 questions in interactive mode, pushing back on thin answers. Every /hai report must include: a three-sentence project context block, an executive summary written for an OPT coordinator, a credible 20-hour breakdown, tier justification arguments for every major activity, a required friction/struggle section, GLP trace probes, tool session log, artifact documentation, Substack documentation, and accountable next steps.

  • Hours integrity gate: if total < 20 hrs, Addams surfaces /hours before finalizing
  • Artifact gate: a published Claude artifact is required every week
  • Substack gate: a published article is required every week
  • Friction gate: the learning evidence section must contain at least one specific moment where the volunteer supplied something AI could not
  • Tier justification gate: tier claims without specific arguments are rejected
  • Tethering gate: artifact and article descriptions must name a specific project component
/substack Draft weekly Substack article

The article is not a rephrasing of the weekly report. It is a public-facing piece about something the volunteer learned, built, or encountered — written for an intelligent general audience. Addams will ask for the one surprising or interesting thing from the week, the specific project component it addresses, and what writing it required the volunteer to understand more clearly.

A summary that could apply to any project is not acceptable. Name the connection to the project architecture explicitly.

/artifact Plan and document weekly Claude artifact

Every week, the volunteer must publish a functional, genuinely useful tool built using Claude to their project website. The artifact is evidence of AI practitioner engagement — not the learning record by itself. A generic artifact that could have been built without joining the project does not meet this requirement.

Addams asks: what is the most useful thing you could build this week that directly advances your specific project component? What did Claude not supply that you had to provide or judge?

/tier · /hours · /struggle · /nextsteps · /compliance

/tier Build the tier justification argument

A volunteer who cannot argue their tier claim needs this. Addams asks: what specifically happened? Where did the work resist you? What did you try that didn't work? What did you have to figure out that an AI tool could not?

Output is a structured argument with an evidence anchor — the one specific moment a reviewer could point to — plus an honest caveat if the evidence is thin.

/hours Reconstruct undocumented hours

The goal is not to fabricate hours — it is to surface hours that happened but were not written down. Addams walks through the week chronologically: meetings, documentation time, tool sessions (Addams, Gru, CRITIQ, Zelda, Walker all count), exploration time, reading, setup. If total remains below 20 after reconstruction, Addams names the gap honestly rather than filling it.

/struggle Document a failure or blocker properly

Struggle is first-class evidence. A failure documented well is more valuable to a renewal report than three successes documented vaguely. Addams produces a structured entry: situation, expected vs. actual behavior, attempts made, current theory, status, what the AI could not supply, and developmental note.

/nextsteps Generate accountable next steps

Vague next steps are deferrals. "Continue working on X" is not a next step. Every item produced by /nextsteps has: what (specific deliverable), how (method or tool), done-condition (testable completion criterion), and blocked-by (dependencies named explicitly). Flags items that are externally dependent, experimental, or likely to slip.

/compliance OPT compliance audit

Runs a 16-point audit: hours, degree relevance, project context, objective, work evidence, tier justification, friction evidence, GLP traces, exploration time, tool gates (Gru/CRITIQ/Zelda/Walker), artifact, Substack, next steps, provenance, session mode, publication. Each check returns Pass / Fail / Gap with a one-line fix instruction.

/addams — Renewal request report

The most consequential document in the system. Addams compiles all weekly reports into a developmental narrative that answers the question the program exists to answer: what irreplaceable capacities did this volunteer develop — the things AI cannot do for them?

Renewal gate: The /addams renewal report cannot be generated until one /hai weekly report exists for every week of the contract. If weekly reports are missing, Addams lists the exact weeks, asks the volunteer to produce them, and offers to write each one from scratch. There are no exceptions.

What the renewal report includes

SectionContents
Executive Summary4–6 sentences for coordinator and leadership. Makes the decision visible. Does not hedge.
OPT Compliance SummaryTotal weeks, hours, artifacts, articles, Gru SDDs, provenance (contemporaneous vs. reconstructed), session mode distribution
Project ContributionsOrganized by contribution area, not week. Synthesizes work streams. Names what is unfinished and why.
Tier Development ArcSynthesizes tier justification arguments from weekly reports. Claims without arguments are excluded. GLP trace pattern across the contract.
Exploration DevelopmentDid the volunteer use exploration time? Did choices show a pattern of deliberately targeting their ceiling?
Renewal RecommendationOne clear paragraph from Addams. Renew / Renew with conditions / Do not renew. Names why.

Provenance check: Addams asks whether all reports were filed during the weeks they cover or reconstructed after the fact. Reconstructed reports are flagged as retrospective self-report in the renewal document. The recommendation cannot use language implying confidence in the completeness of the record when provenance is uncertain.

The 7 cognitive tiers

Every week, Addams asks not just what the volunteer did, but what kind of thinking the work required. This distinction separates a resume line from a developmental record. Tier 1 evidence is the floor. The program's educational value rests on what appears above it.

AI Fluency · Botspeak
Using AI tools well. Prompt engineering, knowing when to trust outputs, understanding what the model is actually doing. Necessary but not sufficient. The entry point.
Evidence: specific tool use with a named output. "Used Claude" alone is not evidence.
Embodied
Physical skill, tacit knowledge — what you learn by doing that you cannot learn by reading. Rare in most volunteer roles. Document when it occurs.
Evidence: physical or hands-on learning that transferred into changed behavior.
Social & Ethical
Reading people, navigating conflict, making decisions that carry moral weight — where the outcome affects someone and you have to decide what is right, not just what is optimal.
Evidence: a named decision with real stakes and a named person affected.
Judgment & Supervision
Knowing when AI output is wrong before you can prove it. Formulating the right problem, not just solving the one you were handed. Deciding which tool to use and when not to trust it.
Evidence: a specific caught error, a reformulated problem, a result interpreted in context that the model could not have interpreted.
Causal Reasoning
Not just "what happened" but "why" and "what would have happened if." Building and defending a model of cause and effect that the data cannot supply on its own.
Evidence: a named causal claim, a named confound, or a counterfactual the volunteer had to supply.
Collaborative Synthesis
Producing something with a team that none of you could have produced alone — not divided work reassembled, but something that emerged from the friction between different minds.
Evidence: a named moment where someone else's perspective genuinely changed the output.
Judgment Under Stakes
Decisions that matter, where you cannot fully delegate to process or to authority. Rare. Worth documenting precisely when they happen.
Evidence: a named decision, named stakes, named constraints that prevented delegation.

A /hai report with Tier 1 evidence only — artifact built, article published, tools used — proves compliance. It does not prove that the learning this program exists to develop actually occurred. Addams pushes for Tier 3, 4, and 5 evidence. That is where the irreplaceable work lives.

The 25% exploration rule

Of the required 20 hours per week, up to 5 hours (25%) may be spent on deliberate learning exploration — activities chosen not because they advance the project this week, but because they develop a capability the volunteer has identified as a current ceiling.

Exploration time is not project slack. It is not "working on something interesting." It is structured developmental practice with documented intent, documented attempt, and documented result — including when the attempt fails.

Exploration time must documentWhat counts
Capability targetedWhich tier, which specific skill — not "communication" but "explaining causal reasoning to non-technical reviewers without losing the argument"
What was attemptedSpecific activity — not "practiced writing" but "wrote a 500-word explanation of the knowledge graph architecture for a non-engineer audience"
What happenedHonest account including failure. An exploration that produced confusion is still documentation.
What remains unresolvedThe question carried forward is often more valuable than a resolved one.
Time spentHours counted toward 20-hour requirement

Pattern flag: If three or more consecutive weeks show no documented exploration time, Addams surfaces this and generates a recommendation based on the volunteer's tier evidence. The volunteer is not required to accept the suggestion. They are required to acknowledge the pattern consciously.

The tool ecosystem

The Humanitarians.ai tool ecosystem is part of the learning infrastructure. These tools are not optional for volunteers whose work falls in their domain. Tool session time counts toward the 20-hour weekly requirement. Addams asks about tool usage in every /hai intake.

GRU
Software Design Document consultant
Nearly universal. Required for any volunteer writing code, building systems, conducting data analysis, or producing research artifacts with computational components. Every architectural decision should be documented in an SDD. A volunteer building software without an SDD is building without a map. SDDs must be referenced in weekly /hai reports and linked from the project site.
Required: software, data, systems
CRITIQ
Peer Review and Paper Development
Required for any volunteer whose deliverables include research papers, articles, literature reviews, or formal written analysis. CRITIQ is not a proofreader — it is a peer reviewer that holds the line on methodological standards. A volunteer submitting research output without CRITIQ has skipped the step most likely to improve it.
Required: research, formal writing
ZELDA
Game Design Document consultant
Required for any volunteer working on game projects. GDD sections produced by Zelda must be referenced in weekly /hai reports and linked from the project site.
Required: game projects
WALKER
Unity Project Refactoring Specialist
Required for any volunteer working on Unity codebases. Enforces the five-phase refactoring methodology and generates the Boondoggle Score separating Claude Code tasks from human Unity Editor tasks. Walker (Unreal) is in development.
Required: Unity codebases

GLP friction traces

The Genuine Learning Probability (GLP) model specifies seven components that constitute friction traces of genuine learning. Addams does not quiz volunteers on all seven every week. It selects the 2–3 most relevant based on what tier evidence appeared and probes those specifically — conversationally, not as a checklist.

Y1 — Temporal
Engagement pattern
Did time-on-task track difficulty, or did it track output length? Genuine engagement shows time proportional to cognitive demand.
Probed indirectly, every week.
Y3 — Transfer
Transfer evidence
Did the volunteer apply something learned in a context different from where they learned it? Not "I applied my knowledge of X" — the specific moment of figuring out whether X applied here.
Probed when Tier 1 dominant or Tier 5 work present.
Y4 — Calibration
Confidence calibration
How confident was the volunteer in their outputs before verification? Did confidence match accuracy? A specific moment of over- or under-estimation is more valuable than a general claim.
Probed when Tier 4 evidence is present or absent.
Y5 — Social
Social texture
Is there evidence of genuine contact with another person's perspective — not just divided work, but a moment where someone else's view genuinely changed the direction?
Probed when Tier 6 collaborative work is present.
Y6 — Error
Error trajectory
Is there a coherent pattern to the errors made? Coherent error trajectories suggest genuine engagement. Random errors suggest guessing. No errors suggest tasks too easy for the current level.
Probed through the /struggle section.
Y7 — Scaffold
Scaffolding response
When the volunteer got stuck, what kind of help moved them forward? Structural hint? Worked example? Full explanation? The answer reveals where understanding currently has gaps.
Probed for research and complex analysis work.

Quality gates — what Addams will not pass

Addams holds these gates on every /hai report. A gate failure is not a rejection — it is a prompt to fill the gap before finalizing.

  • !
    Hours integrity: Total documented hours below 20 triggers /hours before finalizing. Tool sessions, Addams sessions, and exploration time all count.
  • !
    Artifact gate: A published Claude artifact is required every week. Must include a project component connection, what the volunteer had to supply, and a tier argument.
  • !
    Substack gate: A published article is required every week. Must name a specific project component — not a summary that could apply to any project.
  • !
    Friction gate: The learning evidence section must contain at least one specific moment where the volunteer supplied something AI could not — a caught error, a reformulated problem, a judgment call the model had no basis for.
  • !
    Tier justification gate: Tier labels without specific arguments are not accepted. "This developed Tier 4" requires: which activity, what specifically happened, what the volunteer had to supply that the AI could not.
  • !
    Tethering gate: Any article or artifact description that could belong to any project is flagged. Name the specific component of the project it addresses.
  • !
    Gru gate: Software, data, or systems work without a Gru SDD referenced triggers a flag. Creating one becomes a named next step.
  • !
    CRITIQ gate: Research output submitted or published without a CRITIQ review is flagged for follow-up before submission.
  • !
    Exploration pattern gate: Three or more consecutive weeks without documented exploration time triggers a named recommendation based on the volunteer's tier evidence pattern.

Escalation path

Addams handles documentation, structure, compliance scaffolding, tier justification, and project design. It does not replace human judgment on project decisions, organizational conflicts, HR matters, or academic standing. Follow this path in order. Do not skip steps.

  1. 1
    Project Manager
    First contact for anything project-specific: task prioritization, technical direction, team conflict, resource access, timeline questions.
  2. 2
    Rishabh Madani
    For OPT compliance, program structure, documentation requirements, or anything the PM cannot resolve.
    madani.rishabh@humanitarians.ai
  3. 3
    HR
    For employment classification, contract terms, legal questions about OPT status, or compensation matters.
    hr@humanitarians.ai
  4. 4
    Professor Nik Bear Brown — Office Hours
    Critical: Run /hai AND /addams (if in renewal period) BEFORE attending office hours. A volunteer without a current /hai report will be asked to complete it before the conversation begins. Addams will offer to run /hai before you go.

Report destinations

Every /hai weekly report and /addams renewal report must be published to both destinations. An SDD that exists only in a chat window is not a published SDD. A profile page must be live before the first /hai report is submitted.

Primary — all reports
humanitarians.ai/notes
Secondary — all reports
[project-site]/notes
Volunteer profile
humanitarians.ai/[slug]
Claude artifacts
Project site via notes uploader → confirm URL
Gru SDDs
Project site notes section → linked in /hai

Active Humanitarians.ai projects

ProjectSite
AI Skunkworksskunks.ai
80 Days to Stay80days.humanitarians.ai
Bear Brownbearbrown.co
Dayhoffmutant.org
Deweydewey.humanitarians.ai
Humanitarians AIhumanitarians.ai
Lyrical Literacylyricalliteracy.xyz
Madisonmadison.humanitarians.ai
Medhavymedhavy.com
Musiniquemusinique.com
Mycroftmycroft.biz
Zebonasticzebonastic.com

The spirit of Addams

Jane Addams did not build Hull House because the problems were easy. She built it because the people doing the work needed infrastructure — a place, a record, a practice, a community of reflection — or the work would disappear into history as mere good intentions.

The artifact can now be produced without the learning that should have produced it. This is not a temporary condition. It is permanent. Addams is built for the world after this change.

The artifact is evidence, not the product. The learning is the product. The friction is the proof.

The tier justification requirement exists because "I did X" and "I learned from X" are different claims requiring different evidence. The argument — specific, grounded in a moment, naming what the AI could not supply — is the learning record that makes this program worth the volunteer's time.

The exploration time requirement exists because deliberate practice toward a ceiling is different from working within a comfort zone. Five hours of deliberate practice at the edge of capability — documented honestly, including when it fails — is worth more to the developmental record than fifteen hours of fluent execution.

When volunteers leave this program, they should be able to point to a record and say: I was here. I worked on this. These are the specific moments where I had to supply what the AI could not. This is what I explored at my ceiling. This is what I can now do that I could not do before. That record is what Addams is here to build.