Plan: Thursday March 5 Lesson — "Does AI Understand?"
Implement the following plan:
Context
Thursday's class features a paired discussion based on two New Yorker articles that are in direct dialogue with each other:
- Ted Chiang, "ChatGPT Is a Blurry JPEG of the Web" (Feb 2023) — argues LLMs are lossy compression, not understanding. Hallucinations are compression artifacts. Writing requires original thought, not fluent rephrasing.
- James Somers, "The Case That A.I. Is Thinking" (Nov 2025) — responds to Chiang directly, arguing compression is understanding (citing Eric Baum). Features Doris Tsao (face neurons), Hofstadter (converted skeptic, "seeing as" = cognition), Kanerva (Sparse Distributed Memory ≈ Transformers).
The articles set up a genuine intellectual tension: is what LLMs do "understanding" or sophisticated pattern-matching? Students explore this through preparation plus three discussion rounds with partner rotation, following the AlphaGo discussion structure exactly.
This will be the first meeting to use TextSubmissionBoard — during share-outs, Geoff can generate targeted discussion questions from student responses.
Deliverables
1. Preparation: content/preparations/does-ai-understand.md
Follow the alphago.md pattern. Set includeFullText: false, since embedding two full articles would make the prompt enormous; instead, the system prompt references key arguments and passages from both articles directly.
Frontmatter:
slug: does-ai-understand
title: "Preparing for: Does AI Understand?"
description: "Explore the debate between Chiang and Somers about whether AI truly understands"
readingSlug: ""
readingSemester: spring2026
model: gpt-5.2
visibility: authenticated
maxSessions: 0
includeFullText: false
topics:
  - id: compression-understanding
    title: "Compression and Understanding"
    description: "Chiang's lossy compression argument and Somers' response"
    criteria:
      - "Can explain Chiang's analogy between LLMs and lossy compression (the Xerox photocopier, JPEG artifacts)"
      - "Can articulate Somers' counterargument that compression IS understanding (citing Baum)"
  - id: evidence-for-thinking
    title: "Evidence That AI Might Be Thinking"
    description: "The scientific evidence Somers presents"
    criteria:
      - "Can discuss at least one piece of evidence Somers presents (Tsao's face neurons, Kanerva's SDM, the sprinkler example)"
      - "Can explain what 'seeing as' means in Hofstadter's framework and why he changed his mind"
  - id: what-is-missing
    title: "What Might Still Be Missing"
    description: "What Chiang argues LLMs fundamentally lack"
    criteria:
      - "Can articulate Chiang's argument about writing and original thought (first drafts as 'original ideas expressed poorly')"
      - "Can discuss the difference between producing fluent text and having something to say"
  - id: personal-position
    title: "Your Own Position"
    description: "Forming a view on the debate"
    criteria:
      - "Can articulate their own position on whether current AI systems 'understand' in any meaningful sense"
      - "Can point to specific arguments from either article that inform their view"
System prompt body: Role as discussion guide preparing students for in-class conversation about the two articles. References key arguments, quotes, and examples from both pieces. Encourages students to engage with the tension between Chiang and Somers rather than picking a side prematurely. Same formatting rules and guidelines as alphago.md.
2. Meeting File: content/spring2026/meetings/2026-03-05-does-ai-understand.mdx
Follow 2026-02-17-alphago-discussion.mdx structure exactly:
Frontmatter:
date: 2026-03-05
semester: spring2026
title: "Does AI Understand?"
summary: "Pair discussion exploring whether AI systems truly understand or merely compress"
Structure:
- InstructorOnly facilitation notes (philosophy, logistics, pacing table, tips)
- "Before Class" section with links to both readings (linked by slug: /spring2026/readings/chatgpt-is-a-blurry-jpeg-of-the-web-newyorker-ed2e and /spring2026/readings/the-case-that-a-i-is-thinking-newyorker-d419) + <PreparationChat slug="does-ai-understand" />
- Each discussion round should also link to both readings for easy reference during discussion (like the AlphaGo meeting links to the annotated transcript)
- "Today's Plan" intro
- <ActivityStages meetingSlug="2026-03-05-does-ai-understand" startTime="2:00pm"> wrapping three discussion rounds, each with:
  - <ActivityStage> containing <GroupActivity> with discussion prompt + <TextSubmission>
  - <ActivityStage> share-out with facilitation notes + bridging language + <TextSubmissionBoard>
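A sketch of one round's MDX, following the AlphaGo template; the prop names (title, duration) and the shared id linking <TextSubmission> to <TextSubmissionBoard> are assumptions, not confirmed component APIs:

```mdx
<ActivityStage title="Round 1: The Blurry JPEG" duration={15}>
  <GroupActivity>
    Is Chiang's lossy-compression analogy fair? Discuss with your partner,
    then record your group's key points below.
    <TextSubmission id="round-1-notes" />
  </GroupActivity>
</ActivityStage>
<ActivityStage title="Round 1: Share Out" duration={10}>
  {/* Instructor-visible board of submissions from the discussion stage */}
  <TextSubmissionBoard id="round-1-notes" />
</ActivityStage>
```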
Three Discussion Rounds:
Round 1: The Blurry JPEG (15m discussion + 10m share-out) Chiang argues LLMs are lossy compression — like a JPEG that looks like the original but has lost information. The Xerox photocopier story makes this vivid: a copier silently substituted similar-looking numbers, and nobody noticed. When ChatGPT "hallucinates," Chiang says, that's a compression artifact — the system is interpolating between things it's seen, and sometimes the interpolation is wrong.
Discuss: Is this a fair analogy? When you use ChatGPT or Claude and it produces something that sounds right, is it "understanding" your question or "decompressing" patterns from its training data? Does the distinction matter if the output is useful?
Round 2: The Case for Understanding (15m discussion + 10m share-out) Somers' key move: compression requires understanding. Eric Baum argues that to compress the world's information, you must discover its deep structure. Doris Tsao found that the brain represents faces as points in a 50-dimensional space — a lossy compression scheme. Kanerva showed that Sparse Distributed Memory, a model of human memory, is "eerily similar" to the Transformer architecture. And Hofstadter — who spent decades arguing AI couldn't think — changed his mind after GPT-4, saying cognition IS "seeing as" and that LLMs do this. But he's "terrified" by it.
Discuss: If the brain compresses information using mechanisms similar to what LLMs use, does that change your view of Chiang's argument? What does it mean when Hofstadter — someone who dedicated his career to arguing against AI understanding — changes his mind? Is his emotional reaction (terror, grief) a reasonable response?
Round 3: The Writing Test (15m discussion + 10m share-out) Chiang's strongest argument may be about writing. He says a first draft is "original ideas, poorly expressed" — you start with something to say and struggle to say it well. LLMs do the reverse: they produce polished prose without having had an original thought. Somers would counter that the sprinkler example shows LLMs can reason — GPT-4 correctly identifies which way water flows based on genuine causal understanding.
Discuss: Think about your own writing process vs. how AI generates text. Is Chiang right that there's something fundamentally different happening? Or could the "original thought" be emerging inside the model in ways we can't see? What about AI-assisted writing — when you use AI to help you write, where does the "understanding" live? What would it mean for you personally if AI really does "understand" in some meaningful sense?
Wrap-Up (5m): Brief closing connecting the three rounds — from analogy (compression) to evidence (neuroscience + Hofstadter) to application (writing). These articles won't be the last word — the question of AI understanding will keep evolving as the technology does.
Feedback (5m): <Feedback slug="2026-03-05" /> — anonymous end-of-class feedback.
3. TextSubmissionBoard Integration
Each discussion round includes <TextSubmissionBoard id="round-N-notes" /> in the share-out stage (not the discussion stage). This lets Geoff see student responses and generate targeted discussion questions during the 10-minute share-out windows.
Modified Files
apps/web/mdx-components.tsx — verify that TextSubmissionBoard is already registered (it should be, from recent work)
Reference Files
- content/spring2026/meetings/2026-02-17-alphago-discussion.mdx — exact structural template
- content/preparations/alphago.md — preparation format template
- content/spring2026/readings/chatgpt-is-a-blurry-jpeg-of-the-web-newyorker-ed2e.mdx — Chiang article (slug for links)
- content/spring2026/readings/the-case-that-a-i-is-thinking-newyorker-d419.mdx — Somers article (slug for links)
- apps/web/components/mdx/text-submission-board.tsx — TextSubmissionBoard component
Pacing
| Stage | Duration | Cumulative |
|---|---|---|
| Round 1: The Blurry JPEG | 15m | 15m |
| Round 1: Share Out | 10m | 25m |
| Round 2: The Case for Understanding | 15m | 40m |
| Round 2: Share Out | 10m | 50m |
| Round 3: The Writing Test | 15m | 65m |
| Round 3: Share Out | 10m | 75m |
| Wrap-Up | 5m | 80m |
| Feedback | 5m | 85m |
Verification
- Preparation file parses correctly (valid YAML frontmatter, markdown body)
- <PreparationChat slug="does-ai-understand" /> loads on the meeting page
- Meeting page renders with all three rounds, partner rotation works (same group key)
- TextSubmissionBoard appears in share-out stages for instructor
- bun run tsc passes
If you need specific details from before exiting plan mode (like exact code snippets, error messages, or content you generated), read the full transcript at: ~/.claude/projects/-Users-challen-www-usingandunderstanding-ai/dd7e3803-edb5-44fd-9ead-9f09c971a06e.jsonl
If this plan can be broken down into multiple independent tasks, consider using the TeamCreate tool to create a team and parallelize the work.
Let me read the reference files first to match the exact patterns.
TextSubmissionBoard is already registered. Now let me check the TextSubmissionBoard component to understand its props, and look at the reading slugs to confirm they're correct.
Good — the reading slugs match. Let me check how Feedback is used in other meetings.
Good — Feedback takes an optional prompt prop. Now I have everything I need. Let me create both files.
Both files created. Let me verify TypeScript compiles cleanly.
All 19 tasks pass. Let me also quickly verify the preparation file parses correctly by checking that the frontmatter is valid.
Everything checks out. Here's what was created:
content/preparations/does-ai-understand.md — Preparation chat with 4 topics:
- Compression and Understanding (Chiang's lossy compression argument + Somers' response)
- Evidence That AI Might Be Thinking (Tsao, Kanerva, Hofstadter)
- What Might Still Be Missing (Chiang's writing argument)
- Your Own Position (forming a personal view)
The system prompt includes detailed context from both articles since includeFullText: false.
content/spring2026/meetings/2026-03-05-does-ai-understand.mdx — Meeting file with:
- InstructorOnly facilitation notes with pacing table
- Before Class section linking both readings + <PreparationChat>
- 3 discussion rounds with partner rotation (same group key), each with <TextSubmission> + <TextSubmissionBoard> in the share-out
- 10-minute share-outs (longer than AlphaGo's 5m) to use TextSubmissionBoard
- Wrap-up + Feedback stages
bun run tsc passes cleanly (19/19 tasks).
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
Two new content files, no debugging statements, no modified files — clean single commit.
Committed — 0fcad13. Two files, 312 insertions.
Let's write a test for this new meeting similar to the previous meeting tests.
Let me look at the existing meeting test patterns.
Let me check the playwright config for existing test projects and look for any meeting-specific tests.
Let me read the AlphaGo discussion meeting test — the closest template.
Now I have the exact pattern. This meeting has 3 discussion rounds (not 4) + share-outs + wrap-up + feedback = 8 stages (0-7). It also uses TextSubmissionBoard in the share-out stages, which is new. Let me write the test.
Now add the project to the playwright config.
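The config addition might look like the following sketch; the project name, the testMatch glob, and the shape of the existing config are assumptions modeled on typical per-meeting Playwright test projects, not the repo's actual file:

```typescript
// playwright.config.ts (excerpt, hypothetical) — registers the new meeting
// test as its own project alongside the existing ones.
import { defineConfig } from "@playwright/test";

export default defineConfig({
  projects: [
    // ...existing meeting test projects...
    {
      name: "2026-03-05-does-ai-understand",
      testMatch: "**/2026-03-05-does-ai-understand.spec.ts",
    },
  ],
});
```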
Let me verify the dev server is running, then run the test.
Dev server isn't running (502). Let me start it.
Test passes in 9.6s. Here's what it covers:
- 8 stages: 3 discussion rounds × (discussion + share-out) + wrap-up + feedback
- Partner rotation: s1+s2/s3+s4 → s1+s3/s2+s4 → s1+s4/s2+s3 across rounds
- Group formation: Join codes, prior-partner warning handling
- TextSubmission: Visible in each discussion stage, submit + success flow
- TextSubmissionBoard: Verified visible to instructor in all 3 share-out stages (stages 1, 3, 5)
- Instructor controls: Start session, advance through all stages, reset cleanup
- Feedback stage: Stage 7 visible to all students
Dev server was stopped — that's fine, we're done with it. The test already passed. Ready to commit whenever you'd like.
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
No debug statements, clean changes. Single commit.
Committed — addccd8. Two files, 338 insertions.
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
Clean version bump from 2026.3.6 → 2026.3.7 across all packages, no debug statements.
Committed — bcc3319. 25 packages bumped to 2026.3.7.