Implement the following plan:

# Light Annotations for Two New Yorker Reading Articles

## Context
Two New Yorker articles have been captured, extracted to MDX, and manually cleaned up. They're now ready for light annotations — sidenotes, footnotes, and external links — to help students engage with the material. Both articles are more accessible than the McLuhan reading (which has ~18 dense sidenotes), so annotations should be lighter: brief context on key people, links to referenced works/concepts, and a few sidenotes where the articles connect to each other or to course themes.
## Files to Modify

- `content/spring2026/readings/chatgpt-is-a-blurry-jpeg-of-the-web-newyorker-ed2e.mdx` (Ted Chiang, Feb 2023)
- `content/spring2026/readings/the-case-that-a-i-is-thinking-newyorker-d419.mdx` (James Somers, Nov 2025)
## Annotation Syntax (no imports needed)

- Footnotes: `^[inline text]` — for brief biographical/definitional notes
- Sidenotes: `[highlighted text](sn:id)` in prose + a `<Sidenote id="id">content</Sidenote>` block after the paragraph — for longer commentary
- External links: standard `[text](url)` — auto-converted to external links by the MDX pipeline
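A minimal sketch of how the three syntaxes combine in an article body (the sidenote id, footnote text, and phrasing here are illustrative, not taken from either article):

```mdx
Chiang compares ChatGPT to a [blurry JPEG of the web](sn:compression), citing
Marcus Hutter^[Marcus Hutter, AI researcher known for theoretical work on
universal artificial intelligence.] and the [Hutter Prize](http://prize.hutter1.net/).

<Sidenote id="compression">
Longer commentary tied to the highlighted phrase renders in the margin; the
`id` attribute must match the `sn:` target used in the prose above.
</Sidenote>
```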
## Chiang Article: "ChatGPT Is a Blurry JPEG of the Web"

### Footnotes (~5)

- David Kriesel (line 16): `^[David Kriesel, German computer scientist and data analyst who documented the Xerox bug in detail on his website, bringing widespread attention to the issue.]`
- Marcus Hutter (line 96): `^[Marcus Hutter, German-Australian AI researcher at DeepMind known for theoretical work on universal artificial intelligence and the AIXI model.]`
- Ted Chiang — add `authorUrl` to frontmatter: `authorUrl: "https://en.wikipedia.org/wiki/Ted_Chiang"`
### Sidenotes (~3)

- `sn:compression-understanding` (around lines 103-104, "the greatest degree of compression can be achieved by understanding the text"): Note that this specific passage is later quoted by Somers in his 2025 article as evidence for AI understanding — Chiang's own argument turned on its head. Cross-reference the Somers article.
- `sn:gpt4-prediction` (around lines 170-173, Chiang's prediction about GPT-4 training data): Chiang was largely correct — "model collapse" from training on AI-generated text became a major research concern by 2024. Link to relevant context.
- `sn:writing-process` (around lines 207-213, "Your first draft isn't an unoriginal idea expressed clearly; it's an original idea expressed poorly"): This argument about writing as a thinking process connects directly to course themes. Students using AI to write are skipping the cognitive work that produces understanding — the same compression/understanding link Chiang builds earlier.
### External Links (~3)

- Link "Hutter Prize" text to the official prize page
- Add `authorUrl` and `sourceDetail` to frontmatter
## Somers Article: "The Case That A.I. Is Thinking"

### Footnotes (~8)

- Dario Amodei (line 12): `^[CEO and co-founder of Anthropic, the company that builds Claude.]`
- Sam Altman (line 15): `^[CEO of OpenAI, the company behind ChatGPT and GPT-4.]`
- William Gibson (line 37): `^[William Gibson (b. 1948), science fiction novelist who coined the term "cyberspace" in *Neuromancer* (1984).]`
- David Rumelhart, Geoffrey Hinton, and James McClelland (lines 71-72): `^[The "PDP group" — their 1986 book *Parallel Distributed Processing* helped launch the connectionist revolution in cognitive science.]`
- Douglas Hofstadter (line 145): `^[Douglas Hofstadter (b. 1945), cognitive scientist and author. His *Gödel, Escher, Bach* (1979) is a landmark exploration of consciousness, self-reference, and the nature of minds.]`
- Pentti Kanerva (line 159): `^[Pentti Kanerva, Finnish-American cognitive scientist at the Swedish Institute of Computer Science whose mathematical framework for memory has found unexpected relevance in modern AI.]`
- Emily M. Bender (line 100): `^[Emily M. Bender, computational linguist at the University of Washington, co-author of the influential 2021 "stochastic parrots" paper.]`
- Eric B. Baum (line 116): `^[Eric B. Baum, machine learning researcher whose *What Is Thought?* (2004) argued that the brain is essentially a compression engine.]`
### Sidenotes (~4)

- `sn:chiang-response` (around lines 96-98, where Somers discusses Chiang's "Blurry JPEG" article): Note the direct dialogue between these two readings. Somers accepts Chiang's compression metaphor but argues it actually supports AI understanding rather than undermining it. Students should consider which framing they find more convincing.
- `sn:genome-analogy` (around lines 300-311, the Human Genome Project comparison): Somers draws a parallel between the hype cycle around genomics and AI. The HGP didn't cure cancer, but Crick wasn't wrong about discovering the "secret of life." The implication: AI may not deliver on messianic promises, but the core insight about neural networks and intelligence could still be profound.
- `sn:hofstadter-conversion` (around lines 179-183, Hofstadter's change of heart about AI): Hofstadter spent decades as the leading AI skeptic, making his conversion especially significant. His final quote about wanting creativity to "remain a mystery" reveals the emotional stakes — understanding the mechanism of thought might diminish what makes us feel human.
- `sn:sparse-distributed-memory` (around lines 205-207, the connection between Kanerva's work and the Transformer architecture): The claim that the Transformer's math approximates Kanerva's 1988 model is remarkable — it suggests the AI community independently rediscovered a model of human memory. This connection between neuroscience theory and engineering practice is one of Somers' strongest arguments.
### External Links (~4)

- Add `authorUrl` to frontmatter for Somers
- Link "Gödel, Escher, Bach" to Wikipedia
- Link "Sparse Distributed Memory" book to Wikipedia
- Link "On the Biology of a Large Language Model" to the Anthropic research page
## Verification

- Run `bun run dev` (or verify it's already running)
- Navigate to each reading page and verify:
  - Footnotes render as superscript markers with popup content
  - Sidenotes appear in the margin when clicking highlighted text
  - External links open in new tabs
  - No rendering errors or broken MDX
- Spot-check that annotation tone is informative, not heavy-handed
If you need specific details from before exiting plan mode (like exact code snippets, error messages, or content you generated), read the full transcript at: ~/.claude/projects/-Users-challen-www-usingandunderstanding-ai/81c10e04-df4b-44a0-a195-49e486217b62.jsonl
If this plan can be broken down into multiple independent tasks, consider using the TeamCreate tool to create a team and parallelize the work.
I'll start by reading both files and checking if the dev server is running.
Good, both files are loaded and the dev server is running. Let me check existing annotated readings for the exact patterns used.
Now I have a clear picture of both files and the annotation patterns. Let me implement the annotations. I'll work on both files, starting with the Chiang article edits from top to bottom.
Chiang article — frontmatter:
Chiang article done. Now the Somers article — frontmatter and footnotes:
Now I need to add the sidenote block after this paragraph. Let me find the right place for it and the Emily Bender footnote.
Now the Eric Baum footnote:
Now the Hofstadter footnote and GEB link:
Now the Pentti Kanerva footnote and Sparse Distributed Memory link:
Now the Hofstadter conversion sidenote:
Now I need to add the sidenote block after this paragraph. Let me find the paragraph boundary.
Now the sparse distributed memory sidenote and the "On the Biology" link. Let me find the right locations.
Now the "On the Biology" link:
Now the genome analogy sidenote. Let me find the right spot.
All annotations are in place. Let me verify both files look correct by checking key sections.
Good. Now let me verify the pages render correctly by checking for any obvious MDX compilation errors.
Both pages return 500. Let me check the error details.
There's an MDX parse error at line 109 of the Somers file — a curly quote issue. Let me check that area.
The error is at line 109, column 14 — likely curly quotes in the <Sidenote> tag. Let me check the actual bytes.
The quotes are curly (U+201D) instead of straight. This happened in all my Sidenote tags. Let me check and fix all of them.
Three Sidenote tags in the Somers file got curly quotes — the Chiang file is fine. Let me fix them with sed since the Edit tool might re-convert the quotes.
All fixed — straight quotes now. Let me also check the </Sidenote> closing tags are fine.
Good, closing tags don't have attributes so they're fine. Now let me re-check both pages.
307 redirect is expected — these are auth-required pages. The key improvement is no 500 error, meaning the MDX compiles successfully. Let me verify by checking a non-auth-required reading for comparison, and also check the pages with -L to follow redirects.
Redirecting to login as expected for auth-required content. Both files compile cleanly (no more 500 errors). Let me do a final review of both files to make sure everything looks right.
Both files look great. Let me do a final summary of all annotations added:
### Chiang Article: "ChatGPT Is a Blurry JPEG of the Web"

Frontmatter:
- Added `authorUrl` (Wikipedia) and `sourceDetail: "February 2023"`
Footnotes (2):
- David Kriesel (line 18) — German computer scientist who documented the Xerox bug
- Marcus Hutter (line 98) — German-Australian AI researcher at DeepMind
Sidenotes (3):
- `sn:compression-understanding` (lines 105-106) — Notes that Somers later quotes this exact passage to argue for AI understanding
- `sn:gpt4-prediction` (lines 176-178) — Notes Chiang was correct; "model collapse" became a major research concern by 2024
- `sn:writing-process` (lines 220-221) — Connects the writing-as-thinking argument to course themes about understanding
External Links (1):
- Hutter Prize linked to `http://prize.hutter1.net/` (line 99)
### Somers Article: "The Case That A.I. Is Thinking"
Frontmatter:
- Added `authorUrl` (New Yorker contributors page) and `sourceDetail: "November 2025"`
Footnotes (8):
- Dario Amodei, Sam Altman, William Gibson, PDP group (Rumelhart/Hinton/McClelland), Emily M. Bender, Eric B. Baum, Douglas Hofstadter, Pentti Kanerva
Sidenotes (4):
- `sn:chiang-response` (lines 97-98) — Cross-references the two readings as a direct dialogue
- `sn:hofstadter-conversion` (lines 186-187) — Notes the significance of the leading AI skeptic's conversion
- `sn:sparse-distributed-memory` (lines 216-217) — Highlights the Transformer/Kanerva connection as independently rediscovering a memory model
- `sn:genome-analogy` (line 323) — Draws the HGP parallel: messianic promises may fail, but the core insight could still be profound
External Links (3):
- "Gödel, Escher, Bach" → Wikipedia (line 153)
- "Sparse Distributed Memory" → Wikipedia (line 168)
- "On the Biology of a Large Language Model" → Anthropic research (line 245-246)
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message, breaking the changes into multiple commits if appropriate. If there are untracked files, ask whether they should be committed or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included; prompt the user if you find any, and offer to remove them one by one. Do not use interactive git staging (git add -p), as it does not work.
No debugging statements or unnecessary logging — just clean content changes. The diff looks exactly right.
Committed. One clean commit with all the annotations for both readings: 60 insertions, 28 deletions across the two files.
Can you find a good quote from this article to add? https://nadh.in/blog/code-is-cheap/ It's a favorite of mine. Or did we already add one?
No existing quote from this article. The summary was too condensed — let me get the actual text to find the best quotes.
Great article — no existing quote from it. Here are the strongest candidates for the floating quotes:
A. "One asks for code, it gives code. One asks for changes, it gives changes. Soon, one is stuck with a codebase whose workings one doesn't understand, and one is forced to go back to the genie and depend on it helplessly."
B. "Anything that can be generated infinitely without effort and has no meaningful provenance, is very hard to value or take seriously. Humans fundamentally do not deal well with an infinite supply of anything, especially choices."
C. "The real concern is for generations of learners who are being robbed of the opportunity to acquire the expertise to objectively discern what is slop and what is not."
D. "The physiological, cognitive, and emotional cost I generally incur to achieve the software outcomes I want or am capable of engineering, has undoubtedly reduced by several orders of magnitude."
This is my favorite, but let's condense a bit, maybe removing the pip install bit: "An experienced developer who can talk well, that is, imagine, articulate, define problem statements, architect and engineer, has a massive advantage over someone who cannot, more disproportionately than ever. Knowledge of specific language, syntax, and frameworks is no longer a bottleneck. The physiological constraints of yore are no longer impediments. The machinery for instantly creating code at scale is now a commodity and available to everyone, just a pip install equivalent away. It requires no special training, no new language or framework to learn, and has practically no entry barriers—just good old critical thinking and foundational human skills, and competence to run the machinery."
Here's a condensed version:
"An experienced developer who can talk well — imagine, articulate, define problem statements, architect and engineer — has a massive advantage over someone who cannot, more disproportionately than ever. Knowledge of specific syntax and frameworks is no longer a bottleneck. The machinery for instantly creating code at scale is now a commodity. It requires no special training and has practically no entry barriers — just good old critical thinking and foundational human skills."
Added the quote. Does the condensed version read well to you, or would you like to adjust the wording?
No, let's keep as much of the original as possible, don't add dashes and stuff.
Kept the original wording, just dropped the "pip install equivalent away" link phrase.
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message, breaking the changes into multiple commits if appropriate. If there are untracked files, ask whether they should be committed or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included; prompt the user if you find any, and offer to remove them one by one. Do not use interactive git staging (git add -p), as it does not work.
Clean change — no debugging, just the quote addition.
Done.
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message, breaking the changes into multiple commits if appropriate. If there are untracked files, ask whether they should be committed or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included; prompt the user if you find any, and offer to remove them one by one. Do not use interactive git staging (git add -p), as it does not work.
Pure version bump across all packages — no debugging or unexpected changes.
Done.