Plan: Study Guide Lab — Feb 26, 2026
Implement the following plan:
Plan: Study Guide Lab — Feb 26, 2026
Context
Midterms are approaching for students' other courses. This activity gives students practical study time while teaching them effective AI-assisted study techniques. Students bring materials from another course, pair with someone studying a different subject (so the partner serves as a naive skeptic who can't be fooled by confident-sounding AI), and work through three rounds of AI-assisted studying — each introducing a different technique grounded in learning science.
Key design decisions from brainstorming:
- Students study for their other courses, not this one
- Partners should be in different courses (naive skeptic model)
- Students bring their own materials (notes, slides, textbooks)
- Use external AI tools (ChatGPT, Claude, etc.) — not the course chat
- Each round: pair kickoff (~3-5 min) → solo study (~10-12 min) → share-out (~10 min)
- Each round suggests a technique but students can choose a different approach
- Each round includes a learning science nugget explaining why the technique works
- Brief text reflection after each share-out
- 75 minutes total (3 rounds × 25 min)
File to Create
content/spring2026/meetings/2026-02-26-study-guide-lab.mdx
Use the same MDX format as 2026-02-24-how-llms-work.mdx — YAML frontmatter, InstructorOnly facilitation notes, ActivityStages with 6 stages.
Frontmatter
date: 2026-02-26
semester: spring2026
title: "Study Guide Lab"
summary: "Use AI to build study materials for your other courses while learning evidence-based study techniques"
Facilitation Notes (InstructorOnly)
Include:
- Philosophy: Students get real study value for their midterms while learning when and how AI helps (and doesn't help) with studying. The three rounds introduce techniques backed by learning science research. Partners are ideally in different courses — the "naive skeptic" can catch moments where the studier just accepts AI output without understanding it.
- Logistics: Ask students to bring course materials (notes, slides, textbooks) for another class. Announce this ahead of time if possible. Pair students in different courses. No formal roles — pairing is for kickoff discussion and share-out, studying is individual.
- Pacing table:
| # | Stage | Duration | Cumulative |
|---|---|---|---|
| 1 | Round 1: Active Recall | 15m | 15m |
| 2 | Round 1: Share Out | 10m | 25m |
| 3 | Round 2: Practice Testing | 15m | 40m |
| 4 | Round 2: Share Out | 10m | 50m |
| 5 | Round 3: Build a Study Aid | 15m | 65m |
| 6 | Round 3: Share Out | 10m | 75m |
Intro Text (before ActivityStages)
Brief paragraph: Midterms are coming — today you'll spend class time studying for your other courses, using AI as a study partner. Each round introduces a different study technique backed by learning science research. You'll work with a partner who's studying for a different course, which is intentional — they'll help you stay honest about whether you actually understand the material or whether AI is just making you feel like you do.
Instruction to have materials ready (notes, slides, textbook) for another course they need to study for.
Stage Structure (6 stages, no GroupActivity component)
Pairing is informal (sit with a partner), no enforced roles needed.
Stage 1: "Round 1: Active Recall" (15m)
Learning science nugget: Research consistently shows that trying to recall information from memory is one of the most effective ways to strengthen learning — far more effective than re-reading notes. This is called the testing effect: the act of struggling to remember something makes the memory stronger when you do find the answer.
Instructions:
- Pair kickoff (~3 min): Tell your partner what course and topic you're studying for. Discuss: what's your plan for using active recall with AI?
- Solo study (~12 min): Work individually with AI using active recall.
Suggested approach: Write down everything you know about a topic from memory — no notes, no AI. Then share your brain dump with AI and ask it to identify what you're missing or got wrong. Pay attention to whether the gaps AI finds are things you genuinely didn't know vs. things you just forgot to write down.
But you could also: ask AI to quiz you verbally with open-ended questions and answer from memory; use AI as a flashcard generator and test yourself; or try any other approach that starts with recall from memory before using AI.
Stage 2: "Round 1: Share Out" (10m)
FacilitationNotes: Ask 3-4 students what happened. Good prompts: "What did AI find that surprised you?" "Did AI identify any gaps that were actually wrong — things it said you missed but you actually know?" "Did anyone catch AI confabulating about your course material?"
TextSubmission (id: study-round1-reflection): What study technique did you use? What did you learn about how well you know this material?
Stage 3: "Round 2: Practice Testing" (15m)
Learning science nugget: Self-testing is the gold standard of study techniques. Students who test themselves consistently outperform those who spend the same time re-reading material — even when the re-readers feel more prepared. That false sense of readiness is called the illusion of competence: material feels familiar when you read it, but familiarity isn't the same as being able to produce an answer on an exam.
Instructions:
- Pair kickoff (~3 min): How did round 1 go? What topic are you focusing on now? How are you planning to use AI for practice testing?
- Solo study (~12 min): Work individually with AI using practice testing.
Suggested approach: Share your course materials with AI. Ask it to generate practice exam questions at the level your professor would ask. Answer them without looking at your notes or AI. Then check yourself. Focus on the questions you got wrong — those are your study priorities.
But you could also: ask AI to generate questions in different formats (multiple choice, short answer, explain-a-concept, true/false with justification); have AI role-play as a strict professor giving an oral exam; ask AI to generate questions that target common misconceptions; or try any other approach centered on testing yourself.
Stage 4: "Round 2: Share Out" (10m)
FacilitationNotes: Prompts: "Were AI-generated questions good? Too easy, too hard, or about right?" "How did your confidence compare to your actual performance?" "Did anyone find that AI was bad at writing certain kinds of questions?" This is a good moment to reinforce the illusion of competence — many students will report feeling like they knew the material but struggling on specific questions.
TextSubmission (id: study-round2-reflection): What study technique did you use? What did you learn about your own confidence vs. actual understanding?
Stage 5: "Round 3: Build a Study Aid" (15m)
Learning science nugget: Creating organized study materials — concept maps, cheat sheets, summary notes — forces you to make decisions about what matters most and how ideas connect. This deep processing strengthens understanding. But there's a trap: if AI creates the study aid for you, you skip the cognitive work that makes it valuable. A beautiful, clear AI-generated summary can make you feel like you understand the material when you've really just read something that's easy to follow.
Instructions:
- Pair kickoff (~3 min): What are your weakest areas from rounds 1 and 2? What kind of study aid would be most useful? How do you plan to keep yourself doing the thinking rather than outsourcing it to AI?
- Solo study (~12 min): Work individually with AI to build a study aid for your weakest area.
Suggested approach: Pick your weakest area (revealed by the first two rounds). Start by outlining the key concepts yourself, then use AI to help you fill in details, check accuracy, and improve the organization. The goal: a study aid where you did the thinking and AI helped with the polish.
But you could also: ask AI to explain a concept three different ways and pick the one that clicks; have AI generate analogies for difficult concepts; create a "misconception sheet" (common wrong answers with corrections); build practice problems with worked solutions; or try any other approach that produces something you'll use to study later.
Stage 6: "Round 3: Share Out" (10m)
FacilitationNotes: Prompts: "Show your partner your study aid. Can they quiz you on it — and can you answer without looking?" "How much of the study aid came from your understanding vs. AI's generation?" "Which of the three rounds helped you learn the most? Why?" Use this final share-out to connect all three rounds back to the key theme: AI is a powerful study tool, but only if you use it in ways that make your brain do the work.
TextSubmission (id: study-round3-reflection): What study technique did you use? Across all three rounds, which approach helped you learn the most and why?
Files Summary
| Action | File |
|---|---|
| Create | content/spring2026/meetings/2026-02-26-study-guide-lab.mdx |
Verification
- Start dev server (or confirm it's running):
curl -sk -o /dev/null -w "%{http_code}" https://[local-hostname]/ - Navigate to
https://[local-hostname]/spring2026/meetings/2026-02-26-study-guide-laband verify the page renders - Check that all 6 stages appear in the activity stages component
- Verify InstructorOnly content is visible when logged in as instructor
- Verify TextSubmission components render in share-out stages
If you need specific details from before exiting plan mode (like exact code snippets, error messages, or content you generated), read the full transcript at: ~/.claude/projects/-Users-challen-www-usingandunderstanding-ai/a195b5ab-fe9f-459c-8ace-4cae7ac7eaa3.jsonl
Now I'll create the MDX file following the exact format from the reference.
Let's also create an e2e test for this activity, like the others, opt-in only using the same naming convention.
Let me look at existing E2E tests for meetings/activities to understand the patterns.
The MDX file doesn't exist yet — I need to create both files. Let me first explore the E2E test patterns and check for the reference MDX.
Let me read an existing meeting E2E test and the meeting-2026-02-24 test for the closest reference.
Now let me look at one more meeting test for comparison, and check the meeting-2026-02-03 test too since it may be simpler.
Now I have all the patterns. Let me create both files and update the config.
Now create the E2E test. This meeting is simpler than the others — no GroupActivity, just stages with TextSubmission in the share-out stages.
Now add the project entry to the Playwright config, following the request-only pattern (no dependencies: deps).
Now let me verify the page renders.
The page returns 200. Let me also do a quick content check to verify the stages are rendering.
All 6 stages render. Let me also verify the TextSubmission components appear in the share-out stages.
All three TextSubmission components are rendering. Let me also verify the InstructorOnly and FacilitationNotes content is present in the HTML (it will be there but hidden for non-instructors).
Everything is working. Here's a summary of what was created:
Created files:
-
content/spring2026/meetings/2026-02-26-study-guide-lab.mdx— The meeting content with:- YAML frontmatter (date, semester, title, summary)
- InstructorOnly facilitation notes with philosophy, logistics, and pacing table
- Intro text about studying for other courses with a partner in a different course
- 6 ActivityStages (3 rounds × study + share-out):
- Stage 0: Active Recall (learning science nugget about the testing effect)
- Stage 1: Share Out + TextSubmission (
study-round1-reflection) - Stage 2: Practice Testing (learning science nugget about illusion of competence)
- Stage 3: Share Out + TextSubmission (
study-round2-reflection) - Stage 4: Build a Study Aid (learning science nugget about deep processing vs outsourcing)
- Stage 5: Share Out + TextSubmission (
study-round3-reflection)
- Each round has FacilitationNotes with discussion prompts
- Each round offers a suggested approach plus alternatives
-
testing/e2e/tests/meeting-2026-02-26.spec.ts— E2E test covering:- Instructor starts/advances through all 6 stages
- Two students navigate and check in to each stage
- TextSubmission visibility verified in all 3 share-out stages
- Students submit reflections; instructor sees submission dashboard
- Session cleanup at the end
-
testing/e2e/playwright.config.ts— Addedmeeting-2026-02-26as a request-only project (no dependencies, run with--project=meeting-2026-02-26)
Verification: Page returns 200, all 6 stages render, all 3 TextSubmission components present.
When I run the tests they fail: 130 | // Verify stage content is visible 131 | for (const page of [s1, s2]) { > 132 | await expect(page.getByTestId("activity-stage-2-content")).toBeVisible({ timeout: 5000 }); | ^ 133 | } 134 | 135 | // === STAGE 3: Round 2 Share Out (has TextSubmission) === at ~/www/usingandunderstanding.ai/testing/e2e/tests/meeting-2026-02-26.spec.ts:132:66
attachment #1: screenshot (image/png) ──────────────────────────────────────────────────────────
testing/e2e/test-results/meeting-2026-02-26-Meeting-64e8b-issions-in-share-out-stages-meeting-2026-02-26/test-failed-2.png
────────────────────────────────────────────────────────────────────────────────────────────────
attachment #2: screenshot (image/png) ──────────────────────────────────────────────────────────
testing/e2e/test-results/meeting-2026-02-26-Meeting-64e8b-issions-in-share-out-stages-meeting-2026-02-26/test-failed-3.png
────────────────────────────────────────────────────────────────────────────────────────────────
attachment #3: screenshot (image/png) ──────────────────────────────────────────────────────────
testing/e2e/test-results/meeting-2026-02-26-Meeting-64e8b-issions-in-share-out-stages-meeting-2026-02-26/test-failed-1.png
────────────────────────────────────────────────────────────────────────────────────────────────
1 failed [meeting-2026-02-26] › testing/e2e/tests/meeting-2026-02-26.spec.ts:25:3 › Meeting 2026-02-26: Study Guide Lab › full meeting flow: three study rounds with text submissions in share-out stages 1 error was not a part of any test, see above for details
Let me look at the screenshots to understand what's happening.
The content is clearly visible on screen but the activity-stage-2-content testid isn't being found. Let me check how the ActivityStage component generates its testids.
I see the issue. Screenshot 3 shows a student where stage 2 has a "Start" button that was never clicked — the check-in didn't complete for that student. The test splits the wait-for-stage and click-start-button into separate loops, creating a race condition. Let me fix this by combining them into single loops (matching the pattern from meeting-2026-02-05) and adding explicit waits for start buttons.
The problem is that the start-button click and content check are in separate loops. When the test clicks s1's start button then s2's start button without waiting for s1's content to render first, and the page might not be ready. The working pattern from meeting-2026-02-05 combines wait + click in a single loop. Let me fix by waiting for the start button to be visible before clicking, and waiting for content after each individual click.
The key fix is the checkinToStage helper that processes each student sequentially — waiting for the stage to be visible, clicking the start button, and waiting for content to render — all within a single iteration before moving to the next student. This avoids the race condition where clicking both start buttons before either has finished rendering.
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
Three files, all related — one commit makes sense. No debugging statements to worry about (MDX content and test code). All three untracked/modified files should be included.
Committed as 68ffa88 — 3 files, 351 insertions.
OK! Now, describe today's activity to me.
Study Guide Lab — students spend class time studying for their other courses using AI, while learning three evidence-based study techniques.
The setup: Students bring materials (notes, slides, textbook) for another class with midterms approaching. They pair with someone studying a different subject — the partner acts as a "naive skeptic" who can't be fooled by confident-sounding AI output because they don't know the material either.
Three rounds, each 25 minutes (pair kickoff → solo study → share-out with reflection):
-
Active Recall — Brain dump everything you know from memory first, then ask AI to find gaps. The learning science: the testing effect means struggling to remember strengthens the memory more than re-reading.
-
Practice Testing — Have AI generate exam-level questions, answer without notes, then check yourself. The learning science: the illusion of competence means material feels familiar when you read it, but familiarity isn't the same as being able to produce an answer.
-
Build a Study Aid — Create a cheat sheet or concept map for your weakest area (revealed by rounds 1-2), using AI for polish not generation. The learning science: deep processing from organizing material is what makes study aids valuable — if AI does the organizing, you skip the cognitive work.
Key design choices:
- Each round suggests a technique but students can choose their own approach
- External AI tools (ChatGPT, Claude, etc.), not the course chat
- Partners are informal (no enforced roles or group codes)
- Text reflection after each share-out captures what they learned about their own studying
- 75 minutes total
Wait, I think we agreed that the partner wouldn't act as naive skeptic. It's more a person to discuss their ideas with before working with AI to implement them.
You're right, let me update the MDX to reframe the partner role — someone to discuss plans and share findings with, not a skeptic checking your work.
Also need to remove the "different courses" requirement from the logistics note.
Updated three spots — removed the "naive skeptic" framing, the "different courses" requirement, and reframed the partner role as a discussion buddy for kickoff and share-out.
Wait, sorry. I want enforced pairing, just without the roles.
Now I need to add GroupActivity components to the study round stages (0, 2, 4) so pairing is enforced. Let me look at how other meetings use GroupActivity without roles.
Good — I'll use the same pattern: GroupActivity with size={2} and autoComplete={false}, using a single group key so partners rotate each round. I need to wrap the content in each study stage (0, 2, 4) with GroupActivity.
Now I'll wrap each study round stage (0, 2, 4) with GroupActivity. Using a single group key study-guide-2026-02-26 so partners rotate each round.
Now I need to update the E2E test to handle GroupActivity pairing instead of individual check-in for the study stages (0, 2, 4).
Let me verify the page still renders correctly with the GroupActivity components.
Page renders with GroupActivity components present. Updated both files:
- MDX: Added
<GroupActivity>wrappers (withsize={2},autoComplete={false}, shared group keystudy-guide-2026-02-26) to all three study round stages so pairing is enforced - E2E test: Updated study stages to use
formPairInStagefor GroupActivity pairing instead of individual check-in; share-out stages still use individual check-in withcheckinToStage
Please run the tests since they are failing.
Failing at stage 2 pairing. Let me check the screenshots.
Screenshot 3 shows the problem clearly: "You just worked with E2E Test User student1! Please find a different partner." — With only 2 students using the same group key, round 2 rejects the pairing because they were already partners in round 1. The formPairInStage helper handles the prior-partner warning but not this outright rejection.
Looking at how the meeting-2026-02-17 test handles this — it uses 4 students so pairs can rotate. But with only 2 students, they'll always get the prior-partner block. I need to handle the prior-partner warning flow correctly. Let me check the screenshot again — it says "find a different partner" which is the warning state, not a hard block.
The issue is clear: with only 2 students sharing the same group key, round 2 triggers the rank-1 prior-partner warning ("You just worked with..."). The formPairInStage helper checks for the warning too early — before the server has responded — so it misses it and then fails waiting for content that never appears.
The fix: wait for either the content or the warning to appear before checking which one we got.
Tests pass in 5.8s. The fix was waiting for either the group content or the prior-partner warning to appear before checking which one we got — that way the two-step confirm flow for repeated partners is handled correctly.
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
Two modified files, all related — no debug statements. One commit.
The git status showed relative paths — need to use them from the repo root.
Committed as 8998dd6 — 2 files, 125 insertions and 25 deletions.
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
All 23 package.json files changed — looks like a version bump. Let me check what changed.
Version bump from 2026.2.32 → 2026.2.33 across all packages. No debug statements — clean change.
Committed as f896cee — version bump to 2026.2.33 across all 23 packages.
OK! Next, I'd like to create a Feedback component that we can use to gather per-activity feedback going forward. We should have done this earlier but I forgot: oops. This should probably also count toward the activity time in a configurable way. This is always an individual component.
Good exploration. Let me clarify a few things before designing the plan.
Good — free text, in its own ActivityStage with configurable estimatedTime. Now let me design the implementation.
Good plan from the agent. Let me read the key reference files to validate a few details, then write the final plan.
Now I have everything I need. Let me write the plan.