Add Suggested Timing to Instructor Session Status
Implement the following plan:
Add Suggested Timing to Instructor Session Status
Overview
Show estimatedTime from MDX stage props in the live instructor session display, alongside the existing elapsed timer. Add an overall "ahead/behind schedule" indicator.
Currently, estimatedTimeSeconds is collected from MDX props into stageMetadata and used only by the pre-session ScheduleTable. During a live session, the StageRow component shows elapsed time and soft/hard caps but has no concept of suggested duration.
Modified File
apps/web/components/mdx/activity-stage-instructor.tsx
Changes
1. Pass estimatedTimeSeconds into StageRow
Add estimatedTimeSeconds to StageRowProps and pass it from the stageMetadata map when rendering each StageRow (around line 482-497):
```tsx
<StageRow
  ...existing props...
  estimatedTimeSeconds={stageMetadata.get(stage.stageIndex)?.estimatedTimeSeconds ?? null}
/>
```
2. Show suggested time in each StageRow
In the timer display area (lines 660-685), after the elapsed time, show the estimated duration when available. Format: 2:30 / 5:00 (elapsed / suggested). This replaces the current display of just 2:30.
When the stage has an estimated time:
- Show `elapsed / estimated` in the timer (e.g. `2:30 / 5:00`)
- Timer color logic stays the same (soft/hard cap coloring takes priority)
When no estimated time exists, keep current behavior (just elapsed).
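The display rule above can be sketched as a pure formatting helper. `formatSeconds` and `formatTimerText` are illustrative names, not the component's actual code:

```typescript
// Hypothetical helpers sketching the "elapsed / estimated" timer text.
function formatSeconds(totalSeconds: number): string {
  const m = Math.floor(totalSeconds / 60);
  const s = Math.floor(totalSeconds % 60);
  return `${m}:${s.toString().padStart(2, "0")}`;
}

function formatTimerText(
  elapsedSeconds: number,
  estimatedTimeSeconds: number | null
): string {
  const elapsed = formatSeconds(elapsedSeconds);
  // No estimate: keep the current behavior and show elapsed only.
  if (estimatedTimeSeconds == null) return elapsed;
  return `${elapsed} / ${formatSeconds(estimatedTimeSeconds)}`;
}
```

Soft/hard cap coloring is orthogonal to this string and stays untouched.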
3. Add overall session timing indicator
In the session status area (around line 440-446, after the "Stage X of Y" progress line), add an ahead/behind schedule indicator that computes:
- Expected elapsed: sum of `estimatedTimeSeconds` for all stages up to and including the current one
- Actual elapsed: total wall-clock time since session start (minus paused time), using `data.startedAt` and `totalPausedMs`
- Delta: actual - expected (positive = behind, negative = ahead)
Display format examples:
- `2m ahead` (green text)
- `3m behind` (amber text)
- `on schedule` (muted text, when within ±30s)
Only show when at least one stage has estimatedTimeSeconds defined. Compute using a simple formula:
```
expectedSeconds = sum of estimatedTimeSeconds for stages 0..currentStageIndex
actualSeconds   = (now - sessionStartedAt - totalPausedMs) / 1000
delta           = actualSeconds - expectedSeconds
```
This uses a 1-second update interval, piggybacking on the existing StageRow timer mechanism. The simplest approach: compute it directly in the render of the session status section, using a small useState/useEffect pair for the ticking clock (same pattern as StageRow).
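The formula above can be sketched as a pure function, separate from the ticking clock. The ±30s threshold and the display strings follow the plan; the parameter names (`stageEstimates`, `sessionStartedAtMs`) are assumptions for illustration:

```typescript
// Returns "Xm ahead" / "Xm behind" / "on schedule", or null when no
// stage defines an estimate (the indicator is hidden in that case).
function sessionTimingLabel(
  stageEstimates: (number | null)[], // estimatedTimeSeconds per stage
  currentStageIndex: number,
  nowMs: number,
  sessionStartedAtMs: number,
  totalPausedMs: number
): string | null {
  if (!stageEstimates.some((e) => e != null)) return null;
  const expectedSeconds = stageEstimates
    .slice(0, currentStageIndex + 1)
    .reduce((sum: number, e) => sum + (e ?? 0), 0);
  const actualSeconds = (nowMs - sessionStartedAtMs - totalPausedMs) / 1000;
  const delta = actualSeconds - expectedSeconds; // positive = behind
  if (Math.abs(delta) <= 30) return "on schedule";
  const minutes = Math.round(Math.abs(delta) / 60);
  return delta > 0 ? `${minutes}m behind` : `${minutes}m ahead`;
}
```

In the component this would be recomputed each tick of the useState/useEffect clock; the function itself stays render-agnostic and easy to test.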
4. Handle pause/stop states
- When paused: freeze the "ahead/behind" calculation (use `pausedAt` instead of `now`)
- When stopped: freeze using `stoppedAt`
- Same logic already used by `StageRow` elapsed timers
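The freeze behavior reduces to picking which timestamp stands in for "now". A minimal sketch, assuming `pausedAt`/`stoppedAt` millisecond timestamps as described (the field names are assumptions):

```typescript
// Choose the clock reference: stopped wins over paused, paused over live.
function effectiveNowMs(
  nowMs: number,
  pausedAtMs: number | null,
  stoppedAtMs: number | null
): number {
  if (stoppedAtMs != null) return stoppedAtMs; // frozen at stop time
  if (pausedAtMs != null) return pausedAtMs;   // frozen while paused
  return nowMs;                                // live ticking clock
}
```

Feeding this value into the ahead/behind calculation freezes the indicator without any extra branching in the display code.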
Verification
- `bun run tsc` — no type errors
- `bun run lint` — no lint errors
- Manual verification: visit a meeting page with `estimatedTime` props as instructor, start a session, observe:
  - Each stage row shows `elapsed / estimated` format
  - Overall session shows ahead/behind indicator
  - Indicator updates live and respects pause/stop
If you need specific details from before exiting plan mode (like exact code snippets, error messages, or content you generated), read the full transcript at: ~/.claude/projects/-Users-challen-www-usingandunderstanding-ai/8a0570cf-58ec-4c3e-b0f7-dfbe037c6590.jsonl
I'll start by reading the file that needs to be modified to understand the current implementation.
Now I have a clear picture of the file. Let me create the task list and implement the changes.
Now let me implement all three code changes. I'll start with adding estimatedTimeSeconds to the StageRowProps and passing it.
Now pass the prop from the render site:
Now update the timer display in StageRow to show elapsed / estimated format:
Now add the overall session ahead/behind schedule indicator. This needs a ticking clock, so I'll add a small component. Let me add it after the "Stage X of Y" progress line:
Now I need to add the SessionTimingIndicator component. I'll add it before the ScheduleTable component:
Now let me run type checking and linting in parallel.
Both pass cleanly.
Can we add or augment an existing e2e test to confirm this?
Let me explore the existing E2E tests related to activity stages to understand what's already tested.
Let me read the timing test file and also check what meeting fixtures define estimatedTime.
Now I need to check if the test component defines estimatedTime on stages, since the tests need stages with that prop.
Let me read the test page to see the exact code.
Now I need to check how estimatedTime is parsed to know the correct prop format.
Now I need to check how ActivityStages extracts and parses estimatedTime from children.
Good. The prop is estimatedTime with formats like "10m", "30m". Now I have everything I need. Let me:
- Add `estimatedTime` props to the test page stages
- Add E2E tests for the new features
First, add estimatedTime to the test stages:
Now let me look at the existing "timer displays elapsed time" test more closely — the regex `timerText!.match(/^(\d+):(\d+)$/)` will break now that the timer text can contain `elapsed / estimated`. I need to update that test too.
Now let me check all the places in the test file that parse timer text with the `^(\d+):(\d+)$` pattern, since those will need updating to account for the new `elapsed / estimated` format.
Three places parse the instructor timer text with `^(\d+):(\d+)$`. Now that the timer shows `0:30 / 3:00` for stages with estimated time, the `$` anchor won't match. I need to update the regex to handle both formats. The elapsed time is always at the start, so I can use `^(\d+):(\d+)` without the `$` anchor.
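The anchor change can be illustrated in isolation; `parseElapsed` is a hypothetical helper, not the test file's actual code:

```typescript
// Without the trailing $ anchor, the same regex parses the elapsed time
// from both "2:30" and "2:30 / 5:00" timer texts.
function parseElapsed(
  timerText: string
): { minutes: number; seconds: number } | null {
  const match = timerText.match(/^(\d+):(\d+)/); // no $ anchor
  if (!match) return null;
  return { minutes: Number(match[1]), seconds: Number(match[2]) };
}
```

The old anchored pattern `/^(\d+):(\d+)$/` returns no match at all for `"0:30 / 3:00"`, which is why those three call sites needed the update.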
Now let me add the new test cases. I'll add a new describe block for the estimated time and session timing features at the end of the file, before the closing });.
Now let me also check the playwright config to make sure the activity-stage-timing project exists.
Good, the project exists. Let me verify everything compiles correctly.
Let me also check that the dev server is running so we can run the E2E tests.
Dev server is running. Let me run the new E2E tests.
All 6 new tests pass. Let me also verify the existing tests in this project still pass with the regex change and the estimatedTime additions.
All 33 tests pass — both the 27 existing tests and the 6 new ones.
Here's a summary of the changes:
Test page (apps/web/app/testing/components/activity-stages/page.tsx):
- Added `estimatedTime` props to all 4 stages: `"3m"`, `"2m"`, `"10m"`, `"5m"`
E2E tests (testing/e2e/tests/activity-stage-timing.spec.ts):
- Fixed 3 existing timer text regexes from `^(\d+):(\d+)$` to `^(\d+):(\d+)` — the `$` anchor no longer works since the timer now shows `elapsed / estimated` format
- Added 6 new tests in an "Estimated Time and Session Timing" describe block:
  - Timer shows elapsed / estimated format — verifies the `" / "` separator and `"3:00"` estimated time appear in the timer text
  - Shows "on schedule" initially — right after starting, with near-zero elapsed and a 180s budget, confirms the ahead indicator
  - Shows "behind" when over estimated time — advances 240s without advancing past stage 0 (180s estimate)
  - Shows "on schedule" near estimated time — advances exactly 180s, matching stage 0's estimate
  - Freezes when paused — pauses at on-schedule, advances 120s while paused, confirms still on-schedule after
  - Accumulates across stages — advances a stage early (60s of 180s), confirms a cumulative ahead indicator on stage 2
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
No untracked files, no debugging statements or logging in the diff. All changes are related — this is a single logical feature. I'll commit as one.
Tell me about tomorrow's activity.
(no content)
Tomorrow's meeting is Creative Media Lab — a hands-on session where students create images, video, and music with AI tools.
Structure (7 stages, ~69 minutes total):
| Stage | Est. Time | What happens |
|---|---|---|
| Create Images | 15m | Partners agree on a concept, each independently generates an AI image and submits their best |
| Image Gallery | 5m | Instructor projects submissions, discusses differences |
| Create Video | 12m | New partners, same pattern with AI video tools (Runway, Pika, Luma) |
| Video Gallery | 5m | Project and discuss video results |
| Create Music | 12m | New partners again, use Suno/Udio to generate music |
| Music Gallery | 5m | Play submissions, guess which pairs had the same concept |
| Discussion | 15m | Closing discussion on creative control, surprises, human creators, energy cost |
Core idea: Partners agree on a concept then work independently. Comparing results shows that AI output depends heavily on the person prompting, not just the prompt. Students rotate to new partners each round (3 GroupActivity blocks sharing the same creative-media group key).
Logistics notes: Students need devices with access to AI tools. Different students using different tools is intentional — it reinforces the theme. Video tools are slower so students may only get 1-2 attempts. Free tiers may be limited.
This meeting already has estimatedTime on all stages and a startTime="2:05pm", so the new timing indicator you just built will be active for it.
We have 80 minutes, so we can do 20 20 20 20 with each 15+5.
So 4 rounds of 20 minutes each (15m create + 5m gallery), dropping the separate discussion stage. That totals 80 minutes. Currently there are 3 create rounds (images, video, music) — a 4th round is needed. What should the 4th medium be, or would you rather split one of the existing rounds differently?
Actually, re-reading your message: 20/20/20/20 with each being 15+5 could also mean keeping 3 create+gallery rounds at 20m each (60m) and giving 20m to the discussion. Let me just update the timings to be uniform at 15+5 across the three rounds and bump discussion to 20m.
Updated the timings to 4 x 20-minute blocks:
| Block | Create | Gallery | Total |
|---|---|---|---|
| Images | 15m | 5m | 20m |
| Video | 15m (was 12m) | 5m | 20m |
| Music | 15m (was 12m) | 5m | 20m |
| Discussion | 20m (was 15m) | — | 20m |
Total: 80 minutes.
Nice! Do the suggested tools also discuss pricing? Students may want to use free tiers whenever possible.
(no content)
The facilitation notes mention that "free tiers are limited" for video tools, but the student-facing content doesn't mention pricing at all. Let me check what the current free tier situation looks like for these tools.
(no content)
The research turned up some important findings. Several tools the meeting suggests are effectively unusable on free tiers:
- Runway: one-time 125 credits (~10 seconds of video total), then done
- Luma Dream Machine: no free video generation at all anymore
- Ideogram: only 10 images/week, and all creations are forced public
- ChatGPT/DALL-E: only 2-3 images/day
The viable free options are Google Gemini (images), Pika (video), and Suno (music). Let me update both the facilitation notes and student-facing content accordingly.
Now update the student-facing instructions in each round with concrete free tool guidance.
Also update the facilitation notes for the video stage since the tool landscape has changed:
Done. Here's what changed:
Facilitation notes — replaced the generic tool list with specific free tier details: Gemini is best for images, Pika is the only viable free video tool, Suno is best for music. Added a note to steer students toward free-friendly options.
Student-facing instructions — each round now names the best free tools with brief context:
- Images: Google Gemini (most generous), ChatGPT (higher quality but 2-3/day), Copilot
- Video: Pika (best free option, 80 credits/month). Dropped the Runway/Pika/Luma list since two of the three are essentially paywalled.
- Music: Suno (~10 songs/day), Udio as backup (3 songs/day, possible download issues)
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
The changes fall into three logical groups:
- Creative media meeting updates (timing adjustments + free tier tool guidance)
- Auto-formatting (prettier ran on some files)
- Version bumps (package.json files)
No debugging statements or unnecessary logging. Let me commit these separately:
OK. Now, I have a colleague from the philosophy department coming in next Tuesday while I'm traveling to lead a discussion on the ethics and philosophy of AI. He'd like to assign the first chapter of Understanding Media. That's an important text, but a difficult one. So I'd like to do a few things. First, I'd like to provide it to logged-in users on the course website in an appropriate area, maybe under a /readings area? I have an ePUB that I purchased that I can provide for you to extract the text from into Markdown. It's in this directory as BOOK.epub. Please add a note about providing it through academic fair use; I think there's language in ../societyand.technology. Next, I'd like you to annotate the text using our side notes to help students work their way through it. Add notes that aid in understanding and also connect to the course themes, highlighting connections with modern-day generative AI.
(no content)
Let me explore the codebase to understand the current readings setup, side notes component, and fair use language.
I thought sidenotes would be an appropriate way to do this, but feel free to suggest others.
(no content)
Let me explore what annotation components exist and the other resources you mentioned.
Good — there are already three relevant components registered in MDX: <Sidenote>, <Footnote>, and <Details>. Let me look at the ePUB and the existing reading files to understand the content patterns before designing the approach.
(no content)
Let me read the ePUB and the annotation components to design the best approach.
Now let me check the readings route pattern and the LoggedInOnly component.
Now let me look at how the syllabus page works, since readings would follow a similar single-page pattern.
Now let me check how LoggedInOnly works — this is key for gating the reading content.
Good. Now let me also check how navigation works to see whether readings need a nav link.
I now have a thorough understanding of the codebase. Before I write the plan, I have a few questions.
Good. Now let me write the plan.