Plan: Dual Session Timing Indicators (Pace + Now)
Implement the following plan:
Context
The SessionTimingIndicator in the activity stage instructor view currently shows a single ahead/behind indicator with a flawed calculation — it can't accurately compute per-stage effective times because totalPausedMs is tracked at the session level, not per-stage. The instructor wants TWO indicators:
- Pace (projected): "If this stage finishes on time, how far ahead/behind am I?" Stable within a stage; only changes when advancing or overrunning the current stage's estimate.
- Now (live): "Right now, how does my total elapsed compare to my budget through this stage?" Ticks every second.
The two indicators converge when the current stage hits its estimated time or is overrunning, and diverge when the current stage is under its estimate (now > pace because budget remains for the current stage).
Changes
1. Schema: add totalPausedMsAtStart column
File: packages/db/src/schema/activity-stages.ts
Add to activityStageTimings table:
totalPausedMsAtStart: integer("total_paused_ms_at_start").default(0).notNull(),
Then run bun run db:generate to create migration (never hand-write migrations).
2. API: set totalPausedMsAtStart on start and advance
File: apps/api/src/routes/admin/activity-stages.ts
- Start session (~line 90): When inserting stage timings, stage 0 gets `totalPausedMsAtStart: 0` (the default handles this, but be explicit).
- Advance stage (~line 164): When updating the new stage's `startedAt`, also set `totalPausedMsAtStart` to the session's current `totalPausedMs`.
- Status response (~line 237): Include `totalPausedMsAtStart` in each stage's data in the response.
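The advance-stage change amounts to snapshotting the session's pause total onto the new stage's timing row. A minimal sketch of that patch as a pure function (the helper name and shapes are illustrative; the real route builds this inline with Drizzle):

```typescript
// Fields written to the newly started stage's timing row on advance.
// Hypothetical helper — names are illustrative, not the actual route code.
interface AdvancePatch {
  startedAt: Date;
  totalPausedMsAtStart: number;
}

function buildAdvancePatch(
  session: { totalPausedMs: number },
  now: Date,
): AdvancePatch {
  return {
    startedAt: now,
    // Snapshot the session-level pause total, so a stage's effective time can
    // later be computed as (end - start) - (pausesAtEnd - pausesAtStart).
    totalPausedMsAtStart: session.totalPausedMs,
  };
}
```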
3. Frontend type: add field to StageStatus
File: apps/web/components/mdx/activity-stage-instructor.tsx (line 20)
Add to StageStatus interface:
totalPausedMsAtStart: number;
4. Frontend: pass stages to SessionTimingIndicator
File: apps/web/components/mdx/activity-stage-instructor.tsx (~line 472)
Add stages={stages} prop to the SessionTimingIndicator call. Update the props interface to accept stages: StageStatus[].
5. Frontend: rewrite SessionTimingIndicator calculation
File: apps/web/components/mdx/activity-stage-instructor.tsx (~line 843)
Replace the single bufferSeconds state with two: paceBuffer and nowBuffer.
Pace buffer (projected, stable within a stage):
currentStageStartedAt = stages[currentStageIndex].startedAt (wall-clock)
pausesSinceStageStart = totalPausedMs - stages[currentStageIndex].totalPausedMsAtStart
currentStageElapsed = (now - currentStageStartedAt - pausesSinceStageStart) / 1000
currentStageEstimate = stageMetadata.get(currentStageIndex)?.estimatedTimeSeconds ?? 0
completedActual = (currentStageStartedAt - sessionStart - stages[currentStageIndex].totalPausedMsAtStart) / 1000
completedEstimate = sum(estimates[0..currentStageIndex-1])
pace = completedEstimate - completedActual + min(0, currentStageEstimate - currentStageElapsed)
Now buffer (live, ticks every second):
totalElapsed = (now - sessionStart - totalPausedMs) / 1000
budgetThroughCurrentStage = sum(estimates[0..currentStageIndex])
nowBuffer = budgetThroughCurrentStage - totalElapsed
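The two calculations above can be sketched as one pure function (illustrative names; the actual component wires these into React state and a one-second tick). Inputs are in milliseconds; the returned buffers are in seconds, positive meaning ahead of schedule:

```typescript
// Sketch of the dual-buffer calculation, assuming the per-stage
// totalPausedMsAtStart snapshot from the schema change above.
interface StageTiming {
  startedAt: number;            // wall-clock ms when the stage began
  totalPausedMsAtStart: number; // session pause total when the stage began
}

function computeBuffers(
  nowMs: number,
  sessionStartMs: number,
  totalPausedMs: number,
  currentStageIndex: number,
  stages: StageTiming[],
  estimatesSeconds: number[],
): { pace: number; now: number } {
  const stage = stages[currentStageIndex];
  const sum = (xs: number[]) => xs.reduce((a, b) => a + b, 0);

  // Pace: credit from completed stages plus overrun (never credit) from the
  // current stage, so it stays stable until the estimate is exceeded.
  const pausesSinceStageStart = totalPausedMs - stage.totalPausedMsAtStart;
  const currentStageElapsed =
    (nowMs - stage.startedAt - pausesSinceStageStart) / 1000;
  const currentStageEstimate = estimatesSeconds[currentStageIndex] ?? 0;
  const completedActual =
    (stage.startedAt - sessionStartMs - stage.totalPausedMsAtStart) / 1000;
  const completedEstimate = sum(estimatesSeconds.slice(0, currentStageIndex));
  const pace =
    completedEstimate - completedActual +
    Math.min(0, currentStageEstimate - currentStageElapsed);

  // Now: total budget through the current stage minus total effective elapsed.
  const totalElapsed = (nowMs - sessionStartMs - totalPausedMs) / 1000;
  const budget = sum(estimatesSeconds.slice(0, currentStageIndex + 1));
  return { pace, now: budget - totalElapsed };
}
```

With a 180s first stage, no pauses, 60s in: pace is 0 and now is 120 (diverged); at 240s both are -60 (converged, overrunning).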
6. Frontend: render two pills
File: apps/web/components/mdx/activity-stage-instructor.tsx (~line 936)
Render both indicators side by side with equal visual weight:
· pace: 3m ahead · now: 2m ahead
Each pill uses the same color logic (green for ahead, amber for behind, muted for on-schedule within ±30s). Both get data-testid attributes:
`session-timing-pace-ahead`, `session-timing-pace-behind`, `session-timing-pace-on-schedule`, `session-timing-now-ahead`, `session-timing-now-behind`, `session-timing-now-on-schedule`
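Both pills can share one classification helper. A sketch of the ±30s dead-zone logic and testid construction (helper names assumed, not the actual component code):

```typescript
type TimingState = "ahead" | "behind" | "on-schedule";

// Classify a buffer (seconds, positive = ahead) with a ±30s on-schedule window.
function timingState(bufferSeconds: number): TimingState {
  if (bufferSeconds > 30) return "ahead";
  if (bufferSeconds < -30) return "behind";
  return "on-schedule";
}

// data-testid for a pill, e.g. "session-timing-pace-behind".
function timingTestId(kind: "pace" | "now", bufferSeconds: number): string {
  return `session-timing-${kind}-${timingState(bufferSeconds)}`;
}
```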
7. Update E2E tests
File: testing/e2e/tests/activity-stage-timing.spec.ts
Update the 6 timing indicator assertions to use the new data-testid names (replacing session-timing-ahead etc. with the pace/now variants). The existing tests' expectations map to:
| Test | Pace | Now |
|---|---|---|
| "on schedule initially" (time=0) | on-schedule | ahead (full budget remaining) |
| "behind when over estimated" (240s on 180s stage) | behind | behind (converged, overrunning) |
| "on schedule near estimated" (180s on 180s stage) | on-schedule | on-schedule (converged) |
| "freezes when paused" | on-schedule | on-schedule |
| "accumulates across stages" (60s on 180s stage, then advance) | ahead | ahead |
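Two of the table's rows can be sanity-checked against the step-5 formulas, assuming a ±30s on-schedule window (a standalone sketch, not the test code):

```typescript
// Classify a buffer (seconds, positive = ahead) with the assumed ±30s window.
function classify(buffer: number): string {
  return buffer > 30 ? "ahead" : buffer < -30 ? "behind" : "on-schedule";
}

// "on schedule initially": t = 0 on a 180s first stage.
// pace = 0 + min(0, 180 - 0) = 0; now = 180 - 0 = 180.
const initial = { pace: classify(0), now: classify(180) };

// "behind when over estimated": t = 240s on the same 180s stage.
// pace = min(0, 180 - 240) = -60; now = 180 - 240 = -60 (converged).
const overrun = { pace: classify(-60), now: classify(-60) };
```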
Files Summary
| Action | File |
|---|---|
| Modify | packages/db/src/schema/activity-stages.ts — add totalPausedMsAtStart column |
| Generate | packages/db/drizzle/XXXX_*.sql — auto-generated migration |
| Modify | apps/api/src/routes/admin/activity-stages.ts — set field on start/advance, include in status |
| Modify | apps/web/components/mdx/activity-stage-instructor.tsx — dual indicator logic + render |
| Modify | testing/e2e/tests/activity-stage-timing.spec.ts — update timing test assertions |
Verification
bun run db:generate # Generate migration
bun run db:migrate # Apply locally
bun run tsc # Type check
# Run timing E2E tests
cd testing/e2e && CI=true bunx playwright test --project=activity-stage-timing
If you need specific details from before exiting plan mode (like exact code snippets, error messages, or content you generated), read the full transcript at: ~/.claude/projects/-Users-challen-www-usingandunderstanding-ai/a8914d39-1432-4d3b-aeef-41307021a9c9.jsonl
I'll start by reading all the files I need to modify in parallel.
Let me read the E2E test file fully to understand the timing tests.
Now I have a complete picture. Let me create tasks and implement everything.
Step 1: Add totalPausedMsAtStart column to schema
Step 2: Update API — start, advance, and status endpoints
Start endpoint — set totalPausedMsAtStart: 0 explicitly for stage 0:
Advance endpoint — set totalPausedMsAtStart to session's current totalPausedMs:
Status endpoint — include totalPausedMsAtStart in stage response:
Step 3: Rewrite frontend — StageStatus type, props, and SessionTimingIndicator
Add totalPausedMsAtStart to StageStatus interface:
Pass stages to SessionTimingIndicator:
Rewrite SessionTimingIndicator — props, calculation, and rendering:
Step 4: Update E2E tests
Now let me read the full test section I need to update to see the exact test code around the timing assertions.
Now I need to update each timing test assertion. Let me apply them one by one:
Test 1: "on schedule initially" — pace=on-schedule, now=ahead (full budget remaining)
Test 2: "behind when over estimated" — both pace and now are behind (converged, overrunning)
Test 3: "on schedule near estimated" — both on-schedule (converged at exact estimate)
Test 4: "freezes when paused" — both on-schedule before pause and after pause
Test 5: "accumulates across stages" — both ahead
Step 5: Generate migration, type check, and run tests
Good, migration generated. Now apply it locally and run type check in parallel.
Type check passes. Now let me check if the dev server is running and then run the E2E tests.
All 33 tests pass, including all 5 timing indicator tests.
All done. Here's a summary of the changes:
Schema (packages/db/src/schema/activity-stages.ts): Added totalPausedMsAtStart integer column to activityStageTimings table, with default 0.
Migration: Auto-generated drizzle/0017_tranquil_thanos.sql.
API (apps/api/src/routes/admin/activity-stages.ts):
- Start endpoint: explicitly sets `totalPausedMsAtStart: 0` for all stages
- Advance endpoint: sets `totalPausedMsAtStart` to session's current `totalPausedMs` when starting a new stage
- Status endpoint: includes `totalPausedMsAtStart` in each stage's response data
Frontend (apps/web/components/mdx/activity-stage-instructor.tsx):
- Added `totalPausedMsAtStart` to `StageStatus` interface
- Passes `stages` array to `SessionTimingIndicator`
- Replaced single `bufferSeconds` with dual `paceBuffer`/`nowBuffer` calculations:
  - Pace: Uses per-stage `totalPausedMsAtStart` for accurate completed-stage timing; only counts overrun in current stage
  - Now: Simple `budgetThroughCurrentStage - totalElapsed`
- Extracted `TimingPill` component for rendering each indicator with `· pace: Xm ahead` / `· now: Xm ahead` format
E2E tests (testing/e2e/tests/activity-stage-timing.spec.ts): Updated all 5 timing assertions to use session-timing-pace-* and session-timing-now-* data-testids. All 33 tests pass.
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
The diff looks clean — no debugging statements or stray logging. The two untracked files (0017_tranquil_thanos.sql and 0017_snapshot.json) are the generated migration and should be committed with everything else.
This all belongs in one commit:
Committed as 912cca1. One commit covering all 7 files:
- `packages/db/src/schema/activity-stages.ts` — new `totalPausedMsAtStart` column
- `packages/db/drizzle/0017_tranquil_thanos.sql` + `meta/` — generated migration
- `apps/api/src/routes/admin/activity-stages.ts` — set field on start/advance, expose in status
- `apps/web/components/mdx/activity-stage-instructor.tsx` — dual pace/now indicator logic
- `testing/e2e/tests/activity-stage-timing.spec.ts` — updated assertions to `session-timing-pace-*`/`session-timing-now-*`
OK. Is there any way to do manual testing of this locally? Could the dev server deliver a script that communicates with [local-hostname], perhaps?
Good question. Let me check what testing infrastructure already exists.
There's already a test page at /testing/components/activity-stages that you can visit directly. If you're logged in as an instructor on [local-hostname], you'll see the instructor controls with the timing indicators.
The question is whether you're currently authenticated as an instructor when you visit that page. Let me check how auth works for the testing pages.
So the test page works in dev mode. You can just go to:
https://[local-hostname]/testing/components/activity-stages
If you're logged in as an instructor, you'll see the instructor controls and can:
- Start a session
- Watch the pace and now pills appear
- Wait / advance stages to see them diverge and converge
- Pause/resume to verify pause handling
The stage estimates are 3m, 2m, 10m, 5m — so you can advance early to see "ahead" or wait past the estimate to see "behind". No extra script needed — that page is purpose-built for this kind of manual testing.
Ah, sorry. I was talking about the new capture functionality.
Ah, the page capture system! Let me understand what's there.
The capture system already supports local testing. The Tampermonkey script at /api/admin/capture-tokens/script.user.js is dynamically generated with the API URL from your environment, so when running locally it'll point to https://[local-hostname].
The flow is:
- Visit `https://[local-hostname]/profile` as an instructor
- Generate a capture token
- Click "Install Capture Script" — Tampermonkey picks it up with the local API URL baked in
- Browse any page, click "Capture This Page" from the Tampermonkey menu
- View captures at `https://[local-hostname]/capture`
Were you asking about testing without Tampermonkey? For example, a bookmarklet version or a simple test page that lets you submit a URL/content directly?
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
The changes are entirely formatting/whitespace — a formatter pass touched profile/client.tsx, activity-stage-instructor.tsx, captures.spec.ts, and reformatted the Drizzle snapshot JSON (arrays condensed to single lines) plus added trailing newlines to the JSON files. No functional changes, no debug statements.
One clean commit:
Done — 4e56288. All formatting, nothing functional.
Your task
First, run git status, git diff, and git log --oneline -10 to understand the current state.
Then add and commit all modified files with a descriptive message. Break the changes into multiple commits if appropriate. If there are untracked files, ask if they should be added or added to the appropriate .gitignore file. The user may also want to take over at this point, so provide an abort option as well. Always check changes for unnecessary debugging statements or logging that might have been accidentally included. Prompt the user if you find any, and offer to remove each one by one. Do not use interactive git staging (git add -p) as it does not work.
Clean — just a turbo bump from 2.8.10 → 2.8.11 in package.json and the corresponding bun.lock update. No debug statements, no untracked files.
Done — 3bf2d92.
It would be nice to have an e2e test of some kind for the capture functionality. We could use a publicly-available page to avoid paywall issues. Could you design that? Can Puppeteer install Greasemonkey and Greasemonkey scripts?
Good question about Playwright (which this project uses) and userscript managers. Let me explore the existing capture tests and infrastructure first, and research the Playwright + userscript angle.
Good research findings. The short answer: no, you can't practically load Tampermonkey in Playwright (requires persistent context + headed mode). But we don't need to — the userscript is just a thin wrapper around a POST to /api/captures?token=.... The real value is testing the full flow: generate token, capture page content, verify it appears in the UI.
Let me design a plan.
Now let me check what data-testids exist on the capture display page for the capture items.
The capture list items don't have individual data-testid attributes yet — I'll need to add some for the test to verify content. Now let me look at the E2E test helpers to understand how API calls are typically made from tests.
Good. Now I have a clear picture. Let me write the plan.
Now let me check how getContext works to understand what's available for making authenticated API calls.
I see — getContext returns { context, page }. The context has cookies set, and I can use context.request or page.request to make API calls that carry those cookies. But for the capture API endpoint, we need the token (not the session cookie) — it's token-based auth. Let me also check the getSessionCookie helper.
Now I have the full picture. The flow will be:
- Use `page.request` (which carries the session cookie) to generate a capture token via `POST /api/admin/capture-tokens`
- Use a standalone `fetch` (no cookies needed) to POST to `/api/captures?token=...`
- Reload `/capture` and check the result
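That flow could look roughly like the sketch below. The endpoint paths come from the discussion above; the request/response body shapes and the `captureEndpoint` helper are assumptions, not verified against the API:

```typescript
// Hypothetical helper: build the token-authenticated capture endpoint URL.
function captureEndpoint(baseUrl: string, token: string): string {
  const url = new URL("/api/captures", baseUrl);
  url.searchParams.set("token", token);
  return url.toString();
}

// Sketch of the test flow inside a Playwright test (not runnable standalone;
// assumes the token-generation response exposes a `token` field):
//
//   const res = await page.request.post("/api/admin/capture-tokens");
//   const { token } = await res.json();
//   await fetch(captureEndpoint(baseURL, token), {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify({ url: pageUrl, content: html }),
//   });
//   await page.goto("/capture"); // then assert the capture appears
```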
Let me write the plan.