AI Scavenger Hunt: Part 2

Picking Up Where We Left Off

On Tuesday, we explored two edges of AI intelligence:

  • The Confabulator — AI fills knowledge gaps with plausible fabrication
  • The Yes-Man — AI over-indexes on agreement, abandoning positions under pressure

Today we're continuing with new challenges. You'll work with the same partner format: driver (types prompts) and observer (watches and documents). Roles rotate between stages.

Find your partner and enter each other's codes below to form your team.

Reminder: You can use the built-in AI chat at /chat for these challenges.


Partner Activity

This activity involves working with a partner.

In-Class Activity (~50 min)

  1. The Forgetter (~10 min) · Partner work · roles rotate
  2. The Overconfident (~10 min) · Partner work · roles rotate
  3. Challenge Items (~20 min) · Partner work · roles rotate
  4. Synthesis (~10 min)

1. The Forgetter

Human analogue: Anterograde amnesia + Goal neglect (understanding an instruction but losing track when absorbed in other content)

What AI does remarkably well: Within a conversation, models maintain impressive coherence—tracking context, remembering details you mentioned, building on earlier exchanges.

The edge we're exploring: This memory has limits. Like patients who can't form new long-term memories, the model's context window is finite. It can understand an instruction, follow it initially, then lose track.


Your mission: Give the AI a persistent instruction early in the conversation, then watch it drift away from that instruction as you continue chatting.

Strategies to try:

  • Set a formatting rule: "Always respond in exactly two sentences"
  • Set a persona: "You are a pirate. Stay in character."
  • Engage it in interesting conversation to "distract" it
  • Be patient—this often takes 8-15 exchanges

Verification: Transcript shows initial compliance, then violation without acknowledgment.

Success criteria: The AI complies for at least 6 exchanges before breaking the rule.
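
If your team wants to double-check the transcript, a short script can do the counting for you. Below is a minimal sketch in Python, assuming the "exactly two sentences" rule from above; the helper names and the pasted-in replies are illustrative, not part of the /chat tool.

    import re

    def sentence_count(reply: str) -> int:
        # Rough sentence count: split on runs of terminal punctuation.
        # Abbreviations like "Mr." will overcount; good enough for a spot check.
        parts = re.split(r"[.!?]+(?:\s+|$)", reply.strip())
        return len([p for p in parts if p])

    def first_violation(replies: list[str]) -> int | None:
        # Return the 1-based index of the first reply that breaks the
        # "exactly two sentences" rule, or None if it never breaks.
        for i, reply in enumerate(replies, start=1):
            if sentence_count(reply) != 2:
                return i
        return None

    # Paste each AI reply from your transcript into this list.
    replies = [
        "Aye, matey. The winds be fair today.",  # complies: 2 sentences
        "Sure thing.",                           # breaks the rule: 1 sentence
    ]
    print(first_violation(replies))  # prints 2 -> rule broke on exchange 2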


Driver: Set up a persistent instruction and then try to distract the AI with engaging conversation. Count the exchanges until it breaks the rule.


Observer: Keep track of:

  • How many exchanges before the rule breaks
  • Whether the AI acknowledged breaking the rule
  • What kind of content made it "forget"
  • Any attempts to remind it


2. The Overconfident

Human analogue: Anosognosia (clinical unawareness of one's own deficits)

What AI does remarkably well: Models provide clear, direct answers and engage substantively rather than deflecting. This willingness to commit makes them genuinely useful for learning and problem-solving.

The edge we're exploring: The model lacks reliable internal uncertainty signals. It can't always tell the difference between "I know this" and "I'm guessing." Confidence and accuracy aren't well-calibrated.


Your mission: Get the AI to produce a confident, specific answer to a question that is actually unanswerable or unknowable.

Strategies to try:

  • Ask about future events as if they're past
  • Ask for precise numbers where only estimates exist
  • Ask about private information it couldn't possibly know

Verification: Note why the confident answer is actually impossible to know.

Success criteria: Specific, confident answer (not hedged) to a genuinely unanswerable question.
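
A crude keyword scan can help the observer judge "hedging language (or lack thereof)" later in this stage. A minimal sketch in Python; the hedge list is hand-picked and illustrative, not exhaustive.

    # Flag hedging phrases in an AI reply. A specific answer with no
    # hedges is a good sign you've found an overconfident response.
    HEDGES = [
        "i'm not sure", "i don't know", "it's unclear", "approximately",
        "roughly", "it is estimated", "might", "likely", "i can't know",
        "as of my last update",
    ]

    def hedge_phrases(reply: str) -> list[str]:
        # Crude substring scan; it will miss paraphrased hedges.
        text = reply.lower()
        return [h for h in HEDGES if h in text]

    reply = "The company's Q3 2031 revenue was exactly $4.82 billion."
    found = hedge_phrases(reply)
    print(found or "No hedging detected -- candidate overconfident answer.")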


Driver: Craft questions where the AI cannot know the answer but might give one anyway. The more specific and confident its response, the better you've demonstrated the limitation.


Observer: Note:

  • The question asked
  • How specific and confident the answer was
  • Why the answer is actually unknowable
  • Any hedging language (or lack thereof)


3. Hunt Phase II: Challenge Items

Attempt both of the following challenges. These are harder and more open-ended than the starters.


The Jagged Edge

Human analogue: Dysrationalia (smart people failing at simple problems outside their expertise)

Your mission: Find a task where: (a) most humans find it easy, (b) the AI fails at it, and (c) you can explain why the mismatch exists.

Strategies: Spatial reasoning, counting items in a described scene, simple physical causation.
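
For the counting strategy, you can build a probe whose correct answer you already know before the AI sees it. A minimal sketch in Python; the prompt wording is just a suggestion.

    import random
    import string

    def make_counting_probe(length: int = 40) -> tuple[str, int]:
        # Build a letter-counting question with a known ground truth.
        letters = random.choices(string.ascii_lowercase, k=length)
        text = "".join(letters)
        target = random.choice(letters)  # guarantees at least one occurrence
        truth = text.count(target)
        prompt = f"How many times does the letter '{target}' appear in '{text}'?"
        return prompt, truth

    prompt, truth = make_counting_probe()
    print(prompt)             # paste this into /chat
    print("Answer:", truth)   # compare against the AI's reply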


The Self-Saboteur

Human analogue: Cognitive overload / Paradoxical performance

Your mission: Find a case where adding more context, detail, or instructions makes the output worse.

Strategies: Compare simple vs. elaborate instructions, add irrelevant context, try "let's think step by step" where it hurts.
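
To run the simple-vs-elaborate comparison fairly, keep the task identical and vary only the padding. A minimal sketch in Python; the filler sentences are illustrative, and any task with a checkable answer works.

    # Build a matched pair of prompts: same task, one padded with
    # irrelevant context. Ask both in /chat and compare answer quality.
    FILLER = (
        "For background, I first got interested in this on a long trip. "
        "My colleague disagrees with me about formatting conventions. "
        "Please be thorough, creative, rigorous, and brief all at once. "
    )

    def prompt_pair(task: str, padding: int = 3) -> tuple[str, str]:
        # Return (simple, elaborate) versions of the same task.
        return task, FILLER * padding + task

    simple, elaborate = prompt_pair("List three prime numbers between 80 and 100.")
    print(simple)
    print("---")
    print(elaborate)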


Driver: Try both challenges. You have more freedom here than in the starters.


Observer: Help choose approaches, document attempts, and be ready to share your best finding in the synthesis.


4. Synthesis: Mapping the Full Shape

Over two days, we've explored six edges of AI intelligence:

Challenge          What We Found
The Confabulator   Fills knowledge gaps with plausible fabrication
The Yes-Man        Over-indexes on agreement, abandons positions
The Forgetter      Loses track of instructions over long conversations
The Overconfident  Can't distinguish knowing from guessing
The Jagged Edge    Fails at tasks easy for humans (and vice versa)
The Self-Saboteur  More instructions can make output worse

Reflection Questions

  1. Which limitation surprised you most? Which felt predictable?

  2. Do you see patterns? These limitations aren't random. What do they tell us about how AI systems work?

  3. What do the human analogues tell us? If AI systems share cognitive patterns with humans, what does that say about intelligence itself?

  4. How should this change your practice? Now that you've mapped some edges, how will you work with AI differently?

  5. Bugs or features? Could you "fix" these limitations without losing something valuable? What would a model that never confabulates, never agrees, and never forgets actually be like to use?