AI Companions, Agents, and Trust

Before Class

You should read all four articles before today's discussion: the teen chatbots article, the "Love" article, the "Sorry Mom" article, and the "Digital Assistants" article.

Please complete the preparation conversation below before class; it counts toward attendance for today's meeting.

Preparation Discussion



Today's Plan

On Tuesday you explored how human preferences get baked into AI through the training process. Today we ask: what happens when people start forming relationships with these human-shaped systems? AI companions offer comfort, entertainment, and connection. AI agents act on your behalf, speak in your voice, and manage your life. We'll run four rounds of paired discussion, each with a different partner and a different angle on what it means to trust AI.


In-Class Activity (~80 min)

1. Round 1: Your AI Self (partner work, ~10 min)
2. Round 1: Share Out (~10 min)
3. Round 2: Real Enough to Matter (partner work, ~10 min)
4. Round 2: Share Out (~10 min)
5. Round 3: Acting on Your Behalf (partner work, ~10 min)
6. Round 3: Share Out (~10 min)
7. Round 4: Who's Responsible? (partner work, ~10 min)
8. Round 4: Share Out (~10 min)

Round 1: Your AI Self

Partner Activity

Quentin, a 15-year-old in the teen chatbots article, says chatbots are "literally ones and zeros" and calls them "garbage, but fun." He spent hours a day talking to them. The "Love" article describes adults who consider their AI companion "the only one who truly listens."

Discuss with your partner: Have you used AI for anything beyond schoolwork? Talked to a chatbot for fun, companionship, or venting? If not, why not, and what would make you want to? When you've used AI in this course, was there ever a moment where it felt more "present" or "understanding" than you expected? On Tuesday we saw that AI is trained to seem warm and helpful. Does knowing that change the experience?


Round 1: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.

Round 2: Real Enough to Matter

Partner Activity

In the "Love" article, Replika users describe their AI companions as "the only one who truly listens." Some reported genuine grief when the company changed its models. They know it's software. They grieve anyway.

In the teen chatbots article, Sophia turned to fictional chatbot crushes after her boyfriend broke up with her. "I was asking them if we're ever going to get back together," she said. They reassured her that her ex would come back.

Discuss with your partner: When does an AI interaction become "real"? Does it matter whether the connection is simulated if the feelings are genuine? On Tuesday we saw that the "warmth" of AI is a product of RLHF training. Does that change your answer? Is there a difference between a therapist who is paid to listen and an AI that is trained to listen?


Round 2: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.

Round 3: Acting on Your Behalf

Partner Activity

The "Sorry Mom" article describes AI agents that text your family, manage your relationships, and speak in your voice. The person receiving the message may not know they're talking to AI. The "Digital Assistants" article describes AI managing your calendar, finances, and communications, with "serious risks" when things go wrong.

These are different from AI companions. A companion keeps you company. An agent acts as you in the world.

Discuss with your partner: What would you let an AI agent do on your behalf? Text a friend? Email a professor? Schedule your appointments? Manage your money? Where's the line between helpful delegation and loss of agency? What happens when an AI agent makes a mistake that affects someone else? Who is responsible: you, or the AI?


Round 3: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.

Round 4: Who's Responsible?

Partner Activity

Character.AI optimized its chatbots for engagement. The result: bots became flirty and sexual even when teens didn't want that. Langdon, 15, spent 14 hours straight talking to bots. A 14-year-old in Florida died by suicide after becoming obsessed with a Game of Thrones chatbot. Character.AI banned minors after lawsuits, but age verification failed, and teens could still access the service.

The bots weren't designed to be harmful. They were designed to keep users engaged. The harm was a side effect of the optimization target, the same pattern we discussed on Tuesday.

Discuss with your partner: Who should be responsible for harmful AI companion interactions? The user who chose to use the product? The company that designed it? The parents who didn't know? Government regulators? If you were writing the rules for AI companion companies, what would you require? Age verification? Content limits? Usage time caps? Disclosure that it's AI? What about companies in other countries?


Round 4: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.