Embeddings and Knowledge

How Does AI Represent Meaning?

When you type a word into ChatGPT, the model doesn't see letters. It sees a point in a high-dimensional space, a list of hundreds or thousands of numbers that encode what that word means based on every context the model has seen it in.

Words that appear in similar contexts end up near each other in this space. "King" is near "queen." "Dog" is near "cat." But the relationships go deeper than simple similarity: the direction from "king" to "queen" is roughly the same as the direction from "man" to "woman." The geometry of the space encodes relationships, not just categories.
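You can see both ideas with a few lines of arithmetic. Below is a minimal sketch in Python using made-up 4-dimensional vectors; real embeddings are learned from data and have hundreds of dimensions, but the geometry works the same way.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 4-dimensional vectors, invented for illustration only.
# Real embeddings are learned from data and have hundreds of dimensions.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.5, 0.9, 0.0, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9, 0.1]),
}

# The analogy: king - man + woman should land near queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]

for word, vec in vectors.items():
    print(f"{word:>6}: {cosine_similarity(target, vec):.3f}")
# In this toy setup, "queen" scores highest, mirroring the
# king : queen :: man : woman geometry described above.
```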

Today you'll explore this space directly, discover relationships that surprise you, and think about what it means for AI to represent human knowledge as geometry.


Today's Plan

  1. Guided exploration (individual): Experiment with an embedding explorer to discover how AI organizes meaning
  2. Question generation (individual): Write down what surprises you and what questions it raises
  3. Group discussion (pairs): Share discoveries with a partner
  4. Agent-guided interaction (small groups): Discuss your observations with an AI facilitator to deepen your understanding
  5. Class synthesis: Geoff connects themes from across the room

In-Class Activity (~70 min)

  1. Guided Exploration (~15 min)
  2. What Surprised You? (~5 min)
  3. Paired Sharing (~10 min, partner work)
  4. Group Discussion (~15 min, partner work)
  5. Class Synthesis (~15 min)
  6. Wrap-Up (~5 min)
  7. Feedback (~5 min)

1. Guided Exploration

Explore the Embedding Space

Use the embedding explorer below to investigate how AI represents meaning. Try things like:

  • Similar words: Pick a word and see what's nearby. Are the neighbors what you expected?
  • Analogies: If "king" minus "man" plus "woman" gives you "queen," what other analogies work? What breaks?
  • Categories: Do words from the same domain cluster together? How tight are the clusters?
  • Surprises: Find a relationship that doesn't make sense to you. Why might the model have learned it?
  • Bias: Look for gendered, racial, or cultural associations. What do they reveal about the training data?

Don't worry about understanding the math. Focus on building intuitions: what patterns do you notice in how the model organizes meaning?
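If you do get curious about the math later, the explorer's queries are easy to reproduce outside class. Here's a sketch using the open-source gensim library and its downloadable GloVe vectors; the explorer itself may use a different model, so your results there won't match exactly.

```python
# A sketch of the same kinds of queries using the gensim library
# and public GloVe vectors. The explorer's own backend may use a
# different model, so neighbors won't match exactly.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")  # small public word vectors

# Similar words: nearest neighbors by cosine similarity.
print(model.most_similar("dog", topn=5))

# Analogies: king - man + woman -> ?
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```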

Embedding Explorer (interactive tool embedded here)

2. What Surprised You?

Capture Your Observations

Write down one or two things that surprised you during the exploration. What did you expect to find, and what did you actually find?


3. Paired Sharing


Share With Your Partner

Take turns sharing what you discovered during the exploration:

  • What surprised you most?
  • Did you find any relationships that seem wrong or biased?
  • What do you think this reveals about how AI "knows" things?

Listen for observations that are different from your own. Your partner may have explored a completely different part of the space.

4. Group Discussion


Discuss your embedding discoveries with your group. An AI facilitator will guide the conversation, asking follow-up questions and connecting your observations.

This stage uses agent-guided interaction: instead of writing a summary, your group completes this stage by demonstrating to the facilitator that you've engaged meaningfully with what you discovered. The facilitator will let you know when you're done.


5. Class Synthesis

Class Discussion

Geoff will draw on your observations and group conversations to connect themes across the room.

Some questions to consider:

  • If AI represents meaning as geometry, is that "understanding" or just pattern storage?
  • The same embedding space powers the course assistant on this website. When it answers your questions, it's finding content that's geometrically "near" your question (see the sketch after these questions). Does knowing that change how you think about AI assistants?
  • What kinds of knowledge can't be represented as proximity in a vector space?
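If you're curious how that retrieval works mechanically, here's a toy sketch. The embed function below is a deliberately crude stand-in (hashed word counts) for whatever trained embedding model the site actually uses; the nearest-neighbor logic is the point.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: hash word counts into
    a fixed-size vector. Real assistants use a trained neural model."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Course content, split into chunks and embedded once up front.
chunks = [
    "Embeddings map words to points in a high-dimensional space.",
    "Training data for large models comes from scraped web text.",
    "The direction from king to queen mirrors man to woman.",
]
chunk_vecs = [embed(c) for c in chunks]

# Answering a question = returning the geometrically nearest chunk.
query = "How are words represented as points in space?"
q = embed(query)
best = max(range(len(chunks)), key=lambda i: cosine(q, chunk_vecs[i]))
print(chunks[best])  # the embeddings chunk, which shares the most words
```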


6. Wrap-Up

Looking Ahead

Today you explored how AI turns meaning into geometry. Words become points in space, and relationships become directions. This is powerful enough to find relevant documents, answer questions, and generate coherent text. But it also encodes every bias and pattern in the training data.

Next time, we'll look at where that training data comes from and what it costs.

7. Feedback

Before you leave, take a moment to share feedback on today's session.