From Simple Parts

Before Class

Read both articles before Thursday's class:

— how biological and artificial neurons differ, and what brains can teach AI

— how large language models display surprising abilities at scale

Please complete the preparation discussion below before class; it counts toward attendance for today's meeting.

Preparation Discussion



Today's Plan

On Tuesday, you built a digit-recognition network from simple artificial neurons — and it worked. Today we'll explore why it worked, and what happens when you scale that same idea up by a factor of millions. Four rounds of paired discussion, each with a different partner and a different question.


In-Class Activity (~80 min)

1. Round 1: Emergence Everywhere (partner work, ~15 min)
2. Round 1: Share Out (~5 min)
3. Round 2: The Neuron Gap (partner work, ~15 min)
4. Round 2: Share Out (~5 min)
5. Round 3: What Emerges at Scale (partner work, ~15 min)
6. Round 3: Share Out (~5 min)
7. Round 4: So What? (partner work, ~12 min)
8. Wrap-Up (~3 min)
9. Feedback (~5 min)

1. Round 1: Emergence Everywhere

Partner Activity

On Tuesday, you built a network from simple neurons — each one just multiplies inputs by weights, adds them up, and applies an activation function. Nothing fancy. Yet when you connected thousands of them and trained the network, it could recognize handwritten digits. Nobody programmed digit recognition — it emerged from training.
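That three-step recipe can be written down in a few lines of Python. This is a sketch of the idea, not the class code; the function names and the example weights are made up for illustration:

```python
from math import exp

def sigmoid(z: float) -> float:
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + exp(-z))

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One artificial neuron: multiply, add, activate."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Two inputs with hand-picked weights: sigmoid(1*2 + 0*(-1) - 1) = sigmoid(1)
print(neuron([1.0, 0.0], [2.0, -1.0], -1.0))  # ~0.731
```

Everything the digit network does is this one function, wired together thousands of times.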

The first reading shows this same pattern everywhere in nature: water molecules that don't "know" about ice, ants that don't "know" about bridges, birds that don't "know" about flocking. Simple parts following simple rules → complex collective behavior that seems to come from nowhere.

Discuss with your partner: What makes emergence surprising? Is it genuinely surprising that the digit network worked, or does it feel obvious in hindsight? What's the difference between "emergence" and "just a complicated system"?


2. Round 1: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.

3. Round 2: The Neuron Gap

Partner Activity

Geoff just shared a finding that may change how you think about Tuesday's exploration: it takes about 1,000 artificial neurons to approximate what a single biological neuron does. The artificial neurons you explored on Tuesday are radically simpler than their biological counterparts.

Your digit network had about 13,000 parameters. A single cubic millimeter of brain tissue contains roughly 50,000 biological neurons, each one as complex as a 1,000-unit deep network. The human brain has roughly 86 billion of them.
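A quick back-of-envelope calculation makes the gap concrete. This treats the 1,000-to-1 ratio as exact, which it certainly isn't; the point is the order of magnitude:

```python
# Rough arithmetic on the neuron gap, using the figures above.
ARTIFICIAL_PER_BIOLOGICAL = 1_000       # ~1,000 artificial units per biological neuron
NEURONS_PER_MM3 = 50_000                # biological neurons in one cubic millimeter
BRAIN_NEURONS = 86_000_000_000          # ~86 billion neurons in a human brain

# Artificial units needed to match just one cubic millimeter of tissue:
mm3_equivalent = NEURONS_PER_MM3 * ARTIFICIAL_PER_BIOLOGICAL
print(mm3_equivalent)                   # 50,000,000

# And to match the whole brain, under the same rough assumption:
brain_equivalent = BRAIN_NEURONS * ARTIFICIAL_PER_BIOLOGICAL
print(brain_equivalent)                 # 86,000,000,000,000 (86 trillion)
```

By this crude estimate, one cubic millimeter of brain already "costs" tens of millions of artificial units, thousands of times more than the whole digit network.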

Discuss with your partner: Does this change how you think about what you built on Tuesday? If AI neurons are so much simpler than biological ones, why does AI work at all? What does this gap mean for the analogy between brains and neural networks?


4. Round 2: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.

5. Round 3: What Emerges at Scale

Partner Activity

The digit network has about 13,000 parameters. GPT-4 has hundreds of billions — tens of millions of times more. Same basic building blocks (weighted sums + activation functions), incomprehensibly different scale.

From the second reading: researchers tested 204 tasks and found that some abilities appear suddenly at specific model sizes. Models below a threshold score essentially zero — random guessing. Then at some scale, performance jumps dramatically. Models trained only to predict the next word can suddenly do arithmetic, identify movies from emojis, translate proverbs.

But there's a debate: are these "real" emergent abilities, or are they measurement artifacts — abilities building gradually that our tests miss until they cross a visible threshold?
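A toy model shows how the artifact could arise. The numbers here are invented for illustration, not taken from the readings: suppose per-digit accuracy on arithmetic improves smoothly as models scale, but we score answers all-or-nothing, so a five-digit answer counts only if every digit is right. The smooth curve then looks like a sudden jump:

```python
# Illustrative only: smooth underlying ability, all-or-nothing metric.
per_digit_accuracy = [0.10, 0.30, 0.50, 0.70, 0.90, 0.99]  # improves steadily with scale

for p in per_digit_accuracy:
    exact_match = p ** 5   # all five digits must be right to score at all
    print(f"per-digit {p:.2f} -> exact-match {exact_match:.3f}")
```

Exact-match stays near zero for most of the range, then shoots up at the end, even though nothing discontinuous happened underneath. Whether real emergent abilities work this way is exactly the debate.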

Discuss with your partner: Are emergent abilities in LLMs "real" emergence or measurement artifacts? Does it matter? What does it mean for predicting what future, larger models will be able to do?


6. Round 3: Share Out

Geoff will ask a few pairs to share what they discussed. Listen for ideas that challenge or extend your own thinking.

7. Round 4: So What?

Partner Activity

Here's the thread: simple artificial neurons → digit recognition emerging from training → vastly more complex biological neurons → language models with billions of parameters displaying abilities nobody predicted.

Complex intelligence — whether in brains or in neural networks — emerges from simple mathematical operations repeated at enormous scale. Nobody designed it. Nobody fully understands it. And nobody can reliably predict what will emerge next.

Discuss with your partner: If complex intelligence can emerge from simple mathematical operations, what does that imply? For understanding our own minds? For what AI might become? For how we should think about the systems we use every day?


8. Wrap-Up

Closing Reflection

Four questions, four partners. The thread through today: the same simple operation — multiply, add, activate — at different scales produces digit recognition, language understanding, and abilities nobody designed. Whether that's "real" emergence or something else, it connects your hands-on experience on Tuesday to the biggest questions in AI.

We'll keep pulling on this thread. Next time you use ChatGPT or Claude, remember: underneath, it's the same basic operation you explored Tuesday — just repeated hundreds of billions of times.

9. Feedback