Goals of today:
What are classical computers good at?
What are classical computers bad at?
Before we talk about why quantum systems are difficult to simulate, we should be honest about something:
Classical computers are incredibly good at many problems.
The success of modern science, engineering, and AI is not accidental. It is the result of an almost perfect match between classical computers and the structure of many physical and mathematical problems. Let’s start there.
What are classical computers good at?¶
1. Simulating classical dynamics¶
Suppose you want to simulate a classical physical system: particles moving under forces. For example billions of stars orbiting around a galaxy. What does a classical simulation actually do?
Let’s say you want to simulate $N$ particles for $T$ time steps.
At each time step:
You store the positions and velocities of all particles
You compute the forces
You update positions and velocities
You repeat
If the particles are not interacting with each other - for example, they are all attracted to some central mass - then the total computational cost scales roughly like $N \times T$ computations.

However, if every particle also interacts with every other particle at every time step, then each step requires an $N \times N$ grid of pairwise computations:

$$ \underbrace{\begin{pmatrix} \cdot & \cdots & \cdot \\ \vdots & \ddots & \vdots \\ \cdot & \cdots & \cdot \end{pmatrix}}_{N \times N} \xrightarrow{\text{interacting}} N^2 \text{ computations per step} $$

for a total of $N^2 \times T$ computations. This is still polynomial scaling.
If you double the number of particles, the computation becomes 4x harder — but not catastrophically harder.
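To make the scaling concrete, here is a minimal sketch (an illustrative inverse-square force law, not a production N-body code) of the naive pairwise loop. The nested loop over all pairs is exactly where the $N^2$ comes from:

```python
import numpy as np

def pairwise_forces(pos):
    """Naive O(N^2) loop: every particle attracts every other.
    Toy inverse-square force law; constants and units are illustrative."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[j] - pos[i]
            d = np.linalg.norm(r)
            forces[i] += r / d**3  # unit direction times 1/d^2
    return forces

rng = np.random.default_rng(0)
pos = rng.normal(size=(100, 2))   # 100 particles in 2D
forces = pairwise_forces(pos)     # ~100 x 100 pair computations per step
print(forces.shape)
```

By Newton's third law the pairwise forces cancel in pairs, so the total force on the system sums to zero - a quick sanity check on any force loop like this.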
At its core, modern computing is built on linear algebra: the repeated multiplication of matrices and vectors. And classical computers are astonishingly good at this.
The core operation of modern computation is the matrix-vector product $y = Ax$. If you have:

an $N \times N$ matrix

multiplied by an $N$-dimensional vector

the cost scales as $N^2$ multiply-adds. Again: polynomial scaling.
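As a quick sketch (the size $N$ is chosen arbitrarily), here is the matrix-vector product in NumPy along with the $N^2$ operation count it implies:

```python
import numpy as np

N = 1000
rng = np.random.default_rng(0)
A = rng.normal(size=(N, N))   # dense N x N matrix
x = rng.normal(size=N)        # length-N vector

y = A @ x                     # ~N^2 = 10^6 multiply-adds
print(y.shape, "cost ~", N * N, "operations")
```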
How Fast Is Your Computer?¶
Before we can understand what problems are hard, we need to understand what computers actually do—and how fast they do it.
The CPU: One Thing at a Time¶
At its core, a CPU (Central Processing Unit) is surprisingly simple. It has:
Registers: small memory slots that hold numbers (e.g., 32-bit or 64-bit values)
An ALU (Arithmetic Logic Unit): the circuit that actually adds, multiplies, etc.
A single operation looks like this:
Take two numbers from registers, do one arithmetic operation, store the result.
How fast can this happen? A modern CPU runs at roughly 3 GHz - that’s $3 \times 10^9$ clock cycles per second. If we assume (optimistically) one operation per cycle, that gives us roughly $3 \times 10^9$ operations per second.
That sounds fast. But for large simulations, it’s not nearly enough.
The GPU: Many Things at Once¶
A GPU (Graphics Processing Unit) takes a different approach. Instead of one very fast ALU, it has thousands of smaller ALUs running in parallel.
A modern GPU like the NVIDIA RTX 4090 has:

16,384 CUDA cores (roughly 16,000 small ALUs)

Each core can do roughly $10^9$ operations per second. Running in parallel:

$$16{,}384 \times 10^9 \approx 10^{13} \text{ operations per second}$$

This is measured in FLOPS (Floating Point Operations Per Second): roughly $10^{13}$ FLOPS, or tens of TFLOPS.
That’s roughly 10,000 times faster than a single CPU core—but only for problems that can be split into many independent calculations.
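You can estimate your own machine’s throughput with a quick (rough, timing-noise-prone) benchmark: a dense $n \times n$ matrix multiply costs about $2n^3$ floating point operations, so dividing by the elapsed time gives an effective FLOPS number:

```python
import time
import numpy as np

n = 500
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))

t0 = time.perf_counter()
C = A @ B                          # ~2 n^3 floating point operations
dt = time.perf_counter() - t0

flops = 2 * n**3 / dt
print(f"~{flops / 1e9:.1f} GFLOPS on this machine")
```

NumPy dispatches this to an optimized BLAS library, so even on a CPU you will typically see far more than the one-operation-per-cycle estimate above.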
Why This Matters¶
Remember our classical simulation:
Update $N$ particles
Each update is the same operation with different data
Perfect for parallelization
This is why GPUs dominate:
Physics simulations
Movie CGI and video games
Training neural networks (matrix multiplications)
Classical computers aren’t just “fast.” They’re fast at specific things: problems that decompose into many independent, repeated operations.
Example: How Does an LLM Work?¶
You’ve probably used ChatGPT or Claude. These are Large Language Models (LLMs). Let’s peek under the hood and estimate how much computation goes into generating a single word.
Step 1: Words Become Vectors¶
When you type a prompt like:
“How long should I cook raw chicken in the air fryer?”
The model doesn’t see words—it sees numbers. Each word (or “token”) is converted into a vector through an encoding process.
A typical embedding dimension is around $d \approx 10{,}000$. So the word “fish” becomes a vector $(v_1, v_2, \dots, v_{10{,}000})$.
These 10,000 numbers encode the “meaning” of the word in a high-dimensional space—words with similar meanings end up as similar vectors.
Step 2: Attention—Which Words Matter?¶
Here’s where it gets interesting. The model needs to understand relationships between words.
In “How long do fish live?”, the answer depends heavily on “fish.” But in “How long do stars live?”, the same question has a completely different answer.
The attention mechanism computes weights between every pair of words:
How much should word $i$ “pay attention to” word $j$?

This requires comparing every word to every other word - an $n^2$ operation (for $n$ words in the prompt) at each layer.
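Here is a toy sketch of the idea (tiny random vectors standing in for real word embeddings; `attention` is our own illustrative function, not any library’s API). The $n \times n$ weight matrix is exactly the every-word-to-every-word comparison:

```python
import numpy as np

def attention(Q, K, V):
    """Toy scaled dot-product attention (an illustrative sketch)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)            # (n, n): every word vs. every word
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)       # softmax: rows are attention weights
    return w @ V, w

rng = np.random.default_rng(0)
n, d = 6, 8                                  # 6 tokens, tiny embedding dimension
Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))
V = rng.normal(size=(n, d))
out, weights = attention(Q, K, V)
print(weights.shape)   # the n x n table of "who attends to whom"
```

Each row of `weights` sums to 1: it is a probability distribution over which other words that word attends to.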
Step 3: Many Layers¶
The model doesn’t just do this once. A modern LLM has roughly 80 layers, each performing:
Attention calculations
Matrix multiplications
Nonlinear transformations
Each layer refines the representation, building up from simple word meanings to complex reasoning.
Step 4: Predict the Next Word¶
The final output is a vector of probabilities over the entire vocabulary - typically $\sim 50{,}000$ possible words:
The model samples from this distribution to pick the next word, then repeats the whole process.
Counting Operations¶
Let’s estimate the computational cost for generating one token:
| Step | Operations |
|---|---|
| Matrix multiply per layer | $\sim d^2 \approx 10^8$ |
| Number of layers | $\sim 80$ |
| Total | $\sim 10^{10}$ per token |

With a GPU running at $\sim 100$ TFLOPS ($10^{14}$ ops/sec):

$$\frac{\sim 10^{10} \text{ ops}}{10^{14} \text{ ops/sec}} \approx 10^{-4} \text{ s per token}$$

In practice, with memory bottlenecks and overhead, it is closer to $\sim 10^{-2}$ s per token.
That’s why ChatGPT can type at roughly the speed you read - it’s generating tens of words per second, each requiring billions of operations.
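The back-of-the-envelope count above can be written out explicitly (round illustrative numbers, not the specs of any particular model or GPU):

```python
# Round illustrative numbers - not the specs of any particular model or GPU.
d = 10_000                      # embedding dimension
layers = 80                     # number of layers
ops_per_layer = 2 * d**2        # one dense d x d matrix-vector multiply
total_ops = layers * ops_per_layer

gpu_flops = 1e14                # ~100 TFLOPS, idealized peak
print(f"ops per token       ~ {total_ops:.1e}")
print(f"ideal time per token ~ {total_ops / gpu_flops:.1e} s")
```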
The Takeaway¶
An LLM is essentially:
Massive matrix multiplications
Repeated 80 times per token
Perfectly suited for GPUs
This is classical computing at its finest: huge linear algebra problems, embarrassingly parallel, running on hardware designed exactly for this task.
The question we’ll tackle next: Are there problems where this strategy fails?
Why Do We Need a Quantum Computer?¶
We’ve seen that classical computers—especially GPUs—are extraordinarily powerful. Matrix multiplications, particle simulations, neural networks: all conquered by brute-force parallelism.
But are there problems where this strategy fails completely?
The Traveling Salesman Problem¶
Imagine you’re a salesman who needs to visit 30 cities across the country. You want to find the shortest route that visits every city exactly once and returns home.
This is the famous Traveling Salesman Problem (TSP).
How hard could it be? Let’s count the possibilities.
For $N$ cities, the number of possible routes is roughly

$$N! = N \times (N-1) \times \cdots \times 2 \times 1$$

(fixing the start city and ignoring direction gives $\frac{(N-1)!}{2}$, which grows just as fast).

Let’s make a table. Assume a CPU checking $10^9$ routes per second:
| cities | Possible routes | Time to check all (CPU) |
|---|---|---|
| 10 | $10! \approx 3.6 \times 10^6$ | ~4 ms |
| 20 | $20! \approx 2.4 \times 10^{18}$ | ~80 years |
| 50 | $50! \approx 3.0 \times 10^{64}$ | ~$10^{48}$ years |

Wait - $10^{48}$ years? How long is that?
Cosmic Perspective¶
| Event | Time |
|---|---|
| Big Bang | 0 |
| Now | $\sim 1.4 \times 10^{10}$ years |
| Sun engulfs Earth | $\sim 5 \times 10^9$ years from now |
| Last stars burn out | $\sim 10^{14}$ years |
| Heat death of universe | $\sim 10^{100}$ years |
For 50 cities, the brute-force calculation would take $10^{48}$ years - longer than all stars will exist, but still far short of heat death.
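For a small number of cities, brute force is still feasible. Here is a minimal sketch (random city locations in the unit square; we fix the start city, so there are $(N-1)!$ routes to check):

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
n = 8
cities = rng.uniform(size=(n, 2))           # random points in the unit square

def tour_length(order):
    # total length of the closed loop visiting cities in this order
    return sum(np.linalg.norm(cities[order[i]] - cities[order[(i + 1) % n]])
               for i in range(n))

# Fix city 0 as the start, so there are (n-1)! routes to check.
best = min(tour_length((0,) + p) for p in itertools.permutations(range(1, n)))
print(f"checked {math.factorial(n - 1)} routes; best tour length {best:.3f}")
```

Eight cities means 5,040 routes - instant. Try bumping `n` up one city at a time and watch the runtime multiply.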
By Stirling’s approximation:

$$N! \approx \sqrt{2\pi N}\left(\frac{N}{e}\right)^N$$

This is faster than exponential. It’s not $2^N$ - it’s closer to $N^N$.
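You can check Stirling’s approximation, and the comparison to $2^N$, numerically:

```python
import math

# Compare N! to Stirling's estimate sqrt(2 pi N) (N/e)^N, and to 2^N.
for N in (10, 20, 50):
    exact = math.factorial(N)
    stirling = math.sqrt(2 * math.pi * N) * (N / math.e) ** N
    print(f"N={N:2d}  N! ~ {exact:.2e}  Stirling ~ {stirling:.2e}  2^N ~ {2**N:.2e}")
```

Already at $N = 50$, $N!$ ($\sim 10^{64}$) dwarfs $2^N$ ($\sim 10^{15}$), and Stirling’s estimate is accurate to a fraction of a percent.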
The Problem¶
The traveling salesman problem isn’t exotic. Problems with this structure appear everywhere:
Logistics: Delivery routes, airline scheduling
Biology: Protein folding, gene sequencing
Finance: Portfolio optimization
Manufacturing: Circuit board drilling, job scheduling
These are problems where:
The number of possibilities grows as $N!$ or $2^N$
There’s no shortcut—you can’t decompose it into independent parallel tasks
Classical computers hit a wall, no matter how fast
This is what classical computers are bad at: problems where the search space grows exponentially (or faster) and the structure doesn’t allow for parallelization.
Simulating Quantum Matter: The Spin Problem¶
So far we’ve looked at the traveling salesman—a classical optimization problem. Let’s now consider something that feels more inherently quantum: simulating actual quantum materials.
This isn’t an abstract puzzle. It’s a problem that directly maps to real physics: How do the magnetic moments in a material align? What is the lowest energy state of a chunk of iron, or a high-temperature superconductor, or a quantum magnet?
Why Do We Care About the Lowest Energy?¶
Here’s a deep fact about nature: systems tend toward their lowest energy state.
Heat up a chicken, and what happens? It eventually cools off. The thermal energy dissipates into the environment until the chicken equilibrates with room temperature.
This happens everywhere, constantly:
A hot cup of coffee cools down
A vibrating tuning fork goes silent
An excited atom emits a photon and relaxes
Nature is always “computing” the answer to the question: What is the lowest energy configuration?
When you cool a material down, the atoms and electrons rearrange themselves, settle into lower and lower energy states, and eventually approach the ground state—the configuration with the absolute minimum energy.
Nature does this effortlessly. For a classical computer trying to predict the answer? It can be impossibly hard.
Spins: Tiny Bar Magnets¶
Many particles have a property called spin. For our purposes, think of each particle as a tiny bar magnet that can point either up (↑) or down (↓).
Now imagine a line of these tiny magnets: ↑ ↓ ↓ ↑ ↑ ↓ ⋯
The Interaction: Neighbors Want to Anti-Align¶
If you’ve played with bar magnets, you know: opposite poles attract. Two neighboring magnets “want” to point in opposite directions:
If this were the only rule, the ground state would be easy to find: ↑↓↑↓↑↓⋯

Simple! Polynomial complexity - just alternate.
The Complication: External Magnetic Field¶
Now turn on an external magnetic field pointing up. This field wants all spins to align with it: ↑↑↑↑↑↑
But wait—now we have competing demands:
Neighbors want to anti-align: ↑↓↑↓↑↓
External field wants all aligned: ↑↑↑↑↑↑
What configuration actually minimizes the total energy?
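For a small chain we can simply ask the computer. Here is a brute-force sketch of a classical Ising chain (the coupling $J$ and field $h$ are illustrative values, not from any real material):

```python
import itertools
import numpy as np

def energy(spins, J=1.0, h=0.6):
    """Classical Ising chain: the +J s_i s_{i+1} term favors anti-aligned
    neighbors, the -h s_i term favors up. J and h are illustrative values."""
    s = np.array(spins)                      # entries are +1 (up) or -1 (down)
    return J * np.sum(s[:-1] * s[1:]) - h * np.sum(s)

N = 10                                       # 2^10 = 1024 configurations
ground = min(itertools.product([+1, -1], repeat=N), key=energy)
print("ground state:", "".join("↑" if s > 0 else "↓" for s in ground),
      " energy:", energy(ground))
```

With these values the neighbor interaction wins and the ground state alternates; increase `h` enough and the field takes over. The `itertools.product` call is exactly the $2^N$ enumeration - adding one spin doubles the search.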
Why This Is Hard¶
Unlike the alternating pattern, there’s no obvious answer. The ground state depends on:
The strength of the neighbor-neighbor interaction
The strength of the external field
The geometry (1D chain? 2D grid? 3D lattice?)
To find the true minimum, we might need to check all possible configurations.
How many configurations are there?
Each spin has 2 choices: ↑ or ↓
| spins | Configurations |
|---|---|
| 1 | 2 |
| 2 | $2^2 = 4$ |
| 3 | $2^3 = 8$ |

For $N$ spins: $2^N$ configurations.
Let’s put numbers to this:
| spins | Configurations | Memory to store all |
|---|---|---|
| 10 | $2^{10} \approx 10^3$ | ~16 KB |
| 30 | $2^{30} \approx 10^9$ | ~16 GB |
| 50 | $2^{50} \approx 10^{15}$ | ~16,000 TB |
| 100 | $2^{100} \approx 10^{30}$ | ~$2 \times 10^{31}$ bytes - far more than all the digital storage on Earth |
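The memory column is simple arithmetic - $2^N$ amplitudes at 16 bytes each:

```python
# 2^N amplitudes, 16 bytes (one complex number) each:
for n in (10, 30, 50, 100):
    bytes_needed = 16 * 2**n
    print(f"{n:>3} spins: {bytes_needed:.1e} bytes")
```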
Polynomial vs. Exponential¶
This is the key distinction:
| Complexity | Scaling | Example | Tractable? |
|---|---|---|---|
| Polynomial | $N$, $N^2$, $N^3$, etc. | Non-interacting spins | ✓ Yes |
| Exponential | $2^N$, $N!$, etc. | Interacting spins | ✗ No |
For polynomial scaling, doubling $N$ makes the problem harder by a constant factor (4×, 8×, etc.).
For exponential scaling, adding one spin doubles the problem size.
This is why interacting quantum systems are fundamentally hard to simulate classically.
Nature as a Computer¶
Is there a fundamentally different way to compute?
Could we build a machine that explores many possibilities simultaneously—not by having more processors, but by exploiting the physics of superposition itself?
This is the promise of quantum computing.
Here’s the remarkable thing: while we struggle to simulate 50 spins on a classical computer, nature does it effortlessly.
Every piece of iron, every magnetic material, every superconductor contains $\sim 10^{23}$ interacting quantum particles - and nature “computes” the ground state every time the material cools down.
Nature is a simulator doing calculations we could never do.
This raises a tantalizing question: Could we harness the quantumness of nature itself as our computer?
We’ll explore that idea in the next lecture.
Homework¶
Permutations and the “seating explosion”¶
You have $N$ people and $N$ seats in a row.
(a) How many distinct seatings are possible? (Give an exact expression.)
(b) Evaluate the number for $N = 20$.
(c) Use Stirling’s approximation

$$\ln N! \approx N \ln N - N + \tfrac{1}{2} \ln(2\pi N)$$

to estimate $N!$ for the same $N$. (You can keep only the dominant terms.)
(d) Based on (c), is $N!$ polynomial, exponential, or something else? Explain in one sentence.
“Which scaling is it?” (classification)¶
Put these in order from slowest growth to fastest growth for large $N$: $\log N$, $N$, $N \log N$, $N^2$, $2^N$, $N!$
Linear algebra costs (matrix/vector scaling)¶
Assume dense objects (no special structure), and count the number of multiply-add operations to leading order.
(a) Dot product of two length-$N$ vectors: $\sum_{i=1}^{N} a_i b_i$.

(b) Multiply an $N \times N$ matrix by a length-$N$ vector.

(c) Multiply two $N \times N$ matrices.
For each: give big-O scaling and a one-line explanation.
Configurations of spins (classical counting)¶
A spin-$\tfrac{1}{2}$ has two possible measurement outcomes along $z$: $\uparrow$ or $\downarrow$.

(a) List all basis configurations for 4 spins along $z$. (There should be $2^4 = 16$ of them.)
(b) In general, how many basis configurations exist for $N$ spins? (Give an expression.)

(c) If you store one complex amplitude per configuration using 16 bytes (8 bytes real + 8 bytes imaginary), estimate the memory required to store a general state for $N = 10$, $30$, and $50$ spins.
“How many spins could you model?” (back-of-the-envelope)¶
Assume you have a computer with 16 GB of usable RAM.
(a) Using the same 16-bytes-per-amplitude assumption, what is the largest $N$ such that you can store all $2^N$ amplitudes in memory?
(b) Why is “storing the state” only the first problem? Give one additional reason simulation can be harder than memory alone.
Homework: Wedding Planning¶
This is the second part of the first homework. It is due Wed at 11:59PM with Part I in Gradescope.
The goal of this homework is to show that “hard” problems are not unique to quantum mechanics. Even in the classical world, there are problems where the number of possibilities explodes so fast that “just try them all” becomes impossible.
In this problem, you are a wedding planner trying to seat guests to minimize drama.
Imagine you have a list of guests and a set of tables. You want to create a seating chart where everyone is happy. You might have a couple of different preferences and constraints:
Preferences: Aunt May really wants to sit near the window. (This is a local bias).
Interactions: If you put Uncle John and Cousin Bob next to each other, they will fight. (This is a pair-wise penalty).
In this problem you will use Python to find the optimum - the seating that minimizes drama. How many guests do you think you can handle on your laptop or desktop?
*Please see the Python Resources page on GitHub for help getting started with Python if you are new to it. LLMs are great for learning coding - just make sure you understand what the code is doing. For example, don’t paste this whole problem in and ask for a solution, but it is fine to ask “How do I make a random matrix in Python?”*
For homework submission, upload your Python code, with comments, converted to a PDF.
Part A: Setup¶
To make our lives easy, we will seat guests in a single line with positions $i = 0$ to $N-1$. We will also only include “interaction” preferences between guests (e.g., guest 1 does not want to sit next to guest 10), not seat preferences (e.g., guest 1 wants to sit at seat 10).
You are seating $N$ guests in a row. Label the guests by integers: $0, 1, \dots, N-1$.

Let’s start with $N = 5$ guests.

Each pair of guests has a “drama score” $J_{ij}$:

$J_{ij} > 0$ means guest $i$ dislikes sitting next to guest $j$

$J_{ij} < 0$ means guest $i$ likes sitting next to guest $j$

$J_{ij} = 0$ means neutral
Here is your first drama matrix:
| 0 | 1 | 2 | 3 | 4 | |
|---|---|---|---|---|---|
| 0 | 0 | 3 | -1 | 2 | 0 |
| 1 | 3 | 0 | 4 | -2 | 1 |
| 2 | -1 | 4 | 0 | 5 | -3 |
| 3 | 2 | -2 | 5 | 0 | 2 |
| 4 | 0 | 1 | -3 | 2 | 0 |
In this matrix, we assumed $J_{ij} = J_{ji}$ (symmetric) and $J_{ii} = 0$.
In Python make the matrix:
import numpy as np
# Define the matrix manually
drama_matrix = np.array([
[ 0, 3, -1, 2, 0],
[ 3, 0, 4, -2, 1],
[-1, 4, 0, 5, -3],
[ 2, -2, 5, 0, 2],
[ 0, 1, -3, 2, 0]
])
print(drama_matrix)

A seating arrangement is a permutation of the guests:

$$s = (s_0, s_1, \dots, s_{N-1})$$

where each guest appears exactly once.

Example seating for $N = 5$: $s = (2, 0, 3, 1, 4)$.

Only adjacent neighbors matter. Define the total drama energy:

$$E(s) = \sum_{i=0}^{N-2} J_{s_i,\, s_{i+1}}$$
Your goal: Use Python to find a seating arrangement with the smallest energy (least drama).
Write a Python function that takes:
a seating list s

a drama matrix drama_matrix

and returns the scalar energy $E(s)$.

Check your function on at least two different seatings $s$.
Check the drama score for

s = [1,4,2,0,3]

If you get $E = -1$ you are ready to go to the next section.
Part B: Brute-force search¶
Now, for the $N = 5$ example, use brute-force search to find the minimum-drama solution. There is one seating vector (up to reversal) that achieves it.
How many solutions did it have to search?
Part C: 10 Guests¶
Do the same for 10 guests.
drama_matrix = np.array([
[ 0, 4, -1, 2, -5, 3, 1, -2, 0, 5],
[ 4, 0, 3, -2, 1, -4, 5, 0, -3, 2],
[-1, 3, 0, 5, -2, 4, -3, 1, 2, -4],
[ 2, -2, 5, 0, 3, -1, 4, -5, 1, -3],
[-5, 1, -2, 3, 0, 2, -4, 5, -1, 0],
[ 3, -4, 4, -1, 2, 0, -2, 3, 5, -5],
[ 1, 5, -3, 4, -4, -2, 0, 1, -5, 3],
[-2, 0, 1, -5, 5, 3, 1, 0, 4, -1],
[ 0, -3, 2, 1, -1, 5, -5, 4, 0, 2],
[ 5, 2, -4, -3, 0, -5, 3, -1, 2, 0] ])

Find the minimum-energy seating $s$.
How many did it have to search?
Part D: The Hero Problem, 30 Guests¶
This part is a competition. Whoever gets the lowest drama score gets 5 extra points on the first exam. Please bring your seating $s$ and its drama score with you to class on Thursday 1/22. I will have a Python file with the drama matrix on GitHub for you to download.
How many possible solutions are there? Brute force becomes impossible quickly.
What strategies can you come up with besides raw compute power? Feel free to ask LLMs for help. Use any Python library you want. This is an optimization problem, but gradient-descent optimization algorithms will not help, because your input is discrete.
Link to the 30 x 30 drama matrix¶
Here is the link to the 30 × 30 drama matrix. The raw file may display as a gibberish string in your browser; download it and rename the file to `drama_matrix.csv`.
Import with
# Load your drama matrix (or generate for testing)
drama_matrix = np.loadtxt('drama_matrix.csv', delimiter=',', dtype=float)

Hints:¶
If you want to make your own drama matrix for testing, use this:
import numpy as np
# Set the seed so every student gets the EXACT same numbers
np.random.seed(42)
N = 10 # Number of guests
# Generate the upper triangle with -1 to 1
upper = np.triu(np.random.uniform(-1, 1, (N, N)), k=1)
# Make it symmetric (Matrix + Transpose)
# The diagonal remains 0 because k=1 in the step above
drama_matrix = upper + upper.T
print(drama_matrix)
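One classic strategy for discrete problems like this is simulated annealing: randomly swap two guests, keep swaps that lower the drama, and occasionally keep bad swaps (more often early on) so you can escape local minima. This is only a sketch of one possible approach - the step count, starting temperature, and cooling schedule are arbitrary choices, and other strategies may do better:

```python
import numpy as np

def drama_energy(s, J):
    # total drama: sum of J over adjacent pairs in the seating s
    return sum(J[s[i], s[i + 1]] for i in range(len(s) - 1))

def anneal(J, steps=20_000, T0=5.0, seed=0):
    """Simulated annealing sketch: propose a random swap of two guests,
    keep it if it lowers the drama, or sometimes anyway (more often at high T)."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = list(rng.permutation(n))              # random starting seating
    e = drama_energy(s, J)
    best_s, best_e = s[:], e
    for t in range(steps):
        T = T0 * (1 - t / steps) + 1e-9       # linear cooling schedule
        i, j = rng.integers(0, n, size=2)
        s[i], s[j] = s[j], s[i]               # propose a swap
        e_new = drama_energy(s, J)
        if e_new < e or rng.random() < np.exp((e - e_new) / T):
            e = e_new                         # accept the swap
            if e < best_e:
                best_s, best_e = s[:], e
        else:
            s[i], s[j] = s[j], s[i]           # reject: undo the swap
    return best_s, best_e
```

On the 5-guest matrix from Part A this finds the brute-force optimum almost instantly; for 30 guests it is a starting point, not a guarantee.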