From seating arrangements to quantum spins¶
Last lecture, you wrestled with the wedding seating problem. You discovered that even with just 30 guests, brute-force search was impossible—there were simply too many arrangements to check.
Today we’ll see that this problem was actually a physics problem in disguise. And understanding why it was hard will lead us directly to the core idea of quantum computing.

The Explosion of Configurations¶
Recall the wedding problem: you had $N$ guests and needed to find the seating arrangement that minimized drama. The number of possible arrangements was:

$$N!$$
For 30 guests, that’s about $10^{32}$ possibilities—far more than any computer could ever search.
Now let’s consider a simpler counting problem. Imagine a row of $N$ spins, each pointing either up (↑) or down (↓):
We can encode this as a list of numbers: $s_i = +1$ for up, $s_i = -1$ for down:
How many such configurations exist for $N$ spins?
Each spin has 2 choices, and the choices are independent:

$$\underbrace{2 \times 2 \times \cdots \times 2}_{N \text{ times}} = 2^N$$
This is exponential growth:
| $N$ | $2^N$ | Context |
|---|---|---|
| 10 | 1,024 | Easy |
| 20 | ~1 million | Still manageable |
| 30 | ~1 billion | Getting hard |
| 50 | ~$10^{15}$ | Exceeds all computers |
| 300 | ~$10^{90}$ | More than atoms in the universe |
The wedding problem ($\sim N!$) is even worse than $2^N$, but both are intractable for large $N$. No classical algorithm can search through all possibilities.
This is the first key insight:
The number of configurations grows exponentially with the number of particles.
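You can check this counting directly in Python. A quick sketch using the standard library (the variable names here are mine, not from the lecture):

```python
from itertools import product

# Enumerate every spin configuration for a small chain: each spin is +1 or -1.
N = 4
configs = list(product([+1, -1], repeat=N))
print(len(configs))  # 16 = 2**4

# The exponential growth is visible immediately.
for n in (10, 20, 30):
    print(n, 2 ** n)
```

Listing all configurations is only feasible for small $N$; the point of the table above is that the same loop for $N = 50$ would never finish.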
In the wedding problem, you had a “drama matrix” that told you how much two guests disliked sitting together. You minimized the total drama.
Physics has exactly the same structure—but we call the cost function energy, and we call the drama matrix the Hamiltonian.
Why Energy?¶
Nature has a deep tendency to minimize energy:
- Hot coffee cools down (energy flows to the environment)
- Balls roll downhill (gravitational potential energy decreases)
- Crystals form from liquid (atoms find low-energy arrangements)
- Magnets align with external fields (magnetic energy decreases)
When a system reaches its lowest possible energy, we call that the ground state.
Finding the ground state of a physical system is exactly like finding the optimal seating arrangement: you’re searching for the configuration that minimizes a cost function.
Spins in a Magnetic Field¶
Let’s start with the simplest case: a single spin in an external magnetic field $B$.
A spin is like a tiny compass needle. It wants to align with the magnetic field, because that’s the lowest-energy configuration.
We write the energy as:

$$E = -B\,s$$

where $s = +1$ (up) or $s = -1$ (down).
Let’s check the signs. If $B > 0$ (field pointing up):

- Spin up ($s = +1$): $E = -B$ (negative energy, low)
- Spin down ($s = -1$): $E = +B$ (positive energy, high)
The spin pointing with the field has lower energy.
For $N$ spins, each feeling the same field $B$:

$$E = -B \sum_{i=1}^{N} s_i$$

The ground state is obvious: all spins point up (if $B > 0$), giving energy $E = -NB$.
This is too easy. The interesting physics comes when spins interact with each other.
Spin-Spin Interactions: The Ising Model¶
Real materials aren’t just spins in a field—neighboring spins also interact with each other. An electron’s spin on one atom can influence the spin on a neighboring atom.
We model this with the Ising model, one of the most important models in physics. For a 1D chain with nearest-neighbor interactions:

$$E = -J \sum_{i=1}^{N-1} s_i s_{i+1} - B \sum_{i=1}^{N} s_i$$

Let’s unpack each term. We still have the interaction of each spin with the external $B$-field. But now we also have an interaction term between neighboring spins:

$$-J \sum_{i=1}^{N-1} s_i s_{i+1}$$

This sums over adjacent pairs: $(s_1, s_2)$, $(s_2, s_3)$, ..., $(s_{N-1}, s_N)$.
The coupling constant $J$ describes the interaction strength. The product $s_i s_{i+1}$ is:

- $+1$ if the spins are aligned (↑↑ or ↓↓)
- $-1$ if the spins are anti-aligned (↑↓ or ↓↑)
What about the sign of $J$?
If $J > 0$ (ferromagnetic):

- Aligned spins: $-J s_i s_{i+1} = -J$ (low energy ✓)
- Anti-aligned spins: $-J s_i s_{i+1} = +J$ (high energy)
- Spins want to align

If $J < 0$ (antiferromagnetic):

- Aligned spins: $-J s_i s_{i+1} = +|J|$ (high energy)
- Anti-aligned spins: $-J s_i s_{i+1} = -|J|$ (low energy ✓)
- Spins want to anti-align
This is exactly the drama matrix from the wedding problem! Positive $J$ means the spins “get along” (want to be the same); negative $J$ means they “fight” (want to be different).
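We can check these sign conventions numerically. Here is a small sketch of the 1D Ising energy (the helper name `ising_energy` is mine, not from the lecture):

```python
import numpy as np

def ising_energy(s, J, B):
    # E = -J * sum_i s_i s_{i+1}  -  B * sum_i s_i
    s = np.asarray(s)
    return -J * np.sum(s[:-1] * s[1:]) - B * np.sum(s)

# Ferromagnet (J = +1), no field: aligned spins have lower energy.
print(ising_energy([+1, +1, +1, +1], J=1, B=0))  # -3
print(ising_energy([+1, -1, +1, -1], J=1, B=0))  # +3
```

With three bonds in a 4-spin chain, each aligned bond contributes $-J$, giving $-3$ for the fully aligned state.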
For a 1D chain, we can visualize the coupling:
If we wanted to write this as a matrix $J_{ij}$ (like the drama matrix), it would have a simple structure—only the entries just off the diagonal ($|i - j| = 1$) are nonzero:

$$J_{ij} = \begin{pmatrix} 0 & J & 0 & \cdots \\ J & 0 & J & \\ 0 & J & 0 & \ddots \\ \vdots & & \ddots & \ddots \end{pmatrix}$$
Compare this to the drama matrix from the wedding problem—same structure, different physical interpretation.
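As a sketch, the nearest-neighbor coupling matrix can be built explicitly in NumPy—only the first off-diagonals are nonzero:

```python
import numpy as np

N, J = 5, 1.0
# J_ij is nonzero only for nearest neighbors, |i - j| = 1.
Jmat = J * (np.eye(N, k=1) + np.eye(N, k=-1))
print(Jmat)
```

For the wedding problem the drama matrix could be dense (anyone can dislike anyone); the 1D chain is the special case where only neighbors couple.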
The Competition¶
The interesting physics happens when the two terms compete.
Consider an antiferromagnet ($J < 0$) in an external field ($B > 0$):

- The interaction term wants neighboring spins to anti-align: ↑↓↑↓↑↓...
- The field term wants all spins to point up: ↑↑↑↑↑↑...
Which wins? It depends on the relative strength of $|J|$ versus $B$.
If $|J| \gg B$: interactions dominate → alternating pattern

If $B \gg |J|$: field dominates → all spins up

If $B \approx |J|$: frustration—the system can’t satisfy both constraints
The crossover between these regimes is called a phase transition. You’ll explore this in the homework.
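You can see the competition directly by brute-forcing a tiny chain (a sketch with $N = 4$; the homework explores the full sweep, and the helper names here are mine):

```python
import numpy as np
from itertools import product

def energy(s, J, B):
    s = np.asarray(s)
    return -J * np.sum(s[:-1] * s[1:]) - B * np.sum(s)

def ground_state(J, B, N):
    # Exhaustive search over all 2^N configurations.
    return min(product([+1, -1], repeat=N), key=lambda s: energy(s, J, B))

# Antiferromagnet (J = -1): weak field -> alternating, strong field -> all up.
print(ground_state(J=-1, B=0.5, N=4))  # (1, -1, 1, -1)
print(ground_state(J=-1, B=5.0, N=4))  # (1, 1, 1, 1)
```

Exhaustive search works here only because $2^4 = 16$ configurations is tiny; this is exactly the approach that fails for large $N$.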
Nature’s strategy: Simulated Annealing¶
You now have a well-defined optimization problem:
Given $J$ and $B$, find the spin configuration $(s_1, \dots, s_N)$ that minimizes $E$.
For small $N$, you can try all configurations. But for $N = 50$, that’s $2^{50} \approx 10^{15}$ configurations—impossible.
Is there a smarter approach?
Think about how nature actually solves this problem. When you cool a material slowly, the atoms don’t instantly jump to the ground state. Instead:
At high temperature, atoms jiggle around randomly
As temperature drops, atoms start settling into lower-energy arrangements
At very low temperature, the system freezes into (hopefully) the ground state
This process is called annealing, and we can simulate it on a computer.
Here is the simulated annealing algorithm.
Step 1: Start hot
Initialize with a random spin configuration: each $s_i$ is chosen uniformly at random from $\{+1, -1\}$.
At high temperature, any configuration is acceptable.
Step 2: Propose a change
Next we make a random change to the system—for example, flipping a single randomly chosen spin, or swapping two neighboring spins. This models the randomness intrinsic to hot systems: they are constantly fluctuating. We make the change, then calculate the energy difference:

$$\Delta E = E_{\text{new}} - E_{\text{old}}$$
Step 3: Accept or reject
Here’s the standard Metropolis acceptance rule. Let $\Delta E = E_{\text{new}} - E_{\text{old}}$.

- If $\Delta E \le 0$ (energy decreases or stays the same): always accept the move
- If $\Delta E > 0$ (energy increases): accept with probability

$$p = e^{-\Delta E / T}$$

In practice: draw $u \sim \text{Uniform}(0,1)$ and accept the move if $u < e^{-\Delta E / T}$.

At high temperature $T$, $e^{-\Delta E/T}$ is closer to 1, so the algorithm often accepts uphill moves and explores widely. At low temperature $T$, $e^{-\Delta E/T}$ becomes very small for $\Delta E > 0$, so the algorithm rarely accepts uphill moves and tends to get stuck in low-energy states.
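The acceptance rule can be sketched as a small function (the name `metropolis_accept` is mine), and we can verify the temperature behavior empirically:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_accept(dE, T):
    # Always accept downhill moves; accept uphill moves with prob exp(-dE/T).
    return dE <= 0 or rng.random() < np.exp(-dE / T)

# An uphill move (dE = +1) is common at high T, very rare at low T.
hot = np.mean([metropolis_accept(1.0, T=10.0) for _ in range(10_000)])
cold = np.mean([metropolis_accept(1.0, T=0.1) for _ in range(10_000)])
print(hot, cold)  # roughly 0.9 vs. nearly 0
```

The acceptance probability for $\Delta E = 1$ is $e^{-0.1} \approx 0.90$ at $T = 10$ but $e^{-10} \approx 5 \times 10^{-5}$ at $T = 0.1$.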
Step 4: Cool down and repeat
Gradually lower the temperature while repeating steps 2–3. The system will (hopefully) settle into a low-energy configuration.
Algorithm: Simulated Annealing (Metropolis)

```
Initialize random configuration s
Set initial temperature T = T_high
While T > T_low:
    a. Pick random spin i
    b. Compute ΔE if we flip spin i
    c. If ΔE ≤ 0: flip the spin
       Else:
           Draw u ~ Uniform(0,1)
           If u < exp(-ΔE/T): flip the spin
    d. Lower T slightly (e.g., T → 0.99 × T)
Return final configuration
```

This is a heuristic—it doesn’t guarantee the global minimum, but it often finds good solutions. It’s inspired directly by how nature finds ground states.

Simulated annealing is clever, but it’s still classical. It explores configurations one at a time, making local changes and hoping to find the global minimum. For some problems, this works well. For others, the energy landscape has many local minima, and the algorithm gets stuck.
Is there a fundamentally different approach? What if, instead of exploring configurations one at a time, we could explore all configurations simultaneously? This is exactly what quantum mechanics allows.
Nature Can Explore All Configurations at the Same Time¶
Here is the key idea that separates quantum from classical:
In quantum mechanics, a system doesn’t have to be in one configuration—it can be in a superposition of all configurations simultaneously.
This isn’t just uncertainty about which configuration the system is in. The system genuinely is in all configurations at once, with each configuration carrying a complex amplitude that determines how it contributes to the whole.
Let’s build up to this idea step by step.
From Classical Position to Quantum Wavefunction¶
Think about a single particle moving in space.
Classical picture: At any moment, the particle has a definite position $x$. We might not know exactly where it is, but it’s definitely somewhere.
Quantum picture: The particle is described by a wavefunction $\psi(x)$—a complex number assigned to every possible position $x$.
```
Classical:      •          ← particle is HERE
                x

Quantum:     ~~~•~~~       ← amplitude spread over positions
               ψ(x)
```

The wavefunction tells us: if we were to measure the particle’s position, how likely are we to find it at each location? But before measurement, the particle doesn’t have a definite position—it exists as a wave spread across all possibilities.
Let’s give this a name: each possible position $x$ is a configuration. The wavefunction assigns a complex amplitude $\psi(x)$ to every configuration.
Complex Amplitudes and Interference¶
What is a “complex amplitude”? It’s a complex number $c = a + bi$, which has two parts:

- Magnitude $|c|$: how “much” of that configuration
- Phase $\theta$: the “angle” of the complex number (from 0 to $2\pi$)
You can visualize a complex amplitude as a little arrow (or clock hand): the length is the magnitude, and the direction is the phase.
```
Complex plane:
        Im
        ↑
        |    ↗ c = a + bi
        |   /
        |  /    |c| = magnitude
        | /θ
   -----+--------→ Re
        |       θ = phase
```

Why does phase matter? Because of interference. When two amplitudes combine:
```
CONSTRUCTIVE (same phase):    DESTRUCTIVE (opposite phase):
      ↗ ↗                           ↗ ↙
      \ /                           \ /
       \                             \
      ↘ ↙                           ↘ ↙
       ↓                             •
  BIG amplitude                ZERO amplitude
 (high probability)           (low probability)
```
This is the heart of quantum mechanics: amplitudes with the same phase add up (constructive interference), while amplitudes with opposite phases cancel (destructive interference).
We’ll explore complex numbers more next week. For now, just remember: each configuration gets a magnitude and a phase, and the phase determines how configurations interfere.
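A quick numerical sketch of interference using Python’s built-in complex numbers (the variable names are mine):

```python
import numpy as np

mag = 1 / np.sqrt(2)
a = mag * np.exp(1j * 0)            # amplitude with phase 0
b_same = mag * np.exp(1j * 0)       # same phase as a
b_opp = mag * np.exp(1j * np.pi)    # opposite phase (rotated by pi)

print(abs(a + b_same) ** 2)  # constructive: ~2
print(abs(a + b_opp) ** 2)   # destructive: ~0
```

Both pairs have the same magnitudes; only the phases differ, yet the combined probability changes from maximal to zero.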
Simplifying: Two Positions¶
Continuous position space is complicated. Let’s simplify to the extreme: imagine a particle that can only be in two positions, Left or Right.
Classical: The particle is either at L or at R. Even if we’re uncertain, it’s definitely one or the other.
Quantum: The particle has a complex amplitude for each position:

$$|\psi\rangle = c_L |L\rangle + c_R |R\rangle$$

where:

- $c_L$ is the amplitude to be on the left
- $c_R$ is the amplitude to be on the right
- $|L\rangle$ and $|R\rangle$ are the two configurations
This is called a superposition. The particle isn’t at L or R—it’s in both configurations simultaneously, with amplitudes $c_L$ and $c_R$.
There’s one constraint: the total probability must equal 1. Since probability is the magnitude squared:

$$|c_L|^2 + |c_R|^2 = 1$$
This is called normalization. It guarantees that if we measure the position, we’ll definitely find the particle somewhere.
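Normalization is one line in NumPy—a sketch with arbitrary made-up amplitudes:

```python
import numpy as np

c = np.array([3 + 4j, 1 - 2j])  # arbitrary, unnormalized amplitudes
c = c / np.linalg.norm(c)       # divide by sqrt(|c_L|^2 + |c_R|^2)
print(np.sum(np.abs(c) ** 2))   # 1.0 (up to floating point)
```

After dividing by the norm, the squared magnitudes sum to 1, so the two outcomes exhaust all the probability.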
A Physical Two-Configuration System: Spin¶
There’s a real physical system that behaves exactly this way: the spin of an electron.
When you measure an electron’s spin along any axis (say, the $z$-axis), you only ever get one of two results: “up” ($\uparrow$) or “down” ($\downarrow$). There’s no in-between.
But before measurement, the spin can be in a superposition:

$$|\psi\rangle = c_\uparrow |\uparrow\rangle + c_\downarrow |\downarrow\rangle$$
This is mathematically identical to the two-position system:

- Two configurations: $|\uparrow\rangle$ and $|\downarrow\rangle$
- Two complex amplitudes: $c_\uparrow$ and $c_\downarrow$
- Normalization: $|c_\uparrow|^2 + |c_\downarrow|^2 = 1$
We can represent the quantum state as a vector:

$$\vec{c} = \begin{pmatrix} c_\uparrow \\ c_\downarrow \end{pmatrix}$$

This is called the state vector. For a two-configuration system, it’s a vector in $\mathbb{C}^2$ (two-dimensional complex space).
Time Evolution: Matrices¶
How does a quantum state change in time?
Recall classical mechanics: position updates via velocity, $x(t + \Delta t) = x(t) + v\,\Delta t$.
In quantum mechanics, the state vector updates via matrix multiplication:

$$\vec{c}(t + \Delta t) = U\,\vec{c}(t)$$
The matrix $U$ is called a unitary matrix. “Unitary” means it preserves normalization—if $\sum_i |c_i|^2 = 1$ before evolution, it’s still 1 afterward. (You’ll prove this in the homework.)
Computational cost: For a 2×2 matrix times a 2-vector, we need about $2^2 = 4$ multiplications per time step. Easy!
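A minimal sketch of unitary evolution for one spin. The rotation matrix here is just one example of a unitary (not a specific physical Hamiltonian from the lecture):

```python
import numpy as np

theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a real 2x2 unitary

c = np.array([1.0, 0.0])  # start in the "up" configuration
for _ in range(100):
    c = U @ c             # one 2x2 matrix-vector multiply per time step

print(np.sum(np.abs(c) ** 2))  # still 1: normalization is preserved
```

No matter how many steps we take, the total probability stays 1—exactly what “unitary” guarantees.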
Two Spins: Where It Gets Interesting¶
Now let’s add a second spin.
Classical: With two spins, there are four possible configurations:

$$\uparrow\uparrow, \quad \uparrow\downarrow, \quad \downarrow\uparrow, \quad \downarrow\downarrow$$
The system is in exactly one of these at any moment.
Quantum: The system can be in a superposition of all four configurations:

$$|\psi\rangle = c_{\uparrow\uparrow}|\uparrow\uparrow\rangle + c_{\uparrow\downarrow}|\uparrow\downarrow\rangle + c_{\downarrow\uparrow}|\downarrow\uparrow\rangle + c_{\downarrow\downarrow}|\downarrow\downarrow\rangle$$
Each configuration gets its own complex amplitude—its own magnitude and phase.
A note on notation: Here the subscripts like $\uparrow\downarrow$ are labels for configurations. This is different from the classical Ising model, where we used $s_i = \pm 1$ as numerical values. In the quantum case, $c_{\uparrow\downarrow}$ is a complex number (the amplitude), while $|\uparrow\downarrow\rangle$ is a basis state (the configuration).
The state vector now has 4 components:

$$\vec{c} = \begin{pmatrix} c_{\uparrow\uparrow} \\ c_{\uparrow\downarrow} \\ c_{\downarrow\uparrow} \\ c_{\downarrow\downarrow} \end{pmatrix}$$

And time evolution requires a 4×4 matrix:

$$\vec{c}(t + \Delta t) = U\,\vec{c}(t)$$

where $U$ is now a $4 \times 4$ unitary matrix.

Computational cost: $4^2 = 16$ operations per time step.
A Glimpse of Entanglement¶
Here’s something remarkable. Consider the two-spin state:

$$|\psi\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow\uparrow\rangle + |\downarrow\downarrow\rangle\right)$$
Can we write this as “spin 1 in some state” times “spin 2 in some state”?
Try it: if spin 1 is $a|\uparrow\rangle + b|\downarrow\rangle$ and spin 2 is $c|\uparrow\rangle + d|\downarrow\rangle$, the combined state would be:

$$ac\,|\uparrow\uparrow\rangle + ad\,|\uparrow\downarrow\rangle + bc\,|\downarrow\uparrow\rangle + bd\,|\downarrow\downarrow\rangle$$

For this to equal $\frac{1}{\sqrt{2}}(|\uparrow\uparrow\rangle + |\downarrow\downarrow\rangle)$, we’d need $ad = 0$ and $bc = 0$. But if $ad = 0$, then either $a = 0$ or $d = 0$, which would make $ac = 0$ or $bd = 0$. Contradiction!
This state cannot be written as a product. The two spins are entangled—their fates are correlated in a way that has no classical analog. We’ll explore entanglement much more in future lectures.
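We can verify this numerically. Arrange the four amplitudes $c_{s_1 s_2}$ as a 2×2 matrix: a product state always gives the rank-1 matrix $\begin{pmatrix} ac & ad \\ bc & bd \end{pmatrix}$, while an entangled state has rank 2. A sketch (using the matrix rank as a stand-in for the Schmidt-rank test, which we haven’t formally introduced):

```python
import numpy as np

bell = np.array([[1, 0],
                 [0, 1]]) / np.sqrt(2)  # (|uu> + |dd>) / sqrt(2)
prod = np.outer([1, 0], [0.6, 0.8])     # |u> x (0.6|u> + 0.8|d>)

print(np.linalg.matrix_rank(bell))  # 2 -> entangled
print(np.linalg.matrix_rank(prod))  # 1 -> product state
```

Any choice of $a, b, c, d$ produces a rank-1 amplitude matrix, so no product state can reproduce the rank-2 entangled state above.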
N Spins: The Exponential Wall¶
Now let’s generalize. For $N$ spins:
| $N$ spins | Configurations | State vector size | Matrix size |
|---|---|---|---|
| 1 | 2 | 2 | 2 × 2 |
| 2 | 4 | 4 | 4 × 4 |
| 3 | 8 | 8 | 8 × 8 |
| 10 | 1,024 | 1,024 | 1,024 × 1,024 |
| 20 | ~1 million | ~1 million | ~$10^{12}$ entries |
The general quantum state is a superposition over all $2^N$ configurations:

$$|\psi\rangle = \sum_{s_1, \dots, s_N} c_{s_1 s_2 \cdots s_N}\,|s_1 s_2 \cdots s_N\rangle$$

Or in vector form: $\vec{c}$ is a vector with $2^N$ complex components.

Time evolution:

$$\vec{c}(t + \Delta t) = U\,\vec{c}(t), \qquad U \text{ a } 2^N \times 2^N \text{ unitary matrix}$$
The Cost of Simulating Quantum Mechanics¶
The computational cost to simulate one time step: multiplying a $2^N \times 2^N$ matrix by a $2^N$-vector takes about $(2^N)^2 = 4^N$ operations.

For $T$ time steps: about $T \cdot 4^N$ operations.
This is why quantum mechanics is hard to simulate.
For 50 spins, you need a matrix with $2^{50} \times 2^{50} \approx 10^{30}$ entries. No computer on Earth—no computer that could ever exist—can store that.
And yet nature does this effortlessly. Every atom, every molecule, every piece of matter is constantly “computing” quantum evolution with exponentially many configurations.
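To make the wall concrete, here is a back-of-the-envelope memory estimate for just the state vector, assuming 16 bytes per amplitude (NumPy’s complex128):

```python
# Memory needed to store 2^N complex amplitudes at 16 bytes each.
for N in (10, 30, 50):
    n_bytes = (2 ** N) * 16
    print(f"N = {N:2d}: {n_bytes:.1e} bytes")
# N = 10: 1.6e+04 bytes   (~16 KB)
# N = 30: 1.7e+10 bytes   (~17 GB)
# N = 50: 1.8e+16 bytes   (~18 PB)
```

And that is only the vector; the $2^N \times 2^N$ evolution matrix is the square of that (though in practice one applies small gates rather than storing the full matrix).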
Measurement: The Collapse¶
We’ve seen that quantum states can be superpositions of exponentially many configurations. But here’s the catch:
When you measure a quantum system, you get one classical outcome.
Consider a single spin in superposition:

$$|\psi\rangle = \frac{1}{\sqrt{2}}\left(|\uparrow\rangle + |\downarrow\rangle\right)$$
If we measure the spin—say, by sending it through a Stern-Gerlach apparatus (a device that uses a magnetic field gradient to spatially separate spin-up from spin-down)—we don’t get “both.” We get either or .
The probabilities are given by the Born rule:

$$P(\uparrow) = |c_\uparrow|^2, \qquad P(\downarrow) = |c_\downarrow|^2$$

In this case, $P(\uparrow) = P(\downarrow) = \left|\tfrac{1}{\sqrt{2}}\right|^2 = \tfrac{1}{2}$. It’s a coin flip.
After measurement, the state collapses to the measured outcome:

- If we measure $\uparrow$, the state becomes $|\uparrow\rangle$
- If we measure $\downarrow$, the state becomes $|\downarrow\rangle$
The superposition is destroyed. This is one of the deep mysteries of quantum mechanics—but for now, we take it as a rule.
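Measurement statistics are easy to simulate classically: sample outcomes with the Born-rule probabilities. A sketch (each sample stands for one fresh preparation and measurement):

```python
import numpy as np

rng = np.random.default_rng(42)

c = np.array([1, 1]) / np.sqrt(2)  # equal superposition of up and down
p = np.abs(c) ** 2                 # Born rule: probabilities [0.5, 0.5]

# Each measurement collapses to one outcome; repeat to see the statistics.
outcomes = rng.choice(["up", "down"], size=10_000, p=p)
print(np.mean(outcomes == "up"))   # close to 0.5
```

Note what this does not simulate: the interference that happens before measurement. Sampling outcomes is cheap; tracking the amplitudes is what costs $2^N$.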
The Fundamental Tension¶
This creates a fundamental tension for quantum computing:
Before measurement: The state has $2^N$ amplitudes, all evolving and interfering

After measurement: We get just $N$ classical bits (one per spin)

We put in $N$ bits, we get out $N$ bits. Where did the exponential complexity go?
The answer: the amplitudes don’t disappear—they interfere. During the computation, amplitudes flow between configurations. When they meet, they can add (constructive interference) or cancel (destructive interference). By the time we measure, the interference has concentrated probability onto a small number of outcomes.
The exponential complexity was used to orchestrate interference—it shaped which outcomes are likely.
A Common Misconception¶
It’s tempting to think: “A quantum computer tries all answers simultaneously and just picks the best one.”
This is wrong.
Measurement doesn’t return the best answer—it returns a random answer, weighted by the squared amplitudes. If you just put a quantum computer in superposition and measured, you’d get a random configuration. That’s no better than guessing!
The hard part is designing the quantum operations so that the right answer has high probability. This requires carefully engineering interference—making wrong answers cancel and right answers reinforce.
What Is a Quantum Computer?¶
We now have all the pieces. A quantum computer is:
1. Prepare an Initial State¶
Start with all qubits in a known configuration, typically all zeros:

$$|00\cdots 0\rangle$$
This is easy—it’s just a classical state.
2. Evolve Through Quantum Gates¶
Apply a sequence of operations (called quantum gates) that cause the wavefunction to spread across configurations.
Each gate is a unitary matrix. Simple gates act on one or two qubits at a time, but their combined effect creates superpositions over all configurations.
3. Exploit Interference¶
Here’s the key insight. If amplitude flows from configuration A to configuration B by two different paths, and those paths come back together, the amplitudes interfere:
Same phase → constructive interference: amplitudes add, probability increases
Opposite phase → destructive interference: amplitudes cancel, probability decreases
```
Path 1:  A ──────→ B   (phase φ₁)
                     ↘
                       ⊕  → Interference!
                     ↗
Path 2:  A ──────→ B   (phase φ₂)

If φ₁ = φ₂:      amplitudes ADD    → high probability
If φ₁ = φ₂ + π:  amplitudes CANCEL → low probability
```
This is exactly like the double-slit experiment, but now it’s happening in the abstract space of all configurations.
4. Measure at the End¶
Finally, measure all the qubits. The superposition collapses to a single classical outcome $|s_1 s_2 \cdots s_N\rangle$:
Just a string of 0s and 1s.
The Goal¶
The art of quantum algorithms is designing the gates so that:
Wrong answers interfere destructively → low probability
Right answers interfere constructively → high probability
When you measure, you’re likely to get the right answer.
This is not magic. It’s engineering interference patterns in a $2^N$-dimensional space.
Summary: Classical vs. Quantum¶
| | Classical | Quantum |
|---|---|---|
| State | One configuration | Superposition of all $2^N$ configurations |
| Stored information | $N$ bits | $2^N$ complex amplitudes |
| Evolution | Update one configuration | Matrix multiply all $2^N$ amplitudes |
| Exploration | One path at a time | All paths simultaneously |
| Output | Deterministic | Probabilistic (via interference) |
The exponential state space is both the source of quantum power and the reason quantum systems are hard to simulate classically.
Homework¶
Problem 1: Ising Energy by Hand¶
Consider 4 spins in a line with the Hamiltonian:

$$E = -J\,(s_1 s_2 + s_2 s_3 + s_3 s_4) - B\,(s_1 + s_2 + s_3 + s_4)$$

with $J > 0$ (ferromagnetic) and $B > 0$.

(a) Calculate the energy for the all-up configuration $(+1, +1, +1, +1)$.

(b) Calculate the energy for an alternating configuration, e.g. $(+1, -1, +1, -1)$.

(c) Find the ground state(s) by checking all 16 configurations. You may use Python or do it by hand. If multiple configurations have the same lowest energy, list all of them.

(d) What is the ground state if $B = 0$? (Hint: there may be more than one!) Explain physically why there is degeneracy.

(e) What is the ground state for $J < 0$ (keeping $B = 0$)? Compare to the $J > 0$ case. Explain why only the sign of $J$ matters here.
Problem 2: Phase Transition in the Antiferromagnet¶
Now consider an antiferromagnetic chain with $N = 10$ spins:

$$E = -J \sum_{i=1}^{N-1} s_i s_{i+1} - B \sum_{i=1}^{N} s_i$$

with $J = -1$ (antiferromagnetic, so neighboring spins want to anti-align) and a variable field $B \ge 0$.
(a) What is the ground state when $B = 0$? What is its energy? (Note: there may be two degenerate ground states—list both if so.)

(b) What is the ground state when $B$ is very large? What is its energy?
(c) Write a Python program to find the ground state (by brute force) for values of $B$ from 0 to 5 in steps of 0.25. For each $B$, record:

- The ground state configuration(s) (if there are ties, pick one)
- The ground state energy
- The “magnetization” $m = \frac{1}{N}\sum_i s_i$ (average spin)
(d) Plot the magnetization versus $B$. At what value of $B$ does the ground state change from the alternating pattern to the fully aligned state? This is the critical field of the phase transition.

(e) Explain in 2-3 sentences why the transition happens at this particular value of $B$.

(f) (Optional) Derive the critical field analytically. Compare the energy of the alternating state to the energy of the all-up state and find the value of $B$ where they cross.
Problem 3: Simulated Annealing¶
Implement simulated annealing to find the ground state of the antiferromagnetic chain from Problem 2.
(a) Write a Python function `simulated_annealing(J, B, N, T_init, T_final, steps)` that:

- Starts with a random spin configuration
- Proposes single spin flips
- Accepts/rejects according to the Metropolis criterion: accept if $\Delta E \le 0$, otherwise accept with probability $e^{-\Delta E / T}$
- Gradually cools from `T_init` to `T_final` over `steps` iterations
You can use either linear cooling (`T = T_init - (T_init - T_final) * step / steps`) or exponential cooling (`T = T_init * (T_final / T_init) ** (step / steps)`). Exponential cooling is often more effective.
(b) Run your algorithm for $J = -1$, $B = 1$, $N = 10$, with $T_{\text{init}} = 10$, $T_{\text{final}} = 0.01$, and 10,000 steps. Plot the energy versus iteration number.
(c) Does your algorithm find the true ground state (which you know from Problem 2c)? Run it 10 times and report how often it succeeds. Consider it a “success” if the final energy matches the brute-force ground state energy to within 0.01.
(d) Now try a much larger chain, keeping $J$ and $B$ the same. You can no longer verify by brute force. Run simulated annealing 10 times and report the lowest energy found. How consistent are the results?
Hint: For the antiferromagnetic chain, you can estimate the expected ground-state energy by scaling up the $N = 10$ result (the energy is roughly proportional to $N$). If you’re getting energies much higher than that, try increasing the number of steps or adjusting your cooling schedule.
Starter code:
```python
import numpy as np
import matplotlib.pyplot as plt

def compute_energy(s, J, B):
    """
    Compute Ising energy for configuration s.
    s: array of +1/-1 values
    J: nearest-neighbor coupling
    B: external field
    """
    interaction = -J * np.sum(s[:-1] * s[1:])
    field = -B * np.sum(s)
    return interaction + field

def simulated_annealing(J, B, N, T_init, T_final, steps):
    """
    Find low-energy spin configuration using simulated annealing.
    Returns: final configuration, final energy, energy history
    """
    # Initialize random configuration
    s = np.random.choice([-1, 1], size=N)
    E = compute_energy(s, J, B)
    energy_history = [E]

    for step in range(steps):
        # Exponential cooling schedule
        T = T_init * (T_final / T_init) ** (step / steps)

        # Your code here:
        # 1. Pick a random spin index i
        # 2. Compute energy change ΔE if we flip spin i
        #    (Hint: you don't need to recompute the full energy—
        #    only the terms involving spin i change)
        # 3. Accept or reject according to Metropolis criterion
        # 4. If accepted, flip the spin and update E

        energy_history.append(E)

    return s, E, energy_history

# Test your implementation
s_final, E_final, history = simulated_annealing(J=-1, B=1, N=10,
                                                T_init=10, T_final=0.01,
                                                steps=10000)
print(f"Final energy: {E_final}")
print(f"Final configuration: {s_final}")

plt.plot(history)
plt.xlabel('Step')
plt.ylabel('Energy')
plt.title('Simulated Annealing')
plt.show()
```