The vacuum tube was a miracle. Before it, there was no electronics industry. No amplification, no radio, no computing. ENIAC ran on 17,468 tubes and could calculate artillery trajectories faster than any human alive. It worked. It was, by the standards of its time, astonishing.

It was also enormous. Fragile. Hot. The tubes burned out constantly. ENIAC consumed 150 kilowatts of power and filled a room. Engineers spent more time replacing failed tubes than running calculations. The architecture worked, but it was the wrong architecture. It couldn’t scale. Not because the engineering was bad, but because the physics was wrong. You can’t build the future on heated filaments in glass envelopes.

The Parallel

This is where we are with AI.

The large language model is a vacuum tube. It works. It does things that would have seemed impossible five years ago. It can write, reason, code, translate, analyze — hold a conversation that feels like thought. By the standards of our time, it is astonishing.

It is also enormous. It requires data centers burning megawatts of power. It hallucinates. It drifts. It costs a fortune to train and a small fortune to run. The architecture works, but every serious limitation we hit traces back to the same root: this is the wrong architecture. We’re heating filaments in glass envelopes and wondering why it’s expensive.

The people declaring AI overhyped are looking at the vacuum tube and seeing its flaws clearly. They’re right about every one. The tubes burn out. The power draw is absurd. The output is unreliable. They’re describing real problems. And making the same mistake people made in 1946 when they looked at ENIAC and concluded that computing would always be a niche endeavor for governments and large corporations.

The Transistor Moment

What came next was not a better vacuum tube. It was a different thing entirely.

At Bell Labs, Bardeen and Brattain proved that a piece of semiconductor could do what a vacuum tube did, without the heat, the glass, the fragility. Shockley's junction transistor, which followed, wasn't an improvement on the tube. It was a replacement: different physics, different economics, different future.

Then the Traitorous Eight walked out of Shockley's lab, founded Fairchild Semiconductor, and figured out how to manufacture transistors at scale. They didn't just build a company. They built the foundation of Silicon Valley and everything that followed: the integrated circuit, the microprocessor, the personal computer, the phone in your pocket.

None of that was visible from the vacuum tube. You couldn’t get there by making vacuum tubes smaller or more efficient. You had to find a different physics.

Still Ahead

The transistor moment for AI hasn’t happened yet.

We are building increasingly elaborate vacuum tubes. Bigger models. More parameters. More data. More power. And they're getting better, the way vacuum tubes got better through the 1940s and '50s. But the trajectory we're on is not the trajectory that changes everything. That requires something we haven't found yet: a different architecture, a different physics of intelligence.

When it arrives, it won’t look like a better chatbot. It will look like a different thing entirely, the way a transistor looks nothing like a vacuum tube. It will be small where this is big, cheap where this is expensive, reliable where this is unpredictable.

We don’t know what it is yet. But we know it’s coming, because we’ve seen this story before. The vacuum tube proved the function was possible. The transistor made it inevitable.

That’s where we are. The function is proven. The architecture is temporary. And somewhere, in some lab, the equivalent of Shockley’s junction transistor is waiting to be found.
