Josh:

I had my first experience with computer hardware when I was 8 years old.

My dad had brought home an old computer from work that wasn't being used anymore.

We took it apart, dove inside the case, made sure to avoid touching any capacitors, and

just looked around.

We took out the floppy drive, the motherboard,

sticks of RAM, and the hard drive. We even took the case of the hard drive apart

so we could look at the shiny platters inside. To this day, that's probably one of my favorite

pieces of computer hardware. There's something mesmerizing about those perfectly reflective

platters. And the second you touch them, your fingerprints are all over them. Fun fact, there

are some powerful rare earth magnets in there that you can take out and repurpose. Just be careful

you don't pinch your fingers between them. So that was the first time I ever took apart a computer,

but building one, that happened in college. It was my second year, and I wanted a server to run

virtual machines on, from my dorm room. My college had a /16 network, which means every network

jack you plugged into had a public IP. You could plug in a computer, and boom, it was accessible

on the internet. So I specced it out and placed my Newegg order. This was before Newegg was acquired

by a Chinese company. Then for the next few weeks, as parts came in, I would hike up the hill to the

mail room, pick up each package, and make the one mile walk back to my dorm room. A mile in both

directions with each individual component really makes you understand just how many different parts

you need for a computer. Once all the parts came in, I got it assembled and up and running.

So in this season, I want to cover each main component of a computer. We all use them every

day, but do we really understand them? The component that does everything you ask of it,

on this episode of In The Shell. The wrong thing to do is just go out and buy a computer

and then learn about it. You'll learn, but you'll learn a lot of things that maybe you didn't want

to learn. A computer that you buy today will likely be obsolete six months from now, and there's not a

dang thing that you can do about it. My name is Josh, and I'm able to keep this podcast independent

and advertisement-free because of support from listeners like you. If you are finding value in

what I'm doing here, consider becoming a paid supporter at members.sideofburritos.com. And as

a thank you, members get early access to new videos, ad-free versions of everything, bonus content,

and access to a live monthly Q&A. Thanks for considering. Now let's get back to the show.

At its core, pun intended, the CPU is a processor. It processes instructions. When your computer runs

a program, or even just boots up, the CPU fetches instructions from memory, decodes them, and executes

them one after another at lightning speed. This cycle of fetch, decode, execute repeats billions of

times per second, synchronized by the CPU's internal clock. It also has a small set of specialized tools

to help get its job done. One tool is the Arithmetic Logic Unit, ALU, which handles basic math and logical

comparisons. Another is the Control Unit, which manages the flow of data, telling memory and input

and output devices where to send or receive information. The CPU also uses tiny storage areas

called registers to keep data it's actively working on. Modern CPUs often include multiple processing

units called cores on one chip. A quad-core CPU has four cores that can work on tasks in parallel,

allowing the CPU to handle multiple tasks at the same time more efficiently. Despite all these

advancements, the CPU's fundamental job remains the same, to execute instructions and process data.
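To make that cycle concrete, here's a toy sketch in Python. This is not a real instruction set; the LOAD, ADD, and HALT operations are made up for illustration, but the loop mirrors the fetch-decode-execute rhythm, with a tiny register file and an ALU-style add.

```python
# A toy CPU: fetch-decode-execute over a made-up, illustrative instruction set.
def run(program):
    registers = {"A": 0, "B": 0}    # tiny register file
    pc = 0                          # program counter
    while True:
        instr = program[pc]         # fetch the next instruction
        op, *args = instr           # decode it into operation + operands
        pc += 1
        if op == "LOAD":            # execute: put a value in a register
            registers[args[0]] = args[1]
        elif op == "ADD":           # execute: ALU adds two registers
            registers[args[0]] += registers[args[1]]
        elif op == "HALT":          # execute: stop and hand back the registers
            return registers

result = run([("LOAD", "A", 2), ("LOAD", "B", 3), ("ADD", "A", "B"), ("HALT",)])
print(result["A"])  # 5
```

A real CPU does this same loop in silicon, billions of times per second, with hundreds of instructions instead of three.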

But how does this tiny chip actually switch and calculate so incredibly fast? The transistor.

A transistor is essentially a microscopic electrical switch. It can turn

on or off to either allow electrical current to flow or to block it. This on-off behavior

is exactly what we need to represent the ones and zeros of binary code, the language of computers.

Each transistor can be in an on state, allowing current representing a one,

or an off state, blocking current representing a zero. By arranging billions of transistors

into complex circuits, engineers create logic gates and high-level components that perform all

the calculations and decision making inside the CPU. For example, combinations of transistors form

logic gates like AND, OR, and NOT, which in turn combine to implement everything from addition

and subtraction to multimedia processing. It's incredible to realize that even the most advanced

computations ultimately boil down to transistors switching on and off. Historically, transistors replaced

their larger predecessors, vacuum tubes, in the 1950s and 60s. Early computers like the 1940s

ENIAC used around 18,000 vacuum tubes as switches, and those tubes ran hot and burnt out frequently.
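Circling back to those logic gates for a second, here's a rough Python sketch of the idea. The gates are plain functions on bits rather than transistors, but the composition is the real thing: AND, OR, and NOT combine into XOR, and XOR plus AND form a half-adder, the circuit that adds two one-bit numbers.

```python
# Logic gates as functions on bits (0 or 1); in hardware, transistors play this role.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

# XOR built purely out of AND, OR, and NOT.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

# A half-adder: adds two one-bit numbers, producing a sum bit and a carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1): one plus one is binary 10
```

Chain enough half-adders (plus carry handling) and you can add numbers of any width, which is exactly how the ALU does it.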

Transistors were first demonstrated in 1947 at Bell Labs. They were tiny, used far less power,

ran cooler, and were more reliable. Over the next decade, engineers dramatically improved

transistor designs. By the mid-1960s, a new manufacturing approach, the planar process,

made it possible to build many transistors on a single slice of silicon. This led to the first

integrated circuits and, eventually, the first microprocessors in the early 1970s. For perspective, Intel's first CPU, the 4004

in 1971, had about 2,300 transistors, compared to modern CPUs that can house tens of billions

of transistors on one chip. This rapid growth in transistor counts followed a trend known as

Moore's Law, the observation that roughly every two years, the number of transistors on a chip

doubles, which I talked about more in Season 2, Episode 2 on Gordon Moore. When you hear that a

CPU runs at 3GHz, what that actually means is that its internal clock ticks 3 billion times per second.

Each tick is an opportunity for the CPU to move one tiny step forward in the fetch-decode-execute cycle.
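You can sanity-check both of those numbers with some quick arithmetic. This sketch projects Moore's Law forward from the 4004's roughly 2,300 transistors in 1971, doubling every two years, which is a simplification of the real trend but lands in the right ballpark.

```python
# Moore's Law, naively: transistor count doubles roughly every two years.
def moores_law(start_count, start_year, end_year):
    count, year = start_count, start_year
    while year < end_year:
        count *= 2
        year += 2
    return count

# A 3 GHz clock ticks 3 billion times per second.
ticks_per_second = 3_000_000_000
print(f"{ticks_per_second:,} ticks every second")

# Intel 4004 (1971): ~2,300 transistors. Project fifty years forward:
print(f"{moores_law(2_300, 1971, 2021):,}")  # 77,175,193,600 -- tens of billions
```

Twenty-five doublings take 2,300 up past 77 billion, which is right in line with the transistor counts of modern flagship chips.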

One other important aspect of CPUs is their architecture.

CPU architecture means the basic set of machine instructions a CPU understands to carry out fundamental operations, things like arithmetic, logic, and moving data around.

Over the years, different families of CPUs have developed their own languages.

The most common architectures you'll hear about today are x86, x86-64, which is x86's 64-bit extension, and ARM.

Modern Intel and AMD desktop CPUs use the x86-64 architecture, which is basically a 64-bit evolution of the older 32-bit x86 standard.
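If you're curious which architecture your own machine uses, Python's standard library `platform` module will tell you without any third-party tools.

```python
import platform

# Reports the machine type: typically "x86_64" or "AMD64" on Intel/AMD systems,
# and "arm64" or "aarch64" on ARM systems like Apple Silicon or a Raspberry Pi.
print(platform.machine())
```

The exact string varies by operating system, but it maps directly onto the architecture families discussed here.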

The jump to x86-64 around the early 2000s was a big deal because it expanded the CPU's capability.
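To put numbers on that jump: each memory address selects one byte, so the size of the address space is two raised to the number of address bits.

```python
# Address space sizes: 2^(number of address bits) bytes.
addresses_32 = 2**32
addresses_64 = 2**64

print(addresses_32 / 2**30)  # 4.0  -> 4 GiB, the old 32-bit ceiling
print(addresses_64 / 2**60)  # 16.0 -> 16 EiB, the theoretical 64-bit limit
```

In practice, operating systems and memory controllers support far less than the full 64-bit range, but the ceiling is so high it stopped being the bottleneck.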

Older 32-bit x86 chips could only directly use about 4GB of RAM, whereas 64-bit CPUs can address

vastly more memory, theoretically up to 16 exabytes. Another reason CPU architecture is a big deal

is the difference in performance and efficiency. x86 and x86-64 chips, the kind in most desktops

and laptops, have complex instruction sets, often called CISC, spelled C-I-S-C, which include

many built-in operations. These CPUs tend to be very powerful and capable of handling heavy workloads

like video editing, gaming, and software development, but they also use more electricity

and run hotter as a trade-off. On the other hand, ARM CPUs use a reduced

instruction set called RISC, spelled R-I-S-C, which is focused on efficiency.


ARM chips are designed to do more with less. They draw far less power and run cooler,

which is why they dominate in smartphones, tablets, and other battery-powered devices

where energy efficiency is crucial. An ARM-based phone can run all day on a small battery.

Historically, the trade-off was that ARM CPUs weren't as powerful as their x86 counterparts.

They handled everyday tasks just fine, but weren't geared for the really demanding stuff.

However, that line is blurring today. A great example is Apple's M1 chip and its successors

used in newer Macs. It's an ARM-based CPU that proved ARM chips can be both extremely powerful

and power-efficient.

In some cases, Apple's ARM CPUs are on par with, or even outperform, traditional PC x86 chips while still running cool.

Now let's look at how the CPU actually fits into a PC when you are building one.

Physically, a CPU is a small squarish chip that needs to be installed onto the motherboard.

The motherboard is something we'll be covering in detail in a future episode.

The motherboard has a special CPU socket, a slot that is designed to hold the CPU, and connect it to the rest of the system.

Every motherboard model supports only a particular type of socket, and a CPU will only work if it matches that socket on the motherboard.

This is why, if you pick an AMD Ryzen CPU,

you need a motherboard with the correct AMD socket for that chip.

Once the CPU is seated in the motherboard, the next crucial step is keeping it cool.

As powerful as these chips are, they generate a lot of heat.

Without proper cooling, a CPU would overheat and shut down, or throttle itself dramatically to avoid damage.

So in your PC build, after the CPU is in the socket, you'll attach a CPU cooler on top of it.

This might be an air cooler, a metal heatsink with a fan, or a liquid cooler, but either way it serves the same purpose, to carry heat away from the CPU.

Thermal paste is applied between the CPU and the cooler.

It's a tiny dab of gray stuff that ensures there are no air gaps and maximizes heat transfer between the CPU's surface and the cooler's

base. When done correctly, this cooling setup keeps the CPU at a happy operating temperature

even under load. As I was going through my script for this episode, there was one question I kept

asking myself that I really wanted to know, and that's why are these tiny transistors made of

silicon? Silicon is one of the most abundant elements on earth. Common sand is largely silicon

dioxide, which means it's cheap and widely available. But more importantly, silicon is a

semiconductor, meaning it can act as an electrical conductor or an insulator under different conditions.

In pure form, silicon doesn't conduct electricity well. However, by doping it, introducing tiny

amounts of other elements, like phosphorus or boron, we can tweak its electrical properties.

Doped silicon can carry current in a controlled way, creating regions that either have excess electrons, N-type, or excess holes, P-type.

Put N-type and P-type silicon together, and you get a transistor that can switch on or off at your command.

Additionally, silicon is friendly to manufacturing.

It can be refined into extremely pure, single crystals, and then sliced into thin wafers you might have seen in photos of chip factories.

Those wafers are the platform on which billions of transistors are fabricated, using processes like photolithography.

The abundance of silicon means we can produce these wafers in huge quantities relatively cheaply.

And the material stability and reliability mean the resulting chips can run for years without degrading.

It really is amazing when you stop to think about it.

A CPU is this small piece of material that sits quietly inside your computer, yet it does so much.

That old machine I took apart with my dad felt so powerful,

yet today's CPUs run thousands of times faster and pack far more complexity.

In the Shell is written, researched, and recorded by me,

the ARM-based podcaster.

If you are listening in an app that lets you rate shows,

please take a minute to rate this one.

I would truly appreciate it.

I always heard that if you're hiking in the woods, bring a piece of fiber optic cable with you.

If you get lost, you can bury it,

and 15 minutes later, five backhoes will be there to dig it up.

That's it. Take care, and I'll see you next time.