SQL, MySQL, SQLite, you've probably heard these terms before.
They are all about databases, the unsung heroes of the internet. They work behind the scenes,
storing data, keeping it safe, and ensuring applications run smoothly.
Without a functioning database, many websites and services simply wouldn't work.
SQL, which stands for Structured Query Language, is the language used to interact with
these databases. When you log into your bank, a SQL query checks whether the username and
password you entered are valid. Once you are logged in, another query retrieves your balance
and recent transactions. SQL queries don't just fetch data, they also write data. If you update
your address with your bank, when you hit save, a SQL query stores the change in the database.
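As a concrete sketch of those read and write queries, here's a tiny example using an in-memory SQLite database. The table and column names are made up, and a real bank would hash the password rather than store it in plain text:

```python
import sqlite3

# Hypothetical accounts table in an in-memory SQLite database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (username TEXT, password TEXT, balance REAL)")
db.execute("INSERT INTO accounts VALUES ('alice', 's3cret', 1250.00)")

# "Is this username/password valid, and what's the balance?" -- a read query.
row = db.execute(
    "SELECT balance FROM accounts WHERE username = ? AND password = ?",
    ("alice", "s3cret"),
).fetchone()
print(row)  # (1250.0,)

# "Save my new address" -- queries also write data.
db.execute("ALTER TABLE accounts ADD COLUMN address TEXT")
db.execute("UPDATE accounts SET address = ? WHERE username = ?",
           ("1 Main St", "alice"))
db.commit()
```

The `?` placeholders pass user input as data rather than splicing it into the SQL string, which matters for the next topic.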
While SQL is extremely powerful, it's also prone to certain vulnerabilities, one of the most
notorious being the SQL injection attack. Back when I was getting into web security, SQL injections were one of the
first things I learned about. A SQL injection exploits a query to make it do something that
it shouldn't. So instead of just getting back my bank account balance, I could manipulate the query
to see the balances for everyone. Or worse, it could be used destructively, like deleting all address
records instead of just updating mine. I remember forums back in the day that had guides on how to
get started with SQL injections. The idea was that if you wanted to defend against these attacks,
you needed to understand them first. Some guides suggested testing against applications like
bWAPP, an intentionally vulnerable open-source app that you could run locally. But there were also
lists of poorly coded websites that you could practice on. I would never condone that, but the
internet was a bit different back then. Databases are always running in the background, always ready to
serve up the data that we need. But what happens when they are exploited on a massive scale?
That's where the SQL Slammer worm comes into play. When the SQL Slammer worm hit in 2003,
many internet services came to a screeching halt.
My name is Josh, and this is In the Shell.
Does this cause a national security problem?
Are being used without the operator's knowledge?
And if it sounds malicious, it's because it is.
40 attacks just this year on educational organizations.
And now to the massive cyber attack targeting hotels and casinos in Las Vegas.
To a possible cyber attack at one of the nation's busiest airports.
A cyber security firm, CrowdStrike, has caused this outage.
That it takes you longer to do something by putting it into a computer and calling it up
again than if you just kept simple records yourself in the house.
To understand how we got here, let's go back to the summer of 2002.
That's when multiple vulnerabilities in Microsoft SQL Server 2000 were publicly disclosed on the
Bugtraq, spelled with a Q, mailing list. Microsoft coordinated patches to address these issues,
including a critical buffer overflow vulnerability. By the way, I covered what a buffer overflow
is back in episode 2. This particular buffer overflow could be triggered by just a few bytes
sent over the network using a small UDP packet. The vulnerability was found in SQL Server's resolution
service, which listened on port 1434. This service helps clients locate SQL Server instances on a network.
The worm sent a specially crafted packet to this port, causing the buffer overflow that allowed it
to execute its malicious code and start spreading. Before we dive further, let's briefly talk about
UDP and TCP, the two main protocols that handle how data is transmitted over a network.
TCP, or Transmission Control Protocol, is all about reliability. Think of it like sending a registered
letter with delivery confirmation. The postal service guarantees the letter will reach the recipient,
who signs to confirm they received it, and you are notified when it's delivered. TCP establishes a
connection between the sender and receiver. Both sides agree on how they will communicate,
and it makes sure all data arrives in the right order and without errors. If something
goes missing, it's resent. One of the key features of TCP is the three-way handshake.
It's a simple process with three steps, as the name implies, to initiate a connection.
Let's say I'm trying to load reddit.com on my phone because I feel like mindlessly scrolling
for the next three hours. First, my device sends a SYN, S-Y-N, packet, which is basically a hello.
Reddit's server replies with a SYN-ACK, ACK spelled A-C-K, acknowledging the request and saying hello
back. And then finally, my device sends an ACK to confirm Reddit's response. Now we've got a
connection and data can begin flowing. You don't notice this because it's almost instantaneous,
but for computers, that's quite a bit of overhead. So that's the three-way handshake: SYN, SYN-ACK, ACK.
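To make the handshake concrete, here's a small sketch with a throwaway TCP client and server on localhost. The names are mine; the point is that `connect()` doesn't return until the SYN, SYN-ACK, ACK exchange has completed, and only then can data flow.

```python
import socket
import threading

# A throwaway TCP server on localhost.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # port 0 = let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()       # server side of the handshake completes here
    conn.sendall(b"hello")
    conn.close()

threading.Thread(target=serve_once).start()

# connect() performs the SYN / SYN-ACK / ACK exchange before returning;
# only after that can application data move.
cli = socket.create_connection(("127.0.0.1", port))
data = b""
while len(data) < 5:             # read until the 5-byte reply is complete
    chunk = cli.recv(16)
    if not chunk:
        break
    data += chunk
cli.close()
srv.close()
print(data)  # b'hello'
```

All of the retransmission and ordering guarantees happen inside the kernel's TCP stack; the application just sees a reliable byte stream.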
TCP also does a bunch of other stuff, but I'm not going to cover that today.
The important thing to know is that TCP is used when reliability is crucial, and you
want every bit of data to arrive intact and in order.
On the other hand, UDP, or User Datagram Protocol, which was used by SQL Slammer, is a different
beast. It's all about speed. You can think of it like shouting across a crowded room. You just
yell your message and hope the right person hears it. There's no guarantee it will arrive, and
you don't get any confirmation that it was received. UDP doesn't care about establishing a
connection or ensuring the data arrives in order, which is what makes it a lot faster than TCP.
It's used when speed is more important than reliability, like in video streaming or online gaming.
In those cases, if a few bits of data are lost, like a frame in a video, it's not a big deal,
because new data, like the next frame of the video, quickly replaces it.
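A matching sketch for UDP: no listen-and-accept ceremony, no connection, just a datagram launched at an address. On localhost it will almost certainly arrive, but nothing in the protocol guarantees it.

```python
import socket

# Fire-and-forget: a UDP "receiver" and "sender" on localhost.
# No handshake happens -- sendto() just launches the datagram.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"did you hear me?", ("127.0.0.1", port))  # no connect(), no ACK

data, addr = recv.recvfrom(64)       # arrives only because nothing was lost
print(data)  # b'did you hear me?'
send.close()
recv.close()
```

Compare this with the TCP sketch earlier: one `sendto()` call versus a whole connection setup, which is exactly the overhead Slammer avoided.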
With that out of the way, let's get back to Microsoft SQL Server.
So even though Microsoft released patches to fix these vulnerabilities, the patches themselves caused issues on various production systems.
System administrators would revert the patches because they needed these systems to work reliably.
Despite sending out more patches, which continued to cause issues, many systems remained vulnerable.
Fast forward six months to January 25th, 2003, early in the morning, between 5 and 6 a.m. UTC, or around midnight in the U.S.
The SQL Slammer worm, also known as the Sapphire worm, but let's stick with Slammer,
started to spread across the internet. Within only 10 minutes, over 90% of the 75,000 vulnerable
hosts had been infected. This rapid spread is still considered record-breaking to this day.
One of the first people to notice this was Owen Maresh, who was working at Akamai's Network
Operations Control Center. Akamai, a global network infrastructure company, runs a network designed
to withstand huge amounts of traffic across the internet with automated fail-safes, but even their systems were
affected. Maresh described seeing a flood of 55 million database server requests, something he'd
never seen before. Their Hong Kong data center was the first to be overwhelmed, triggering alarms and
causing network failures. It quickly became clear this wasn't just some random internet glitch, it was
the Slammer worm, and it was everywhere. As the worm spread, it crippled large parts
of the global internet infrastructure. Countries like South Korea lost all internet and cell
service, leaving 27 million people offline. In Portugal, 300,000 cable modems went dark.
Even 5 of the 13 root name servers, which are critical for resolving domain names on
the internet, were overwhelmed. Now, what made Slammer spread so fast? First, it was tiny,
just 376 bytes, a little larger than the size of a tweet. As I mentioned earlier, it used UDP to
scan the internet and infect vulnerable systems running Microsoft SQL Server. UDP played a massive
role here, allowing Slammer to send traffic at lightning speed without needing to verify the
connection, unlike TCP. Combine that with the worm's ability to generate random IP addresses to
find new targets, and you've got a recipe for rapid propagation. Some system administrators
didn't even realize they had SQL Server running, making them easy targets. The Slammer worm
wasn't malicious in the usual sense. It didn't steal data or delete files, but it replicated so
quickly that it flooded networks with traffic, overwhelming servers and causing outages. The
damage it caused was essentially in the form of a global DDoS. Slammer hit everyone: universities,
corporations, ATMs went offline, and even entire countries. In suburban Seattle, 911 dispatchers
had to resort to paper logs, and Continental Airlines canceled flights out of Newark because they
couldn't process tickets.
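As an aside, the scanning mechanic described above is simple enough to sketch harmlessly. This is an illustration only, assuming the widely reported packet details (a 376-byte UDP datagram to port 1434 with a leading 0x04 byte); the single datagram here goes to localhost, where nothing is listening, and the randomly generated target address is only printed.

```python
import random
import socket
import struct

def random_ipv4():
    # Slammer-style target selection: 32 random bits treated as an address.
    # No hit list, no topology awareness -- just spray and pray.
    return socket.inet_ntoa(struct.pack(">I", random.getrandbits(32)))

# Assumed packet shape: 376 bytes total, starting with 0x04 (the SQL Server
# Resolution Service request byte), padded here with filler instead of shellcode.
payload = b"\x04" + b"A" * 375

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Harmless stand-in for a real target: one fire-and-forget datagram to localhost.
sock.sendto(payload, ("127.0.0.1", 1434))
sock.close()
print(random_ipv4(), len(payload))
```

One random address, one `sendto()`, no handshake, no waiting for a reply: repeat that in a tight loop and you have the entire propagation strategy.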
The financial fallout from Slammer was massive, with losses estimated at over $1 billion.
The chaos even spilled into the following week, yet despite all this, we still don't know who created Slammer.
The worm taught us a few important lessons.
First, even though a patch had been available for months, many organizations hadn't applied it.
That really highlights the importance of timely patching, especially for systems exposed to the internet, though delayed patching is still a regular occurrence to this day.
Second, while Slammer didn't delete files or steal data, it caused massive disruptions simply by flooding networks.
This pushed organizations to improve their defenses and network segmentation to prevent something like this from happening again.
Since 2003, malware has evolved. We've moved from worms like Slammer, which were focused on speed and disruption,
to more sophisticated threats like ransomware, which is driven by profit and often far more
damaging in terms of data loss and financial impact. Even though the Slammer worm didn't
cause permanent damage, it was a clear reminder of the importance of patching and strong network
security, and to this day, some firewalls still block UDP traffic on port 1434, the same port Slammer used to spread.