Unsafe at Any Speed: Distributed Denial of Service Attacks and Whitelisting

Some of us gray-haired folks remember Ralph Nader’s provocative book “Unsafe at Any Speed,” published in 1965. The book (very controversial when released) took the automakers to task for building unsafe cars that threatened everyone who traveled the roads. It struck me over the 4th of July holiday that we have a similar situation now on our cyber highways.

Even people who have a life beyond blogging on cyber assurance issues likely noticed the front-page and top-of-the-news coverage of the cyber attacks on the web servers at the Pentagon, White House, Treasury, and State Department (and many other sites). If you didn’t, you can get an overview here: http://www.google.com/hostednews/ap/article/ALeqM5iaaWwzg–SOmIz9Qjdju4UYFB5GgD99B7LNO0

And NBC led its primetime national news last week with this report:


http://www.msnbc.msn.com/id/3032619/#31806714

These attacks are classed as Distributed Denial of Service (DDoS) attacks, in which hundreds or even thousands of computers (or more) containing remotely triggered malicious code are commandeered for nefarious purposes. These so-called Zombie computers, when triggered, direct their payloads at a set of targeted web servers (or other Internet-connected compute processes) in an attempt to overwhelm them with malicious traffic, thus rendering them inaccessible. The amplification effect of these multiple, remotely controlled machines is what makes DDoS attacks as dangerous as they are.

DDoS attacks are far from “new,” by the way. One of the first widely publicized attacks traces back to the year 2000 assault on the Yahoo website. Since then we have learned to deal better with the SYMPTOMS of an attack in progress, by recognizing traffic patterns and routing bogus traffic away from the runtime servers, but we’ve made very little progress on stopping the SOURCE of the problem (Zombies should not exist in the wild).
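To show what “recognizing patterns and routing bogus traffic away” means at its most basic, here is a minimal, hypothetical Python sketch. Real DDoS mitigation happens upstream with scrubbing centers, anycast routing, and dedicated appliances; the window size, threshold, and names below are purely illustrative.

```python
from collections import Counter, deque
import time

# Toy symptom-side detection: count requests per source IP over a short
# sliding window and flag any source whose rate is wildly out of line, so
# its traffic can be diverted away from the runtime servers.

WINDOW_SECONDS = 10
PER_SOURCE_LIMIT = 500   # illustrative threshold, not a real-world tuning value

_window: deque = deque()  # holds (timestamp, source_ip) pairs

def looks_like_flood(source_ip: str) -> bool:
    """Record one request and return True if this source now exceeds the limit."""
    now = time.time()
    _window.append((now, source_ip))
    # Expire entries that have aged out of the sliding window.
    while _window and now - _window[0][0] > WINDOW_SECONDS:
        _window.popleft()
    per_source = Counter(ip for _, ip in _window)
    return per_source[source_ip] > PER_SOURCE_LIMIT
```

Even this crude filter illustrates the point made above: it treats the symptom (too much traffic from somewhere) without doing anything about the compromised machines generating it.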

McAfee estimates that between January and May of this year, 12 million new Zombie computers were armed and aimed. See http://news.digitaltrends.com/news-article/19879/twelve-million-zombie-computers-since-january

Much of last week’s post-DDoS discussion centered on WHO initiated the attacks, and, in my opinion, too little attention has been given to WHY our systems management and compute infrastructure integrity are so weak that hundreds of thousands, or potentially even millions, of pre-armed computers can enter zombie mode when they receive a command from their master anywhere on the planet. And (you might be wondering) where are these Zombies? It is pretty well established that at least some of them are inside corporate and government offices and datacenters! (Just think: if one of your corporate boxes could be a Zombie slave to some domestic or foreign master, what other nasty things might it contain?)

How can this be?

One of the many benefits of moving IT systems management to whitelist-based, inclusive software and stack validation methods is that it minimizes (for all practical purposes, eliminates) the detection “blind spot” in which rogue software can parasitically exist.

Reference configuration-based whitelisting establishes software manifest-based monitoring and control on compute devices, which can easily alert the user or IT admin staff if any code is added, deleted, or changed on the target system relative to an established, managed build reference. These powerful change-detection methods virtually eliminate the exposure to “sleeper” or “Zombie” code existing on systems without user awareness.
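As a concrete (and deliberately simplified) illustration of manifest-based change detection, and not any particular vendor’s product, the sketch below hashes every file under a managed directory and compares the result against a stored reference manifest from the approved build. The file paths and manifest name are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def build_manifest(root: Path) -> dict:
    """Hash every file under the managed tree (the 'software manifest')."""
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            rel = str(path.relative_to(root))
            manifest[rel] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def diff_against_reference(reference: dict, current: dict) -> dict:
    """Report anything added, deleted, or changed relative to the reference build."""
    return {
        "added":   sorted(set(current) - set(reference)),
        "deleted": sorted(set(reference) - set(current)),
        "changed": sorted(f for f in current
                          if f in reference and current[f] != reference[f]),
    }

if __name__ == "__main__":
    # Hypothetical locations: the reference manifest would ship with the managed build image.
    reference = json.loads(Path("reference_manifest.json").read_text())
    current = build_manifest(Path("/opt/managed_app"))
    drift = diff_against_reference(reference, current)
    if any(drift.values()):
        print("ALERT: system has drifted from its reference configuration:", drift)
    else:
        print("System matches the reference configuration.")
```

The point of the sketch is the posture, not the code: anything not in the approved manifest is, by definition, suspect, which is exactly the blind spot that Zombie code depends on.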

With reference configuration-based whitelist image management in place, it is virtually impossible to hide malicious code “in plain sight,” as is currently happening. Not only does this help reduce (and eventually eliminate) much of the DDoS risk by denying Zombie code a place to hide; many other benefits accrue from these methods as well (better compliance, improved security, more stable systems, etc.).

These whitelisting methods also reduce much of the risk of so-called “zero day” attacks like the Hannaford Bros. event discussed here: http://zerodaythreat.com/?p=40, and of other malicious and parasitic code risks (like the one that almost wiped AIG’s computer disks in early 2009).

So the Ralph Nader corollary holds, in my opinion. The Internet is currently UNSAFE AT ANY SPEED, and ultimately it is the responsibility of the suppliers and vendors to build better and safer devices for navigating our cyber highways. It also falls to consumers and users of IT to practice “safe computing” and use tools and best practices to make sure they are not contributing to the IT-ecosystem danger factor.

Unfortunately, we are barely at the comparative “install and fasten seat belts” stage of maturity, and yet, at the speed we are traveling and given our extreme IT dependence, the impending (and likely inevitable) crash will be catastrophic.

When are we going to learn? Put down the shovel, step away from the hole, and think about it. It is long past time for us to enforce best practices, deploy new tools (think reference-based whitelisting), and find stronger political will to address these fundamental and crucial IT risks.

Wyatt.
