Whitelist Emerges from the Shadows: Re-enforcing the Three-Tier Security and Systems Management Model

As I have discussed before, it is interesting to watch the steady evolution of the IT security, systems management, and compliance solution set. If you zoom out to a five-to-ten-year view and compare how we were thinking about things then with how that thinking has evolved today, some interesting macro patterns emerge.

First, there is a growing consensus that “we can’t keep up with the blacklist velocity” and that it is easier to flip the model to a “whitelist” and track that instead. Even the industry giants are now touting whitelists as a safe bet for the future of IT security. See the video from this year’s RSA conference in which John Thompson, the CEO of Symantec, talks about future trends.

While John’s comments are useful (and, we believe, correct), we need to be careful here. I have said before, both in these blog pages and in my public-facing keynotes and presentations, that whitelists are NOT substitutes for blacklists. For now, the two methods are complementary.

As we get better (as an industry) at measuring and asserting Positive Image Management (making sure the good, desired code set remains in a prescribed state over the device usage lifecycle), our dependence on Negative Detection methods (AV/IDS/IPS) should diminish rapidly.

Accomplishing this, however, could hinge on our ability to radically shift our paradigm as it relates to security and systems management. Let me expound.

In the traditional AV model, we fundamentally rely on a two-tier model where:

  • Detection and blocking are handled by the client or agent resident on the IT device (server, workstation or desktop), or by a gateway device that inspects content destined for an endpoint within the domain.
  • Detection is enabled by a “blacklist”, usually maintained by the AV vendor, and this content is made available to the AV scanning tools on a push or pull basis (a minimal sketch of this flow follows the list).
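To make the two-tier flow concrete, here is a minimal sketch in Python of a blacklist-driven scan: tier one is the vendor-maintained list of known-bad hashes, tier two is the resident agent that pulls the list and checks local content against it. The file names, feed format, and hash choice are illustrative assumptions, not any vendor’s actual interface.

```python
# Minimal sketch of the two-tier blacklist model described above.
# The blacklist feed location and format are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file's contents so it can be compared against known-bad signatures."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def load_blacklist(feed_file: Path) -> set[str]:
    """Tier 1: a vendor-maintained list of known-bad hashes, pulled periodically."""
    return {line.strip() for line in feed_file.read_text().splitlines() if line.strip()}

def scan_endpoint(root: Path, blacklist: set[str]) -> list[Path]:
    """Tier 2: the resident agent scans local files and flags anything on the blacklist."""
    return [p for p in root.rglob("*") if p.is_file() and sha256_of(p) in blacklist]

if __name__ == "__main__":
    bad = load_blacklist(Path("vendor_blacklist.txt"))   # hypothetical feed file
    for hit in scan_endpoint(Path("/opt/app"), bad):      # hypothetical scan root
        print(f"BLOCK: {hit} matches a known-bad signature")
```

The key point is what the sketch does not know: anything absent from the vendor feed is silently allowed, which is exactly the “velocity” problem the blacklist model cannot escape.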

Basically, the industry approach has been incremental and reactive, and it has leaned heavily on this two-tier model.

As we shift to a more proactive and prescriptive Positive Image Management method, it is imperative that we “remap” our traditional two-tier view. We see our customers moving more to this view:

  • Positive Images (the desired “software bill of materials”) for a given IT device can be defined in advance through the software build, QA, and User Acceptance Test (UAT) release cycle. In parallel with building the deployable image, a software reference can easily be created. Additionally, for legacy environments, some or all of the device image can be “learned” from the device as it is currently deployed.
  • The resulting software reference can then be used in conjunction with endpoint instrumentation (third-party or platform-intrinsic) to compare the image reference (in whole or in part) against the target endpoint, as sketched below.
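As a rough illustration of the reference/compare step above, here is a minimal Python sketch: a software reference (“bill of materials”) is captured from a build output, and a deployed endpoint is later measured and compared against it. The paths, manifest format, and drift categories are hypothetical stand-ins, not a description of any particular product.

```python
# Minimal sketch of Positive Image Management: capture a reference from the
# build/QA output, then measure a deployed endpoint and report drift.
import hashlib
import json
from pathlib import Path

def measure(root: Path) -> dict[str, str]:
    """Measure every file under root: relative path -> SHA-256 of its contents."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*") if p.is_file()
    }

def build_reference(golden_image: Path, manifest: Path) -> None:
    """Create the image reference during the build/QA/UAT release cycle."""
    manifest.write_text(json.dumps(measure(golden_image), indent=2))

def compare(manifest: Path, deployed: Path) -> dict[str, list[str]]:
    """Compare the reference against the target endpoint and report drift."""
    ref = json.loads(manifest.read_text())
    actual = measure(deployed)
    return {
        "missing": sorted(set(ref) - set(actual)),      # expected but absent
        "unexpected": sorted(set(actual) - set(ref)),   # present but never authorized
        "modified": sorted(k for k in ref.keys() & actual.keys() if ref[k] != actual[k]),
    }

if __name__ == "__main__":
    build_reference(Path("build/release_image"), Path("image_reference.json"))  # hypothetical paths
    print(json.dumps(compare(Path("image_reference.json"), Path("/opt/app")), indent=2))
```

In practice the “measurement” would cover binaries, configuration, and platform-specific elements as described above; hashing whole files is simply the easiest stand-in for the idea.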

There are many advantages for the enterprise in moving to this model, but in simple terms this is a “control process”, and control processes are standard practice in almost every other form of repetitive automation and process management.

As we move to these methods, we need to map to more of a three-tier model. Where we already have a three-tier model (Enterprise Management, CMDB, and/or Data Center Automation), we need to supplement the middle tier with Positive Image Management methods.

In our opinion this will create a hierarchy that looks more like this:

  • High-quality, high-provenance content repositories (aka the “whitelist”) supply common software “measurement” references for Operating Systems, Application Packages, and other data elements commonly included in software reference images. This is likely a global service accessed on a “pull” basis by many customers.
  • A middle tier, typically within the customer’s IT domain, serves as the image management layer supporting a one-to-many image reference model (enterprise IT typically has many IT devices that inherit some or all of a common image reference). This tier also conveniently allows customers to add their OWN device- and domain-specific software measurements to complete the image references for their specific environments.
  • Endpoint instrumentation is added (or is included in the platform) that performs localized measurements of the binary, configuration, and platform-specific software elements, and requests validation and/or performs application locking/blocking based on “known and trusted” software elements and sets (a minimal sketch of the three tiers follows this list).
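To show how the three tiers might fit together, here is a minimal, hypothetical Python sketch: the middle tier merges global whitelist measurements with the customer’s own measurements into per-device-class image references, and answers validation requests from endpoint instrumentation. The class, method names, and hash values are assumptions for illustration only.

```python
# Hypothetical sketch of the three-tier hierarchy: global whitelist (tier 1),
# middle-tier image manager (tier 2), and an endpoint validation request (tier 3).
from dataclasses import dataclass, field

@dataclass
class ImageManager:
    """Middle tier: merges global whitelist content with customer-specific
    measurements into per-device-class image references."""
    global_whitelist: dict[str, str]                                      # hash -> description, pulled from the global service
    customer_measurements: dict[str, str] = field(default_factory=dict)   # the customer's own approved hashes
    device_images: dict[str, set[str]] = field(default_factory=dict)      # device class -> approved hash set

    def add_customer_measurement(self, file_hash: str, description: str) -> None:
        self.customer_measurements[file_hash] = description

    def define_image(self, device_class: str, hashes: set[str]) -> None:
        """One-to-many: many devices inherit this common image reference."""
        known = self.global_whitelist.keys() | self.customer_measurements.keys()
        unknown = hashes - known
        if unknown:
            raise ValueError(f"image contains unmeasured elements: {unknown}")
        self.device_images[device_class] = hashes

    def validate(self, device_class: str, observed_hash: str) -> bool:
        """Answer an endpoint's request: is this measured element trusted for this device class?"""
        return observed_hash in self.device_images.get(device_class, set())

# Tier 3: endpoint instrumentation measures a binary locally and asks the
# middle tier before allowing it to load.
mgr = ImageManager(global_whitelist={"aa11...": "openssl (vendor-measured)"})
mgr.add_customer_measurement("bb22...", "in-house trading app v3")
mgr.define_image("trading-server", {"aa11...", "bb22..."})
print(mgr.validate("trading-server", "bb22..."))   # True  -> allow execution
print(mgr.validate("trading-server", "cc33..."))   # False -> block or flag per policy
```

The sketch is deliberately trivial, but it shows why the middle tier carries the weight: it is where global content, customer-specific content, and per-device policy meet.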

In this model it is becoming increasingly clear that the enabling, high-value components are in the MIDDLE TIER. Sure, you need a great set of “whitelist services” above the middle tier. And yes, you need capable endpoint instrumentation to measure and apply the desired detection, blocking/locking, and remediation policies. But we believe the key value contribution and long-term differentiation of the three-tier model is delivered by the high-resolution image management capabilities in the middle tier.

Endpoint instrumentation is largely a commodity and is being subsumed into the platform itself anyway, so that is not a great spot to place your bets (as an IT solutions provider). You can see this beginning to emerge in operating systems and platforms now with technologies like Validated Execution from Sun (see http://www.opensolaris.org/os/project/valex/) and Trusted Execution (TXT) from Intel (see http://www.intel.com/technology/security/).

And, over time, even the “whitelist” services themselves may begin to commoditize.

So as you begin to grasp this important shift to “whitelisting” that many people are now talking about, don’t be fooled into thinking the whitelist content itself is where the lasting value lies.

Our primary objective must be to provide our customers with an effective IT controls method that enhances their existing “managed IT device environment” through Positive Image Management, Control, and Remediation methods. Positive IT control methods are enabled and enforced by high-quality whitelist measurements.

Wyatt Starnes

5 Responses to Whitelist Emerges from the Shadows: Re-enforcing the Three-Tier Security and Systems Management Model

  1. Spaf says:

    In reality, we want to end up with positive CONTROL. We want to run only what policy allows, and know what it is we are running. Ideally, we would have some form of real reference monitor (in the classical sense), but lacking that we need control.

    White vs. black is misleading because most configurations are really grey — we sort of know what we are running, we have a vague policy, and we’re unsure if what we’re running is really correct. None of that leads to effective control, which is what policy is all about.

    Whitelisting gives us some measure of control, and it further gives us some additional visibility into what we’re running. But without the policy piece to say which authentic items we’re allowed to run, and without the assurance piece to know that they will perform as we expect, we still don’t have an optimum environment.

  2. Wyatt says:

    Spaf,
    Thanks for the comment.
    Yes, the entire notion of whitelist/blacklist is misleading, as you indicate. All IT systems have shades of grey for sure, but the goal must be to *minimize* the grey risk by first identifying what is indeed “white” (defined as what is “OK and deemed safe”, both at the single-executable level and for some or all of the s/w stack). If we can’t definitively make this determination, then any load/run policy becomes suspect.
    Clearly there is also a difference between how “grey” a consumer box in the wild is and how grey a server in a financial services data center is. One would hope (and desire) that the data center box IS pretty black and white. In reality, largely because we have had neither the will nor the means to normalize the desired software image against the actual deployed image, our policy control has been, at best, lacking. As you indicate as well, traditional control methods often do involve a data reference and a “reference/target” compare. We have been a bit *slow* in adopting “measure/validate” control methods in the IT world (as compared to almost every other industry and its use of monitor/control methods for repetitive process automation).
    My point was that the value of “whitelist” has more to do with creating the REFERENCE, which in turn is used to enable better image monitoring/control. And the reference-to-target deltas become the enablers for effective policy execution. And effective policy IS the key to better security, compliance and systems management.
    We’ll do one of the next blog entries on policy, OK?

  3. jns says:

    In a short musing on a Monday morning, I wanted to second Wyatt’s comments that the use case, and value case, for blacklist/whitelist are going to be the hard part to convey. In some ways, it’s easier if a paradigm shift happens and one “technology” can simply replace another, displacing the $$$ already spent, etc. That will likely not be the case here, which leaves IT teams wondering how to pull it all off.

    Instead, I see whitelisting as the “upsell” … sorta like what the carriers had to do when bandwidth went to zero cost. It’s a new capability, it carries a premium, it will net cost more to buy even with A/V declining in overall cost, and its protection and control profile applies differently. I can envision, for example, a complementary effort whereby the forensics tools recognize the same lists and the “savings” is in the time spent analyzing, and there will be other examples.

    In the end, though, the ability to get $$$ is about control. Almost every IT org I interact with, including mine, is talking about taking back control and having situational awareness. I imagine that, from change control to malware to pure comfort level, that is what will make the investment seem necessary. The value case _as applied_ is where the money will be seen as worthwhile, though.

  4. Wyatt says:

    JNS:
    Excellent observations and comments. I can affirm this in the market through the work we are doing daily.
    While the “business sell” would frankly be easier if whitelist were a “pin-for-pin” replacement for blacklist… the fact is (as you point out) that the value is different (much better) when image management based on whitelist is used correctly.
    So thinking about this as an “upsell” or even “new sell” is correct. And the bandwidth analogy is great. Sometimes when we have painted ourselves into a “commodity corner”, as with bandwidth AND antivirus, we need to step back with a clean sheet of technical and marketing paper….and take a fresh approach.
    It IS all about control when seeking $$$ in enterprise these days. And control is about effective IT policy, which in turn is about high-resolution “sensing” as to what is happening in the enterprise.
    We have run the course in sensing/filtering with blacklist. So let’s flip the model and sense/enable with high-resolution image management methods (fed by whitelist) that give us new and valuable information about IT state and changes.
    Not only do we become more CapEx efficient – we save OpEx (resources/people) to deliver higher uptime, intrinsic compliance and better security. BTW, we increase business agility at the same time.
    Customers AND IT suppliers that can make this crucial paradigm shift will reap the rewards quickly. It is truly win-win.
    We are seeing value proof points IN THE MARKET that show at least a 10x value/pricing premium (over AV) for delivering these new methods well.

