Gartner, Whitelists and Virtualization Methods

I have mentioned this post before, but to keep you current see:

http://blogs.gartner.com/neil_macdonald/2009/04/21/its-virtualization-security-week/

This post seems like a great opportunity to connect the “no-brainer” dots. Here’s a recap of Neil MacDonald’s Security No-Brainers (SNB) so far:

  • SNB #1: We Need a Global Industry-wide Application Whitelist
  • SNB #2: Use whitelisting in the hypervisor/VMM (especially in the “parent” or Dom0 partition) to prevent the execution of unauthorized code in this security-sensitive layer
  • SNB #3: Root of Trust Measurements for Hypervisors
  • (Relating to SNB #1 – we did announce our working relationship with Microsoft in the area of software whitelisting at the RSA show. A key element of that announcement is the standard for whitelist exchange using a Standard Schema Definition – or Data Exchange Format. So a subtext SNB is standards for method, protocol and exchange.)

    So against the backdrop of the RSA events, this blog heading, and staying within the limits of certain NDAs that we (SignaCert) are under – let me posit a connect-the-dots hypothesis:

    The highest cyber security goal in both the P (physical) and V (virtual) worlds is ultimately the same; namely, to instantiate a computing environment—a software stack on a hardware platform—as secure, reliable, safe and trusted. This trusted stack (hardware plus software) then can become one of the cogs of a business process (a “Service,” in ITIL parlance).

    It is important to note here that while the goals in the P and V worlds may be largely the same, the complexities of the V world are likely to make these goals even harder to achieve, largely because the velocity of change in the V world is likely to be higher, and we may not even know (or care) where an image physically resides anymore. All the more reason to think about this very carefully.

    All of the moving parts of any completed IT service (AKA the business process) should ideally be trusted from end to end, right? Not just to the point of instantiation (when it is deployed and turned on), but through to the point of de-instantiation (un-deployed and turned off). This is really an issue of maintaining lifecycle integrity. (There are some interesting compliance issues here too, but I’ll leave that story for another blog post.)

    We now know that to achieve the goal of end-to-end trust, the following processes are necessary (at a minimum):

    1. A root of trust for measurement (RTM) should be established in the hardware in some fashion, say with a Trusted Platform Module (TPM) key, establishing the RTM for the HV/VMM (SNB #3).

    2. A Trusted Platform Module (TPM) key, or other cryptographic identifier, should be passed and used to request and validate (attest) the HV/VMM in the Domain Zero (Dom0)/parent layer (SNB #2). This is for the purpose of determining whether the Dom0 can be trusted.

    3. Then, with a “known/trusted” parent environment in place (and hopefully a way to keep it that way), we pass our “trust baton” up the stack.

    4. Finally, we need the HV/VMM to instantiate the rest of the software stack with positive attestation methods provided by the Global Industry-wide Application (software) Database (supported with known provenance ISV-sourced software “measurements,” see SNB #1).
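The four steps above can be sketched as a simple chained-measurement loop. This is a minimal illustration in Python, not any vendor's implementation: the layer names and "known good" whitelist entries are hypothetical, and a real system would anchor measurements in TPM platform configuration registers and an ISV-sourced measurement database rather than an in-memory dictionary.

```python
import hashlib

# Hypothetical reference whitelist: known-good measurements per layer,
# standing in for the Global Industry-wide Application Database (SNB #1).
KNOWN_GOOD = {
    "hypervisor": hashlib.sha256(b"hv-image-v1").hexdigest(),
    "dom0":       hashlib.sha256(b"dom0-image-v1").hexdigest(),
    "guest-os":   hashlib.sha256(b"guest-image-v1").hexdigest(),
}

def measure(image: bytes) -> str:
    """Hash an image; a stand-in for a TPM-backed measurement."""
    return hashlib.sha256(image).hexdigest()

def attest_stack(images: dict) -> bool:
    """Walk the stack bottom-up; the trust baton stops at the first mismatch."""
    for layer in ("hypervisor", "dom0", "guest-os"):
        if measure(images[layer]) != KNOWN_GOOD.get(layer):
            print(f"attestation failed at {layer}; halting the boot chain")
            return False
        print(f"{layer}: measurement verified, passing trust baton up")
    return True

attest_stack({
    "hypervisor": b"hv-image-v1",
    "dom0": b"dom0-image-v1",
    "guest-os": b"guest-image-v1",
})
```

The key property the sketch shows is ordering: no layer is attested until every layer beneath it has already been verified.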

    And then it would be useful if we had a way to rate the trustworthiness of the entire system and score the results, in normalized terms. Our goal is to attest/certify what we’ve instantiated with a proactive statement of platform/image trust, make certain it’s stable and durable, and to enable a method to continually “prove it”.
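One hypothetical way to normalize such a score: treat each stack element's attestation result as a 0.0-1.0 trust value and take the minimum, on the reasoning that a platform is only as trusted as its weakest element. This is an illustrative scheme only, not a description of any vendor's scoring algorithm.

```python
def platform_trust_score(element_scores: dict) -> float:
    """Roll per-element trust values (0.0 = untrusted, 1.0 = fully
    trusted) into one normalized platform score. Taking the minimum
    is deliberately conservative: one untrusted element makes the
    whole stack untrusted. (Illustrative scheme only.)"""
    if not element_scores:
        return 0.0  # nothing measured, nothing trusted
    return min(element_scores.values())

score = platform_trust_score({"hypervisor": 1.0, "dom0": 0.9, "guest-os": 0.6})
print(score)  # the guest OS drags the platform score down to 0.6
```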

    We might also want a way for that service process stack to be able to offer its “platform trust credentials” to other business/service processes, both in and out of the physical domain in which it resides. This credentialing could be used to exchange relative platform trust with a partner service process, for example.
    (Disclaimer: SignaCert has two U.S. patents issued on the notion of Stack/Platform Trust derived from element trust scores.)

    A crucial element to make this all a reality is one of the first rules we learned in kindergarten: how to play well with others.

    The platform players must work closely with the virtualization players, who must work closely with the whitelist eco-system folks, who need ISV support to play their role, and so on. And all must collaborate regularly with the systems management vendors and solution providers.

    Such collaboration is not easy, but it’s not impossible. If deep trust services become a required credential for connecting with partners, demonstrating regulatory compliance and meeting government IA standards, it will suddenly be in everyone’s strong self-interest to get onboard the instantiation train and pass the trust baton.

    By the way, in our experience number four above is one of the trickiest parts. How do we manage multiple complex heterogeneous V and P stacks? How do we create and express broader trust credentials across a matrix of dynamic business/service processes?

    And another important design tenet relates to persistent and non-persistent connect scenarios. We need to carefully avoid the circular loop of “we need a connection to attest trust credentials – but can’t get a connection because we don’t have trust affirmations”. Cooperation with eco-system partners is required to pull this one off too.
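One hedged sketch of how that circular loop might be broken: accept a locally cached, time-limited credential from a prior successful attestation to establish the initial connection, then require a fresh online attestation once connected. The field names and TTL below are hypothetical.

```python
import time

# Illustrative TTL: how long a cached attestation credential remains
# acceptable for bootstrapping a connection while offline.
CREDENTIAL_TTL_SECONDS = 3600.0

def can_bootstrap(cached_credential: dict, now: float) -> bool:
    """Avoid the 'need a connection to attest, need attestation to
    connect' loop: a recently issued cached credential is enough to
    open the initial connection; full re-attestation follows online."""
    age = now - cached_credential["issued_at"]
    return age < CREDENTIAL_TTL_SECONDS

cred = {"issued_at": time.time() - 120.0, "platform": "dom0-stack"}
print(can_bootstrap(cred, time.time()))  # recent credential: bootstrap allowed
```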

    Net-net: We need to think and act differently. The “old” P security methods have largely run out of gas already, so let’s use the V (virtualization) transition to bake in trust and security, as a first principle.

    Wyatt.


