Why Software Provenance Matters, Part II

June 6, 2009

I posted a blog a few days ago that covered some of the operational issues of Why Software Provenance Matters, but in talking with partners recently, and listening to other use cases, I thought that I’d add some detail to address these needs and perspectives.

In statistical error analysis we talk about Type One (T1) and Type Two (T2) errors (also known as False Positives and False Negatives, respectively).

A T1 or False Positive error is classifying something as true when it isn't.
A T2 or False Negative error is classifying something as false when it isn't.

See the table below:

                      Condition is true        Condition is false
    Test says true    True Positive            False Positive (T1)
    Test says false   False Negative (T2)      True Negative

So in any method where there is a “test” of a state against some actual or known condition, there is a chance of comparison error. The result of the test generally triggers some policy action or alert.

This of course translates to: Accurate measurement is a critical requirement for effective policy implementation.

Let me use a simple example that relates to identity. The test for this example is:

Is this an authorized user?

If it is a legitimate authorized user (Test) I want to grant entry or access (Policy) = True Positive

If it is not a legitimate authorized user (Test) I want to deny entry or access (Policy) = True Negative

If an unauthorized user IS ALLOWED inappropriate access = False Positive or T1

If an authorized user IS NOT ALLOWED appropriate access = False Negative or T2

Both T1 and T2 errors are problematic of course, but the underlying challenge is the same: how do I precisely identify the user so as to reduce the risk of error?

Precise identification is the answer of course.
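The four outcomes above can be sketched in a few lines of code (an illustration only; the function and names are mine, not from any product):

```python
def classify_access(is_authorized: bool, was_allowed: bool) -> str:
    """Map an access decision against ground truth to its error class."""
    if is_authorized and was_allowed:
        return "True Positive"    # legitimate user granted access
    if not is_authorized and not was_allowed:
        return "True Negative"    # illegitimate user denied access
    if not is_authorized and was_allowed:
        return "False Positive"   # T1: unauthorized user allowed in
    return "False Negative"       # T2: authorized user locked out

print(classify_access(is_authorized=False, was_allowed=True))  # False Positive
```

Only the two mismatched cases are errors, and which error you get depends on which direction the test was wrong.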

Now let me apply this to whitelisting and blacklisting:

Here is the Whitelist Example:

Here is the Blacklist Example:

So again, our “test” must be accurate in order to effect the appropriate policy. In these cases the policy is likely to “allow” or “deny” the code from loading and/or running.

So accuracy and provenance (certainty of the code hash signature and/or attributes) are the major components used to test the condition for both whitelisting and blacklisting methods.

Where whitelisting can complement blacklisting is generally in reducing false positives by improving the certainty of the reference methods used in detection. This can improve the customer experience by not inadvertently blocking good code from loading/running.
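As a sketch of how such a hash-based provenance test might look (the reference sets and sample bytes here are hypothetical placeholders, not any vendor's actual lists):

```python
import hashlib

def sha256_hex(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

# Hypothetical reference sets keyed by SHA-256 of the code image.
WHITELIST = {sha256_hex(b"known-good installer")}   # trusted provenance
BLACKLIST = {sha256_hex(b"known-bad dropper")}      # known malware

def policy_for(code: bytes) -> str:
    digest = sha256_hex(code)
    if digest in WHITELIST:
        return "allow"    # positively identified as known good
    if digest in BLACKLIST:
        return "deny"     # positively identified as known bad
    return "inspect"      # unknown provenance: scan before running

print(policy_for(b"known-good installer"))  # allow
```

The accuracy of both lists rests entirely on the provenance of the hashes they contain, which is the point of the paragraph above.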

Also, Symantec and others have effectively used a form of Dynamic Whitelisting (see my blog on whitelisting methods) to create “do not scan again” lists in order to limit AV scanning to code that actually needs inspection. This also enhances the user experience by speeding up the AV process.
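A minimal sketch of such a “do not scan again” cache, with a stand-in scan function rather than any vendor's actual API:

```python
import hashlib

scanned_clean: set[str] = set()     # hashes already verified clean

def scan(code: bytes) -> bool:
    """Stand-in for a full AV scan; here everything scans clean."""
    return True

def check(code: bytes) -> str:
    digest = hashlib.sha256(code).hexdigest()
    if digest in scanned_clean:
        return "skipped"            # dynamic whitelist hit: no rescan
    if scan(code):
        scanned_clean.add(digest)   # remember the clean verdict
    return "scanned"

blob = b"some executable image"
print(check(blob), check(blob))  # scanned skipped
```

The second check of the same bytes skips the expensive scan entirely, which is where the user-experience win comes from.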

It is for all of these reasons that we think that there are really only BLACK and WHITE lists – and why we believe known provenance is one of the surest ways to precisely establish the reputation reference used for both whitelist and blacklist methods.



Gartner, Whitelists and Virtualization Methods

June 2, 2009

I have mentioned this post before, but to keep you current see:


This post seems like a great opportunity to connect some “no-brainer” dots. Here’s a recap of Neil MacDonald’s Security No-Brainers (SNB) so far:

  • SNB #1: We Need a Global Industry-wide Application Whitelist
  • SNB #2: Use whitelisting in the hypervisor/VMM (especially in the “parent” or Dom0 partition) to prevent the execution of unauthorized code in this security-sensitive layer
  • SNB #3: Root of Trust Measurements for Hypervisors
  • (Relating to SNB #1 – we did announce our working relationship with Microsoft in the area of software whitelisting at the RSA show. A key element of that announcement is the standards for whitelist exchange using a Standard Schema Definition – or Data Exchange Format. So a sub-text SNB is standards of method, protocol and exchange).

    So against the backdrop of the RSA events, this blog heading, and staying within the limits of certain NDAs that we (SignaCert) are under, let me posit a connect-the-dots hypothesis:

    The highest cyber security goal in both the P (physical) and V (virtual) worlds is ultimately the same; namely, to instantiate a computing environment—a software stack on a hardware platform—as secure, reliable, safe and trusted. This trusted stack (hardware plus software) then can become one of the cogs of a business process (a “Service,” in ITIL parlance).

    It is important to note here that while the goals in the P and V worlds may be largely the same, the complexities of the V world are likely to make these goals even harder to achieve, mainly because the velocity of change in the V world is likely to be higher, and we may not even know (or care) where the image physically resides anymore. All the more reason to think about this very carefully.

    All of the moving parts of any completed IT service (AKA the business process) should ideally be trusted from end to end, right? Not just to the point of instantiation (when it is deployed and turned on), but through to the point of de-instantiation (un-deployed and turned off). This is really an issue of maintaining lifecycle integrity. (There are some interesting compliance issues here too, but I’ll leave that story for another blog post.)

    We now know that to achieve the goal of end-to-end trust, the following processes are necessary (at a minimum):

    1. A root of trust for measurement (RTM) should be established in the hardware in some fashion, say with a hardware-rooted key, establishing the RTM for the HV/VMM (SNB #3).

    2. A Trusted Platform Module (TPM) key, or other cryptographic identifier, should be passed and used to request and validate (attest) the HV/VMM in the Domain Zero (Dom0)/parent layer (SNB #2). This is for the purpose of determining whether the Dom0 can be trusted.

    3. Then, with a “known/trusted” parent environment in place (and hopefully a way to keep it that way), we pass our “trust baton” up the stack.

    4. Finally, we need the HV/VMM to instantiate the rest of the software stack with positive attestation methods provided by the Global Industry-wide Application (software) Database (supported with known provenance ISV-sourced software “measurements,” see SNB #1).
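The chaining of measurements in steps 1 through 4 can be sketched as a TPM-style extend operation (a simplified illustration, not any particular TPM or hypervisor API):

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: fold a new measurement into the running register."""
    return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()

# Start from a zeroed register in hardware (the root of trust),
# then measure each layer as the "trust baton" moves up the stack.
pcr = bytes(32)
for component in [b"hypervisor/VMM", b"Dom0 kernel", b"guest image"]:
    pcr = extend(pcr, component)

# The final value attests the whole chain: changing any component,
# or the order in which components load, yields a different value.
print(pcr.hex())
```

Attestation then amounts to comparing this final value against a reference computed from known-provenance measurements of the same stack.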

    And then it would be useful if we had a way to rate the trustworthiness of the entire system and score the results, in normalized terms. Our goal is to attest/certify what we’ve instantiated with a proactive statement of platform/image trust, make certain it’s stable and durable, and to enable a method to continually “prove it”.

    We might also want a way for that service process stack to be able to offer its “platform trust credentials” to other business/service processes, both in and out of the physical domain in which it resides. This credentialing could be used to exchange relative platform trust with a partner service process, for example.
    (Disclaimer: SignaCert has two U.S. patents issued on the notion of Stack/Platform Trust derived from element trust scores.)
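To make the idea of a normalized score concrete, here is one purely hypothetical way element scores could be combined (my own illustration, not SignaCert's patented method):

```python
def platform_trust(element_scores: dict[str, float]) -> float:
    """Hypothetical: score a stack by its weakest element, normalized 0..1."""
    # A chain of trust is only as strong as its least-trusted link.
    return min(element_scores.values()) if element_scores else 0.0

stack = {"hardware": 1.0, "hypervisor": 0.95, "os": 0.9, "app": 0.6}
print(platform_trust(stack))  # 0.6
```

A weakest-link rule is just one design choice; the essential property is that the result is normalized, so two platforms can exchange and compare credentials on a common scale.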

    A crucial element to make this all a reality is one of the first rules we learned in kindergarten: how to play well with others.

    The platform players must work closely with the virtualization players, who must work closely with the whitelist eco-system folks, who need ISV support to play their role, and so on. And all must collaborate regularly with the systems management vendors and solution providers.

    Such collaboration is not easy, but it’s not impossible. If deep trust services become a required credential for connecting with partners, demonstrating regulatory compliance and meeting government IA standards…it will suddenly be in everyone’s strong self-interest to get on board the instantiation train, and pass the trust baton.

    By the way, in our experience number four above is one of the trickiest parts. How do we manage multiple complex heterogeneous V and P stacks? How do we create and express broader trust credentials across a matrix of dynamic business/service processes?

    And another important design tenet relates to persistent and non-persistent connect scenarios. We need to carefully avoid the circular loop of “we need a connection to attest trust credentials – but can’t get a connection because we don’t have trust affirmations”. Cooperation with eco-system partners is required to pull this one off too.

    Net-net: We need to think and act differently. The “old” P security methods have largely run out of gas already, so let’s use the V (virtualization) transition to bake in trust and security, as a first principle.