Why Software Provenance Matters, Part II

A few days ago I posted a blog covering some of the operational issues behind Why Software Provenance Matters. After talking with partners recently and listening to other use cases, I thought I'd add some detail to address those needs and perspectives.

In statistical error analysis we talk about Type One (T1) and Type Two (T2) errors (also known as False Positive and False Negative, respectively).

T1 or False Positive is classifying something as true when it isn’t.
T2 or False Negative is classifying something as false when it isn't.

See the table below:

                       Condition is true       Condition is false
Test says "true"       True Positive           False Positive (T1)
Test says "false"      False Negative (T2)     True Negative

So in any method where there is a “test” of a state against some actual or known condition, there is a chance of comparison error. The result of the test generally triggers some policy action or alert.

This of course translates to: Accurate measurement is a critical requirement for effective policy implementation.

Let me use a simple example that relates to identity. The test for this example is:

Is this an authorized user?

If it is a legitimate authorized user (Test) I want to grant entry or access (Policy) = True Positive

If it is not a legitimate authorized user (Test) I want to deny entry or access (Policy) = True Negative

If an unauthorized user IS ALLOWED inappropriate access = False Positive or T1

If an authorized user IS NOT ALLOWED appropriate access = False Negative or T2

Both T1 and T2 errors are problematic, of course, but the underlying challenge is the same: how do I precisely identify the user so as to reduce the risk of error?

Precise identification is the answer, of course.
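To make the test/policy split concrete, here is a minimal sketch in Python. The AUTHORIZED_USERS set and the is_authorized() check are hypothetical placeholders for whatever identity test a real system would use (password, token, biometric); the point is only how the test result drives the policy action:

```python
# A minimal sketch of the test-then-policy pattern above.
# AUTHORIZED_USERS and is_authorized() are hypothetical stand-ins
# for a real identity test.

AUTHORIZED_USERS = {"alice", "bob"}  # hypothetical ground truth

def is_authorized(user_id: str) -> bool:
    """The 'test': classify the user as authorized or not."""
    return user_id in AUTHORIZED_USERS

def access_policy(user_id: str) -> str:
    """The 'policy': grant or deny entry based on the test result."""
    return "grant" if is_authorized(user_id) else "deny"

# The four outcomes map directly to the cases above:
#   test positive, truly authorized    -> True Positive  (correct grant)
#   test negative, truly unauthorized  -> True Negative  (correct deny)
#   test positive, truly unauthorized  -> False Positive / T1 (bad grant)
#   test negative, truly authorized    -> False Negative / T2 (bad deny)
print(access_policy("alice"))    # grant
print(access_policy("mallory"))  # deny
```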

Now let me apply this to whitelisting and blacklisting:

Here is the Whitelist Example. The test is "Is this code on the known-good (white) list?" and the policy is to allow code that matches:

If known-good code matches the list and is allowed to load = True Positive

If unlisted code does not match and is blocked = True Negative

If bad code wrongly matches the list and IS ALLOWED to load = False Positive or T1

If good code is missing from the list and IS NOT ALLOWED to load = False Negative or T2

Here is the Blacklist Example. The test is "Is this code on the known-bad (black) list?" and the policy is to deny code that matches:

If known-bad code matches the list and is blocked = True Positive

If good code does not match and is allowed to load = True Negative

If good code wrongly matches the list and IS BLOCKED = False Positive or T1

If bad code is missing from the list and IS ALLOWED to load = False Negative or T2

So again, our "test" must be accurate in order to effect the appropriate policy. In these cases the policy is likely to "allow" or "deny" the code loading and/or running.

So accuracy and provenance (certainty of the code hash signature and/or attributes) are THE MAJOR components used to test the condition for both whitelisting and blacklisting methods.
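Here is a minimal sketch, assuming SHA-256 as the code hash signature, of how such a test might drive the allow/deny policy. The list entries are hypothetical placeholders, not real digests, and a real system would consult a trusted provenance source rather than hard-coded sets:

```python
import hashlib

# Hypothetical placeholder entries; a real deployment would populate
# these from a trusted provenance/reputation source.
WHITELIST = {"sha256-of-known-good-binary"}
BLACKLIST = {"sha256-of-known-bad-binary"}

def code_hash(path: str) -> str:
    """Compute the SHA-256 digest used as the 'test' value."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def load_policy(path: str) -> str:
    """Map the test result to an allow/deny policy action."""
    digest = code_hash(path)
    if digest in BLACKLIST:
        return "deny"    # known bad: block the code from loading/running
    if digest in WHITELIST:
        return "allow"   # known good: let the code load/run
    return "inspect"     # unknown: hand off to deeper analysis
```

Note that the quality of both lists determines the error rates: a stale or imprecise reference turns directly into T1 or T2 outcomes at the policy step.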

Where whitelisting can complement blacklisting is generally in reducing false positives by improving the certainty of the reference methods used in detection. This can improve the customer experience by not inadvertently blocking good code from loading or running.

Also, Symantec and others have effectively used a form of Dynamic Whitelisting (see my blog on whitelisting methods) to create "do not scan again" lists, restricting AV scanning to code that actually needs inspection. This also enhances the user experience by speeding up the AV process.
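As a sketch of that idea (assuming a SHA-256 digest as the key, with scan_for_malware() as a hypothetical placeholder for a real AV engine), a "do not scan again" list can be as simple as a set of hashes of content already found clean:

```python
import hashlib

already_clean = set()  # the dynamic whitelist, built up at runtime

def scan_for_malware(data: bytes) -> bool:
    """Hypothetical placeholder for a real AV engine; True means clean."""
    return True

def scan_if_needed(path: str) -> bool:
    """Scan a file only if its hash has not already been found clean."""
    with open(path, "rb") as f:
        data = f.read()
    digest = hashlib.sha256(data).hexdigest()
    if digest in already_clean:
        return True                # seen before and clean: skip the scan
    clean = scan_for_malware(data)
    if clean:
        already_clean.add(digest)  # remember it: "do not scan again"
    return clean
```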

It is for all of these reasons that we think there are really only BLACK and WHITE lists, and why we believe known provenance is one of the surest ways to precisely establish the reputation reference used for both whitelist and blacklist methods.

Wyatt.
