Why Software Provenance Matters

May 29, 2009

We have announced and talked about the concept of “known provenance” as a crucial software-assurance and IT-lifecycle-management metric for some time, but it struck me today that I haven’t really underscored some of the reasons and use cases that led us to this conclusion.

First, there are multiple dimensions to software integrity assurance that leverage cryptographic validation (hashing) methods, including:

1. Do I know that the software elements that I am loading and running on my platform ARE what they say they are? (A minimal hash-check sketch follows this list.)

2. Security quality assurance – can I couple (1) with a quantitative statement of code vulnerability? (Is it the code it purports to be, and is it secure?) For example, our recent work with Veracode.

3. And what proof do I have that the code I am using was actually built by the named vendor? (The filesystem may think that it is from XYZ ISV, but it is the software that is vouching for itself, perhaps with the aid of an installer-embedded certificate. Inconclusive at best, especially after installation.)
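
To make item (1) concrete, here is a minimal sketch of hash-based validation, assuming a reference list of known-good measurements is available (the file name and digest below are placeholders, not real published values): measure the element on the platform and compare the digest to the reference.

```python
import hashlib

# Hypothetical reference list of known-good measurements (placeholder values).
KNOWN_GOOD = {
    "openssl.dll": "placeholder-sha256-digest-published-by-the-vendor",
}

def measure(path):
    """Compute the SHA-256 digest of a software element on the platform."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_what_it_says_it_is(name, path):
    """True only if the element's measurement matches the reference value."""
    expected = KNOWN_GOOD.get(name)
    return expected is not None and measure(path) == expected
```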

So we took the position early on that we need PROOF that the code was actually built by the named supplier as a crucial attribute of software and device validation or attestation. We call this Source Origin, or Known Provenance.

The road to obtaining provenance and delivering it across various use cases is clearly the harder road when collecting software measurements. It requires a “quality over quantity” dedication, and it means that ISVs and other software producers and integrators need to be involved. After all, true known provenance can only be delivered with a certifiable “chain of custody” that starts with the original software vendor and is managed all the way to the end system.
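
As a rough illustration of that chain of custody (not our actual mechanism; a minimal sketch assuming the vendor distributes its public key out of band and the Python `cryptography` package is installed): the supplier signs its measurement manifest at build time, and anyone downstream can verify that the manifest really came from the named vendor.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Supplier side, at build/release time ---
vendor_key = Ed25519PrivateKey.generate()                 # private key stays with the vendor
manifest = b"openssl.dll sha256=<placeholder-digest>\n"   # measurement manifest
signature = vendor_key.sign(manifest)                     # signature ships alongside the manifest

# --- Consumer side, anywhere along the chain of custody ---
vendor_pub = vendor_key.public_key()                      # in practice, distributed out of band

def has_known_provenance(manifest_bytes, sig, public_key):
    """True only if the manifest verifiably came from the named supplier."""
    try:
        public_key.verify(sig, manifest_bytes)
        return True
    except InvalidSignature:
        return False

print(has_known_provenance(manifest, signature, vendor_pub))                 # True
print(has_known_provenance(manifest + b"tampered", signature, vendor_pub))   # False
```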

Open standards in method and schema are key, and the industry has done a decent job at collaborating on these—with additional iterations now pending.

But back to the title of this blog: So What?

Here is a quick snapshot of use cases where provenance is increasingly critical:

1. Software Forensics – The objective here is to identify the problem by definitively separating the “good” from the “bad” (and the “unknown”). This is simply common sense, as our objective with forensics is to spend our time as efficiently as possible while looking for the “needle in the haystack”. Efficiency demands that we make the haystack smaller ASAP in our quest for the needle. (A small triage sketch follows this list.)

2. Supply Chain Assurance – This one deals with both the supplier and purchaser concern of “Is this the device that I think it is?” (i.e., was it in fact built by the named supplier, and is the h/w and s/w integrity demonstrable?).

3. Service Level Assurance (SLA) Management – This is the classic issue of “Ok, something doesn’t work, and whose fault is it?” (I’m sure you’ve never seen finger pointing on this one.)

4. Compliance – Needless to say, when provenance is clear and trusted, we can improve statements of compliance as well. (I know that I have the right software build in place, i.e., the right software manifest and integrity, and I can prove that the software and work product in the build came from the named authors.)
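
Returning to use case 1, here is a minimal triage sketch (the digests, paths, and reference-list contents are hypothetical): bucket the measured elements into known-good, known-bad, and unknown, so analyst time goes only to the shrunken haystack of unknowns.

```python
# Hypothetical reference sets (placeholder digests, illustration only).
# In practice these would come from a provenance-backed whitelist and a
# blacklist/malware feed respectively.
KNOWN_GOOD = {"aaaa1111", "bbbb2222"}
KNOWN_BAD = {"dddd4444"}

def triage(measurements):
    """Split {path: digest} measurements into good / bad / unknown buckets."""
    buckets = {"good": [], "bad": [], "unknown": []}
    for path, digest in measurements.items():
        if digest in KNOWN_GOOD:
            buckets["good"].append(path)
        elif digest in KNOWN_BAD:
            buckets["bad"].append(path)
        else:
            buckets["unknown"].append(path)
    return buckets

# Example: only c:\temp\dropper.tmp would need a closer look.
scan = {
    r"c:\windows\system32\kernel32.dll": "aaaa1111",
    r"c:\temp\dropper.tmp": "ffff9999",
}
print(triage(scan)["unknown"])
```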

So the power to prove, enabled by software and hardware provenance, is not a luxury item. In this age of globalization and outsourcing of design, manufacture, distribution, and systems management, we must establish and maintain “trust chains” to all the devices that we build and supply.

For suppliers of complex hardware and software, provenance is a cradle-to-grave issue. How can we truly and cost-effectively own and support our “Brand” without it?

So, as we enter the next chapter of ubiquitous computing (aka Web 2.0), our ability to design trust into our devices early in their lifecycle and systematically validate and pass that trust through the lifecycle of our “brand” in a non-repudiated manner will become a key market differentiator.

One might even go as far as to say:

Those who do not embrace this view may not survive the next wave of consolidation (which by the way is already well underway).

Wyatt.


Enter Configuration-Based Whitelisting

May 27, 2009

This post is going to tie a couple of prior discussions together (I hope).

In August 2008, I posted a blog entitled:

Whitelist Emerges from the Shadows: Re-enforcing the Three-Tier Security and Systems Management Model

And in my most recent post entitled:

The “Whitelist Space” seems to be heating up a bit….

I took a stab at creating a taxonomy for whitelisting methods, as this space is really just taking shape – and clearly not all “code-whitelisting methods” are created equal.

So the “dot-connection” is this:

Effective whitelisting is really about total configuration enforcement, not just blocking individual elements. And as I stressed in the first blog, it is really a THREE-TIER architectural challenge, not a traditional two-tier problem like blacklist solutions.

And interestingly, the “heavy lifting” to make all this work is not at the ends of the architecture (Tier 1 or Tier 3) but in the middle – Tier 2.

(Refresher: IMHV, Tier 1 is the whitelist cloud services; Tier 2 is the domain whitelist caching and reference-configuration management; and Tier 3 is the endpoint measurement and policy-enforcement agent/client/OS/hypervisor support.)
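
To make the three tiers a little more concrete, here is a minimal, hypothetical sketch of the data flow (the class and method names are mine, not the product's): Tier 1 serves reference measurements, Tier 2 caches them and layers in domain-specific reference data, and Tier 3 measures the endpoint and enforces policy against that combined view.

```python
class WhitelistCloud:                                    # Tier 1: global whitelist services
    def lookup(self, digest):
        """Return provenance metadata for a known measurement, else None."""
        return {"aaaa1111": {"vendor": "Microsoft"}}.get(digest)   # placeholder data

class DomainCache:                                       # Tier 2: caching + domain reference data
    def __init__(self, cloud, domain_reference=None):
        self.cloud = cloud
        self.cache = {}
        self.domain_reference = domain_reference or {}   # site-specific known-good entries

    def resolve(self, digest):
        if digest in self.domain_reference:              # domain-specific entries win
            return self.domain_reference[digest]
        if digest not in self.cache:                     # otherwise cache the cloud lookup
            self.cache[digest] = self.cloud.lookup(digest)
        return self.cache[digest]

class EndpointAgent:                                     # Tier 3: measurement + enforcement
    def __init__(self, tier2):
        self.tier2 = tier2

    def enforce(self, digest):
        """Simplistic policy: allow only measurements Tier 2 can vouch for."""
        return "allow" if self.tier2.resolve(digest) else "deny"

agent = EndpointAgent(DomainCache(WhitelistCloud()))
print(agent.enforce("aaaa1111"))   # allow
print(agent.enforce("ffff9999"))   # deny
```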

We think that the real power, manageability, and scalability of the method come into view when we move from just “Good File” to “Configuration-Based Whitelisting”, where we pass more whitelist “intelligence” to the method (things like the parent-child relationships of the elements and the provenance of the elements being enforced).

Clearly, the cloud and local whitelist agents are needed to collect and pass that information – but the key is supplementing that information with additional domain-specific configuration and element data, and organizing the entire lot into the configuration settings and software stacks that should be present on the platform under management.

And all of this must be decoupled from any particular platform or device, must be data-type independent (files, registry keys, configuration settings, database fields, etc.), must be mappable into the reference configurations, and must be vendor/platform/software-type neutral.
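
Here is a minimal sketch of what a Tier 2 reference configuration might carry (the schema, field names, and digest values are hypothetical, not our actual format): each element records its measurement, its provenance, its parent in the stack, and its data type, so the endpoint can be evaluated against a whole configuration rather than one “good file” at a time.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReferenceElement:
    """One element of a reference configuration (hypothetical schema)."""
    name: str
    data_type: str                  # "file", "registry", "config_setting", ...
    digest: str                     # cryptographic measurement (placeholder values below)
    provenance: str                 # named supplier the measurement came from
    parent: Optional[str] = None    # parent element in the software stack

@dataclass
class ReferenceConfiguration:
    """The full stack that should be present on the platform under management."""
    platform: str
    elements: list = field(default_factory=list)

    def evaluate(self, observed):
        """Compare observed {name: digest} measurements against the reference."""
        missing, mismatched = [], []
        for e in self.elements:
            got = observed.get(e.name)
            if got is None:
                missing.append(e.name)
            elif got != e.digest:
                mismatched.append(e.name)
        return missing, mismatched

# Hypothetical reference: an OS element and a child application element.
ref = ReferenceConfiguration(
    platform="win-xp-sp3-fdcc",
    elements=[
        ReferenceElement("kernel32.dll", "file", "aaaa1111", "Microsoft"),
        ReferenceElement("app.exe", "file", "bbbb2222", "XYZ ISV", parent="kernel32.dll"),
    ],
)
print(ref.evaluate({"kernel32.dll": "aaaa1111"}))  # (['app.exe'], [])
```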

Whew. Sorry, that was a mouthful.

Real and immediate use cases for this include requirements like Federal Desktop Core Configuration (FDCC) and other compliance issues.

These are exciting times for the space, IMHV. Stay tuned for more.

Wyatt.


The “Whitelist Space” seems to be heating up a bit….

May 21, 2009

These pages have been talking about the bigger issues of “IT in Transition” for a long while. The shift to “defense in depth”, with the AV players adding whitelist methods, has been a persistent theme on these and other blog pages.

Well, in the last few weeks we’ve seen a couple of major moves: first, Microsoft endorsing the concept and working with us to provide their signatures to the market, and now a significant move with the imminent acquisition of Solidcore by McAfee (MFE).

http://newsroom.mcafee.com/article_display.cfm?article_id=3520

It is interesting that MFE will assimilate Solidcore into the Governance, Risk and Compliance Business Unit. It is what I would consider a “bite-size” move by MFE toward whitelist-based application enforcement. Recently, Solidcore has done a good job delivering value to fairly static endpoint devices – largely focused on the embedded-device, ATM, and POS market spaces.

There is also mention in the release of SCADA devices, which are commonly used to control physical infrastructure such as electrical and water control/management systems. This could bolster work that MFE may be targeting in Government, where they have done well with the ePO platform.

Solidcore describes their method as “dynamic whitelisting” – also pretty good marketing, IMHO. So now we have another bullet on the whitelist-method slide. So far we have:

  • Application Whitelisting or Allow Listing (single executable locking/blocking/allowance)
  • Dynamic Whitelisting (aka Self-Referencing – see below)
  • Whitelist Caching (this is what Symantec is doing in their latest Norton offerings so that they don’t have to rescan “known code” with their malicious-code detection tools; a rough caching sketch appears below)
  • Comprehensive Whitelisting (this is a superset of Application Whitelisting where entire applications or software stacks may be “measured” and, based on the “device health” determined by these broader measurements, certain policies may be invoked – like allowing or denying platform access to other resources)

(These are just the “code-signing” methods. There are other whitelisting and reputation services being employed for email and URL filtering, but that is another category entirely.)
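
For the caching variant in the list above, here is a minimal sketch of the idea (the function names, cache contents, and scan step are hypothetical stand-ins, not Symantec's implementation): hash the object first, skip the expensive malware scan when the digest is already known-good, and fall through to the scanner otherwise.

```python
import hashlib

# Hypothetical known-good digest cache (placeholder values).
WHITELIST_CACHE = {"aaaa1111", "bbbb2222"}

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def expensive_malware_scan(path):
    # Stand-in for the traditional signature/heuristic scan.
    print(f"full scan of {path}")
    return True  # pretend it came back clean

def scan_with_cache(path):
    """Skip rescanning code that is already known-good."""
    if sha256_of(path) in WHITELIST_CACHE:
        return True                     # known code: no rescan needed
    return expensive_malware_scan(path)
```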

Dynamic whitelisting is basically a synonym for “self-learned” or “self-referencing” configuration-image and integrity models, where the “whitelist” is derived from the device(s) themselves. Tripwire has been doing this pretty well for a few years.

(Full disclosure again – I co-founded Tripwire, and Solidcore competes directly with Tripwire in the desktop and server integrity market space)

While Self-Referencing whitelisting can be useful, it has a number of limitations and drawbacks. Scalability, manageability, and noise management are just a few of them. By “noise management”, I mean too many false positives, such as when merely upgrading a version of software generates thousands of “file-changed” hits. Also, what if your reference master was corrupted? Or … ?
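
A minimal sketch of the self-referencing approach and its noise problem (paths and digests are hypothetical): the baseline is learned from the device itself, and every later deviation, including a perfectly legitimate version upgrade, shows up as a “changed” hit, with nothing in the baseline to say whether the new bits have known provenance.

```python
import hashlib
import os

def measure_tree(root):
    """Learn a baseline {path: digest} from the device itself."""
    baseline = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def diff(baseline, current):
    """Report every deviation from the self-learned baseline."""
    changed = [p for p, d in current.items() if baseline.get(p) not in (None, d)]
    added   = [p for p in current if p not in baseline]
    removed = [p for p in baseline if p not in current]
    return changed, added, removed

# Noise problem: a routine software upgrade rewrites thousands of files, and
# every one of them comes back as a "changed" hit -- with no way to tell, from
# the baseline alone, whether the new bits were built by the named vendor.
```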

So, on the one hand, we are happy to see a major AV player dip a toe into the whitelist waters as another validation for the space. We’ll be even more excited when customers and vendors really stretch their legs – and push the envelope with deep and comprehensive whitelisting and reference configuration management methods.

Let’s move beyond executable lock-and-block methods and configuration monitoring based on self-learned methods – and get to full and scalable compute-platform attestation, with both a root of trust (Trust PROOF built INTO the platform) and known-provenance, list-based whitelisting (PROOF that the code was built by the named authors).

Connecting these dots is necessary to have true platform-intrinsic, end-to-end trust – not just to validate the “easy devices” like POS, but for the more complex server and workstation use cases.

Yes, it is hard. But the pain will be worth the gain.

It’s time to build more trust into our systems.
Wyatt.


Speaking of Standards…..

April 30, 2009

I continue to follow with interest the work that Neil MacDonald from Gartner is doing as he examines trends in physical and virtual security methods.

Here is his latest blog, reporting on some observations gleaned from RSA around virtualization and security. Good stuff, Neil.

http://blogs.gartner.com/neil_macdonald/2009/04/23/rsa-and-virtualization-security/

I lock onto these things partially just because I am a geek at heart, and because I think it is fascinating to watch, ponder, and hopefully contribute something of value to the effort along the way. I also believe that the physical-to-virtual IT transition point presents an excellent opportunity to “think differently”. I posted my thoughts on that in Neil’s latest blog.

And also because it is just freakin’ important to get these IT systems working better. You see, I have this silly (and perhaps old-fashioned) notion of leaving the world a better place than I found it, for my part in it.

And the only way I know how to do that is to work with a world-class team (like the one we have here at SignaCert) and to challenge the status quo day in and day out. And the only technology and discipline area that I know well is Information Technology security and systems management.

So here we are……

We’ll keep hammering on this with our friends, colleagues and trusted partners. With enough effort and will, even the biggest rocks can be moved.

Wyatt.


A Standards-based approach

April 24, 2009

A few months ago a bunch of my friends and colleagues decided to do something crazy:

To collaborate and write a book pooling our collective knowledge, experience, and vision around the state of the security and information-assurance business.

My good friend Carlos Solari took the lead (he really did the heavy lifting, despite the exceptionally generous “About the Contributors” intro).

After a ton of work on long plane flights, and many lost weekends, the book is complete and was published at the RSA 2009 conference this week.

We’d enjoy your input and comments.

Here is a PDF of the intro:

http://www.signacert.com/resources/downloads/Security_Book_Intro.pdf

And here is a link to Amazon.com where you can buy a hardback copy:

http://www.amazon.com/Security-Web-2-0-World-Standards-Based/dp/0470745754/ref=sr_1_1?ie=UTF8&s=books&qid=1240590998&sr=1-1

With special thanks to Carlos and the entire team for their dedication to this book project, and for the passion they show every day to improve the discipline of our field.


SignaCert Announcement relating to Microsoft at RSA

April 21, 2009

Today at RSA we announced a significant “arrangement” with Microsoft.  We also participated in the Microsoft Theater (link to presentation coming soon).

Obviously this is a big deal for us, but that is not why I am writing this blog entry.

This blog is titled “IT in Transition” and if this isn’t transitional, I don’t know what is.  From the release:

“This is a very important step in enabling much better trust, security and management solutions for Microsoft customers.  It underscores the ongoing commitment of Microsoft to provide expanded object reputation services within its products and services as new security standards and methods evolve,” said Greg Kohanim, Product Unit Manager of Microsoft. “As an ISV, Microsoft is proud to extend this common repository with its own information to enable the industry to increase security across the board.”

Thank you Mr. Kohanim.

Also from the release:

“Software whitelisting is becoming strategic for protecting compute devices. Who builds and maintains the list is one of the more significant issues,” said Neil MacDonald, VP and Gartner Fellow.  “Since ISVs are the source of much of the software (including the OS foundation), it makes sense to have the worldwide ISV community contribute, in a standard way, to a whitelist that has the broadest adoption and impact versus the complexity involved in building or contributing to proprietary databases.”

And thank you for your contributions, Mr. MacDonald. The insights around important IT trends and the identified “no-brainers” in your blog posts are spot-on, IMHO.

Here are the main elements of the arrangement, without the required p/r marketing spin:

  • SignaCert to deliver rich content services with direct-from-Microsoft software measurements
  • Microsoft to deliver products with known-provenance, cross-platform third-party content aggregated by SignaCert
  • Data Exchange Format to be made available for ISV/OEM Partner use

Thank you Microsoft.

We are very proud to have been selected as a key partner for Microsoft, and it is a tribute to the work of countless people who have supported and encouraged us to continue our work in these important areas for the last decade or so.  And thanks to all of our investors for the support of the vision and product creation.

Now the work really begins.

Stay tuned.
Wyatt.


Gartner and Whitelists

April 11, 2009

Sorry for the long hiatus from the blog pages.  We have a series of press releases rolling out in the next several weeks (off of the one we posted about our 3.0 solution release this week).  Hopefully  I can point to the work in those releases as my excuse for not blogging on important IT transitional issues over the last several weeks. 🙂

But I have actually posted a comment or two.  Check out these threads on the Gartner site.  I think you’ll find them of interest:

http://blogs.gartner.com/neil_macdonald/2009/03/31/will-whitelisting-eliminate-the-need-for-antivirus/

http://blogs.gartner.com/neil_macdonald/2009/04/03/we-need-a-global-industry-wide-application-whitelist/

http://blogs.gartner.com/neil_macdonald/2009/04/10/whitelisting-meet-virtualization-virtualization-meet-whitelisting/