SignaCert Announcement relating to Microsoft at RSA

April 21, 2009

Today at RSA we announced a significant “arrangement” with Microsoft.  We also participated in the Microsoft Theater (link to presentation coming soon).

Obviously this is a big deal for us, but that is not why I am writing this blog entry.

This blog is titled “IT in Transition” and if this isn’t transitional, I don’t know what is.  From the release:

“This is a very important step in enabling much better trust, security and management solutions for Microsoft customers.  It underscores the ongoing commitment of Microsoft to provide expanded object reputation services within its products and services as new security standards and methods evolve,” said Greg Kohanim, Product Unit Manager of Microsoft. “As an ISV, Microsoft is proud to extend this common repository with its own information to enable the industry to increase security across the board.”

Thank you Mr. Kohanim.

Also from the release:

“Software whitelisting is becoming strategic for protecting compute devices. Who builds and maintains the list is one of the more significant issues,” said Neil MacDonald, VP and Gartner Fellow.  “Since ISVs are the source of much of the software (including the OS foundation), it makes sense to have the worldwide ISV community contribute, in a standard way, to a whitelist that has the broadest adoption and impact versus the complexity involved in building or contributing to proprietary databases.”

And thank you for your contributions, Mr. MacDonald.  The insights around important IT trends, and the “no brainers” identified in your blog posts, are spot-on IMHO.

Here are the main elements of the arrangement, without the requisite PR marketing spin:

  • SignaCert to deliver rich content services with direct-from-Microsoft software measurements
  • Microsoft to deliver products with known-provenance, cross-platform third-party content aggregated by SignaCert
  • Data Exchange Format to be made available for ISV/OEM Partner use

Thank you Microsoft.

We are very proud to have been selected as a key partner for Microsoft, and it is a tribute to the work of countless people who have supported and encouraged us to continue our work in these important areas over the last decade or so.  And thanks to all of our investors for supporting the vision and the product creation.

Now the work really begins.

Stay tuned.
Wyatt.


Gartner and Whitelists

April 11, 2009

Sorry for the long hiatus from the blog pages.  We have a series of press releases rolling out over the next several weeks (following the one we posted this week about our 3.0 solution release).  Hopefully I can point to the work behind those releases as my excuse for not blogging on important IT transitional issues over the last several weeks. 🙂

But I have actually posted a comment or two.  Check out these threads on the Gartner site; I think you’ll find them of interest:

http://blogs.gartner.com/neil_macdonald/2009/03/31/will-whitelisting-eliminate-the-need-for-antivirus/

http://blogs.gartner.com/neil_macdonald/2009/04/03/we-need-a-global-industry-wide-application-whitelist/

http://blogs.gartner.com/neil_macdonald/2009/04/10/whitelisting-meet-virtualization-virtualization-meet-whitelisting/


Microsoft Releases Hyper-V Server 2008

October 17, 2008

Well, well, well… look at this. Microsoft is unfurling more and more layers of its next-gen computing and software strategy – especially with regards to virtualization.

(Ok, a required disclosure: We are currently under NDA with Microsoft and have some confidential knowledge around certain roadmap and product plans, but NOTHING in this blog post is based on any inside knowledge derived from, or in any way based on, those confidential discussions).

See:

http://weblog.infoworld.com/virtualization/archives/2008/10/microsoft_relea_6.html?source=NLC-VIRTUALIZATION&cgd=2008-10-09

The reason I wanted to blog on this, in relation to the IT in Transition theme, is that, as I have written in several posts, the entire landscape of the endpoint is changing. A lot of people see this, so the view is in no way unique or revolutionary to us.

A couple of posts ago I blogged on the coming Endpoint Wars of 2009. In order to make that post digestible, I intentionally left a detailed and deep discussion about the impact of virtualization and hypervisors out of that post.

Let me add a bit of my color (and opinion) here:

Quoting from David Marshall’s article:

So what’s new and different? Didn’t they already release Hyper-V? This platform is slightly different from the version found in Microsoft’s Windows Server 2008 operating system. According to Microsoft, it provides a simplified, reliable, and optimized virtualization solution for customers to consolidate Windows or Linux workloads on a single physical server or to run client operating systems and applications in server based virtual machines running in the datacenter. And it allows customers to leverage their existing tools, processes and skills. But perhaps best of all, Microsoft is making this product a no-cost Web download — yup, it’s free!

Yup, it’s free.

Also from the article:

The provisioning and management tools are based on Microsoft System Center, providing centralized, enterprise-class management of both physical and virtual resources.

And the management mechanisms and tools sit “above platform,” with Microsoft System Center being adapted as the management framework, as we’d expect.

So the Hypervisor (HV) wars are in full force now as well. Obviously this is just the leading edge of one of the fronts of the Endpoint Wars.

Seems like the three major combatants are VMware, Citrix and now Microsoft. If highly capable hypervisors are going to be “loss leaders” in any go-forward virtualization platform strategy, then where will the value and revenue shift as the traditional demarcations are realigned?

Our guess is that more of the instrumentation will be subsumed into the platforms (as we have stated for quite some time), including into the HV. This obviously will force more of the method “above platform,” including image management and enforcement. And where does traditional infosec (AV, IDS, etc.) move in this new world?

Think services.

And these services will go well beyond software streaming, and likely include image management and high-assurance software and full software stack delivery methods.

And platform intrinsic security and compliance “instrumentation”, supported by above platform validation and attestation methods, will likely become commonplace.

Food for thought.

Wyatt.


Speaking of Endpoint Instrumentation…

August 20, 2008

Some of you may have attended the recent BlackHat/DefCon events in Las Vegas earlier this month. Of the notable happenings and mentions, I thought two in particular might be of interest.

One of these events is reported in the article linked below:
http://dmnnewswire.digitalmedianet.com/articles/viewarticle.jsp?id=483836
with the headline:

CoreTrace’s Application Whitelisting Solution Stops 100 Percent of Computer Viruses During DEFCON 16 “Race-to-Zero” Competition

The key paragraph is:

“After the blacklist-focused contest was completed, we ran the samples through CoreTrace’s whitelisting solution, BOUNCER,” said “Race-to-Zero” organizer, Simon Howard. “By not allowing any of the samples to execute on the host computer, BOUNCER stopped 100 percent of the viruses. I strongly recommend that companies add application whitelisting solutions like BOUNCER to their arsenal.”

Congrats to our friends at CoreTrace! It’s no surprise to us that “positive” code identification and application “allowance” are more effective than bad-code detection and blocking alone.

Both blacklist and whitelist methods have a common thread:

  • With the blacklist method, if you can’t identify what’s trying to run, you can’t block it.
  • With the whitelist method, if you can identify what’s trying to execute (and the rest of the “allowed” code) then you can enable it to run.

This means that the measurement method is a means to an end, with the desired end being to create and invoke effective policies that are predictable and reliable. BOTH blacklist and whitelist are measurement methods. The difference (and the reason that CoreTrace prevailed in the Race-to-Zero) is that their method is FINITE. They ONLY allowed what was known and trusted to execute. The other guys had the infinite detection problem: there are an infinite number of “bad things” that can come at the endpoint, and it has become increasingly difficult to keep up from a blacklist perspective, whether in identification method, timing or sheer quantity.
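To make the contrast concrete, here is a minimal sketch in Python (the hash values and file handling are illustrative placeholders, not our product) of why the whitelist decision is finite: the policy only has to consult the set of known-good measurements, never an ever-growing catalog of bad ones.

# Minimal whitelist-style execution decision (hash values are placeholders).
import hashlib

# The finite set of measurements for code we know and trust on this endpoint.
KNOWN_GOOD = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
    "fcde2b2edba56bf408601fb721fe9b5c338d10ee429ea04fae5511b68fbf8fb9",
}

def may_execute(path):
    # Allow execution only if the binary's fingerprint is in the known-good set;
    # anything unknown is denied by default.
    with open(path, "rb") as f:
        fingerprint = hashlib.sha256(f.read()).hexdigest()
    return fingerprint in KNOWN_GOOD

# A blacklist would instead have to enumerate the (effectively infinite) set of bad
# fingerprints, and would still allow anything it has never seen before.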

The trick, however, for all makers of IT “endpoint instrumentation” is maintaining the method above the endpoint. As I have mentioned in previous blogs, the value of whitelists cannot be fully realized without effective image management methods AND the whitelist content (organization, quality and supply).

Think middle-tier image management and source quality of whitelist measurements. We must create capable image management methods that can scale to the enterprise, and supplement these methods with quality measurements (whitelists), in order to scale with confidence.
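As a rough illustration of what that middle tier does (all names and structures here are hypothetical, not SignaCert APIs), think of an image reference that inherits common measurements from a global whitelist service and layers the customer’s own measurements on top:

# Hypothetical middle-tier image reference: global, vendor-supplied measurements
# merged with the customer's own domain-specific measurements.
GLOBAL_WHITELIST = {
    # path -> fingerprint, as supplied by a high-provenance content service (placeholders)
    "C:/Windows/System32/kernel32.dll": "aaa111",
    "C:/Program Files/App/app.exe": "bbb222",
}

CUSTOMER_MEASUREMENTS = {
    # measurements the customer adds for in-house or site-specific software
    "C:/LOB/custom_service.exe": "ccc333",
}

def build_image_reference(*sources):
    # One-to-many: many endpoints can inherit some or all of this merged manifest.
    reference = {}
    for source in sources:
        reference.update(source)
    return reference

image_reference = build_image_reference(GLOBAL_WHITELIST, CUSTOMER_MEASUREMENTS)
print(len(image_reference), "measured elements in the reference image")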

After all, if our customers can’t scale these methods to thousands of endpoints, and manage and integrate them effectively, the methods will not be practical over the long term.

So we applaud the efforts of CoreTrace, and of all of the endpoint folks (3rd party, ISV and Platform), to enable more of these Positive IT control capabilities “out of the box”.

We stand ready to serve any/all of them with the most comprehensive set of image management methods and high-quality whitelist content available today.

P.S. I indicated that there were two notable developments out of Blackhat. I will blog on the second one in the next few days.


Value, Pricing and the Positive IT Control Model

August 11, 2008

While on the subject (of Positive IT Controls methods and architecture) I’d like to share some observations on the value/pricing metrics.

We must take the “customer perspective” in our approach to pricing new value-add IT security and image/system management methods for servers and endpoints.

For our Enterprise Customers this means:

Operating Expense Considerations:

  • To what extent does our method improve the utilization economics of my customer’s challenge?
      o Can I demonstrate improvement in Mean Time To Failure?
      o Can I demonstrate reduction in Mean Time To Repair?
      o If the answer to these is yes, it translates directly into extra ‘nines’ of availability.

  • To what extent does our method reduce the manpower expenditure for the capacity delivered by my customer’s IT process? (In the Financial Services sector, a savings of $250k per year can be achieved by repurposing one IT person away from the maintenance of existing infrastructure through improved IT availability and management efficiency.)
  • To what extent does the installation of our product/method disrupt the customer’s existing IT process flow?
  • How fast can we deploy and demonstrate value?
  • Can we demonstrate value while “running in parallel” with the customer’s current operations?
  • How much of my customer’s IT manpower will be required to make our solution effective for them?
      o On initial install
      o In daily operation

Capital Expense Considerations:

  • Does my customer have to spend capital dollars to pay for the solution?
  • Can they offset other capital costs by deploying our solution?
  • Can they increase the efficiency and utilization of their existing IT infrastructure by deferring and/or offsetting new IT capacity spending?

Translating these metrics to the available budget for a vendor solution requires calculating the estimated impact of the solution for our customer (in dollars per year). Additionally, by dividing the dollars per year benefit for the solution by the number of endpoints we can estimate the effective value per endpoint offered by the solution.

A big advantage of Positive IT Control Methods is that most of the considerations discussed above can be tangibly demonstrated.

In contrast, Negative Model security ROI is most often based on actuarial analysis and is often viewed as an “insurance” sale. In other words, if I DON’T deploy the solution, my risk if an incident occurs might be $X, so I am willing to pay $Y per device as insurance to mitigate that risk.

In the Positive Image Management and Controls scenario we can measure TRUE OpEx and CapEx impact.

When we have a numerator (the impact/savings) we can divide it by the denominator (the number of devices and endpoints served) and begin to understand the value delivered.

Example:

If customer savings = $1mm / 12mos;

and number of device units served = 10,000;

then $1×10^6 / 1×10^4 = $10^2 (or $100 per device per year)
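The same back-of-the-envelope arithmetic, written out as a small script (the figures are the hypothetical ones from the example above):

# Value-per-endpoint calculation using the hypothetical figures above.
annual_savings_usd = 1_000_000   # estimated impact over 12 months
devices_served = 10_000          # endpoints covered by the solution

value_per_device = annual_savings_usd / devices_served
print("Effective value delivered: $%.2f per device per year" % value_per_device)
# -> Effective value delivered: $100.00 per device per year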

Not only is this value/ROI analysis useful for the supplier (in setting a price for the value delivered) it is mandatory for our enterprise customers in this challenging spending environment, and when justifying their choice of vendor and solution.

The bottom line is:

Proactive systems management can and does create specific and measurable operating efficiencies (as well as offsets of capital spending through better IT systems efficiency and agility).

In our opinion vendors should live by the following guideline:

If we can’t tangibly save or make our customers money through their utilization of our solution in their IT environment, then we shouldn’t be wasting their time.


Whitelist Emerges from the Shadows: Reinforcing the Three-Tier Security and Systems Management Model

August 6, 2008

As I have discussed before, it is interesting to watch the steady evolution of the IT security, systems management and compliance solution set. If you zoom back to a 5 to 10 year view, and compare how we were thinking about things before, to how today’s thinking has evolved, some interesting macro patterns emerge.

First, there is increasing harmony in the thinking that “we can’t keep up with the blacklist velocity” and that it is easier to flip the model to “whitelist” and keep track of that. Even the industry giants are now touting whitelists as a safe bet for the future of IT security. See the video from RSA this year in which John Thompson, the CEO of Symantec, talks about future trends.

While John’s comments are useful (and we believe correct), we need to be careful here. I have indicated before in these blog pages, and in my public-facing keynotes and presentations, that Whitelists are NOT substitutes for Blacklists. In fact, the methods are complementary for now.

As we get better (as an industry) at measuring and asserting Positive Image Management (making sure the good and desired code set remains in a prescribed state over the device usage lifecycle) then our full dependence on Negative Detection methods (AV/IDS/IPS) should diminish rapidly.

Accomplishing this, however, could hinge on our ability to radically shift our paradigm as it relates to security and systems management. Let me expound.

In the traditional AV model, we fundamentally rely on a two-tier model where:

  • Detection and Blocking are handled by the client or agent resident on the IT device (server, workstation or desktop), or in a gateway device that inspects content destined for an endpoint within the domain.
  • Detection is enabled by a “blacklist”, usually maintained by the AV vendor, and this content is made available to the AV scanning tools on a push or pull basis.

Basically the industry approach has been incremental, reactive, and has leaned heavily on this two-tier model.

As we shift to a more proactive and prescriptive Positive Image Management method, it is imperative that we “remap” our traditional two-tier view. We see our customers moving more to this view:

  • Positive Images (the desired “software bill of materials”) for a given IT device can be defined in advance through the software build, QA, and User Acceptance Test (UAT) release cycle. In parallel to building the deployable image, a software reference can be easily created. Additionally, and for legacy environments, some or all of the device image can be “learned” from the device as it is currently deployed.
  • The resulting software reference can then be used in conjunction with endpoint instrumentation (3rd Party or Platform Intrinsic) and a comparison can be made between the image reference (all or some) and the target endpoint.

There are many advantages and benefits for enterprises in moving to this model, but in simple terms this process is what is commonly called a “control process,” and it is very common in almost every other form of repetitive automation and process management.
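A hedged sketch of that control loop, assuming we already have a reference manifest (a path-to-fingerprint map) from the build/QA/UAT cycle or learned from a known-good device, might look like this:

# Illustrative control process: compare an endpoint's measured state against its
# reference image and report the drift. The reference itself is assumed given.
import hashlib, os

def measure(paths):
    # Collect fingerprints for the files that currently exist on this endpoint.
    measurements = {}
    for path in paths:
        if os.path.isfile(path):
            with open(path, "rb") as f:
                measurements[path] = hashlib.sha256(f.read()).hexdigest()
    return measurements

def compare(reference, measured):
    # The classic control check: what is missing, what changed, what is unexpected.
    return {
        "missing": [p for p in reference if p not in measured],
        "modified": [p for p in reference if p in measured and measured[p] != reference[p]],
        "unknown": [p for p in measured if p not in reference],
    }

# drift = compare(reference, measure(paths_to_scan))
# Non-empty lists trigger policy: alert, block, or remediate back to the reference.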

As we move to these methods, we need to map more to a three-tier model. Where we may already have a three-tier model (Enterprise Management, CMDB, and/or Data Center Automation), we need to add/supplement the middle tier with Positive Image Management Methods.

In our opinion this will create a hierarchy that looks more like this:

  • High-quality/high-provenance content repositories (aka “whitelists”) are used to supply common software “measurement” references for Operating Systems, Application Packages, and other data elements that are commonly included in the software reference images. This is likely a global service that can be accessed on a “pull” basis by many customers.
  • A middle tier, which would typically sit within the customer’s IT domain, serves as the image management layer to support a one-to-many image reference model (typically enterprise IT has many IT devices that inherit some or all of a common image reference). This tier also conveniently allows customers to add their OWN device- and domain-specific software measurements to complete the image references for their specific environments.
  • Endpoint instrumentation is added (or is included in the platform) that can perform localized measurements for the binary, configuration and platform-specific software elements, and request validation and/or perform application locking/blocking based on “known and trusted” software elements and sets.

In this model it is becoming increasingly clear that the enabling, and high-value components, are in the MIDDLE-TIER. Sure, you need a great set of “whitelist services” above the middle-tier. And yes, you need capable endpoint instrumentation to measure and apply desired detection, blocking/locking and remediation policies. We believe that the key value contribution and long-term differentiation layer of the three-tier model is delivered by the high-resolution image management capabilities present in the middle-tier.

Endpoint instrumentation is largely a commodity and is being subsumed into the platform itself anyway, so that is not a great spot to place your bets (as an IT solutions provider). You can begin to see this emerge in the OS and platforms now with technologies like Validated Execution from Sun (see http://www.opensolaris.org/os/project/valex/) and Trusted Execution (TXT) from Intel (see http://www.intel.com/technology/security/).

And, over time, even the “whitelist” services themselves may begin to commoditize.

So as you begin to grasp this important shift to “whitelisting” that many people are beginning to talk about, don’t be fooled.

Our primary objective must be to provide our customers with an effective IT controls method to enhance their existing “managed IT device environment” through the utilization of Positive Image Management, Control and Remediation Methods. Positive IT control methods are enabled and enforced by high-quality whitelist measurements.

Wyatt Starnes


Apple and Transitional IT (i.e., better user experience…)

October 24, 2007

Have you been following Apple’s technical roadmap these days? I know most of us track the new whiz-bang features and amazing marketing that comes out of that company.

You know, I was counseled by some Apple folks recently and told, “If we get you an audience with Steve Jobs, don’t say a word about Apple being in the ‘IT business,’” and further that “Apple is a consumer products and content company, and the technology is all about helping to deliver the consumer experience.”  (We will get kicked out if I start talking about “IT”…)

Hmmmmm. Makes good sense. IT, technology and the services built upon them are the means to the end…not the end. The end is sexy and easy to use. The technology is largely transparent to the user experience.

Sort of like a well-made German sports car…

As the subject of this blog is really observations of IT in transition, and because I am a closet geek anyway – I must dive a level down and make some technology observations.

I’ll start with a note from Steve Jobs last week. See a copy here: http://blog.zingwat.com/?p=164

Note his comment on code security and “integrity”:

“It will take until February to release an SDK because we’re trying to do two diametrically opposed things at once—provide an advanced and open platform to developers while at the same time protect iPhone users from viruses, malware, privacy attacks, etc. This is no easy task. Some claim that viruses and malware are not a problem on mobile phones—this is simply not true. There have been serious viruses on other mobile phones already, including some that silently spread from phone to phone over the cell network. As our phones become more powerful, these malicious programs will become more dangerous. And since the iPhone is the most advanced phone ever, it will be a highly visible target.”

Also last week, Apple made the highly touted announcement of “Leopard” and its mind-boggling list of new capabilities and features. See: http://www.apple.com/macosx/features/300.html

Well, one very capable analyst, Carl Howe, wrote an interesting article zeroing in on some common technological threads between these two offerings. See:

http://seekingalpha.com/article/50315-apple-s-impressive-platform-security-for-iphone-leopard-development

The two items that Carl picked out and correlated (iPhone to Leopard) are really interesting and relevant to these blog pages. From the article:

Tagging Downloaded Applications
Protect yourself from potential threats. Any application downloaded to your Mac is tagged. Before it runs for the first time, the system asks for your consent — telling you when it was downloaded, what application was used to download it, and, if applicable, what URL it came from.

Signed Applications
Feel safe with your applications. A digital signature on an application verifies its identity and ensures its integrity. All applications shipped with Leopard are signed by Apple, and third-party software developers can also sign their applications.

And Carl goes on to say:

“Those features jumped out at me because the very first Forrester report I wrote in 1996 was about desktop security and the threat of active content. In that report, I wrote that if you want a truly secure platform, you need both app signing and run-time validation to guarantee that you only run trusted code. I further noted that Windows would never become a truly secure platform without these features. The fact that these features are built into Leopard says that even as Macs gain in popularity, Apple has no intent of letting its OS or its iPhone become an easy security target. And these two features are worth the entire cost of upgrade and more to anyone worried about desktop and server security.”

Wow, did you note the “positive platform attestation” comments in his observations? He is saying (I believe) that the device itself is responsible for maintaining the boundaries of what code should be allowed to run on the platform. And that we can “secure a platform” by making sure the trusted code stays trusted, and deal with mobile code asserted to the platform by having some sense of “provenance” – i.e. “where did the code come from, who (which app) requested it, and is it safe to run.”

He finishes the article with “Nice work Apple…”

I concur – great stuff. Not sexy in its own right necessarily. But by building these features into both the architecture AND the third party infrastructure, intrinsic positive platform protection can be more effectively assured.

With this, the stuff just works better. It is more reliable. It is safer… and it all leads to a better user experience, and (likely) lower support costs for Apple.

Happier customer, more security transparency based on positive code measurement (signing) and attestation (verification).

Wow. Smart.

Nice work Apple.

And nice work Carl for helping to sort this out.

(Have to run, heading to the Apple store)

Wyatt.


Credit Card Regulations and IT Controls

October 3, 2007

The Wall Street Journal ran an article yesterday, “Security-Software Industry’s Miniboom,” talking about data privacy and security spending. See:

View article here.

The focus of the article is around the Payment Card Industry (PCI) and the so called Data Security Standards (DSS). The credit card industry (primarily driven by Visa) has been steadily and systematically shifting more of the responsibilities and liabilities for credit card losses to merchants.

Now this actually makes good sense. Insiders have known for a long time that the losses due to fraud, privacy issues and increasing identity theft have been huge in absolute terms. (“Huge” means single-digit percentage losses multiplied by trillions of dollars moving through the system.)

The tension around this is really simple, and we should all care. On the one hand, the credit card “brands” are encouraging us to continue to use our cards, and actively promote “don’t worry, Mr. Consumer – if you have losses, we have your back.” That is the public position. Slowing down the flow of transactions due to consumer fear is not really a good option for them. 🙂

But the brands have been quietly working hard to reduce losses in the system, as they have been picking up (from their perspective) more than their fair share of the consumer loss charges and blame.

So the focus goes to the transaction chain. The PCI regs, which are being ratcheted up and broadened, really seek to enforce better practices for all participants in the system. In December 2006, Visa announced the “compliance acceleration program,” which subjects the largest banks and merchants to fines for non-compliance, beginning at $25,000 a month. The deadline for compliance came into force on September 30, 2007. The next tier of banks and merchants faces a similar deadline effective December 31, 2007.

Simply put, Visa (and the other brands) are not willing to pick up the tab for sloppy transaction controls in the credit supply chain. And we should ALL care, because at the end of the day WE pay for the losses with higher fees and interest rates.

These are real data management best practices and security issues. We should make sure all of our “negative controls” are working. The firewalls should be in place, intrusion and anti-virus stuff should be set up correctly, etc.

For the most part, the “physical risk” of losses in the system is yesterday’s news. The bulk of the transactions are handled by the “big banks” and they are pretty darn good at all of this security stuff. And I don’t believe for a minute that we lose as many laptops and servers as the media reports.

The problem with all of this CISP/PCI DSS stuff is that it focuses largely on reactive and negative controls and has traditionally been based on “honor system” compliance with draconian implications if they “catch you.”

There is a better way for all parties. Wouldn’t it be better to deploy “positive” IT controls? (i.e., “I know that all of the software on my IT-based transaction systems is in compliance — and I can prove it over their usage lifetime.”)

All sides win with affirmative and positive IT controls based on software and standard image measurement/management.

With IT controls, the brands can move away from the honor system, and the web services used to connect and pass transactions can exchange positive platform “trust tokens,” assuring a new level of transparent compliance. The banks and merchants can produce higher levels of demonstrated compliance, at a lower cost to implement.
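To make the idea concrete, here is a purely hypothetical sketch of what exchanging such a trust token might look like (the field names, the shared key and the HMAC-based signing are illustrative assumptions on my part, not any actual PCI or SignaCert mechanism; a real scheme would presumably use PKI):

# Hypothetical platform "trust token": a signed claim that a transaction host's
# measured software image matched its approved reference at a point in time.
import hmac, hashlib, json, time

SHARED_KEY = b"example-shared-secret"   # placeholder; a real design would use certificates

def issue_trust_token(host_id, image_reference_id, compliant):
    claim = {
        "host": host_id,
        "image_reference": image_reference_id,
        "compliant": compliant,
        "measured_at": int(time.time()),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_trust_token(token):
    token = dict(token)
    signature = token.pop("signature")
    payload = json.dumps(token, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected) and token["compliant"]

token = issue_trust_token("pos-terminal-042", "retail-image-v1.3", True)
print("accept transaction host:", verify_trust_token(token))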

And maybe then consumers will get some break on costs and interest rates. That, or the brands, banks and merchants will see their profits increase nicely….Okay, so I lapsed into cynical….I digress.

Using parallel-process, check-and-balance IT controls to demonstrate affirmative system compliance is just common sense. And the IT world needs a bit more common sense from time to time.

Wyatt.


“This is the Future of Security Technology…”

September 21, 2007

I picked up an interesting link to an article yesterday, so I thought I’d share. It’s about white listing… The article is written by Peter Nowak of CBC News and interviews Michael Murphy of Symantec Canada about his observations of a philosophical change in the anti-virus market.

Article: Internet security moving toward “white list”

Nowak says in the article:

“Under the current system, a security firm discovers a new threat, adds it to its black-list database and updates its customers’ anti-virus software to combat the problem. A “white list” would instead compile every known legitimate software program, including applications such as Microsoft Word and Adobe Acrobat, and add new ones as they are developed. Every program not on the list would simply not be allowed to function on a computer.”

“This is the future of security technology,” Murphy said at a presentation of the company’s twice-yearly security report on Friday. The trick is to develop a “global seal of approval.”

Not that this is a really big surprise. There have been several articles and announcements in recent weeks and months that relate to the emergence of the “positive model” – or what some companies refer to as “security by inclusion.”

This is all really common-sense stuff when you think about it, right? The “black list” challenge continues to be highly elusive; after all, it IS an infinite problem. Not that the black list will go away anytime soon. Our customers will continue to pursue the “defense in depth” strategy.

On the other hand, IT controls and measurement systems based on a “white list,” or manifests of authorized code sets, can easily be managed in a highly finite way using SignaCert. Also, positive system affirmation provides much more customer value at the end of the day. In addition to the “keep the bad stuff out” benefit of the black list, we can fold in the “verify the good stuff is still as intended” and “make sure the original and intended code is still present on the platform” benefits.

So the value of IT measurement and controls goes way beyond pure security. Implemented correctly, it is FULL configuration verification (image manifest AND software measurements) and code validation with source-of-ownership information (software provenance and pedigree)… all grounded to a common trust reference within our customers’ domains.

It is interesting to consider: This is how most other industries made their “automation” transitions. Think aerospace, telecom, auto and others. More on that later.

So net-net – we agree…this IS the future of security.

And likely the key to more comprehensive and proactive systems management methods.

So, the pendulum continues to swing even faster. Stay tuned.

Wyatt.


Is Software Measurable?

April 7, 2007

How do you measure anything?  Generally, metrology, the science of weights and measures, involves the analysis of a sample against a standard reference model.  In the case of physical measurements, Standard Reference Materials (SRMs) are used as the calibration standard.  So then, how do you measure software, which is merely a collection of electronic binary digits (bits) representing programs, graphics, and documentation?

Every day, we download or install software onto our laptops, desktops, servers and PDAs with no understanding of what it is we’re actually getting. Mostly it’s a specific software vendor’s reputation or a promising new feature that drives us to blindly point, click and install, update or upgrade our software. It’s interesting to note that all of the methods that control the delivery and usage of that software are covered and controlled by standards developed to span the seven layers of the OSI Model, but when it comes to measuring the software itself at layer seven, for quantitative, qualitative or authoritative results, the user is left with nothing but hope.  Hope that he hasn’t introduced malware or instability into his platform.

But there are technologies today that can be used to provide proven and effective compact measurements for software.  For example, one-way cryptographic hashing functions, such as SHA-1, easily transform large bodies of bits into definitive short-hand signatures (also called fingerprints) that uniquely represent the original source data.  When the source data is widely distributed, anyone can generate the same cryptographic hash and, if it were available, could compare it against the source hash to verify the authenticity of the data.  A change to any single bit of information would result in a completely different fingerprint, enabling immediate detection of alterations.  The main point here is that the sample data must be checked against a known reference in order to determine its validity.
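For illustration, here is a minimal sketch of that fingerprint-and-compare idea in Python (the file name and reference value are placeholders; I use SHA-256 rather than SHA-1 simply as the example digest):

# Fingerprint a file with a one-way hash and check it against a known reference.
import hashlib

def fingerprint(path):
    # Return a hex digest that, for practical purposes, uniquely represents the file.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Reference measurement, e.g. as published by the software's source (placeholder value).
KNOWN_GOOD = "0000000000000000000000000000000000000000000000000000000000000000"

sample = fingerprint("downloaded_installer.bin")
print("matches reference" if sample == KNOWN_GOOD else "does not match the reference")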

Anti-virus vendors use similar techniques to generate black-lists: lists of fingerprints for known bad software.  An AV agent collects sample measurements from the software on the target platform and compares them against a reference of undesirable elements.  With this comparison, certain policies can be triggered, such as isolating and/or inoculating the unwanted or malicious code.

But wouldn’t it be more effective if this method were inverted?  Enable the use of proactive measurement and validation techniques (also called attestation) to sample software and then compare it against a known and trusted reference.  By using this “white-list” method, the enforcement policy is grounded in, and extended up from, a source of known-good values, and would allow only positively identified samples to run in the environment.

But here are the tricks:   

How is the white list (the standard or trusted reference) derived, maintained, managed and effectively deployed in the IT enterprise?   

How does one minimize or avoid false positives (apparently dangerous but actually benign) and false negatives (apparently benign but actually dangerous)? 

How is trust “grounded” and normalized, and can we provide the desired zero-knowledge-proof?   

And is the white-list method mutually exclusive from the black list method described above? 

More on these issues in follow-on blogs.  Your thoughts are welcome.