SignaCert Announcement relating to Microsoft at RSA

April 21, 2009

Today at RSA we announced a significant “arrangement” with Microsoft.  We also participated in the Microsoft Theater (link to presentation coming soon).

Obviously this is a big deal for us, but that is not why I am writing this blog entry.

This blog is titled “IT in Transition” and if this isn’t transitional, I don’t know what is.  From the release:

“This is a very important step in enabling much better trust, security and management solutions for Microsoft customers.  It underscores the ongoing commitment of Microsoft to provide expanded object reputation services within its products and services as new security standards and methods evolve,” said Greg Kohanim, Product Unit Manager of Microsoft. “As an ISV, Microsoft is proud to extend this common repository with its own information to enable the industry to increase security across the board.”

Thank you Mr. Kohanim.

Also from the release:

“Software whitelisting is becoming strategic for protecting compute devices. Who builds and maintains the list is one of the more significant issues,” said Neil MacDonald, VP and Gartner Fellow.  “Since ISVs are the source of much of the software (including the OS foundation), it makes sense to have the worldwide ISV community contribute, in a standard way, to a whitelist that has the broadest adoption and impact versus the complexity involved in building or contributing to proprietary databases.”

And thank you for your contributions, Mr. MacDonald.  Your insights into important IT trends, and the “no brainers” identified in your blog posts, are spot-on IMHO.

Here are the main elements of the arrangement, without the required PR marketing spin:

  • SignaCert to deliver rich content services with direct-from-Microsoft software measurements
  • Microsoft to deliver products with known-provenance, cross-platform third-party content aggregated by SignaCert
  • Data Exchange Format to be made available for ISV/OEM Partner use

Thank you Microsoft.

We are very proud to have been selected as a key partner for Microsoft, and it is a tribute to the work of countless people who have supported and encouraged us to continue our work in these important areas for the last decade or so.  And thanks to all of our investors for the support of the vision and product creation.

Now the work really begins.

Stay tuned.

Gartner and Whitelists

April 11, 2009

Sorry for the long hiatus from the blog pages.  We have a series of press releases rolling out over the next several weeks (following the one we posted about our 3.0 solution release this week).  Hopefully I can point to the work in those releases as my excuse for not blogging on important IT transitional issues over the last several weeks. 🙂

But I have actually done a comment or two.  Check out these threads on the Gartner site.  I think you’ll find them of interest:

Symantec’s Blackhat Survey

August 22, 2008

I mentioned there were two notable items (from our very objective perspective) at Blackhat/DefCon.

I wrote on one already. Here is the other based on this blog from the Symantec website:

On the opening day of BlackHat 2008, Symantec commissioned an anonymous survey among the attendees to learn about contemporary views on security related topics, such as vulnerability research, future threats and trends, and what types of challenges we as security professionals will collectively face in the coming year.


Almost a third (34%) of respondents said that they implemented some form of whitelisting within their organization (39% said no, and 26% actually didn’t know!). Note that whitelisting may not necessarily apply to all systems, but could be restricted to specific machines. For example, most respondents look to whitelisting to protect more “static” high-availability machines like servers (40%), gateways (31%), and desktops (32%) rather than more dynamic environments like laptops (26%) and wireless devices (29%). Symantec has been stressing for quite some time that we are on the cusp of a critical inflection point where the number of unique malicious code instances is surpassing the number of legitimate code instances. This trend necessitates considering a new approach to providing security; namely, rather than blocking out the bad, we should consider just allowing in the good. Naturally there are a host of challenges in this area, but given our tremendous reach and deep insight, we believe that there are some highly promising approaches to facilitating whitelisting – and this area is one that I’m personally both very excited about and also actively involved with.

As indicated, Symantec (SYMC) ran an anonymous survey on the subject of whitelisting. While that in itself is not a surprise, the results were interesting.

Besides what we would consider a pretty high “yes” rate to the question of “Have you deployed some form of whitelisting” (34% said yes), the underlying comments reveal a pretty sophisticated view of the initial and important use cases for whitelisting.

Now, one assumes that the people surveyed are most likely enterprise (commercial and government) users, but it is very interesting to note the pretty even distribution of end target platform use for whitelist (40% server/31% gateway/32% desktop) with the inference that these are the more “static” devices.

This is interesting (and does check with our field experience BTW) in that we would guess the delineation is being made on what the enterprise users deem as the “managed devices”.

So what is a “managed device”? In our view it is an IT element with some form of best-practice controls in play all the way from software stack development, QA, and user acceptance testing (UAT) through deployment. Generally a managed device would (ideally) have one “point of management” for updates and maintenance. Laptops and other devices in many organizations lack these important software release/management best practices.

One also might assume that the whitelist methods being applied to these managed devices have something to do with IT process enforcement and compliance, likely driven by ITIL, PCI or some other best practice and/or regulation.

Given the relatively high “yes” rate on the whitelist question (whitelist-enabled methods are still somewhat nascent), one might further guess that the technologies in use are a combination of “home grown”, first-generation, well-known open-source, and commercial methods like Tripwire (good for you, Tripwire!).

It seems to us that as understanding and acceptance of these methods increase, the use cases, and the respective appreciation for the value-add of IT controls based on image management enabled by whitelists, will only improve. After all, the pool of potential new yeses is 65% (the no’s plus the don’t-knows).

This represents a great opportunity for suppliers to explore, better understand, and deliver next-gen methods for enhanced positive IT controls! (At least that is what WE are doing.)

In our view this survey reveals some very important data points. We can blog on the virtues of new “whitelist” methods until the cows come home, but the only important questions at the end of the day are:

“Are customers ready for these new methods?” (and the answer appears to indicate YES) and;

“What are they willing to pay?” (and the answer is TBD, based on use case and value delivered) and;

“Who will be the de facto standard vendor(s)?” (great question….!)

Anyway, thanks for the survey, Symantec, and welcome (again) to the discussion!


Speaking of Endpoint Instrumentation…

August 20, 2008

Some of you may have attended the recent BlackHat/DefCon events in Las Vegas earlier this month. Of the notable events and mentions, two in particular I thought might be of interest.

One of these events is reported in an article with the headline:

CoreTrace’s Application Whitelisting Solution Stops 100 Percent of Computer Viruses During DEFCON 16 “Race-to-Zero” Competition

The key paragraph is:

“After the blacklist-focused contest was completed, we ran the samples through CoreTrace’s whitelisting solution, BOUNCER,” said “Race-to-Zero” organizer, Simon Howard. “By not allowing any of the samples to execute on the host computer, BOUNCER stopped 100 percent of the viruses. I strongly recommend that companies add application whitelisting solutions like BOUNCER to their arsenal.”

Congrats to our friends at CoreTrace! It’s no surprise to us that “positive” code identification and application “allowance” is more effective than bad code detection and blocking alone.

Both blacklist and whitelist methods have a common thread:

  • With the blacklist method, if you can’t identify what’s trying to run, you can’t block it.
  • With the whitelist method, if you can identify what’s trying to execute (and the rest of the “allowed” code) then you can enable it to run.

This means that the measurement method is a means to an end, with the desired end being to create and invoke effective policies that are predictable and reliable. BOTH blacklist and whitelist are measurement methods. The difference (and the reason that CoreTrace prevailed in the Race-to-Zero) is that their method is FINITE. They ONLY allowed what was known and trusted to execute. The other guys had the infinite detection problem in that there are an infinite number of “bad things” that can come at the endpoint. It’s become increasingly difficult to keep up from a blacklist perspective (with the identification method, timing or quantity).
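The finite-versus-infinite distinction above can be sketched in a few lines of code. This is a minimal illustration (not any vendor’s actual implementation), using hypothetical hash sets to show why a novel sample slips past a blacklist but not a whitelist:

```python
import hashlib

# Identify code by its SHA-256 digest (one common measurement method).
def measure(code: bytes) -> str:
    return hashlib.sha256(code).hexdigest()

# Blacklist policy: block only what is known bad; unknown code is allowed.
def blacklist_allows(code: bytes, known_bad: set) -> bool:
    return measure(code) not in known_bad

# Whitelist policy: allow only what is known good; unknown code is blocked.
def whitelist_allows(code: bytes, known_good: set) -> bool:
    return measure(code) in known_good

good = b"trusted application binary"
novel_malware = b"never-before-seen payload"

known_good = {measure(good)}
known_bad = set()  # the novel sample is not yet in any blacklist

# The blacklist misses the novel sample; the whitelist blocks it.
print(blacklist_allows(novel_malware, known_bad))   # True  (allowed to run)
print(whitelist_allows(novel_malware, known_good))  # False (blocked)
```

The whitelist check only ever consults the finite set of known-good measurements, which is why it needs no update to stop something it has never seen.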

The trick, however, for all makers of IT “endpoint instrumentation” is maintaining the method above the endpoint. As I have mentioned in previous blogs, the value of whitelists cannot be fully realized without effective image management methods AND the whitelist content (organization, quality and supply).

Think middle-tier image management and source quality of whitelist measurements. We must create capable image management methods that can scale to enterprise, and supplement these methods with quality measurements (whitelists) for us to scale with confidence.

After all, if our customers can’t scale these methods to thousands of endpoints, and manage/integrate them effectively, they will not be practical over the long-term.

So we applaud the efforts of CoreTrace, and of all of the endpoint folks (3rd party, ISV and Platform), to enable more of these Positive IT controls capabilities “out of the box”.

We stand ready to serve any/all of them with the most comprehensive set of image management methods and high-quality whitelist content available today.

P.S. I indicated that there were two notable developments out of Blackhat. I will blog on the second one in the next few days.

Value, Pricing and the Positive IT Control Model

August 11, 2008

While on the subject (of Positive IT Controls methods and architecture) I’d like to share some observations on the value/pricing metrics.

We must take the “customer perspective” in our approach to pricing new value-add IT security and image/system management methods for servers and endpoints.

For our Enterprise Customers this means:

Operating Expense Considerations:

  • To what extent does our method improve the utilization economics of my customer’s IT infrastructure?

o Can I demonstrate improvement in Mean Time To Failure?

o Can I demonstrate reduction in Mean Time to Repair?

o If the answer to both is yes, it translates directly into extra ‘nines’ of availability

  • To what extent does our method reduce the manpower expenditure for the capacity delivered by my customer’s IT process? (In the Financial Services sector, a savings of $250k per year can be achieved for each IT person repurposed away from maintaining existing infrastructure through improved IT availability/management efficiency.)
  • To what extent does the installation of our product/method disrupt the customer’s existing IT process flow?
  • How fast can we deploy and demonstrate value?
  • Can we demonstrate value while “running in parallel” with the customer’s current operations?
  • How much of my customer’s IT manpower will be required to make our solution effective for them?

o On initial install

o In daily operation

Capital Expense Considerations:

  • Does my customer have to spend capital dollars to pay for the solution?
  • Can they offset other capital costs by deploying our solution?
  • Can they increase the efficiency and utilization of their existing IT infrastructure by deferring and/or offsetting new IT capacity spending?

Translating these metrics to the available budget for a vendor solution requires calculating the estimated impact of the solution for our customer (in dollars per year). Additionally, by dividing the dollars per year benefit for the solution by the number of endpoints we can estimate the effective value per endpoint offered by the solution.

A big advantage of Positive IT Control Methods is that most of the considerations discussed above can be tangibly demonstrated.

In contrast, Negative Model security ROI is most often based on actuarial analysis and is often viewed as an “insurance” sale. In other words, if I DON’T deploy the solution, my risk if an incident occurs might be $X, so I am willing to pay $Y per device as insurance to mitigate that risk.

In the Positive Image Management and Controls scenario we can measure TRUE OpEx and CapEx impact.

When we have a numerator (the impact/savings) we can divide it by the denominator (the number of devices and endpoints served) and begin to understand our value delivered.


If customer savings = $1mm / 12mos;

and number device units served = 10,000;

then $1×10^6 / 1×10^4 = $10^2 (or $100 per device per year)
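The worked example above is back-of-the-envelope arithmetic; a tiny script makes the calculation (and its units) explicit, using the same illustrative figures from the text:

```python
# Value-per-endpoint calculation from the text:
# annual customer savings divided by the number of devices served.
annual_savings_usd = 1_000_000   # $1mm over 12 months
devices_served = 10_000

value_per_device = annual_savings_usd / devices_served
print(f"${value_per_device:.0f} per device per year")  # $100 per device per year
```

Swapping in a customer’s own estimated savings and device count gives the effective budget per endpoint against which a vendor’s price can be judged.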

Not only is this value/ROI analysis useful for the supplier (in setting a price for the value delivered) it is mandatory for our enterprise customers in this challenging spending environment, and when justifying their choice of vendor and solution.

The bottom line is:

Proactive systems management can and does create specific and measurable operating efficiencies (as well as offsets of capital spending through better IT systems efficiency and agility).

In our opinion vendors should live by the following guideline:

If we can’t tangibly save or make our customers money through their utilization of our solution in their IT environment, then we shouldn’t be wasting their time.

Whitelist Emerges from the Shadows: Reinforcing the Three-Tier Security and Systems Management Model

August 6, 2008

As I have discussed before, it is interesting to watch the steady evolution of the IT security, systems management and compliance solution set. If you zoom back to a 5 to 10 year view, and compare how we were thinking about things before, to how today’s thinking has evolved, some interesting macro patterns emerge.

First, there is increasing harmony in the thinking that “we can’t keep up with the blacklist velocity” and it is easier to flip the model to “whitelist” and keep track of that. Even the industry giants are now touting whitelists as a safe bet for the future of IT security. See this video from RSA this year where John Thompson, the CEO of Symantec, talks about future trends:

While John’s comments are useful (and we believe correct), we need to be careful here. I have indicated before in these blog pages, and in my public-facing keynotes and presentations, that Whitelists are NOT substitutes for Blacklists. In fact, the methods are complementary for now.

As we get better (as an industry) at measuring and asserting Positive Image Management (making sure the good and desired code set remains in a prescribed state over the device usage lifecycle) then our full dependence on Negative Detection methods (AV/IDS/IPS) should diminish rapidly.

Accomplishing this, however, could hinge on our ability to radically shift our paradigm as it relates to security and systems management. Let me expound.

In the traditional AV model, we fundamentally rely on a two-tier model where:

  • Detection and Blocking is handled by the client or agent resident on the IT device (server, workstation or desktop), or in a gateway device that inspects content destined for an endpoint within the domain.
  • Detection is enabled by a “blacklist”, usually maintained by the AV vendor, and this content is made available to the AV scanning tools on a push or pull basis.

Basically the industry approach has been incremental, reactive, and has leaned heavily on this two-tier model.

As we shift to a more proactive and prescriptive Positive Image Management method, it is imperative that we “remap” our traditional two-tier view. We see our customers moving more to this view:

  • Positive Images (the desired “software bill of materials”) for a given IT device can be defined in advance through the software build, QA, and User Acceptance Test (UAT) release cycle. In parallel to building the deployable image, a software reference can be easily created. Additionally, and for legacy environments, some or all of the device image can be “learned” from the device as it is currently deployed.
  • The resulting software reference can then be used in conjunction with endpoint instrumentation (3rd Party or Platform Intrinsic) and a comparison can be made between the image reference (all or some) and the target endpoint.

There are many advantages and benefits for Enterprise to move to this model, but in simple terms this process is commonly called a “control process”, and is very common in almost every other form of repetitive automation and process management.

As we move to these methods, we need to map more to a three-tier model. Where we may already have a three-tier model (Enterprise Management, CMDB, and/or Data Center Automation), we need to add/supplement the middle tier with Positive Image Management Methods.

In our opinion this will create a hierarchy that looks more like this:

  • High-quality/high provenance content repositories (aka “whitelist”) are used to supply common software “measurement” references for Operating Systems, Application Packages, and other data elements that are commonly included in the software reference images. This is likely a global service that can be accessed on a “pull” basis to many customers.
  • A middle tier, typically within the customer’s IT domain, serves as the image management layer to support a one-to-many image reference model (typically enterprise IT has many IT devices that inherit some or all of a common image reference). This tier also conveniently allows customers to add their OWN device- and domain-specific software measurements to complete the image references for their specific environments.
  • Endpoint instrumentation is added (or is included in the platform) that can perform localized measurements for the binary, configuration and platform-specific software elements, and request validation and/or perform application locking/blocking based on “known and trusted” software elements and sets.
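The three tiers above can be sketched as data flowing downward: a global whitelist feeds a customer-side image reference, which endpoint measurements are validated against. This is a minimal, hypothetical sketch (the paths, byte contents, and dictionary structure are illustrative, not any product’s actual format):

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Tier 1 (global whitelist service): high-provenance measurements for
# common OS and application packages, pulled by many customers.
global_whitelist = {"/bin/login": digest(b"vendor-shipped login binary")}

# Tier 2 (middle tier, inside the customer's IT domain): the image
# reference, inheriting global measurements plus the customer's own.
image_reference = dict(global_whitelist)
image_reference["/opt/acme/app"] = digest(b"customer-built application")

# Tier 3 (endpoint instrumentation): local measurements of what is
# actually deployed on the device.
endpoint_measurements = {
    "/bin/login": digest(b"vendor-shipped login binary"),
    "/opt/acme/app": digest(b"tampered application"),  # drifted from reference
}

# Validation: compare the endpoint against the reference and flag drift,
# which a policy could then report, block, or remediate.
drift = {path for path, d in endpoint_measurements.items()
         if image_reference.get(path) != d}
print(drift)  # {'/opt/acme/app'}
```

Note that the comparison logic, and the one-to-many reference that drives it, lives entirely in the middle tier; the endpoint only measures, and the global service only supplies content.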

In this model it is becoming increasingly clear that the enabling, and high-value components, are in the MIDDLE-TIER. Sure, you need a great set of “whitelist services” above the middle-tier. And yes, you need capable endpoint instrumentation to measure and apply desired detection, blocking/locking and remediation policies. We believe that the key value contribution and long-term differentiation layer of the three-tier model is delivered by the high-resolution image management capabilities present in the middle-tier.

Endpoint instrumentation is largely a commodity and is being subsumed into the platform itself anyway, so that is not a great spot to place your bets (as an IT solutions provider). You can begin to see this emerge in OSes and platforms now with technologies like Validated Execution from Sun and Trusted Execution Technology (TXT) from Intel.

And, over time, even the “whitelist” services themselves may begin to commoditize.

So as you begin to grasp this important shift to “whitelisting” that many people are beginning to talk about, don’t be fooled.

Our primary objective must be to provide our customers with an effective IT controls method to enhance their existing “managed IT device environment” through the utilization of Positive Image Management, Control and Remediation Methods. Positive IT control methods are enabled and enforced by high-quality whitelist measurements.

Wyatt Starnes

Microsoft and Viridian

February 7, 2008

Over the last few weeks Microsoft (MSFT) announced more details of their long awaited virtualization strategy (drum roll please) and their expanded partnership with Citrix/Xen (with an emphasis on servers) and simultaneously announced the acquisition of Calista.

When Citrix originally announced the acquisition of XenSource a few months ago, we thought it was apparent that MSFT had to be “in the know”, as Citrix and MSFT have been in a love/hate relationship within the enterprise markets for nearly 18 years.

The prior relationship had much to do with terminal services, “backracking”, and streaming of applications to the endpoint.  In many ways these uses are a precursor to virtualization: an enterprise “Petri dish” to see what and how customers find value in alternate enterprise usage of platforms and software delivery. Now the next shoe is dropping.

With the success of VMware―both in terms of early enterprise acceptance and deployment AND the IPO (giving VMW a huge warchest)―Microsoft has been forced to move.  Some would see it as “late”, but the virtualization market is really very nascent.  The bulk of VMW revenue is made up of deal sizes $100k or less….(likely ASP’ing at <=$70k right now)….so the bulk of the $1B+ in revenue by VMW is still “pilot” and for development usage.  So we are very early stage.

But the shift is happening quickly and the full transition is inevitable in short order (Less than 5 years for leading sectors to cross over to more than 50% virtualized infrastructure.)

While it is clear that Virtual Machine Monitors (VMMs) and hypervisors (HVs) are ultimately commodity delivery mechanisms for the stack and software in the Virtual Machine (VM), control of the VMM and HV is important to the big guys until the other layers of value-add opportunity develop and evolve.

The longer-term question for the “little guys” (everyone with less than a $100B market cap) is “where are the defensible 3rd party value-add areas?” as the paradigm shift fully reveals itself.  What “goes away” as the one-to-one (hardware to OS) monolithic platform yields to the one-to-many virtual platform?

What happens to traditional IT security in this brave new world?  Where can we hang our respective 3rd party hats as the elephants trample the old ground in search of new and fertile new areas?

It is in these questions that the “positive” security and systems management model really begins to stand out.  Knowing that VM instantiations are ASSEMBLED FROM trusted code, by validating them against a platform- and vendor-agnostic, high-quality “white list” resource, becomes critical.

Also knowing WHAT CODE IS LOADED WHERE AND FOR HOW LONG becomes an enabling capability, regardless of which VMM and HV is used to create the VM software stacks.

Also, compliance and software licensing become even more important, but can be easily handled with trusted code and stack measure/validate methods.  Being able to “attest” the stack to an external “white list” reference built from a rich supply of high-quality software reference measurements becomes a highly-defensible and long-term way of adding value to the new virtual compute paradigm.

Interestingly for those that get the jump on this, this represents a huge, content-based, recurring revenue model that the first-party players will have a difficult time displacing (because they don’t have ready access to software measurements from other vendors and due to the “trusted third party” implications).

Will we really trust Microsoft to validate Microsoft?

So let’s just view this as another card in an unfolding game of mammoth proportions and implications.

Stay tuned.  This is going to be a lot of fun to watch, and to participate in.