SignaCert Announcement relating to Microsoft at RSA

April 21, 2009

Today at RSA we announced a significant “arrangement” with Microsoft.  We also participated in the Microsoft Theater (link to presentation coming soon).

Obviously this is a big deal for us, but that is not why I am writing this blog entry.

This blog is titled “IT in Transition” and if this isn’t transitional, I don’t know what is.  From the release:

“This is a very important step in enabling much better trust, security and management solutions for Microsoft customers.  It underscores the ongoing commitment of Microsoft to provide expanded object reputation services within its products and services as new security standards and methods evolve,” said Greg Kohanim, Product Unit Manager of Microsoft. “As an ISV, Microsoft is proud to extend this common repository with its own information to enable the industry to increase security across the board.”

Thank you Mr. Kohanim.

Also from the release:

“Software whitelisting is becoming strategic for protecting compute devices. Who builds and maintains the list is one of the more significant issues,” said Neil MacDonald, VP and Gartner Fellow.  “Since ISVs are the source of much of the software (including the OS foundation), it makes sense to have the worldwide ISV community contribute, in a standard way, to a whitelist that has the broadest adoption and impact versus the complexity involved in building or contributing to proprietary databases.”

And thank you for your contributions, Mr. MacDonald. The insights around important IT trends, and the “no brainers” identified in your blog posts, are spot-on IMHO.

Here are the main elements of the arrangement, without the requisite PR marketing spin:

  • SignaCert to deliver rich content services with direct-from-Microsoft software measurements
  • Microsoft to deliver products with known-provenance, cross-platform third-party content aggregated by SignaCert
  • Data Exchange Format to be made available for ISV/OEM Partner use
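The release does not spell out the data exchange format itself, so purely as an illustration (the field names below are my own, not Microsoft’s or SignaCert’s), a direct-from-vendor software measurement record might carry little more than who published a file, what it is, and a cryptographic digest of its bits:

    import hashlib
    import os

    def measurement_record(path, product, version, publisher):
        """Build a hypothetical software-measurement record for one file.

        The field names are illustrative only; the actual exchange format
        referenced in the release has not been published."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return {
            "publisher": publisher,             # who asserts the measurement
            "product": product,
            "version": version,
            "file_name": os.path.basename(path),
            "sha256": digest.hexdigest(),       # the measurement itself
        }

    # Example call (hypothetical path and metadata):
    # record = measurement_record("/bin/ls", "coreutils", "8.32", "GNU")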

Thank you Microsoft.

We are very proud to have been selected as a key partner by Microsoft; it is a tribute to the countless people who have supported and encouraged us to continue our work in these important areas over the last decade or so. And thanks to all of our investors for supporting the vision and the product creation.

Now the work really begins.

Stay tuned.
Wyatt.


Gartner and Whitelists

April 11, 2009

Sorry for the long hiatus from the blog pages. We have a series of press releases rolling out in the next several weeks (following the one we posted about our 3.0 solution release this week). Hopefully I can point to the work behind those releases as my excuse for not blogging on important IT transitional issues over the last several weeks. 🙂

But I have actually posted a comment or two. Check out these threads on the Gartner site; I think you’ll find them of interest:

http://blogs.gartner.com/neil_macdonald/2009/03/31/will-whitelisting-eliminate-the-need-for-antivirus/

http://blogs.gartner.com/neil_macdonald/2009/04/03/we-need-a-global-industry-wide-application-whitelist/

http://blogs.gartner.com/neil_macdonald/2009/04/10/whitelisting-meet-virtualization-virtualization-meet-whitelisting/


Microsoft Releases Hyper-V Server 2008

October 17, 2008

Well, well, well… look at this. Microsoft is unfurling more and more layers of its next-gen computing and software strategy – especially with regards to virtualization.

(Ok, a required disclosure: We are currently under NDA with Microsoft and have some confidential knowledge around certain roadmap and product plans, but NOTHING in this blog post is derived from, or in any way based on, those confidential discussions.)

See:

http://weblog.infoworld.com/virtualization/archives/2008/10/microsoft_relea_6.html?source=NLC-VIRTUALIZATION&cgd=2008-10-09

The reason I wanted to blog on this, in relation to the IT in Transition theme, is that, as I have written in several posts, the entire landscape of the endpoint is changing. A lot of people see this, so this view is in no way unique or revolutionary to us.

A couple of posts ago I blogged on the coming Endpoint Wars of 2009. To keep that post digestible, I intentionally left out a detailed discussion of the impact of virtualization and hypervisors.

Let me add a bit of my color (and opinion) here:

Quoting from David Marshall’s article:

So what’s new and different? Didn’t they already release Hyper-V? This platform is slightly different from the version found in Microsoft’s Windows Server 2008 operating system. According to Microsoft, it provides a simplified, reliable, and optimized virtualization solution for customers to consolidate Windows or Linux workloads on a single physical server or to run client operating systems and applications in server based virtual machines running in the datacenter. And it allows customers to leverage their existing tools, processes and skills. But perhaps best of all, Microsoft is making this product a no-cost Web download — yup, it’s free!

Yup, it’s free.

Also from the article:

The provisioning and management tools are based on Microsoft System Center, providing centralized, enterprise-class management of both physical and virtual resources.

And the management mechanisms and tools sit “above platform,” with Microsoft System Center being adapted as the management framework, just as we’d expect.

So the Hypervisor (HV) wars are in full force now as well. Obviously this is just the leading edge of one of the fronts of the Endpoint Wars.

Seems like the three major combatants are VMware, Citrix and now Microsoft. If highly capable hypervisors are going to be a “loss leader” in any go-forward virtualization platform strategy, then where will the value and revenue shift as the traditional demarcations are realigned?

Our guess is that more of the instrumentation will be subsumed into the platforms (as we have said for quite some time), including the hypervisor itself. This will force more of the management method “above platform,” including image management and enforcement. And where does traditional infosec (AV, IDS, etc.) move in this new world?

Think services.

And these services will go well beyond software streaming; they will likely include image management, high-assurance software, and full software-stack delivery methods.

And platform-intrinsic security and compliance “instrumentation,” supported by above-platform validation and attestation methods, will likely become commonplace.

Food for thought.

Wyatt.


Speaking of Endpoint Instrumentation……

August 20, 2008

Some of you may have attended the recent BlackHat/DefCon events in Las Vegas earlier this month. Among the notable happenings and mentions, two in particular I thought might be of interest.

One of these events is reported in the article linked below:
http://dmnnewswire.digitalmedianet.com/articles/viewarticle.jsp?id=483836
with the headline:

CoreTrace’s Application Whitelisting Solution Stops 100 Percent of Computer Viruses During DEFCON 16 “Race-to-Zero” Competition

The key paragraph is:

“After the blacklist-focused contest was completed, we ran the samples through CoreTrace’s whitelisting solution, BOUNCER,” said “Race-to-Zero” organizer, Simon Howard. “By not allowing any of the samples to execute on the host computer, BOUNCER stopped 100 percent of the viruses. I strongly recommend that companies add application whitelisting solutions like BOUNCER to their arsenal.”

Congrats to our friends at CoreTrace! It’s no surprise to us that “positive” code identification and application “allowance” are more effective than bad-code detection and blocking alone.

Both blacklist and whitelist methods have a common thread:

  • With the blacklist method, if you can’t identify what’s trying to run, you can’t block it.
  • With the whitelist method, if you can identify what’s trying to execute (as part of the “allowed” code set), then you can enable it to run.

This means that the measurement method is a means to an end, with the desired end being to create and invoke effective policies that are predictable and reliable. BOTH blacklist and whitelist are measurement methods. The difference (and the reason that CoreTrace prevailed in the Race-to-Zero) is that their method is FINITE: they ONLY allowed what was known and trusted to execute. The other guys face the infinite-detection problem, in that there is an infinite number of “bad things” that can come at the endpoint, and it has become increasingly difficult to keep up from a blacklist perspective (in identification method, timing, or sheer quantity).
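To make the finite-versus-infinite point concrete, here is a minimal sketch of the two decision models, keyed on file hashes. It is illustrative only, and is not CoreTrace’s (or anyone else’s) actual product logic:

    import hashlib

    def measure(path):
        """SHA-256 of a file's contents (the 'measurement' of the code)."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def blacklist_allows(path, known_bad):
        # Negative model: allow anything we cannot identify as bad.
        # The universe of possible "bad" measurements is effectively infinite.
        return measure(path) not in known_bad

    def whitelist_allows(path, known_good):
        # Positive model: allow only what we can identify as good.
        # The set of "good" measurements for a managed image is finite.
        return measure(path) in known_good

The whitelist check never needs to know anything about the attack; it only needs to know the (finite) set of code that belongs on the device.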

The trick, however, for all makers of IT “endpoint instrumentation” is maintaining the method above the endpoint. As I have mentioned in previous blogs, the full value of whitelists cannot be realized without effective image management methods AND well-managed whitelist content (organization, quality and supply).

Think middle-tier image management and source quality of whitelist measurements. We must create capable image management methods that can scale to the enterprise, and supplement those methods with quality measurements (whitelists), for us to scale with confidence.

After all, if our customers can’t scale these methods to thousands of endpoints, and manage/integrate them effectively, they will not be practical over the long-term.

So we applaud the efforts of CoreTrace, and of all of the endpoint folks (3rd party, ISV and platform), to enable more of these Positive IT control capabilities “out of the box”.

We stand ready to serve any/all of them with the most comprehensive set of image management methods and high-quality whitelist content available today.

P.S. I indicated that there were two notable developments out of BlackHat. I will blog on the second one in the next few days.


Value, Pricing and the Positive IT Control Model

August 11, 2008

While on the subject (of Positive IT Controls methods and architecture) I’d like to share some observations on the value/pricing metrics.

We must take the “customer perspective” in our approach to pricing new value-add IT security and image/system management methods for servers and endpoints.

For our Enterprise Customers this means:

Operating Expense Considerations:

  • To what extent does our method improve the utilization economics of my customer’s challenge?
      ◦ Can I demonstrate improvement in Mean Time To Failure?
      ◦ Can I demonstrate reduction in Mean Time To Repair?
      ◦ If the answer to both is yes, it translates directly into extra ‘nines’ of availability (see the short sketch after this list).

  • To what extent does our method reduce the manpower expenditure for the capacity delivered by my customer’s IT process? (In the Financial Services sector, a savings of $250k per year can be achieved for each IT person repurposed away from maintaining existing infrastructure through improved IT availability and management efficiency.)
  • To what extent does the installation of our product/method disrupt the customer’s existing IT process flow?
  • How fast can we deploy and demonstrate value?
  • Can we demonstrate value while “running in parallel” with the customer’s current operations?
  • How much of my customer’s IT manpower will be required to make our solution effective for them?
      ◦ On initial install
      ◦ In daily operation
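For reference, the ‘extra nines’ point above follows from the standard steady-state availability relationship between MTTF and MTTR; the numbers below are illustrative only:

    def availability(mttf_hours, mttr_hours):
        """Steady-state availability = MTTF / (MTTF + MTTR)."""
        return mttf_hours / (mttf_hours + mttr_hours)

    # Illustrative numbers: raising MTTF and shrinking MTTR buys extra 'nines'.
    print(availability(1000, 10))   # ~0.990  (two nines)
    print(availability(2000, 2))    # ~0.999  (three nines)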

Capital Expense Considerations:

  • Does my customer have to spend capital dollars to pay for the solution?
  • Can they offset other capital costs by deploying our solution?
  • Can they increase the efficiency and utilization of their existing IT infrastructure, thereby deferring and/or offsetting new IT capacity spending?

Translating these metrics to the available budget for a vendor solution requires calculating the estimated impact of the solution for our customer (in dollars per year). Additionally, by dividing the dollars per year benefit for the solution by the number of endpoints we can estimate the effective value per endpoint offered by the solution.

A big advantage of Positive IT Control Methods is that most of the considerations discussed above can be tangibly demonstrated.

In contrast, Negative Model security ROI is most often based on actuarial analysis and is often viewed as an “insurance” sale. In other words, if I DON’T deploy the solution, my risk if an incident occurs might be $X, so I am willing to pay $Y per device as insurance to mitigate that risk.

In the Positive Image Management and Controls scenario we can measure TRUE OpEx and CapEx impact.

When we have a numerator (the impact/savings), we can divide it by the denominator (the number of devices and endpoints served) and begin to understand the value delivered.

Example:

If customer savings = $1mm / 12 mos,

and the number of device units served = 10,000,

then $1×10^6 / 1×10^4 = $1×10^2 (or $100 per device per year).
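The same arithmetic, as a trivial sketch:

    def value_per_endpoint(annual_savings_usd, endpoints):
        """Annual benefit divided by devices served = value per device per year."""
        return annual_savings_usd / endpoints

    # $1,000,000 in savings across 10,000 devices comes to $100 per device per year.
    print(value_per_endpoint(1000000, 10000))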

Not only is this value/ROI analysis useful for the supplier (in setting a price for the value delivered), it is mandatory for our enterprise customers in this challenging spending environment, and when justifying their choice of vendor and solution.

The bottom line is:

Proactive systems management can and does create specific and measurable operating efficiencies (as well as offsets of capital spending through better IT systems efficiency and agility).

In our opinion vendors should live by the following guideline:

If we can’t tangibly save or make our customers money through their utilization of our solution in their IT environment, then we shouldn’t be wasting their time.


Whitelist Emerges from the Shadows: Reinforcing the Three-Tier Security and Systems Management Model

August 6, 2008

As I have discussed before, it is interesting to watch the steady evolution of the IT security, systems management and compliance solution set. If you zoom back to a 5-to-10-year view and compare how we were thinking about things then with how today’s thinking has evolved, some interesting macro patterns emerge.

First, there is increasing harmony in the thinking that “we can’t keep up with the blacklist velocity,” and that it is easier to flip the model to a “whitelist” and keep track of that. Even the industry giants are now touting whitelists as a safe bet for the future of IT security. See the video from RSA this year in which John Thompson, the CEO of Symantec, talks about future trends.

While John’s comments are useful (and we believe correct), we need to be careful here. I have indicated before in these blog pages, and in my public-facing keynotes and presentations, that whitelists are NOT substitutes for blacklists. In fact, the two methods are complementary for now.

As we get better (as an industry) at measuring and asserting Positive Image Management (making sure the good and desired code set remains in a prescribed state over the device usage lifecycle), our full dependence on Negative Detection methods (AV/IDS/IPS) should diminish rapidly.

Accomplishing this, however, could hinge on our ability to radically shift our paradigm as it relates to security and systems management. Let me expound.

In the traditional AV model, we fundamentally rely on a two-tier model where:

  • Detection and blocking are handled by the client or agent resident on the IT device (server, workstation or desktop), or in a gateway device that inspects content destined for an endpoint within the domain.
  • Detection is enabled by a “blacklist”, usually maintained by the AV vendor, and this content is made available to the AV scanning tools on a push or pull basis.

Basically the industry approach has been incremental, reactive, and has leaned heavily on this two-tier model.

As we shift to a more proactive and prescriptive Positive Image Management method, it is imperative that we “remap” our traditional two-tier view. We see our customers moving more toward this model:

  • Positive Images (the desired “software bill of materials”) for a given IT device can be defined in advance through the software build, QA, and User Acceptance Test (UAT) release cycle. In parallel with building the deployable image, a software reference can easily be created. Additionally, for legacy environments, some or all of the device image can be “learned” from the device as it is currently deployed.
  • The resulting software reference can then be used in conjunction with endpoint instrumentation (3rd Party or Platform Intrinsic) and a comparison can be made between the image reference (all or some) and the target endpoint.
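The two steps above reduce to: record the measurements of the desired image, then compare the deployed device against that record. Here is a minimal sketch of that flow, with hypothetical paths; it is not SignaCert’s actual implementation:

    import hashlib
    import os

    def measure(path):
        """SHA-256 'measurement' of a single file."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def build_reference(image_root):
        """Walk a built (or 'learned') image and record its software reference:
        the bill of materials for that image, keyed by relative path."""
        reference = {}
        for dirpath, _, filenames in os.walk(image_root):
            for name in filenames:
                full = os.path.join(dirpath, name)
                reference[os.path.relpath(full, image_root)] = measure(full)
        return reference

    def compare(reference, endpoint_root):
        """Compare the reference against a target endpoint and report drift."""
        drift = {"modified": [], "missing": []}
        for rel, expected in reference.items():
            target = os.path.join(endpoint_root, rel)
            if not os.path.exists(target):
                drift["missing"].append(rel)
            elif measure(target) != expected:
                drift["modified"].append(rel)
        return drift

    # Usage (hypothetical paths):
    #   reference = build_reference("/build/golden-image")
    #   drift = compare(reference, "/mnt/deployed-endpoint")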

There are many benefits for the enterprise in moving to this model, but in simple terms this is a “control process,” and such processes are standard in almost every other form of repetitive automation and process management.

As we move to these methods, we need to map more to a three-tier model. Where we may already have a three-tier model (Enterprise Management, CMDB, and/or Data Center Automation), we need to add/supplement the middle tier with Positive Image Management Methods.

In our opinion this will create a hierarchy that looks more like this (a rough sketch of how the tiers might interact follows the list):

  • High-quality/high-provenance content repositories (aka “whitelists”) are used to supply common software “measurement” references for Operating Systems, Application Packages, and other data elements that are commonly included in software reference images. This is likely a global service that can be accessed on a “pull” basis by many customers.
  • A middle tier, typically within the customer’s IT domain, serves as the image management layer to support a one-to-many image reference model (enterprise IT typically has many IT devices that inherit some or all of a common image reference). This tier also conveniently allows customers to add their OWN device- and domain-specific software measurements to complete the image references for their specific environments.
  • Endpoint instrumentation is added (or is included in the platform) that can perform localized measurements for the binary, configuration and platform-specific software elements, and request validation and/or perform application locking/blocking based on “known and trusted” software elements and sets.
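Purely as an illustration of how the three tiers might interact (the class and method names below are mine, not any vendor’s API), the validation flow is roughly: the endpoint measures locally, asks the middle tier, and the middle tier answers from its own image references plus the global whitelist service:

    class GlobalWhitelistService:
        """Tier 1 (illustrative): global repository of high-provenance measurements."""
        def __init__(self, known_measurements):
            self.known = set(known_measurements)

        def is_known(self, measurement):
            return measurement in self.known

    class ImageManagementTier:
        """Tier 2 (illustrative): the customer's middle tier. Holds the
        enterprise's own image references and domain-specific measurements,
        and falls back to the global service for common OS and application content."""
        def __init__(self, global_service, local_measurements):
            self.global_service = global_service
            self.local = set(local_measurements)

        def validate(self, measurement):
            return measurement in self.local or self.global_service.is_known(measurement)

    class EndpointAgent:
        """Tier 3 (illustrative): endpoint instrumentation that measures locally
        and asks the middle tier whether execution should be allowed."""
        def __init__(self, middle_tier, measure_fn):
            self.middle_tier = middle_tier
            self.measure = measure_fn

        def allow_execution(self, path):
            return self.middle_tier.validate(self.measure(path))

Note that in this sketch all of the interesting state (the per-image and per-domain references) lives in the middle tier, which is exactly the point made below.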

In this model it is becoming increasingly clear that the enabling, high-value components are in the MIDDLE-TIER. Sure, you need a great set of “whitelist services” above the middle-tier. And yes, you need capable endpoint instrumentation to measure and apply desired detection, blocking/locking and remediation policies. We believe that the key value contribution and long-term differentiation layer of the three-tier model is delivered by the high-resolution image management capabilities present in the middle-tier.

Endpoint instrumentation is largely a commodity and is being subsumed into the platform itself anyway, so that is not a great spot to place your bets (as an IT solutions provider). You can see this beginning to emerge in the OS and platforms now, with technologies like Validated Execution from Sun (see http://www.opensolaris.org/os/project/valex/) and Trusted Execution (TXT) from Intel (see http://www.intel.com/technology/security/).

And, over time, even the “whitelist” services themselves may begin to commoditize.

So as you begin to grasp this important shift to “whitelisting” that many people are beginning to talk about, don’t be fooled.

Our primary objective must be to provide our customers with an effective IT controls method to enhance their existing “managed IT device environment” through the utilization of Positive Image Management, Control and Remediation Methods. Positive IT control methods are enabled and enforced by high-quality whitelist measurements.

Wyatt Starnes


Apple and Transitional IT (i.e., better user experience…)

October 24, 2007

Have you been following Apple’s technical roadmap these days? I know most of us track the new whiz-bang features and amazing marketing that comes out of that company.

You know, I was counseled by some Apple folks recently and told, “If we get you an audience with Steve Jobs, don’t say a word about Apple being in the ‘IT business’”…and further that “Apple is a consumer products and content company, and the technology is all about helping to deliver the consumer experience…” (We will get kicked out if I start talking about “IT”…)

Hmmmmm. Makes good sense. IT, technology and the services built upon them are the means to the end…not the end. The end is sexy and easy to use. The technology is largely transparent to the user experience.

Sort of like a well made German sports car…

As the subject of this blog is really observations of IT in transition, and because I am a closet geek anyway, I must dive a level down and make some technology observations.

I’ll start with a note from Steve Jobs last week. See a copy here: http://blog.zingwat.com/?p=164

Note his comment on code security and “integrity”:

“It will take until February to release an SDK because we’re trying to do two diametrically opposed things at once—provide an advanced and open platform to developers while at the same time protect iPhone users from viruses, malware, privacy attacks, etc. This is no easy task. Some claim that viruses and malware are not a problem on mobile phones—this is simply not true. There have been serious viruses on other mobile phones already, including some that silently spread from phone to phone over the cell network. As our phones become more powerful, these malicious programs will become more dangerous. And since the iPhone is the most advanced phone ever, it will be a highly visible target.”

Also last week, Apple made the highly touted announcement of “Leopard” and its mind-boggling list of new capabilities and features. See: http://www.apple.com/macosx/features/300.html

Well, one very capable analyst, Carl Howe, wrote an interesting article zeroing in on some technological similarities between these two offerings. See:

http://seekingalpha.com/article/50315-apple-s-impressive-platform-security-for-iphone-leopard-development

The two items that Carl picked out and correlated (iPhone to Leopard) are really interesting and relevant to these blog pages. From the article:

Tagging Downloaded Applications
Protect yourself from potential threats. Any application downloaded to your Mac is tagged. Before it runs for the first time, the system asks for your consent — telling you when it was downloaded, what application was used to download it, and, if applicable, what URL it came from.

Signed Applications
Feel safe with your applications. A digital signature on an application verifies its identity and ensures its integrity. All applications shipped with Leopard are signed by Apple, and third-party software developers can also sign their applications.

And Carl goes on to say:

“Those features jumped out at me because the very first Forrester report I wrote in 1996 was about desktop security and the threat of active content. In that report, I wrote that if you want a truly secure platform, you need both app signing and run-time validation to guarantee that you only run trusted code. I further noted that Windows would never become a truly secure platform without these features. The fact that these features are built into Leopard says that even as Macs gain in popularity, Apple has no intent of letting its OS or its iPhone become an easy security target. And these two features are worth the entire cost of upgrade and more to anyone worried about desktop and server security.”

Wow, did you note the “positive platform attestation” comments in his observations? He is saying (I believe) that the device itself is responsible for maintaining the boundaries of what code should be allowed to run on the platform. And that we can “secure a platform” by making sure the trusted code stays trusted, and by dealing with mobile code asserted to the platform through some sense of “provenance”: where did the code come from, who (which app) requested it, and is it safe to run?
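As a thought experiment only (this is a conceptual model of the behavior described above, not Apple’s implementation), the tag-then-verify-then-ask flow reduces to something like this:

    import hashlib

    class DownloadGatekeeper:
        """Illustrative sketch of 'tag on download, verify before first run'.
        Downloaded code carries provenance metadata, and the platform checks
        both the tag and the code's integrity before letting it execute."""

        def __init__(self):
            self.tags = {}         # path -> provenance metadata
            self.approved = set()  # paths the user has already consented to run

        def tag(self, path, downloaded_by, source_url):
            self.tags[path] = {
                "downloaded_by": downloaded_by,
                "source_url": source_url,
                "sha256_at_download": self._digest(path),
            }

        def may_run(self, path, ask_user):
            tag = self.tags.get(path)
            if tag is None:
                return True                                   # not downloaded content
            if self._digest(path) != tag["sha256_at_download"]:
                return False                                  # integrity check failed
            if path in self.approved:
                return True                                   # consent already given
            if ask_user(tag):                                 # show provenance, ask once
                self.approved.add(path)
                return True
            return False

        @staticmethod
        def _digest(path):
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            return digest.hexdigest()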

He finishes the article with “Nice work Apple…”

I concur: great stuff. Not necessarily sexy in its own right. But by building these features into both the architecture AND the third-party infrastructure, intrinsic positive platform protection can be more effectively assured.

With this, the stuff just works better. It is more reliable. It is safer…and it all leads to a better user experience and (likely) lower support costs for Apple.

Happier customer, more security transparency based on positive code measurement (signing) and attestation (verification).

Wow. Smart.

Nice work Apple.

And nice work Carl for helping to sort this out.

(Have to run, heading to the Apple store)

Wyatt.