Microsoft and Veridian

Over the last few weeks, Microsoft (MSFT) has announced more details of its long-awaited virtualization strategy (drum roll, please), an expanded partnership with Citrix/Xen (with an emphasis on servers), and the acquisition of Calista.

When Citrix originally announced the acquisition of XenSource a few months ago, it seemed apparent that MSFT had to be “in the know,” as Citrix and MSFT have been in a love/hate relationship within the enterprise market for nearly 18 years.

The prior relationship had much to do with terminal services, “back-racking,” and streaming of applications to the endpoint.  In many ways these uses are a precursor to virtualization – an enterprise “Petri dish” for seeing how customers find value in alternative approaches to platform usage and software delivery.  Now the next shoe is dropping.

With the success of VMware, both in terms of early enterprise acceptance and deployment and the IPO (which gave VMW a huge war chest), Microsoft has been forced to move.  Some would see the move as “late,” but the virtualization market is really very nascent.  The bulk of VMW revenue is made up of deals of $100k or less (likely an average selling price of <=$70k right now), so the bulk of VMW’s $1B+ in revenue is still for pilot and development usage.  We are at a very early stage.

But the shift is happening quickly, and the full transition is inevitable in short order (less than five years for leading sectors to cross over to more than 50% virtualized infrastructure).

While it is clear that the Virtual Machine Monitor (VMM) and hypervisor (HV) are ultimately commodity delivery mechanisms for the software stack that runs inside the Virtual Machine (VM), control of the VMM and HV is important to the big guys until the other layers of value-add opportunity develop and evolve.

The longer-term question for the “little guys” (everyone with less than a $100B market cap) is: where are the defensible 3rd-party value-add areas as the paradigm shift fully reveals itself?  What “goes away” as the one-to-one (hardware-to-OS) monolithic platform yields to the one-to-many virtual platform?

What happens to traditional IT security in this brave new world?  Where can we hang our respective 3rd-party hats as the elephants trample the old ground in search of new and fertile areas?

It is in these questions that the “positive” security and systems management model really begins to stand out.  Knowing that VM instantiations are ASSEMBLED FROM trusted code, by validating them against a platform- and vendor-agnostic, high-quality “white list” resource, becomes critical.
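To make that concrete, here is a minimal sketch (in Python, with a hypothetical image path and a hypothetical whitelist source) of what validating an assembled VM image against a “white list” of known-good measurements might look like: hash every file in the image and flag anything that is not on the list.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file and return its SHA-256 digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def unknown_files(image_root: Path, whitelist: set) -> list:
    """Return every file under image_root whose digest is NOT in the whitelist."""
    return [
        p for p in image_root.rglob("*")
        if p.is_file() and sha256_of(p) not in whitelist
    ]

# Hypothetical usage -- the whitelist would be fed from a vendor-agnostic
# reference database of known-good software measurements:
#   suspect = unknown_files(Path("/mnt/vm-image"), load_reference_digests())
#   if suspect: quarantine the image before it is ever instantiated
```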

Knowing WHAT CODE IS LOADED, WHERE, AND FOR HOW LONG also becomes an enabling capability, regardless of which VMM and HV is used to create the VM software stacks.

Also, compliance and software licensing become even more important, but they can be handled readily with trusted code and stack measure/validate methods.  Being able to “attest” the stack against an external “white list” reference, built from a rich supply of high-quality software reference measurements, becomes a highly defensible and long-term way of adding value to the new virtual compute paradigm.
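As a rough illustration (not any particular vendor’s API), the sketch below extends the same idea to runtime attestation: record what code is loaded, where, and when, then check each measurement against an external reference and keep the result as audit evidence for compliance and licensing.  The `reference` lookup here is a stand-in for a trusted third-party measurement database.

```python
import hashlib
import platform
import time
from pathlib import Path

def measure(path: str) -> dict:
    """One measurement record: what code, on which host, observed when."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "sha256": digest,
        "path": path,
        "host": platform.node(),
        "observed_at": int(time.time()),
    }

def attest(measurements: list, reference: dict) -> dict:
    """Split measurements into trusted/unknown using an external "white list".

    `reference` maps digest -> metadata (vendor, product, license terms); in
    practice this would be a query against a third-party reference service.
    """
    report = {"trusted": [], "unknown": []}
    for m in measurements:
        meta = reference.get(m["sha256"])
        bucket = "trusted" if meta else "unknown"
        report[bucket].append({**m, "meta": meta})
    return report  # serialize and retain as compliance / licensing evidence
```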

Interestingly, for those who get the jump on this, it represents a huge, content-based, recurring-revenue model that the first-party players will have a difficult time displacing (because they do not have ready access to software measurements from other vendors, and because of the “trusted third party” implications).

Will we really trust Microsoft to validate Microsoft?

So let’s just view this as another card in an unfolding game of mammoth proportions and implications.

Stay tuned.  This is going to be a lot of fun to watch, and to participate in.