The Coming IT Endpoint Wars

September 22, 2008

Ok, let’s face it. Our customers don’t want “yet another” endpoint agent to perform some function that could and should have been done “out of the box” by the hardware or software platform.

(Our definition of an Endpoint: Any computing device that runs code, including servers, workstations, desktops, laptops, and even PDAs and other smart handheld devices)

The reasons are clear:

Any new software included in a production build of the enterprise software stack adds complexity, increases risk, and adds cost.

Often the value of the agent/client (tacked on to the dozen or more agents already included in the stack) is offset by the cost of testing to make sure that the new agent/client doesn’t negatively impact the (often delicate) stability of the stack itself.

These risks are particularly acute with many IT security and systems management agents, as these agents often seek to “hook the kernel” in order to detect and/or block events and actions that are not easy to see without deep connections into the platform and/or operating system.

We are asking customers to stack risk on top of risk at a time when the value of the traditional agents/clients is increasingly marginal to the functionality, security, or uptime of the IT device.

So what happens next, you ask? Well, get ready for the endpoint wars of 2009.

A few things are increasingly clear:

Nothing “new” gets into the low-stack of our customers’ production software gold images unless it adds significant value.

This likely means that “yet another virus detection tool” doesn’t stand much of a chance for inclusion. Methods that add proactive, positive control and/or flexible application control will likely make the cut for certain customers against important emerging use cases. In cases where the positive image management and whitelist feeds are delivered through an existing third-party agent framework, those agents MUST be repurposed to support the new use cases for positive image control and whitelisting.

As mentioned before, whitelist and blacklist methods are not, and never will be, “pin for pin” replacements for each other. While certain aspects are complementary, high-resolution image management based on whitelists provides significant new leverage for customers in terms of IT reliability, stability, and compliance, leading to greatly enhanced system availability and reduced TCO.

We see these new “whitelist methods” as holistic solutions involving software image management and control, likely fed by very high-quality “whitelist” software measurements (hashes). The methods are intended to enforce some simple but important (and largely new) use-case policies:

  • Make sure that we are deploying trusted code and code sets on all of our devices
  • Enable us to PROVE that the “good code stays good” over the usage lifecycle
  • Enable fine-grain policies to enable only trusted applications to load and run
  • Enable us to quickly detect undesired/unknown code on the device and/or in the environment
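To make the policy concrete, here is a minimal sketch of the measurement-and-audit idea behind these whitelist methods: hash each binary, compare against a trusted set of measurements, and flag anything unknown. The function names and the in-memory whitelist are hypothetical illustrations, not any vendor’s actual API; a real deployment would source its hashes from a high-quality whitelist feed and enforce at load time, not just audit after the fact.

```python
import hashlib
from pathlib import Path

# Hypothetical in-memory whitelist of approved SHA-256 measurements.
# In practice these hashes would come from a trusted whitelist feed.
WHITELIST = set()

def measure(path):
    """Return the SHA-256 hash of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def approve(path):
    """Record a known-good binary's measurement in the whitelist."""
    WHITELIST.add(measure(path))

def audit(paths):
    """Partition files into trusted and unknown based on their hashes."""
    trusted, unknown = [], []
    for p in paths:
        (trusted if measure(p) in WHITELIST else unknown).append(p)
    return trusted, unknown
```

Proving that the “good code stays good” then reduces to re-running the audit over the lifecycle and confirming the unknown set stays empty; a fine-grain load policy would simply refuse to run anything outside the trusted set.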

We also see more and more of the agent functionality being subsumed into the platform, both into the hardware and the software. This would indicate that the low-stack suppliers must take a more proactive role to protect and enhance their brands.

Clearly these are the chip guys, the OEM and retail platform providers, as well as the operating system (OS) and business application software providers.

Now here’s at least one of the wild cards: What happens as we move more to virtualized platforms? These are low-stack players also. Shouldn’t hypervisors and virtual machine (VM) software management tools play a role in how the stack is built, secured and maintained?

Of course they should. Now is the time to make sure we don’t make the same mistakes again.

So we are heading for a real battle. There is not enough room in the platforms for all of the vendors that want THEIR agent to be top dog. There are way too many vendors vying for way too little real estate. Also, the key real estate is really controlled by the low-stack vendors first, and by our customers second.

So place your bets quickly, people. The endpoint wars of 2009 have already begun. In fact, they are in full bloom. The vendor collisions will clearly occur in 2009, and then it will likely take another year or so for the dust to settle.

At this stage we can’t stop the impact and the resulting collateral damage – the market forces are already in play.

If we (as vendors) haven’t moved to a safe and defensible position to add customer value by now, it is likely too late.