Symantec’s Blackhat Survey

August 22, 2008

I mentioned there were two notable items (from our very objective perspective) at Blackhat/DefCon.

I wrote on one already. Here is the other, based on this post from the Symantec website:

https://forums.symantec.com/syment/blog/article?blog.id=emerging&message.id=109#M109

On the opening day of BlackHat 2008, Symantec commissioned an anonymous survey among the attendees to learn about contemporary views on security related topics, such as vulnerability research, future threats and trends, and what types of challenges we as security professionals will collectively face in the coming year.

Whitelisting:

Almost a third (34%) of respondents said that they implemented some form of whitelisting within their organization (39% said no, and 26% actually didn’t know!). Note that whitelisting may not necessarily apply to all systems, but could be restricted to specific machines. For example, most respondents look to whitelisting to protect more “static” high-availability machines like servers (40%), gateways (31%), and desktops (32%) rather than more dynamic environments like laptops (26%) and wireless devices (29%). Symantec has been stressing for quite some time that we are on the cusp of a critical inflection point where the number of unique malicious code instances is surpassing the number of legitimate code instances. This trend necessitates considering a new approach to providing security; namely, rather than blocking out the bad, we should consider just allowing in the good. Naturally there are a host of challenges in this area, but given our tremendous reach and deep insight, we believe that there are some highly promising approaches to facilitating whitelisting – and this area is one that I’m personally both very excited about and also actively involved with.

As indicated, Symantec (SYMC) ran an anonymous survey that asked about whitelisting. While that in itself is not a surprise, the results were interesting.

Besides what we would consider a pretty high “yes” rate to the question “Have you deployed some form of whitelisting?” (34% said yes), the underlying comments reveal a fairly sophisticated view of the initial and important use cases for whitelisting.

One assumes that the people surveyed are most likely enterprise (commercial and government) users, but it is very interesting to note the fairly even distribution of target platforms for whitelisting (40% server, 31% gateway, 32% desktop), with the inference that these are the more “static” devices.

This is interesting (and matches our field experience, by the way) in that we would guess the delineation is being made based on what enterprise users deem “managed devices”.

So what is a “managed device”? In our view it is an IT element with some form of best-practice controls in play all the way from software stack development, QA, and user acceptance testing (UAT) through deployment. Generally a managed device would (ideally) have one “point of management” for updates and maintenance. Laptops and other devices in many organizations lack these important software release/management best practices.

One might also assume that the whitelist methods being applied to these managed devices have something to do with IT process enforcement and compliance, likely driven by ITIL, PCI, or some other best practice and/or regulation.

Given the relatively high “yes” rate (whitelist-enabled methods are still somewhat nascent), one might further guess that the technologies in use are a combination of home-grown tools and first-generation, well-known open-source and commercial methods like Tripwire (good for you, Tripwire!).

It seems to us that as understanding and acceptance of these methods increase, the use cases, and the appreciation for the value of IT controls based on image management and enabled by whitelists, will only improve. After all, the remaining 65% of respondents (the “no”s and the “don’t know”s) represent potential new “yes”s.

This represents a great opportunity for suppliers to explore, better understand, and deliver next-generation methods for enhanced Positive IT Controls! (At least, that is what WE are doing.)

In our view this survey reveals some very important data points. We can blog on the virtues of new “whitelist” methods until the cows come home, but the only important questions at the end of the day are:

“Are customers ready for these new methods?” (the answer appears to be YES);

“What are they willing to pay?” (TBD, based on use case and value delivered); and

“Who will be the de facto standard vendor(s)?” (great question!)

Anyway, thanks for the survey, Symantec, and welcome (again) to the discussion!

Wyatt.


Speaking of Endpoint Instrumentation…

August 20, 2008

Some of you may have attended the BlackHat/DefCon events in Las Vegas earlier this month. Of the notable events and mentions, two in particular struck me as worth sharing.

One of these events is reported in the article linked below:
http://dmnnewswire.digitalmedianet.com/articles/viewarticle.jsp?id=483836
with the headline:

CoreTrace’s Application Whitelisting Solution Stops 100 Percent of Computer Viruses During DEFCON 16 “Race-to-Zero” Competition

The key paragraph is:

“After the blacklist-focused contest was completed, we ran the samples through CoreTrace’s whitelisting solution, BOUNCER,” said “Race-to-Zero” organizer, Simon Howard. “By not allowing any of the samples to execute on the host computer, BOUNCER stopped 100 percent of the viruses. I strongly recommend that companies add application whitelisting solutions like BOUNCER to their arsenal.”

Congrats to our friends at CoreTrace! It’s no surprise to us that “positive” code identification and application “allowance” are more effective than bad-code detection and blocking alone.

Both blacklist and whitelist methods have a common thread:

  • With the blacklist method, if you can’t identify what’s trying to run, you can’t block it.
  • With the whitelist method, if you can identify what’s trying to execute (along with the rest of the “allowed” code), then you can enable it to run.

This means that the measurement method is a means to an end, the desired end being to create and enforce effective policies that are predictable and reliable. BOTH blacklist and whitelist are measurement methods. The difference (and the reason CoreTrace prevailed in the Race-to-Zero) is that their method is FINITE: they ONLY allowed what was known and trusted to execute. The other guys face an infinite detection problem, in that there is an effectively unbounded number of “bad things” that can come at the endpoint, and it has become increasingly difficult to keep up from a blacklist perspective (in identification method, timing, or quantity).
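
To make the finite-versus-infinite point concrete, here is a minimal sketch of default-deny, hash-based allow-listing in Python. It is purely illustrative: the digest set, file handling, and function names are assumptions of ours, not CoreTrace’s BOUNCER implementation.

    import hashlib

    # Hypothetical allow-list of SHA-256 digests for the known and trusted software set.
    # In a real deployment this would come from a curated whitelist, not be hard-coded.
    TRUSTED_HASHES = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # placeholder digest
    }

    def sha256_of(path):
        """Return the SHA-256 digest of a file's contents."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def allow_execution(path):
        """Whitelist decision: allow only if the binary's digest is in the finite trusted set.
        Anything unknown is denied by default, unlike a blacklist, which must enumerate an
        effectively unbounded set of bad binaries in order to block them."""
        return sha256_of(path) in TRUSTED_HASHES

The decision set is closed: new malware samples change nothing on the allow side, which is exactly why the whitelist approach held up in the Race-to-Zero scenario described above.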

The trick, however, for all makers of IT “endpoint instrumentation” is maintaining the method above the endpoint. As I have mentioned in previous posts, the full value of whitelists cannot be realized without effective image management methods AND the whitelist content itself (its organization, quality, and supply).

Think middle-tier image management and the source quality of whitelist measurements. We must create capable image management methods that scale to the enterprise, and supplement those methods with quality measurements (whitelists), so that we can scale with confidence.

After all, if our customers can’t scale these methods to thousands of endpoints, and manage and integrate them effectively, the methods will not be practical over the long term.

So we applaud the efforts of CoreTrace, and of all of the endpoint folks (3rd party, ISV and Platform), to enable more of these Positive IT controls capabilities “out of the box”.

We stand ready to serve any/all of them with the most comprehensive set of image management methods and high-quality whitelist content available today.

P.S. I indicated that there were two notable developments out of Blackhat. I will blog on the second one in the next few days.


Value, Pricing and the Positive IT Control Model

August 11, 2008

While on the subject (of Positive IT Controls methods and architecture), I’d like to share some observations on value/pricing metrics.

We must take the “customer perspective” in our approach to pricing new value-add IT security and image/system management methods for servers and endpoints.

For our Enterprise Customers this means:

Operating Expense Considerations:

  • To what extent does our method improve the utilization economics of my customer’s IT environment?

      o Can I demonstrate improvement in Mean Time To Failure (MTTF)?
      o Can I demonstrate reduction in Mean Time To Repair (MTTR)?
      o If the answer to both is yes, it translates directly into extra ‘nines’ of availability (see the brief availability calculation after this list).

  • To what extent does our method reduce the manpower expenditure for the capacity delivered by my customer’s IT process? (In the financial services sector, for example, roughly $250k per year can be saved by repurposing one IT person away from maintaining existing infrastructure, through improved IT availability/management efficiency.)
  • To what extent does the installation of our product/method disrupt the customer’s existing IT process flow?
  • How fast can we deploy and demonstrate value?
  • Can we demonstrate value while “running in parallel” with the customer’s current operations?
  • How much of my customer’s IT manpower will be required to make our solution effective for them?

      o On initial install
      o In daily operation
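
As a brief illustration of the MTTF/MTTR point above, the standard steady-state availability relationship (a generic formula, not specific to any vendor’s tooling) is availability = MTTF / (MTTF + MTTR). A minimal Python sketch with purely illustrative numbers:

    def availability(mttf_hours, mttr_hours):
        """Steady-state availability: the fraction of time a system is up."""
        return mttf_hours / (mttf_hours + mttr_hours)

    # Illustrative numbers only: cutting MTTR from ~8.8 hours to ~0.9 hours
    # moves a system from roughly three 'nines' to roughly four 'nines'.
    print(availability(8760, 8.76))   # ~0.9990
    print(availability(8760, 0.876))  # ~0.9999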

Capital Expense Considerations:

  • Does my customer have to spend capital dollars to pay for the solution?
  • Can they offset other capital costs by deploying our solution?
  • Can they increase the efficiency and utilization of their existing IT infrastructure by deferring and/or offsetting new IT capacity spending?

Translating these metrics into the available budget for a vendor solution requires calculating the estimated impact of the solution for our customer (in dollars per year). Additionally, by dividing the dollars-per-year benefit of the solution by the number of endpoints, we can estimate the effective value per endpoint offered by the solution.

A big advantage of Positive IT Control Methods is that most of the considerations discussed above can be tangibly demonstrated.

In contrast, Negative Model security ROI is typically based on actuarial analysis and is often viewed as an “insurance” sale. In other words: if I DON’T deploy the solution, my risk if an incident occurs might be $X, so I am willing to pay $Y per device as insurance to mitigate that risk.
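
For concreteness, that “insurance” framing is usually expressed with an annualized-loss-expectancy style calculation. The following is a hedged sketch only; the function name, parameters, and figures are illustrative and not drawn from the post:

    def insurance_budget_per_device(incident_probability, incident_cost_usd,
                                    risk_reduction, devices):
        """Negative-model 'insurance' math: the expected annual loss, scaled by the share
        of that risk the solution is believed to mitigate, spread across the device fleet."""
        expected_annual_loss = incident_probability * incident_cost_usd  # the "$X" risk
        return expected_annual_loss * risk_reduction / devices           # the "$Y" per device

    # Illustrative: a 10% annual chance of a $2M incident, 80% mitigation, 10,000 devices
    print(insurance_budget_per_device(0.10, 2_000_000, 0.80, 10_000))  # 16.0 -> $16 per device per year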

In the Positive Image Management and Controls scenario we can measure TRUE OpEx and CapEx impact.

When we have a numerator (the impact/savings), we can divide it by the denominator (the number of devices and endpoints served) and begin to understand the value delivered.

Example:

If customer savings = $1MM per 12 months;

and the number of device units served = 10,000;

then $1×10^6 / 1×10^4 = $1×10^2 (or $100 per device per year).
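
The same arithmetic as a tiny Python helper, in case you want to drop in your own numbers (the function name is ours, purely for illustration):

    def value_per_endpoint(annual_savings_usd, devices_served):
        """Estimated annual value delivered per device: total impact divided by devices served."""
        return annual_savings_usd / devices_served

    print(value_per_endpoint(1_000_000, 10_000))  # 100.0 -> $100 per device per year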

Not only is this value/ROI analysis useful for the supplier (in setting a price for the value delivered), it is mandatory for our enterprise customers in this challenging spending environment, and when justifying their choice of vendor and solution.

The bottom line is:

Proactive systems management can and does create specific and measurable operating efficiencies (as well as offsets of capital spending through better IT systems efficiency and agility).

In our opinion vendors should live by the following guideline:

If we can’t tangibly save or make our customers money through their utilization of our solution in their IT environment, then we shouldn’t be wasting their time.


Whitelist Emerges from the Shadows: Re-enforcing the Three-Tier Security and Systems Management Model

August 6, 2008

As I have discussed before, it is interesting to watch the steady evolution of the IT security, systems management, and compliance solution set. If you zoom back to a 5-to-10-year view and compare how we were thinking about things then with how today’s thinking has evolved, some interesting macro patterns emerge.

First, there is increasing harmony in the thinking that “we can’t keep up with the blacklist velocity” and that it is easier to flip the model to “whitelist” and keep track of that. Even the industry giants are now touting whitelists as a safe bet for the future of IT security. See the video from RSA this year in which John Thompson, the CEO of Symantec, talks about future trends.

While John’s comments are useful (and, we believe, correct), we need to be careful here. I have indicated before in these blog pages, and in my public-facing keynotes and presentations, that whitelists are NOT substitutes for blacklists. In fact, for now, the methods are complementary.

As we get better (as an industry) at measuring and asserting Positive Image Management (making sure the good and desired code set remains in a prescribed state over the device usage lifecycle), our dependence on Negative Detection methods (AV/IDS/IPS) should diminish rapidly.

Accomplishing this, however, could hinge on our ability to radically shift our paradigm as it relates to security and systems management. Let me expound.

In the traditional AV model, we fundamentally rely on a two-tier model where:

  • Detection and blocking are handled by the client or agent resident on the IT device (server, workstation, or desktop), or by a gateway device that inspects content destined for an endpoint within the domain.
  • Detection is enabled by a “blacklist”, usually maintained by the AV vendor, and this content is made available to the AV scanning tools on a push or pull basis.

Basically, the industry approach has been incremental and reactive, and has leaned heavily on this two-tier model.

As we shift to a more proactive and prescriptive Positive Image Management method, it is imperative that we “remap” our traditional two-tier view. We see our customers moving more to this view:

  • Positive Images (the desired “software bill of materials”) for a given IT device can be defined in advance through the software build, QA, and User Acceptance Test (UAT) release cycle. In parallel with building the deployable image, a software reference can easily be created. Additionally, for legacy environments, some or all of the device image can be “learned” from the device as it is currently deployed.
  • The resulting software reference can then be used in conjunction with endpoint instrumentation (3rd Party or Platform Intrinsic) and a comparison can be made between the image reference (all or some) and the target endpoint.

There are many advantages for the enterprise in moving to this model, but in simple terms this process is commonly called a “control process”, and it is found in almost every other form of repetitive automation and process management.
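
A minimal sketch of what such a control process can look like in code, under our own assumptions (file hashing as the measurement, a flat manifest as the reference; the function names are ours, not a product API):

    import hashlib
    import os

    def build_reference(image_root):
        """Walk a 'golden' build/QA/UAT image and record a SHA-256 digest per file.
        The resulting manifest acts as the positive image reference (software bill of materials)."""
        manifest = {}
        for dirpath, _dirs, files in os.walk(image_root):
            for name in files:
                path = os.path.join(dirpath, name)
                rel = os.path.relpath(path, image_root)
                with open(path, "rb") as f:
                    manifest[rel] = hashlib.sha256(f.read()).hexdigest()
        return manifest

    def compare(reference, measured):
        """Control-process comparison between the image reference and an endpoint's measurements."""
        return {
            "missing": sorted(set(reference) - set(measured)),
            "unexpected": sorted(set(measured) - set(reference)),
            "modified": sorted(p for p in reference.keys() & measured.keys()
                               if reference[p] != measured[p]),
        }

The “measured” side would come from the endpoint instrumentation described above; the reference side comes from the build/QA/UAT cycle or, for legacy devices, from a learned baseline.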

As we move to these methods, we need to map more to a three-tier model. Where we may already have a three-tier model (Enterprise Management, CMDB, and/or Data Center Automation), we need to add/supplement the middle tier with Positive Image Management Methods.

In our opinion this will create a hierarchy that looks more like this:

  • High-quality, high-provenance content repositories (aka “whitelists”) are used to supply common software “measurement” references for operating systems, application packages, and other data elements that are commonly included in software reference images. This is likely a global service that can be accessed on a “pull” basis by many customers.
  • A middle tier, which would typically sit within the customer’s IT domain, serves as the image management layer supporting a one-to-many image reference model (typically enterprise IT has many devices that inherit some or all of a common image reference). This tier also conveniently allows customers to add their OWN device- and domain-specific software measurements to complete the image references for their specific environments.
  • Endpoint instrumentation is added (or is included in the platform) that can perform localized measurements for the binary, configuration and platform-specific software elements, and request validation and/or perform application locking/blocking based on “known and trusted” software elements and sets.

In this model it is becoming increasingly clear that the enabling, high-value components are in the MIDDLE TIER. Sure, you need a great set of “whitelist services” above the middle tier. And yes, you need capable endpoint instrumentation to measure and apply the desired detection, blocking/locking, and remediation policies. But we believe that the key value contribution, and the long-term differentiation, of the three-tier model is delivered by the high-resolution image management capabilities in the middle tier.
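
To make the tiering concrete, here is a minimal sketch of how the middle tier might compose global whitelist content with customer-specific measurements and answer endpoint validation requests. Everything here (names, structure, placeholder digests) is an assumption for illustration, not a description of any particular product:

    # Tier 1: global, high-provenance whitelist content (pulled by many customers).
    GLOBAL_WHITELIST = {"os/kernel.bin": "aa11...", "lib/libssl.so": "bb22..."}  # placeholder digests

    # Tier 2: the customer's own device- and domain-specific measurements, by device class.
    CUSTOMER_MEASUREMENTS = {
        "server": {"apps/trading_engine": "cc33..."},
        "desktop": {"apps/inhouse_client.exe": "dd44..."},
    }

    def build_image_reference(device_class):
        """Middle tier: compose a one-to-many image reference for a class of devices
        from global whitelist content plus the customer's own measurements."""
        reference = dict(GLOBAL_WHITELIST)
        reference.update(CUSTOMER_MEASUREMENTS.get(device_class, {}))
        return reference

    def validate_endpoint(device_class, measured):
        """Tier 3 hands its local measurements to the middle tier, which returns the
        elements that are NOT known and trusted (an empty list means fully validated)."""
        reference = build_image_reference(device_class)
        return [path for path, digest in measured.items() if reference.get(path) != digest]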

Endpoint instrumentation is largely a commodity and is being subsumed into the platform itself anyway, so that is not a great spot to place your bets (as an IT solutions provider). You can begin to see this emerge in the OS and platforms now, with technologies like Validated Execution from Sun (see http://www.opensolaris.org/os/project/valex/) and Trusted Execution Technology (TXT) from Intel (see http://www.intel.com/technology/security/).

And, over time, even the “whitelist” services themselves may begin to commoditize.

So as you begin to grasp this important shift to “whitelisting” that many people are beginning to talk about, don’t be fooled.

Our primary objective must be to provide our customers with an effective IT controls method to enhance their existing “managed IT device environment” through the utilization of Positive Image Management, Control and Remediation Methods. Positive IT control methods are enabled and enforced by high-quality whitelist measurements.

Wyatt Starnes