Continuous Monitoring: It isn’t just about security

October 26, 2010

As part of Harris' Cyber Integrated Solutions' efforts to deliver a trusted hosting and cloud environment to our clients, we need to highlight the importance of continuous monitoring in delivering explicit trust. Explicit trust comprises not only security but also risk mitigation and reliability.

Security is a vital component, but we must shift to a broader definition of "cyber" and to the full breadth and value of explicit trust. We can set aside our long-standing, industry-reinforced message that it is all about security. We must get comfortable expanding our cyber definitions so that security is understood as only one dimension. In even the most efficient, secure and cost-effective IT operations, security issues represent only a small percentage of the total incidents that impact system availability. Mitigating risk and ensuring the reliability of the IT environment is predicated on continuous monitoring.

Risk Detection = Risk Mitigation

At the heart of understanding risk is understanding change. Any time a system changes from a known good state, risk is introduced. Detecting and understanding those changes is key to weighing the risk associated with a given change and taking appropriate action. Precision change detection is, therefore, the foundation of IT risk mitigation.

Continuous monitoring is the time dimension of comprehensive Trust Management. Continual checking to determine whether a data object or setting is “in scope” is crucial to closing the risk detection and mitigation gap.

The other major dimension is understanding the “as-built, supply-chain-anchored”, full multidimensional state of the IT device. That is, it’s not simply good enough to say that something changed. It is also important to be able to identify what changed with enough granularity, fidelity, and provenance that decision makers can determine the appropriate risk profile for the change, including policy actions.

Regardless of whether it is a physical or virtual server, a router or a workstation, if it runs code then the state can be captured and continuously monitored.

By continually observing multidimensional change with very high resolution methods, we enable early and proactive change detection, and that detection allows us to trigger important policies that impact the compliance, availability, readiness and security of all the devices within an IT environment.
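
To make the idea concrete, here is a minimal Python sketch (the attribute set and data structures are illustrative assumptions, not a description of any Harris or SignaCert product) of capturing a multidimensional baseline and reporting exactly which dimension of a device's state changed:

```python
import hashlib
import os
import stat

def snapshot(paths):
    """Capture a simple multidimensional state record for each file:
    content hash, size, permission bits, and modification time."""
    state = {}
    for path in paths:
        info = os.stat(path)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        state[path] = {
            "sha256": digest,
            "size": info.st_size,
            "mode": stat.filemode(info.st_mode),
            "mtime": info.st_mtime,
        }
    return state

def detect_changes(baseline, current):
    """Report not just *that* something changed, but *which* dimension changed,
    so a decision maker can weigh the risk of each specific change."""
    findings = []
    for path, known in baseline.items():
        observed = current.get(path)
        if observed is None:
            findings.append((path, "removed", None, None))
            continue
        for dim in ("sha256", "size", "mode", "mtime"):
            if observed[dim] != known[dim]:
                findings.append((path, dim, known[dim], observed[dim]))
    for path in current.keys() - baseline.keys():
        findings.append((path, "added", None, None))
    return findings
```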

Reliability and Availability

Have you ever sat waiting for the mechanics on a plane to change that broken part that is causing a light to go on in the cockpit? The detection performed by the aircraft sensor (in continuous monitor mode) is accompanied by a flashing light (indicator) and the policy is that the plane does not fly until the issue is resolved. While we sit impatiently, we are quietly thankful that they found and fixed that thing while we were on the ground.

And availability (those famous "nines"), closely related to reliability, is a direct function of detecting and remediating change. Note that the Availability (A) of a device or service process is defined as follows, where MTBF is the Mean Time Between Failure and MTTR is the Mean Time To Repair:

A = MTBF / (MTBF+MTTR)

With precise change detection within a continuous monitoring loop, and with specific policy actions in place, MTBF can be maximized. Simultaneously, MTTR can be compressed, because the change detection provides a precise indication of what changed so that it can be repaired quickly. By mitigating the risk caused by all changes (not just security risk) on all devices in the business process, we can deliver more nines for the entire business process, whatever it happens to be (flying an airplane, or delivering IT services).
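
As a quick worked illustration of the formula above, the short Python sketch below (with made-up MTBF and MTTR figures) shows how compressing MTTR through precise change detection translates directly into additional nines:

```python
import math

def availability(mtbf_hours, mttr_hours):
    """A = MTBF / (MTBF + MTTR)"""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def nines(a):
    """Express availability as a count of nines (e.g. 0.999 -> ~3)."""
    return -math.log10(1.0 - a)

# Illustrative figures only: the same failure rate, but repair time shrinks
# because precise change detection pinpoints exactly what to fix.
slow_repair = availability(mtbf_hours=2000, mttr_hours=4.0)
fast_repair = availability(mtbf_hours=2000, mttr_hours=0.25)

print(f"slow repair: {slow_repair:.6f} ({nines(slow_repair):.1f} nines)")
print(f"fast repair: {fast_repair:.6f} ({nines(fast_repair):.1f} nines)")
```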

Harris CIS must get comfortable with this "more than security" thinking. History is showing us (again) that the bulk of our challenge in delivering trust with our business service processes is the holistic, repeatable delivery of "high nines" uptime and availability across all components of business service delivery.

Changing the Game: Continuous Monitoring

As we move forward and extend explicit trust to our clients we must advocate the importance of continuous monitoring. Enhanced security, reliability and risk mitigation rely on continuous monitoring. Harris’ Cyber Integrated Solutions is poised to take a leadership position in our industry by continuously monitoring client environments with the most advanced trusted content and methods available.


Prevalence vs. Provenance: The McAfee Incident

April 27, 2010

April 21, 2010 was a bad day for our friends at McAfee.   Let’s call it Black Wednesday for the purposes of this blog.

First some background:

There has been a lot of rhetoric around “Reputation Services.”  Here is the lay person, real world idea behind Reputation Services:

If I have seen you before, and I think I recognize you, then I’ll add you to my “reputation list.”

This is the idea of “prevalence-based” whitelisting.  All of the big guys (Symantec, McAfee, Trend) have some form of this.  Not that these “voting-based” methods aren’t useful, they are, but they are “community-based” and therefore have some major limitations.  Given that reputation still falls under the category of “Assumed Trust,” rather than “Explicit Trust,” it still has a significant chance of errors (False Positive, False Negative).

On Black Wednesday McAfee learned an important lesson the hard way. My point here is NOT to pile on the effort to vilify McAfee. It could have happened, and might yet happen, with any other vendor that relies on Assumed Trust methods.

Here are some of the posts from the McAfee web site:

http://siblog.mcafee.com/ceo-perspectives/open-letter-to-mcafee-customers/

http://siblog.mcafee.com/support/mcafee-response-on-current-false-positive-issue/

http://siblog.mcafee.com/support/an-update-on-false-positive-remediation/

Let me break this down a bit more:

For those of you that commented in the McAfee Blog “What the heck is a false positive?” – let me provide a quick answer.

In blacklisting methods like AV, a False Positive is incorrectly reporting something that is "good" as "bad". Reciprocally, a False Negative is incorrectly reporting something that is "bad" as "good". On Black Wednesday McAfee's software incorrectly reported a critical Windows executable as bad, and subsequently quarantined the file.

Whitelist or blacklist aside for the moment, the issue is really all about signal-to-noise ratio. How do I strengthen the signal and make it unambiguous? I can either pump up the "detection" amplitude or reduce the "noise" – preferably both.

I wrote a three-part blog on this a few months ago for geeks like me to go deeper into this subject. Here are the references to that work:

http://signacert.wordpress.com/2009/05/29/why-software-provenance-matters/

http://signacert.wordpress.com/2009/06/06/why-software-provenance-matters-part-ii/

http://signacert.wordpress.com/2009/07/24/why-software-provenance-matters-part-iii-supply-chain-management/

So what is “Software Provenance” and why does it matter? Think about it this way. All software has DNA.  If we take a swab of that DNA upstream from the community, preferably when the software is being “born” at the actual manufacturing site, then we have a real edge, which is:

  • We have supply chain knowledge (I can tell whether code found in the wild is actually the code that was built and shipped by the named vendor).
  • We have more certainty that the code is “real” and not just assumed real as is often the case with graylist and reputation-based systems.

True whitelisting does this. It provides a mechanism to set and enforce EXPLICIT TRUST with individual software components, packages, and indeed entire business services from power-on through cursor move. All other methods are just extensions of the old assumed trust methods, and are subject to severe signal-to-noise issues.
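
As a rough illustration of the difference, here is a minimal Python sketch of a known-provenance check. The hash, vendor and product names are hypothetical placeholders, and a real whitelist service would of course be far richer:

```python
import hashlib

# A hypothetical whitelist entry captured at the software's "birth" (the
# vendor's build system), keyed by SHA-256 and carrying provenance metadata.
KNOWN_PROVENANCE_WHITELIST = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b": {
        "vendor": "ExampleSoft",            # placeholder vendor
        "product": "widget-service",        # placeholder product
        "version": "2.4.1",
        "source": "vendor build system",    # provenance, not community votes
    },
}

def attest(path):
    """Return the provenance record if the file matches a known-provenance
    measurement; anything else is merely assumed, not explicitly trusted."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return KNOWN_PROVENANCE_WHITELIST.get(digest)
```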

End-to-end explicit trust, based on high-resolution reference image management sourced with known-provenance whitelisting, is on its way. This transition will mark our graduation to real Security 2.0 methods.

We need Security 2.0 now.   Until we fully grasp this, I can almost guarantee we will see another Black Wednesday.

Wyatt.


Operation Aurora & Zeus – A wake up call?

February 20, 2010

How can Malicious Code Hide in Plain Sight in the year 2010?

I have been intentionally holding back on addressing the recent cyber breaches (Google China, etc.) in order to allow the situation to fully unfold. Also, I have been near the front lines on some of this with customers and partners, and therefore have been fairly busy.

The temptation for any blog author who has a product that can address some or all of the problems/symptoms is to write a self-serving blog to endorse THEIR products as the holy grail for these issues. I will resist that temptation here.

Rather, let me take the higher ground – again. We are in a world of hurt when, in this day and age where we clearly have the technologies to mitigate much of this risk, we are not applying and implementing the available technologies. One has to wonder why….

Here is the stark reality. Much like our mindset pre-September 11, 2001 – we are of the mistaken belief that we are in a symmetrical battle with a distinct perimeter/battle line, bad actor clearly identified, with some degree of residual respect for the sovereignty of nations. On September 11, 2001 we learned the hard way that it is NOT business as usual. The world has changed and there is no going back.

We now know that in the physical world, our adversaries are leveraging ASYMMETRICAL advantages. The perimeter is porous. The adversary is within and without. It is more than nation states that we must defend against. Fanatical individuals and small groups have shown that they are capable of exerting immense influence and damage. Thomas Friedman does a great job of illuminating these new threats in his book Longitudes and Attitudes: Exploring the World After September 11.

Our continuing SYMMETRICAL mindset with regard to cyber risk simply will not cut it. I believe we are repeating the mistakes of the physical threat environment with our cyber mindset, creating the same, or even greater, exposure to damage and loss than we faced on September 11, 2001.

Operation Aurora (thank you McAfee) and the emerging Zeus issue (thank you NetWitness) are a clear wakeup call. Are we going to press the snooze button and go back to sleep? Or will we say, "whew, dodged that one" and go back to business as usual?

I sincerely hope not.

Here are some of the lessons we need to learn (again):

  • Cyber Security and Assurance methods cannot assume that there is a defined perimeter that we can effectively defend.
  • Bad Actors have the advantage in so many areas. They can be anywhere, are not clearly identifiable, and have the luxury of time.
  • The stakes have never been higher. This is about corporate intellectual property and national interests (including physical and economic security). This is a war in which Cyber Security is just one critical component.
  • We cannot assume our adversaries are dumb, or that we are so smart that we can stay ahead of them with our “old” tools and technologies.
  • The problem is more than just an "outside in" issue. Some believe that if we just create taller walls or deeper moats we'll be OK. Wrong.

Consider this when assessing this current situation:

  • We are learning that Operation Aurora and the Google China issue were just the tip of the iceberg.
  • At least 2,500 companies and government agencies have been compromised.
  • At least 75,000 computers were (and many likely still are) armed with command-and-control botnet code.
  • Our adversaries have clearly penetrated our defenses and have been comfortably hanging out IN OUR CYBER HOUSES eating from our CYBER REFRIGERATORS for 18 months in some cases. And they are likely STILL in our house. And they made copies of our house keys so they know how to get BACK IN.
  • They have had the luxury of being able to spy on us from the inside out. To observe our documents AND our behavior.
  • There is NO WAY to determine what they took, looked at, or what they have learned while squatting in our cyber houses. We will NEVER know the full extent or breadth of our loss.

How did this happen? (the simple technical primer):

  • They exploited one or more software flaws to penetrate and gain control of the systems and domains (the penetration).
  • Once in, they could hang out largely undetected in order to look at things, take things, and to create control mechanisms and backdoors (with botnets and other mechanisms).

How did this happen? (the simple social primer):

  • We were naive and underestimated our adversaries.
  • We relied on old tools and had (have!) a symmetrical mindset.

Net-net: We had a false sense of cyber security.

What do we do now?

We must learn from this. We must be able to at least detect (if not stop) these zero day threats and develop/use more effective Advanced Persistent Threat (APT) detection methods. Without revealing protected projects, SignaCert is working in cooperation with other entities on APT methods utilizing our configuration image management and whitelist capabilities.

But here is the biggest issue in my opinion:

We must be able to *actively detect* when and what has been compromised in our systems more rapidly. The fact is that we largely “stumbled” into the discovery of Aurora and Zeus.

We MUST be able to continuously monitor whether the “good and deployed” software environment is STILL good. Precise modeling of the state of system components used to create the IT infrastructure is necessary so that ANY change can be detected. Think of this as a very sensitive “motion detector” that is watching everything from “power on to cursor move”. Only prescriptive and approved changes are allowed in this environment. Advanced methods to minimize false alarms (false positive and negative detections) are crucial.
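
Here is a minimal Python sketch of that "motion detector" idea, assuming a simple file-hash reference and an approved-change list. It is an illustration of the concept only, not a description of SignaCert's implementation:

```python
import hashlib
import time

def measure(paths):
    """Hash every file that is in scope for the reference image."""
    state = {}
    for path in paths:
        try:
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
        except FileNotFoundError:
            state[path] = None                # file removed: also a change
    return state

def alert(path, expected, observed):
    print(f"ALERT: {path} drifted from the reference ({expected} -> {observed})")

def monitor(reference, approved_changes, interval_s=60):
    """Continuously compare the live environment to the known-good reference.
    `approved_changes` is a set of (path, new_hash) pairs representing
    prescriptive, approved changes; anything else trips the motion detector."""
    while True:
        current = measure(reference.keys())
        for path, good_hash in reference.items():
            observed = current.get(path)
            if observed == good_hash:
                continue                      # still in the known-good state
            if (path, observed) in approved_changes:
                reference[path] = observed    # approved change: update the baseline
            else:
                alert(path, expected=good_hash, observed=observed)
        time.sleep(interval_s)
```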

Let’s face it: Precision and continuous IT device change detection is no longer a NICE TO HAVE in this challenging IT world. It is a MUST HAVE. Coupling software supply chain integrity to these advanced methods creates even stronger intrinsic and closed-loop trust attestation.

Methods to address these challenges are available now, and must be deployed if we are going to stand a chance of keeping up with the army of bad actors (trying to wreak havoc) as well as the good actors that make mistakes (that can inadvertently wreak havoc).

We must wake up NOW, and address this risk. While it’s late, it is not too late. In the future I hope we look back at these recent incidents as “near misses” and that we take real action before we collide head on with our complacency.

Wyatt.


Why Software Provenance Matters, Part III: Supply Chain Management

July 24, 2009

Another interesting use case for whitelist-based configuration management is bubbling to the surface (again): IT Device Supply Chain Management

I say *again* because this one came to our attention several years ago, when we built a successful Proof of Concept (PoC), but the IT device manufacturer (who will go unnamed in this blog post) never deployed it. They believed it was either not that big of an issue/risk/priority, or they decided they could handle it internally with their own workflow management and in-house systems.

Ultimately in that case, the problem was never fixed effectively – and now it has become a high-profile issue in the marketplace.

So here is the problem statement/use case:

I am a manufacturer of a piece of IT hardware that contains software. It may be a computer, a mobile device, a medical device, or even network devices (routers, switches etc.).

I build the hardware and provision the software (most likely these processes are geographically distributed and often outsourced). Then the device is boxed up with MY brand on the box, shipped thru a few levels of distribution, in and out of customs, and eventually ends up at a customer site.

How do I (as the manufacturer) assure my customer that what was built, shipped and branded by me is what the customer received and installed?

Why does this matter, you might ask? Or more specifically, what is the risk?

Well, the box may have been opened a couple of times, and a VAR or reseller may have added or changed some software (to include their branding or logo, for example) – so what I built and shipped IS NOT what actually ended up at the customer site. But I still have warranty responsibility. And I am responsible for my BRAND regardless of how the product made its way to the customer.

Let me add another risk dimension:

The device ends up at the customer with my brand on it. It may have been sold thru channels as "new" but is actually remanufactured, or worse yet, an outright reproduction that LOOKS LIKE my product. Even though this is not my "fault", it is still my brand, and potentially my channel, that is compromised, so at least some of the blame/brand damage falls to me.

OK, you're seeing the risk here, I hope… but how does known-provenance whitelisting help? Glad you asked.

The PoC that we did a couple of years ago was intended to support this workflow:

- We cryptographically “captured” the software/firmware archive as the device manufacturer built and released new “production code” from their software build systems.

- When the software is "married" to the hardware as a part of the final product/device assembly, we established the relationship between the hardware and the software populated on the product/device. This can be as simple as capturing the serial number, the software revisions, and the cryptographic meta-expressions in a database. In more advanced platforms and use cases, a hardware cryptographic token (such as a trusted platform module) might be used to anchor the authenticity and provenance of the device/platform.

- When the customer installs the device, as a part of the installation/registration process the device reaches out to the database (via encrypted web services) to make sure that the build status aligns with the install status.

This is a classic "close the loop" validation process. It's pretty simple to do, and it has many advantages beyond device/brand integrity. Perhaps the manufacturer has updated the firmware/software and it needs to be reflashed. As the manufacturer, I might also benefit from some of the "phone home" data interchange that this method enables. And gray market or clone products will likely be revealed by this workflow.
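
A minimal Python sketch of that close-the-loop workflow might look like the following; the registry structure, function names and messages are illustrative assumptions only:

```python
import hashlib

# Hypothetical registry keyed by device serial number. In practice this would
# be a database reached over encrypted web services, not an in-memory dict.
REGISTRY = {}

def register_at_build(serial, firmware_path, version):
    """Run on the manufacturer's assembly line: bind the hardware serial
    number to the exact software image shipped on the device."""
    with open(firmware_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    REGISTRY[serial] = {"sha256": digest, "version": version}

def validate_at_install(serial, firmware_path):
    """Run during customer installation/registration: confirm that what was
    built, shipped and branded is what actually arrived."""
    record = REGISTRY.get(serial)
    if record is None:
        return "unknown serial: possible gray-market or clone unit"
    with open(firmware_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != record["sha256"]:
        return "installed software does not match the as-built record"
    return "as-built and as-installed states match"
```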

These supply chain issues are another important use of known-provenance whitelist software validation.

All of these concepts (known provenance, measurement and attestation) are really intended to solve the same basic issue: our compute platforms are "open-loop" by design, as are many of our IT methods and best practices.

We cannot secure what we do not monitor. And we cannot monitor what we do not measure. It is as simple as that.

We MUST fix these issues, and sooner is better than later.

Wyatt.


Unsafe at Any Speed: Distributed Denial of Service Attacks and Whitelisting

July 15, 2009

Some of us gray-haired folks remember Ralph Nader's provocative book "Unsafe at Any Speed", published in 1965. Basically the book (very controversial when released) took on the automakers for building unsafe cars that threatened the safety of all people who travel the roads. It struck me over the 4th of July holidays that we have a similar situation now on our cyber highways.

Even people who have a life beyond blogging on cyber assurance issues likely would have noticed the front-page and top-of-news coverage about the cyber attacks on the web servers at the Pentagon, White House, Treasury, and State Department (and many other sites). If you didn't, you can get an overview here: http://www.google.com/hostednews/ap/article/ALeqM5iaaWwzg–SOmIz9Qjdju4UYFB5GgD99B7LNO0

And NBC led their primetime National news last week with this report:

http://www.msnbc.msn.com/id/3032619/#31806714

These attacks are classed as Distributed Denial of Service (DDoS) attacks, where hundreds or even thousands of computers (or more) containing remote-triggered malicious code can be commandeered for nefarious purposes. These so-called Zombie computers (when triggered) direct their payloads at a set of targeted web servers (or other Internet-connected compute processes) in an attempt to overwhelm them with malicious traffic, thus rendering them inaccessible. The amplification effect of these multiple, remotely controlled machines is what makes DDoS attacks as dangerous as they are.

DDoS attacks are far from "new", by the way. One of the first widely publicized ones traces back to the year 2000 attack on the Yahoo website. Since then we have learned how to deal better with the SYMPTOMS of an attack (when in progress) by recognizing patterns and routing bogus traffic away from the runtime servers, but we've made very little progress on stopping the SOURCE of the problem (Zombies should not exist in the wild).

McAfee estimates that between January and May of this year, 12 million new Zombie computers were armed and aimed. See http://news.digitaltrends.com/news-article/19879/twelve-million-zombie-computers-since-january

Much of last week's post-DDoS discussion centered on WHO initiated the attacks and, in my opinion, too little attention has been given to WHY our systems management and compute infrastructure integrity are so vulnerable that they allow hundreds of thousands, or potentially even millions, of pre-armed computers to enter zombie mode when they receive a command from their master, anywhere on the planet. And (you might be wondering) where are these Zombies? It is pretty well established that at least some of them are inside Corporate and Government offices and datacenters! (Just think: if one of your Corporate boxes could be a Zombie slave to some domestic or foreign master, what other nasty things might it contain?)

How can this be?

One of the many benefits of moving IT systems management to whitelist-based, inclusive software and stack validation methods is that it minimizes (eliminates for all practical purposes) the detection “blind spot” that rogue software can parasitically exist in.

Reference Configuration-based whitelisting establishes software manifest-based monitoring and control on the compute devices, which can easily alert the user or IT admin staff if any code is added, deleted, or changed on the target system (from an established, managed build reference). These powerful change detection methods virtually eliminate the exposure to "sleeper" or "Zombie" code existing on systems without user awareness.
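
As a simple illustration of manifest-based change detection, the Python sketch below (with placeholder paths and hashes) classifies drift from a managed build reference into added, deleted and changed code:

```python
def compare_to_manifest(reference, observed):
    """Classify drift from an established build reference.
    Both arguments map file path -> content hash."""
    return {
        "added":   sorted(observed.keys() - reference.keys()),
        "deleted": sorted(reference.keys() - observed.keys()),
        "changed": sorted(p for p in reference.keys() & observed.keys()
                          if reference[p] != observed[p]),
    }

# Placeholder hashes: a zombie payload dropped alongside untouched system files.
reference = {"/bin/login": "aa11", "/usr/sbin/sshd": "bb22"}
observed  = {"/bin/login": "aa11", "/usr/sbin/sshd": "bb22", "/tmp/.zombie": "ff99"}
print(compare_to_manifest(reference, observed))
# -> {'added': ['/tmp/.zombie'], 'deleted': [], 'changed': []}
```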

With reference configuration-based whitelist image management in place, it is virtually impossible to hide malicious code “in plain sight” as is currently happening. Not only does this help to reduce (and eventually eliminate) much of the DDoS risk (from the standpoint of allowing the presence of Zombie code) – many other benefits accrue from these methods (better compliance, improved security, more stable systems, etc.).

These whitelisting methods also reduce much of the risk of so-called "zero day" attacks like the Hannaford Bros. event discussed here (http://zerodaythreat.com/?p=40), and of other malicious and parasitic code risks (like the one that almost wiped AIG's computer disks in early 2009).

So the Ralph Nader corollary holds, in my opinion. The internet is currently UNSAFE AT ANY SPEED, and ultimately it becomes the responsibility of the suppliers/vendors to build better and safer devices to navigate our cyber highways. It also falls to consumers and users of IT to practice "safe computing" and to use tools and best practices to make sure they are not contributing to the IT-ecosystem danger factor.

Unfortunately we are barely at the comparative “install and fasten seat belt” stage of relative maturity and yet — at the speed we are traveling coupled with our extreme IT dependence — the severity of the impending (and likely inevitable) crash will likely be catastrophic.

When are we going to learn? Put down the shovel, step away from the hole, and think about it. It is long past time for us to enforce best practices, deploy new tools (think reference-based whitelisting), and find stronger political will to address these fundamental and crucial IT risks.

Wyatt.


Why Software Provenance Matters, Part II

June 6, 2009

I posted a blog a few days ago that covered some of the operational issues of Why Software Provenance Matters, but in talking with partners recently, and listening to other use cases, I thought that I’d add some detail to address these needs and perspectives.

In statistical error analysis we talk about Type One (T1) and Type Two (T2) errors (also known as False Positive and False Negative, respectively).

T1 or False Positive is classifying something as true when it isn’t.
T2 or False Negative is classifying something as false when it isn't.

See the table below:

                        Condition is true        Condition is false
Test says "true"        True Positive            False Positive (T1)
Test says "false"       False Negative (T2)      True Negative

So in any method where there is a “test” of a state against some actual or known condition, there is a chance of comparison error. The result of the test generally triggers some policy action or alert.

This of course translates to: Accurate measurement is a critical requirement for effective policy implementation.

Let me use a simple example that relates to identity. The test for this example is:

Is this an authorized user?

If it is a legitimate authorized user (Test) I want to grant entry or access (Policy) = True Positive

If it is not a legitimate authorized user (Test) I want to deny entry or access (Policy) = True Negative

If an unauthorized user IS ALLOWED inappropriate access = False Positive or T1

If an authorized user IS NOT ALLOWED appropriate access = False Negative or T2

Both T1 and T2 errors are problematic of course, but the challenge is really the same: how do I precisely identify the user so as to reduce the risk of error?

Precise identification is the answer of course.

Now let me apply this to whitelisting and blacklisting:

Here is the Whitelist Example. The test is: Is this code on the known-good (white) list?

If the code is known good (Test) I want to allow it to load/run (Policy) = True Positive

If the code is not known good (Test) I want to block it (Policy) = True Negative

If bad or unknown code IS ALLOWED to load/run = False Positive or T1

If good code IS BLOCKED from loading/running = False Negative or T2

Here is the Blacklist Example. The test is: Is this code on the known-bad (black) list?

If the code is known bad (Test) I want to block it (Policy) = True Positive

If the code is not known bad (Test) I want to allow it (Policy) = True Negative

If good code IS BLOCKED because it was incorrectly flagged as bad = False Positive or T1

If bad code IS ALLOWED because it went undetected = False Negative or T2

So again, our "test" must be accurate in order to effect the appropriate policy. Likely the policy in these cases is to "allow" or "deny" the code from loading and/or running.
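
For readers who like to see this in code, here is a small, purely illustrative Python sketch that tallies the four outcomes for a whitelist-style test; a blacklist-style test simply flips which misclassification gets labeled "positive":

```python
def tally_whitelist_test(observations):
    """Tally outcomes of a whitelist-style test ("is this code known good?").
    Each observation is (actually_good, test_says_good)."""
    counts = {"TP": 0, "TN": 0, "FP (T1)": 0, "FN (T2)": 0}
    for actually_good, test_says_good in observations:
        if test_says_good and actually_good:
            counts["TP"] += 1            # good code correctly allowed
        elif not test_says_good and not actually_good:
            counts["TN"] += 1            # bad code correctly denied
        elif test_says_good and not actually_good:
            counts["FP (T1)"] += 1       # bad code wrongly allowed
        else:
            counts["FN (T2)"] += 1       # good code wrongly denied
    return counts

# Illustrative run: two correct calls plus one error of each type.
sample = [(True, True), (False, False), (False, True), (True, False)]
print(tally_whitelist_test(sample))
```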

So the accuracy and provenance (certainty of the code hash signature and/or attributes) are THE MAJOR components used to test the condition in both whitelisting and blacklisting methods.

Where whitelisting can complement blacklisting is generally in reducing false positives by improving the certainty of the reference methods used in the detection. This can improve customer experience by not inadvertently blocking good code from loading/running.

Also, Symantec and others have effectively used a form of Dynamic Whitelisting (see my blog on whitelisting methods) to create "do not scan again" lists in order to limit AV scanning to code that actually needs inspection. This also enhances user experience by speeding up the AV process.

It is for all of these reasons that we think that there are really only BLACK and WHITE lists – and why we believe known provenance is one of the surest ways to precisely establish the reputation reference used for both whitelist and blacklist methods.

Wyatt.


Gartner, Whitelists and Virtualization Methods

June 2, 2009

I have mentioned this post before, but to keep you current see:

http://blogs.gartner.com/neil_macdonald/2009/04/21/its-virtualization-security-week/

This post seems like a great opportunity to connect the "no-brainer" dots. Here's a recap of Neil MacDonald's Security No-Brainers (SNB) so far:

  • SNB #1: We Need a Global Industry-wide Application Whitelist
  • SNB #2: Use whitelisting in the hypervisor/VMM (especially in the “parent” or Dom0 partition) to prevent the execution of unauthorized code in this security-sensitive layer
  • SNB #3: Root of Trust Measurements for Hypervisors
  • (Relating to SNB #1 – we did announce our working relationship with Microsoft in the area of software whitelisting at the RSA show. A key element of that announcement is the standards for whitelist exchange using a Standard Schema Definition – or Data Exchange Format. So a sub-text SNB is standards of method, protocol and exchange).

So against the backdrop of the RSA events, this blog heading, and staying within the limits of certain NDAs that we (SignaCert) are under, let me posit a connect-the-dots hypothesis:

The highest cyber security goal in both the P (physical) and V (virtual) worlds is ultimately the same; namely, to instantiate a computing environment (a software stack on a hardware platform) as secure, reliable, safe and trusted. This trusted stack (hardware plus software) then can become one of the cogs of a business process (a "Service," in ITIL parlance).

It is important to note here that while the goals in the P and V world may be largely the same, the complexities in the V world are likely to make these goals even harder to achieve, largely because the velocity of change in the V world is likely to be higher, and we may not even know (or care) where the image physically resides anymore. Even more reason to think about all of this very carefully.

All of the moving parts of any completed IT service (AKA the business process) should ideally be trusted from end to end, right? Not just to the point of instantiation (when it is deployed and turned on), but thru to the point of de-instantiation (un-deployed and turned off). This is really an issue of maintaining lifecycle integrity. (There are some interesting compliance issues here too, but I'll leave that story for another blog post.)

We now know that to achieve the goal of end-to-end trust, the following processes are necessary (at a minimum):

1. A root of trust for measurement (RTM) should be established in the hardware in some fashion, say with a Trusted Platform Module (TPM) key, and used to establish the RTM for the HV/VMM (SNB #3).

2. A Trusted Platform Module (TPM) key, or other cryptographic identifier, should be passed and used to request and validate (attest) the HV/VMM in the Domain Zero (Dom0)/parent layer (SNB #2). This is for the purpose of determining whether the Dom0 can be trusted.

3. Then, with a "known/trusted" parent environment in place (and hopefully a way to keep it that way), we pass our "trust baton" up the stack.

4. Finally, we need the HV/VMM to instantiate the rest of the software stack with positive attestation methods provided by the Global Industry-wide Application (software) Database (supported with known provenance ISV-sourced software "measurements," see SNB #1).

And then it would be useful if we had a way to rate the trustworthiness of the entire system and score the results, in normalized terms. Our goal is to attest/certify what we've instantiated with a proactive statement of platform/image trust, make certain it's stable and durable, and to enable a method to continually "prove it".
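
Purely as an illustration of the "normalized score" idea (and not SignaCert's patented method), a roll-up of per-element trust scores might look something like the following sketch, with hypothetical element names and scores:

```python
def platform_trust_score(element_scores, weights=None):
    """Roll per-element trust scores (0.0 to 1.0) up into one normalized
    platform score. A weighted average is used here purely for illustration;
    security-critical layers can be given heavier weights."""
    if weights is None:
        weights = {}
    total = sum(weights.get(name, 1.0) for name in element_scores)
    weighted = sum(score * weights.get(name, 1.0)
                   for name, score in element_scores.items())
    return weighted / total

# Hypothetical stack, measured bottom-up as the "trust baton" is passed.
stack = {
    "hardware_rtm": 1.00,   # TPM-anchored root of trust verified
    "hypervisor":   0.98,   # HV/VMM attested against a known-good reference
    "dom0":         0.95,   # parent partition matches its whitelist manifest
    "guest_stack":  0.90,   # some guest packages lack known-provenance entries
}
print(f"platform trust: {platform_trust_score(stack):.2f}")
```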

We might also want a way for that service process stack to be able to offer its "platform trust credentials" to other business/service processes, both in and out of the physical domain in which it resides. This credentialing could be used to exchange relative platform trust with partner service processes, for example.
(Disclaimer: SignaCert has two U.S. patents issued on the notion of Stack/Platform Trust derived from element trust scores.)

A crucial element to make this all a reality is one of the first rules we learned in kindergarten: how to play well with others.

The platform players must work closely with the virtualization players, who must work closely with the whitelist eco-system folks, who need ISV support to play their role, etc. And all must collaborate regularly with the systems management vendors and solution providers.

Such collaboration is not easy, but it's not impossible. If deep trust services become a required credential for connecting with partners, demonstrating regulatory compliance and meeting government IA standards…it will suddenly be in everyone's strong self-interest to get on board the instantiation train and pass the trust baton.

By the way, in our experience number four above is one of the trickiest parts. How do we manage multiple complex heterogeneous V and P stacks? How do we create and express broader trust credentials across a matrix of dynamic business/service processes?

And another important design tenet relates to persistent and non-persistent connect scenarios. We need to carefully avoid the circular loop of "we need a connection to attest trust credentials – but can't get a connection because we don't have trust affirmations". Cooperation with eco-system partners is required to pull this one off too.

Net-net: We need to think and act differently. The "old" P security methods have largely run out of gas already, so let's use the V (virtualization) transition to bake in trust and security, as a first principle.

Wyatt.

