Over the last couple of months our friends at Symantec have readied and released Norton 2009. We hope that the engineering team is taking a long rest after the significant effort to bring this new offering to the market!
Norton 2009 is touting the use of “whitelists” as a part of the offering. See:
From the article:
Symantec has adopted whitelisting techniques in an effort to dramatically improve the performance of its upcoming Norton 2009 security suite, according to the company’s vice president of consumer engineering, Rowan Trollope.
Trollope admitted that poor performance was the main reason Norton Internet Security customers abandoned previous versions of the product. In the next version, he explained, a “whitelisting approach” significantly reduces the amount of time spent scanning files that are known to be safe.
It is very interesting to see another use case and value definition for whitelisting. Not that we agree, mind you. It is just interesting.
All of the AV vendors are seeking ways to deal with “Blacklist bloat and overhead”. The use of “smart whitelisting” in an AV product is an interesting way to address this need. Essentially what the Symantec team realized is that there is no need to constantly scan and rescan good code. Bravo.
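To make the idea concrete, here is a rough sketch (ours, not Symantec’s; the helper names and the sample hash are purely illustrative) of how an AV engine might skip files it already knows are good: hash the file, and only hand it to the expensive scanner if the hash is not on the whitelist.

    import hashlib

    # Hypothetical whitelist of SHA-256 digests for files already verified as safe.
    KNOWN_GOOD = {
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
    }

    def file_digest(path):
        # Hash the file in chunks so large binaries never have to fit in memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def needs_scan(path):
        # Only files NOT on the whitelist get passed to the expensive signature scan.
        return file_digest(path) not in KNOWN_GOOD

The performance win comes entirely from a hash lookup being far cheaper than a full signature scan.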
But what they missed is that this is not the real and valuable use case for code whitelisting in the longer term.
As discussed before in this blog, whitelisting is not a direct replacement for blacklisting. Blacklisting, in its pure form, is about discovery and hopefully the blocking/immunization of “malicious code” – code intentionally built and distributed to damage compute platforms and increasingly used to steal private information.
Whitelisting, in its pure form (admittedly by our definition), is about making sure that the “good and desired code set” remains in its desired integrity and configuration state over the entire usage life cycle of the software stack (physical or virtual).
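By that definition the job looks less like scanning for bad files and more like verifying that the approved set has not drifted. Here is a minimal sketch of what we mean (our own illustration, assuming a baseline of approved file hashes captured at deployment time; the function names are not any particular product’s design):

    import hashlib
    import os

    def snapshot(root):
        # Capture the approved state: a digest for every file in the managed stack.
        baseline = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    baseline[path] = hashlib.sha256(f.read()).hexdigest()
        return baseline

    def drift(root, baseline):
        # Report anything that no longer matches the approved state.
        current = snapshot(root)
        changed = [p for p in current if p in baseline and current[p] != baseline[p]]
        added = [p for p in current if p not in baseline]
        removed = [p for p in baseline if p not in current]
        return changed, added, removed

A sketch like this only detects drift; the positive-control model we are describing goes further and prevents unapproved change in the first place.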
In order to make this work in practice, one needs to redefine the architecture of computer security and systems management from end-to-end. Simply tacking a whitelist onto an existing blacklist solution does not yield the real benefits that can be achieved with true whitelisting.
The test of true whitelisting is driven by more fundamental benefits like improved computer stability, security, and compliance, leading to higher availability (increased MTTF and reduced MTTR). Importantly, these benefits should be delivered with LOWER operational costs than we have now. This means we need to lower the people costs associated with delivering computer availability and capacity. This means we need to increase our visibility into ALL of the risks to computer stability, including malicious code. And this means we need to instrument and automate IT best practices.
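For a rough sense of why MTTF and MTTR matter (illustrative numbers only): availability is approximately MTTF / (MTTF + MTTR), so a system that fails on average every 1,000 hours and takes 4 hours to recover is available about 99.6% of the time; cut the repair time to 1 hour and availability climbs to roughly 99.9%.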
Positive IT Controls are the answer. We are making steady progress in “flipping the model” from blacklist to whitelist, but don’t be fooled: Norton 2009 is not really a “whitelist solution”. Real whitelist solutions will become common as a true, vendor-agnostic, high-provenance whitelist ecosystem evolves and as the endpoint wars begin to settle down in 2009.
Keep your eyes on these pages for more on this.