As stated in a prior post, the goals of most IT departments are simple: deploy and manage an agile, secure, reliable and stable global information technology (IT) infrastructure, and manage it with increasing efficiency.
Accomplishing these goals can be a challenging endeavor. Industry and government are increasingly dependent on IT systems. And yet, is IT (on both the vendor and customer side) really up to the task? There is much evidence suggesting that it is not.
Blacklisting, along with other forms of signature-based filtering and anomaly detection, has traditionally been the de facto standard for IT device security. There is now sufficient evidence that these methods have reached the point of diminishing returns for many, if not most, IT users.
Current-generation “whitelisting” methods (such as Tripwire) are effective to an extent, but these relative-integrity methods leave measurement gaps of their own. For example, how do I know that the code on a machine that purports to be authentic release code from vendor XYZ really is their code? And relative integrity validation can still lead to integrity drift between like systems within the same enterprise.
So, as often happens, the answer to current and future needs can be gleaned from the past. As the 19th-century scientist Lord Kelvin said: “To measure is to know,” and “If you cannot measure it, you cannot improve it.”
IT systems management must continue its transformation from art to science. Software measurement is the key: it can, and must, be done to close the gaps we all struggle with around IT security, compliance, scaling, and stability.