White hat/black hat, white-list/black-list, which witch is which?
The old saw about white-lists and black-lists is continuing to make the rounds. Which approach your organization uses for governing its Cloud-based outsourcing — assuming IT has anything to say about Cloud contracts at your organization — could make the difference between organizational survival and failure.
The old White-list/Black-list approach to security is predicated on a simple duality:
(1) White list: nothing is allowed until approved
(2) Black list: anything is allowed until proven to be harmful
In practice, IT operations on our own premises tend to maintain information security with a combination of these two approaches.
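The duality above can be sketched in a few lines. This is an illustrative toy, not any particular product's logic; the host names and list contents are hypothetical.

```python
# Hypothetical allow/block lists for, say, outbound web requests.
ALLOW_LIST = {"intranet.example.com", "partner.example.net"}   # known-good
BLOCK_LIST = {"malware.example.org", "phish.example.biz"}      # known-bad

def white_list_policy(host: str) -> bool:
    """White-list: default-deny. Nothing is allowed until approved."""
    return host in ALLOW_LIST

def black_list_policy(host: str) -> bool:
    """Black-list: default-allow. Anything goes until proven harmful."""
    return host not in BLOCK_LIST

def combined_policy(host: str) -> bool:
    """The common hybrid: known-bad always blocked, known-good always
    allowed, and a default decision for the unknown remainder."""
    if host in BLOCK_LIST:
        return False
    if host in ALLOW_LIST:
        return True
    return False  # default-deny for anything on neither list
```

Notice that the interesting design choice is the last line: everything on neither list falls through to a default, and that default is where the two philosophies actually differ.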
White hat
The first approach assumes that nothing can be trusted. It is accompanied by “white-lists” of known-good people, procedures, web-sites, applications, databases, smart-phones, systems, hypervisors, networks, credentials and information. Examples include current homeland security screening procedures at US airports, email filters and firewall rule-sets, among others.
Black hat
The second approach assumes that everything can be trusted. It is accompanied by “black-lists” of known-bad people, procedures, web-sites, applications, databases, smart-phones, systems, hypervisors, networks, credentials and information. Examples include most restaurants and retail establishments, email filters, and most contemporary antivirus and malware detection engines — Norton excepted.
Other examples
Operating systems are designed on the principle that users are not competent enough to avoid making a mess of things. Kernel-mode operations are reserved for highly privileged procedures and applications, including device drivers, network drivers, storage drivers, memory management and schedulers, to name but a few. Poor old users are subject to least-privilege by design. The same is true for networking equipment and software, databases, user accounts, directories, web-applications and entire swaths of enterprise-class applications.
Perspectives
The white-hat/black-hat discussions are just another indication that the age-old approaches to security are splitting at the seams. And the debate is going to become more pointed for organizations that are proceeding to outsource more of their IT to “the cloud.”
While it is possible to lock down your applications and management interfaces, validate input, implement squeaky-clean coding and change-management practices, and apply whatever combination of white-lists and black-lists you like, you cannot control some of the things that happen at your Cloud provider, including:
• Hypervisors being hijacked at your Cloud provider
• Co-tenants with hijacked root privileges overwriting your database tables
• Pornography ending up on your web-site
• Your data being siphoned off by criminal gangs.
Reputation: the missing component
Although there is much discussion about which approach is better, the debate is flawed: it misses the key issue of reputation, from which we create the “known-bad” or “known-good” mental markers that are the basis for black-lists and white-lists. For example, if you had a reputation scale for Cloud providers — a simple Consumer Reports-style rating — would that make it easier to reach a better decision?
This does not mean that white-lists or black-lists are not useful. Indeed, where “bad” is known, black-lists serve a very useful purpose in augmenting known-good white-lists. The reality, however, is that a combination of such lists cannot account for 100 percent of interactions, people, software, or Cloud providers. In most circumstances your white- and black-lists together will account for about 33 percent of a known universe, especially a large universe with imperfect information — which is what the Internet is. In best-case scenarios, you’ll cover 66 percent of a smaller known universe using such lists.
Why reputation
The unknown 67 percent (or 34 percent if you are fortunate) is why “reputational analysis” becomes the default procedure for weeding through the unknown, especially large unknown sets.
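The procedure described here — consult the lists first, then fall back to reputation for everything else — can be sketched as follows. The scores, names and threshold are hypothetical placeholders; real reputation engines aggregate far richer evidence.

```python
# Hypothetical lists and reputation scores for Cloud providers.
ALLOW_LIST = {"trusted-cloud.example.com"}   # known-good
BLOCK_LIST = {"rogue-cloud.example.net"}     # known-bad
REPUTATION = {                               # scores in [0.0, 1.0], higher is better
    "newcloud.example.io": 0.82,
    "shadycloud.example.io": 0.31,
}

def decide(subject: str, threshold: float = 0.75) -> str:
    """Lists handle the known fraction; reputation handles the rest."""
    if subject in BLOCK_LIST:
        return "deny"    # known-bad always loses
    if subject in ALLOW_LIST:
        return "allow"   # known-good always wins
    # The unknown majority: fall back to reputational evidence.
    score = REPUTATION.get(subject, 0.0)     # no evidence => worst score
    return "allow" if score >= threshold else "deny"
```

The threshold is the policy knob: raise it and the system behaves more like a white-list, lower it and it drifts toward a black-list.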
Rudimentary reputation analysis has been practiced by societies for generations, usefully and mostly successfully. Indeed, reputation has been the default for most of our recorded history. Reputation-based interactions in business, politics, religion, the arts and sciences are the basis for what we trust — and my bet is that this is where information security is headed.
For example, one of the leading reputation solutions on the market is Symantec’s Norton line of security tools, which uses reputation analysis to populate both white- and black-lists from reputational evidence. Although this represents a breakthrough for information security tools, there is much more that can be done with reputational security and governance.
Can you imagine what could be different if reputational analysis were baked into other controls, such as:
• Two-factor authentication tokens
• Firewalls
• Cryptography
• Virtual-private networks
• User accounts and directories
• Intrusion detection engines
• URL engines
• DNS services
While waiting for such services to come to market, do your due diligence and exercise your own due care with whichever approach you settle on. But remember: it’s all about reputation, and your reputation in the Cloud needs to be controlled.
My bet is that reputation-based risk management, information security and governance is what will make the difference for people, organizations and governments. Hopefully we’ll get beyond the chimera of white-list versus black-list long enough to understand how the lists were created. After all, it is what goes into making the lists that has to be trusted — or not.
Related research
Application Security: Whitelist Vs. Blacklist