It is the best of times, and it is the worst of times, for IT. The wealth of new technologies and service models has been excellent for us, but it has also sown distrust among consumers: the added complexity has increased our exposure to failures and threats, eroding confidence in the technologies themselves. An EMC and RSA global IT trust curve survey finds that 61% of organizations have suffered unplanned downtime, a security breach, or data loss at least once in the last 12 months. Clearly, a different kind of solution is needed in such an environment: one built on the strong foundation of a trusted infrastructure.
Definition of Trust
Here at EMC, several of us have been tasked with defining Trusted Infrastructure. On our first day on the job, we decided to see what the industry has to say on the subject. The most credible definition I came across was from the Trusted Computing Group (TCG), a non-profit organization that defines security specifications. Their definition ties trust to predictable behavior, backed by evidence and characterized by well-defined properties.
I found their definition to be profound, and it seems to resonate with our customers as well. You will see it play out into a Trust model that I will write about in subsequent weeks. Let's examine the definition in more detail. “Predictable” means that the capabilities of the infrastructure can be declared and predicted in advance. “Evidence” means that there must be a level of transparency (otherwise, how would you know whether the tree really fell?). “Properties” means that we have an agreed-upon taxonomy, with declarative properties that characterize the infrastructure. I would also add automation to the list, which the TCG does acknowledge further along in its discussion. So, thanks to the TCG, we have a basic construct for the definition of Trusted Infrastructure, on which we can start building a more detailed taxonomy.
The TCG then goes deeper, defining a standard specification for trust: the Trusted Platform Module (TPM), which establishes a hardware-based root of trust. A TPM is a microcontroller that can securely store artifacts used to authenticate the identity of a platform (a PC or a server). I have worked on TPM-based solutions and can attest that TPM is a sound and popular standard. Still, calling TPM the answer to trust is, in my honest opinion, too myopic a reading of the TCG's own definition. There are more dimensions at play in ensuring predictability than securing the identity and integrity of the system. For example, system availability plays a key role in achieving predictable service levels. To some, compliance may also mean predictable behavior. Predictably low power consumption is increasingly becoming an aspect of trust as well. You get the big picture: Trusted Infrastructure is more than securing identity, and it is more than securing the device.
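To make the root-of-trust idea concrete, here is a simplified sketch of the PCR "extend" operation at the heart of TPM measurement. This is an illustration only (a single SHA-256 bank, with made-up measurement values); real TPMs support multiple PCR banks and hash algorithms, and the measurements come from actual boot components.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new value = SHA-256(old value || digest of measurement)."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# PCRs start at all zeros; each boot component is folded in, in order.
pcr = bytes(32)
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = extend(pcr, component)

# The final value depends on every measurement and its order, so a
# verifier comparing it to an expected value can detect any tampering
# or reordering in the boot chain.
tampered = bytes(32)
for component in [b"bootloader", b"firmware", b"kernel"]:
    tampered = extend(tampered, component)

print(pcr != tampered)  # True: reordering the chain changes the PCR
```

Because a PCR can only be extended, never set directly, even software that runs later in the boot sequence cannot forge an earlier measurement, which is what makes the hardware the root of the trust chain.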
Envisioning a Trusted Infrastructure
We have always had spot solutions for security, availability, and the other dimensions we may want to put under the Trust umbrella. This raises the question: how is Trusted Infrastructure different from any other infrastructure? The difference lies in the breadth and depth of integration between the infrastructure and the trust modules. A Trusted Infrastructure will have trust built in (rather than added as an afterthought) and will be broadly usable (rather than available only on a locked platform).
Let us return to the TPM referenced earlier. The microcontroller is designed in and built as part of the device's motherboard. In addition, it has a well-defined interface and protocol that is abstracted from higher-level stacks. The rest is history: TPM has more than two billion endpoint deployments and an abundance of usage innovations built on the abstraction layer of this built-in device. My only issue with the TPM is that it supports just certain aspects of trust while being called a trusted module; in other words, its implementation lacks richness of taxonomy.
The vision of Trusted Infrastructure rests on three essential factors. First, we need a taxonomy that goes clearly beyond security and covers all relevant aspects of a predictable system. Next, that taxonomy of trust needs to be deeply integrated with the infrastructure. Finally, we need to agree on an adequately open abstraction layer through which to expose the taxonomy.
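As a thought experiment, the three factors above might come together like this. The sketch below is purely hypothetical (the class names, dimensions, and example properties are my own illustration, not a TCG or EMC specification): a taxonomy of declarative trust properties spanning more than security, attached directly to an infrastructure component, and queryable through a simple open interface.

```python
from dataclasses import dataclass, field

@dataclass
class TrustProperty:
    dimension: str   # taxonomy dimension, e.g. "security", "availability", "compliance"
    name: str        # declarative property name
    claimed: str     # the predictable, declared capability
    evidence: str    # where a verifier can find supporting evidence

@dataclass
class TrustProfile:
    component: str
    properties: list = field(default_factory=list)

    def declare(self, dimension, name, claimed, evidence):
        # Trust is built in: properties are declared by the component itself.
        self.properties.append(TrustProperty(dimension, name, claimed, evidence))

    def dimensions(self):
        # The open abstraction layer: higher-level stacks query declared
        # properties without knowing anything about device internals.
        return sorted({p.dimension for p in self.properties})

array = TrustProfile("storage-array-01")
array.declare("security", "identity", "TPM-backed device identity", "attestation quote")
array.declare("availability", "uptime", "99.999% availability", "SLA report")
array.declare("compliance", "retention", "7-year retention", "audit log")

print(array.dimensions())  # ['availability', 'compliance', 'security']
```

Note that security is just one dimension among peers here, and every claim carries a pointer to evidence, which is exactly the "predictable, with evidence, via agreed properties" construct from the TCG definition.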
Why do I believe EMC should lead this transformation? Because EMC is where data lives (pardon the marketing plug). All jokes aside, storage infrastructure has to lead given its role in protecting and exposing data. In compute and networking, data is largely ephemeral; in storage it persists, and that is where the risk concentrates.
Next, EMC has the biggest arsenal of tools to protect your data: RSA covers the security tools, while the EMC Data Protection portfolio provides coverage for availability, recoverability, and business continuity.
Third, success will require adoption by higher-level stacks. The EMC Federation has the richest implementation of an end-to-end infrastructure stack, with VMware and Pivotal as part of our family.
Last but not least, we have a tradition of creating open platforms with the option to integrate with heterogeneous systems. Having VMware and Pivotal helps us with time to market, but we know our customers are looking for a multi-vendor, multi-cloud solution.
Look for more details on the topic from EMC in the months (and years) to come.