C I/O perspective: Why should CIOs care about I/O?



Here at QLogic there are two sides to my role. Like most CIOs, I am a consumer of IT solutions. The other side of my role is to create and communicate the I/O solutions for our IT customers. That will be the theme of ‘C I/O perspective’: expressing the value proposition and best practices of I/O solutions for IT shops. With that, let me dive into the discussion of why CIOs should care about I/O.

TCO and Service Levels

The CIO’s report card is based upon achieving the right balance between low Total Cost of Ownership (TCO) and consistent Service Levels. Hitting the best Service Levels without regard to cost is not sustainable, but endless cost cuts with little attention to Service Levels could devastate the business.

Traditionally, Enterprise IT shops and Service Providers juggled CPU, memory, and storage as the three main variables for optimizing the infrastructure. However, with increasing VM density in virtualized environments and the advent of multi-tenancy in Cloud environments, I/O constraints are now taking center stage. Properly controlling network and storage I/O at various levels of the environment continues to become increasingly important. In fact, I can argue that I/O may be the most important control variable in data centers today, as it controls the entry to the Network and Storage, which take the lion’s share of the data center infrastructure cost.


Industry perspective on I/O

If you don’t believe me, listen to the collective voice of the End Users in the Open Data Center Alliance (ODCA). ODCA is an End User community, comprised of large Enterprises (like Disney and BMW) and Service Providers (like Verizon and T-Systems), that pulls together requirements on the most burning problems in the Cloud, with a goal of resolving those issues in an open and multi-vendor manner. One of the first few Usage Models that they rolled out was on I/O Controls. Lack of granular control, per ODCA, could result in uncontrolled contention between VMs (i.e., a noisy neighbor) and failure to meet quality of service (QoS) targets for an application or workload. To compensate for the fear of lacking QoS, most IT organizations overprovision their infrastructure, which results in higher Total Cost of Ownership (TCO).

How QLogic IT addresses this problem

I would classify our IT as a typical mid-size shop. We have about 300 servers (fully virtualized) and 1000+ VMs, which host engineering, office, and business workloads. We were virtualization early adopters and have reaped reasonable benefits from it. However, the cost of maintaining separate SAN and network fabrics was high, so we followed the industry best practice of moving to 10G and a converged network. Even so, our server utilization continued to be low. We were overprovisioning servers and I/O to maintain the desired Service Levels. Dev/Test environments were maintained on separate server clusters, so as to not interfere with Production Service Levels, but this again added to inefficiencies. Mission-critical databases were particularly overprovisioned for the same reason.

Our solution was to adopt hardware-based network partitioning (or NPAR, available in QLogic FastLinQ 10GbE Adapters). I will save the details of NPAR for future blogs. The NPAR-based approach to I/O control helped us in the following ways:

  1. We increased server consolidation by 20% by provisioning VMs more deterministically, based on the guaranteed QoS provided by NPAR. We were able to get rid of a separate Dev/Test cluster, further easing the maintenance pain. I highly recommend reading the complete implementation Case Study we published.
  2. We have achieved predictable database (and other critical application) performance by provisioning on NPAR-based guaranteed I/O partitions. This ensured predictable performance (and thereby Service Levels) without the need for overprovisioning. Our lab tests found significant performance degradation when using a network adapter without NPAR-based partitioning. I encourage you to read the complete whitepaper for details.
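To make the idea of guaranteed I/O partitions concrete, here is a purely illustrative Python sketch of the policy behind partitioning: each partition is guaranteed a minimum share of link bandwidth, and unused capacity is redistributed to partitions that want more. The partition names and numbers are hypothetical, and this is a software model of the concept, not QLogic's actual hardware implementation.

```python
# Illustrative model of NIC partitioning: each partition gets a guaranteed
# minimum share of link bandwidth, and leftover capacity is redistributed
# in proportion to unmet demand. This mimics in software the kind of
# policy that NPAR enforces in adapter hardware.

def allocate_bandwidth(link_gbps, partitions, demand):
    """partitions: {name: guaranteed fraction of the link, summing to <= 1.0}
    demand: {name: requested Gbps}. Returns {name: granted Gbps}."""
    granted = {}
    # Phase 1: every partition is guaranteed its minimum share.
    for name, share in partitions.items():
        guaranteed = link_gbps * share
        granted[name] = min(demand.get(name, 0.0), guaranteed)
    # Phase 2: hand leftover capacity to partitions that still want more.
    leftover = link_gbps - sum(granted.values())
    hungry = {n: demand[n] - granted[n]
              for n in partitions if demand.get(n, 0.0) > granted[n]}
    total_want = sum(hungry.values())
    if leftover > 0 and total_want > 0:
        for name, want in hungry.items():
            granted[name] += min(want, leftover * want / total_want)
    return granted

# Hypothetical partitions on one 10GbE port: production DB, vMotion, dev/test.
parts = {"prod_db": 0.5, "vmotion": 0.3, "devtest": 0.2}
# Dev/test is a noisy neighbor asking for 9 Gbps, yet production keeps its
# guaranteed 5 Gbps; dev/test only absorbs the capacity nobody else is using.
print(allocate_bandwidth(10.0, parts,
                         {"prod_db": 5.0, "vmotion": 1.0, "devtest": 9.0}))
```

The point of the sketch is the second list item above: with guaranteed partitions, the noisy neighbor is capped and production performance stays deterministic, so there is no need to overprovision.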



Hopefully, I have convinced you of the need for better I/O control in your IT shop. It goes a long way towards achieving desired Service Levels while keeping TCO low. You get the ability to control I/O partitions (NPAR) for free with QLogic Network Adapters. Although our IT implementation was on VMware, you can implement it on any hypervisor, or even on bare metal. Drop me a comment if you have questions.

Look out for more ‘C I/O perspective’ entries in future blogs!


Nikhil Sharma


Twitter: @NikhilS2000

Infrastructure that we can trust



It is the best of times, and it is the worst of times, for IT. The wealth of technologies and service models has been excellent for us, but at the same time it has bred distrust amongst consumers. This complexity has increased our exposure to failures and threats, creating a lack of trust in the technologies. An EMC and RSA global IT trust curve survey finds that 61% of organizations have suffered unplanned downtime, a security breach, or data loss at least once in the last 12 months. Clearly, a different kind of solution is needed in such an environment: a solution built on the strong foundation of a trusted infrastructure.

Definition of Trust

I am part of a group at EMC that has been tasked with defining Trusted Infrastructure. On our first day on the job, we decided to check out what the industry has to say about “trusted infrastructure”. The most respectable definition I came across was from the Trusted Computing Group (TCG), a non-profit organization that defines security specifications. Here is their definition:

I found their definition to be profound, and it seems to resonate with our customers as well, as it is about the broad predictable behavior of the infrastructure rather than being narrowly focused on security. Let’s examine this in more detail. ‘Predictable’ means that there needs to be a declarative prediction of the capabilities of the infrastructure, capabilities that go beyond security and cover other infrastructure dimensions, like Availability and Recoverability. ‘Evidence’ means that there must be a level of transparency (otherwise, how would you trust that the fallen tree in the forest really made any sound?). ‘Properties’ means that we have an agreed-upon taxonomy, with declarative properties to characterize the infrastructure. I would also add ‘automation’ to the list, which TCG does acknowledge further down in their discussion (see the reference to Security Automation in the TCG presentation ‘Where Trust Begins’). So, thanks to TCG, we have a basic construct for the definition of Trusted Infrastructure, on which we can start building a more detailed taxonomy.

TCG then goes even deeper and defines a standard specification for trust, the Trusted Platform Module (TPM), which establishes a hardware-based root of trust. A TPM is a microcontroller that can securely store artifacts used to authenticate the identity of a platform (a PC or a server). I have worked on TPM-based solutions and will attest to the fact that TPM is a very sound and popular standard. Still, calling TPM the answer to trust is (in my honest opinion) too myopic a reading of TCG’s own definition. There are more dimensions at play in ensuring ‘predictability’ than just those securing the identity and integrity of the system. For example, system availability plays a key role in achieving predictable service levels. To some, compliance may also mean predictable behavior. Predictably low power consumption is increasingly becoming an aspect of trust. You get the big picture: Trusted Infrastructure is more than securing identity, and it is more than securing the device.

Envisioning a Trusted Infrastructure

We have always had spot solutions for security, availability, and the other dimensions we may want to put under the Trust umbrella. This begs the question: how is Trusted Infrastructure different from any other infrastructure? The difference is in the breadth and depth of integration between the infrastructure and the trust services (trust services enable security and availability features). A Trusted Infrastructure will have trust built in (rather than added as an afterthought) and will be broadly usable (rather than available only on a locked platform).

Trust layers

Trusted Infrastructure must have three essential factors to be successful. First, we need a taxonomy that clearly goes beyond security and covers all relevant aspects of a predictable system. Next, the services that deliver the taxonomy to end users need to be highly integrated with the infrastructure. Finally, we need to agree upon an open abstraction layer, Trust APIs, to expose that taxonomy for use in higher-level stacks, like a hypervisor or Cloud OS.
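As a thought experiment, here is a minimal Python sketch of what such a Trust API could look like: infrastructure declares predictable properties across several trust dimensions, each backed by evidence, so a higher-level stack can query them programmatically. Every name here is hypothetical; no such API exists today.

```python
# Hypothetical "Trust API" sketch: properties span multiple trust dimensions
# (not just security), and each declared property carries its evidence,
# reflecting the TCG notions of predictability, evidence, and properties.

TRUST_DIMENSIONS = {"security", "availability", "recoverability", "compliance"}

class TrustProfile:
    def __init__(self):
        # (dimension, property name) -> {"value": ..., "evidence": ...}
        self.properties = {}

    def declare(self, dimension, name, value, evidence):
        if dimension not in TRUST_DIMENSIONS:
            raise ValueError("unknown trust dimension: %s" % dimension)
        self.properties[(dimension, name)] = {"value": value, "evidence": evidence}

    def query(self, dimension):
        """Return all declared properties for one trust dimension."""
        return {name: p for (dim, name), p in self.properties.items()
                if dim == dimension}

# Example: a storage array declares properties that a hypervisor or
# Cloud OS could consume through the abstraction layer.
array = TrustProfile()
array.declare("security", "encryption_at_rest", True,
              evidence="self-test report (illustrative)")
array.declare("availability", "annual_uptime", 0.99999,
              evidence="field telemetry (illustrative)")
print(array.query("availability"))
```

The design point is the third factor above: once properties and evidence sit behind an agreed-upon API, the hypervisor or Cloud OS can make placement and compliance decisions against them instead of trusting blindly.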

Why EMC?

Because EMC is where data lives (pardon the marketing plug). Jokes aside, storage infrastructure has to lead such a transformation, given its role in protecting and exposing data. In compute and networking, data is in an ephemeral state, so the relative risk there is dwarfed by the risk in storage, where data persists.

Next, EMC has the biggest arsenal of tools to protect your data. RSA covers the security tools, whereas the EMC Data Protection tools provide the coverage for Availability, Recoverability, and Business Continuity.

Thirdly, success will require market adoption from higher-level stacks. The EMC Federation has the richest implementation of an end-to-end infrastructure stack, with VMware and Pivotal as part of our family.

Last but not least, we have a tradition of creating open platforms, with the choice of integration with heterogeneous systems. Having VMware and Pivotal will help us in time to market, but we know our customers are looking for a multi-vendor / multi-cloud solution.


Hopefully, I was able to seed some thoughts on the definition and vision of a Trusted Infrastructure with this post. Moving forward, the Trusted Infrastructure team will define the taxonomy in depth and work both internally, within EMC on a roadmap, and externally, on industry adoption of the taxonomy. I am eager to share our experiences as we go further down this path. Look for more details on the topic in future blog posts!


Nikhil Sharma


Twitter: @NikhilS2000

EMC and Open Data Center Alliance



A couple of weeks ago, I represented EMC at ODCA’s annual event, Forecast. The Open Data Center Alliance (ODCA) is a worldwide organization comprising Enterprise end users and providers, who come together to enable an open and interoperable cloud. I personally am a big fan of ODCA and have mentioned many times in my previous entries the importance of the organization in solutionization. Here is a brief perspective on ODCA and how EMC is collaborating to come up with open and interoperable solutions.

ODCA’s working model

I’ll start with a brief overview of the working model, for those who may not be familiar with ODCA. ODCA operates at many levels of the solution adoption life cycle. It starts with the documentation of usage models, stating the requirements for solutions. The organization has already published a big catalog of usage models, and it continues to refine existing usage models as well as add new ones. Members take the requirements and add them to their RFP processes in order to encourage providers to align their offerings. To prove newer concepts, members may work with providers on Proofs of Concept. ODCA also announced the formation of a solution planning and deployment work-stream, marking a higher focus on solution adoption.

Forecast ‘14

ODCA comes together at its annual event in San Francisco to share its usages, PoCs, and partner solutions. In an earlier blog, I described the event as ‘right-sized’ for cloud technologists like me. By ‘right-sized’ I mean targeted learning + targeted business meetings + targeted networking. Targeted learning, because I get insightful discussions in my area from a mix of cloud consumers, providers, and thought leaders. Furthermore, I prefer targeted meetings and networking, as most ODCA attendees tend to be actual end users or IT technologists of cloud computing.

Forecast ’14 didn’t disappoint me, even with my already heightened expectations. This year’s keynotes had higher energy than years past, with the inclusion of speakers like Jonathan Bryce, Executive Director of the OpenStack Foundation, and David Linthicum, the visionary cloud analyst from Gigaom Research. The tech talks were richer in content as well, with insights into how ODCA usage models could be used in real-life applications and in conjunction with industry standards. An interesting tech talk to reference here is the work T-Systems did with the TOSCA model to showcase cross-cloud VM migration. TOSCA is an open standard from OASIS that defines the interoperable description of services and applications hosted on the cloud. The PoC recommended a reference model for such a migration, using the TOSCA specification, and published the output for members to make use of.
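For flavor, here is a minimal, simplified fragment of what a TOSCA service template looks like in the Simple Profile in YAML. The node name and property values are illustrative, not taken from the T-Systems PoC; the point is simply that the service is described declaratively, so any compliant cloud can interpret it.

```yaml
tosca_definitions_version: tosca_simple_yaml_1_0

description: Illustrative, portable description of a single-VM service.

topology_template:
  node_templates:
    app_server:            # hypothetical node name
      type: tosca.nodes.Compute
      capabilities:
        host:
          properties:
            num_cpus: 2
            mem_size: 4 GB
```

Because the template describes requirements rather than provider-specific API calls, the same description can be materialized on different clouds, which is exactly what makes cross-cloud VM migration demonstrable.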

The NDA 1:1s were my favorite part of the event. I see others use these 1:1s for sales and marketing, but I figure the sales guys already get plenty of other opportunities to do that. These opportunities, in my mind, are best used to discuss strategic technology trends and directions. For example, I got a wealth of insights on my latest passion, Trusted Cloud Infrastructure. The IT Director for BMW defined, for me, the use case and importance of trust in ‘connected cars’. The SVP of Cloud at Capgemini highlighted the importance of trust metrics in a managed cloud environment. These are the kinds of interactions that separate ODCA from other venues, in my mind.

EMC solutions alignment to ODCA

I introduced the ODCA audience to the EMC Hybrid Cloud in my sponsored keynote at Forecast. The keynote highlighted how the EMC Hybrid Cloud addresses the requirements for an interoperable cloud as stated in the usage model ‘VM Interoperability in a Hybrid Cloud Environment’. The solution supports the three ODCA usage scenarios, plus a whole lot more.

Usage scenario 1: Interoperability, which includes interconnectability and portability.

Usage scenario 2: Move or copy between private and public cloud.

Usage scenario 3: Leverage common operations across clouds.

In its current release, the EMC Hybrid Cloud supports these usage scenarios between a private cloud and the vCloud Air public cloud on the VMware stack. In future releases, however, the plan is to open it up to multiple providers and support a multi-cloud environment, much like OpenStack.

Apart from the hybrid cloud, another usage alignment to highlight is Scale-out Storage. Isilon 7.1 supports many of the requirements listed in that usage model. For example, where ODCA calls out requirements for multi-protocol support, Isilon supports not only block and file but also HDFS. With HDFS support, a user can run Hadoop queries on top of data stored in any file format. Also added is support for various security and availability requirements. So it is not surprising that we have a popular product, as it meets the requirements of end users as stated in the ODCA usage model.

Stay tuned for a lot more products and solutions, aligned or guided by ODCA requirements, in the near future.


I feel inclined to call it a wrap for now, but look out for continued references to ODCA in my future writings. It is natural that EMC, in its quest for an interoperable cloud, will continue to foster a close relationship and alignment with ODCA. I wish ODCA the very best in its cause and vision, and I look forward to working together to see this vision through to completion.

Nikhil Sharma


Twitter: @NikhilS2000

What is in the future from EMC on OpenStack



In many of my blogs, I have emphasized EMC and the EMC Federation’s continued commitment to OpenStack. However, most of those blogs and talks focused on current offerings, which are primarily drivers. This time I will give some insight into EMC’s strategy for the future. As with any discussion about the future, it is hard to unravel the specifics of the plan of action, so I will talk about broad areas of EMC’s focus. Since we have already contributed to Juno, the next release of OpenStack, I will be able to give examples and specifics citing our Juno contributions.

Cinder drivers

Drivers will continue to be our core output, providing the gateway for OpenStack users to consume EMC storage. As stated in my previous blog, all EMC business units are committed to providing a regular cadence of driver releases. In Juno, we introduced a new member of our family to the community: we released the driver for XtremIO. This comes on top of refreshes of the VMAX and VNX drivers. The VMAX driver added the ‘create volume from snapshot’ and ‘extend volume’ functionalities, which were previously missing. VMAX also added additional functionality for storage groups, striped volumes, and FAST policies. VNX added a slew of new functionalities on top of the core functionality required by OpenStack.

The key to delivering drivers is not just protocol conversion, but also exposing the underlying capabilities of the storage arrays. This is why drivers are developed by the individual storage business units at EMC, rather than by a separate centralized organization. The VNX direct driver, for example, added the security and high-availability feature support listed above. Expect this to be part of our driver strategy moving forward.

Advanced Data Services

Data services are arguably where EMC has the most to offer, starting with our secret sauces in Data Protection. I believe that EMC’s leadership and contributions in this area are going to be key to the adoption of OpenStack in the Enterprise. Starting in Juno, EMC’s Xing Yang proposed and led an initiative within Cinder to develop Consistency Groups: a grouping mechanism that tracks interdependencies between volumes, so that an application’s (or VM’s) requirements can be captured and honored for disaster recovery. If and when the contributions are accepted, they will immediately allow us to take consistent snapshots across dependent volumes. In the future, we will be able to use the mechanism for other activities like backup, restore, and possibly replication.
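To illustrate why consistency groups matter (this is a toy model of the concept, not the actual Cinder implementation), consider a database whose data and log volumes must be captured at the same point in time. Snapshotting them one at a time leaves a window for a write to land in between, producing a "torn" pair; a group snapshot captures all members at one instant.

```python
# Conceptual illustration of consistency groups. All names are invented
# for the example; this is not Cinder code.

class Volume:
    def __init__(self, name):
        self.name = name
        self.version = 0          # monotonically increasing write counter
    def write(self):
        self.version += 1

def snapshot_individually(volumes, interleaved_write=None):
    """Naive approach: snapshot one volume at a time; a write can sneak in
    between snapshots, so the captured versions may be inconsistent."""
    snaps = {}
    for i, v in enumerate(volumes):
        snaps[v.name] = v.version
        if interleaved_write and i < len(volumes) - 1:
            interleaved_write.write()   # write lands mid-snapshot
    return snaps

def snapshot_group(volumes):
    """Consistency group: capture all members atomically at one instant."""
    return {v.name: v.version for v in volumes}

data, log = Volume("data"), Volume("log")
data.write(); log.write()                 # application writes to both: (1, 1)

consistent = snapshot_group([data, log])  # {"data": 1, "log": 1}
torn = snapshot_individually([data, log], interleaved_write=log)
print(consistent, torn)                   # torn pair: {"data": 1, "log": 2}
```

Restoring from the torn pair would give a log that is "ahead of" the data it describes, which is exactly the failure mode that grouping snapshots is meant to prevent.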

Another data service that EMC, along with NetApp, has been actively working on is a file service called Manila. Manila is multitenant, secure file share as a service, with NFS and CIFS protocol support today. EMC showed a demo prototype of the service, on VNX, at the last OpenStack Summit and at EMC World, and has also contributed a Manila driver for VNX. We just got the exciting news that Manila has been accepted as an incubation project for Juno, which means that it will likely be an integrated project in the K release.


Lastly, my personal favorite is going to be the focus on solutions. We have hired Kenneth Hui, a prominent member of the OpenStack community, to help create EMC’s enterprise-grade OpenStack solutions strategy. EMC is committed to helping our customers be successful with the deployment and operation of their OpenStack clouds. Stay tuned for information on this topic in the next few months. To start with, I had mentioned in an earlier blog that EMC was working on fixing Nova (compute) bugs in the block-volume-backed live migration solution. We have fixed those bugs and contributed them back to the community, and the fixes have been reviewed and accepted. Yes, you can now do live migration using block volumes, while using standard iSCSI or Fibre Channel protocols.

That was a brief glimpse of what you can expect from EMC on OpenStack in the near future. Drop in a note if you want to discuss details. Hopefully you are as excited as I am about the solutionization of OpenStack. Go, stackers!

Nikhil Sharma


Twitter: @NikhilS2000

OpenStack Icehouse storage solutions from EMC



This is an update to the blog I published in January, which described EMC solutions for the OpenStack Havana release; this one covers Icehouse, the current release of OpenStack. It has been an exciting ride since Havana for us. Many proofs of concept, deployments, and solutions (one on live migration was mentioned in my earlier blog) have arisen, but the most exciting part has been planning a robust foundation for the future, some of which we will start to see in the Juno time frame. Stay tuned for my next blog, where I’ll elaborate on that.

EMC’s commitment

Let me start with a general update on the EMC Federation. We continue to show strong commitment to OpenStack across the board. VMware jumped from the 7th largest contributor in Havana to the 4th largest in Icehouse. The contributions are across the board, but the most salient are the enhancements to Enterprise usage models, like the Nova and Cinder support for policy-based placement of workloads and Ceilometer collecting vCenter stats. Cloud Foundry enhanced its ecosystem, with IBM, HP, and Canonical supporting and distributing Cloud Foundry on OpenStack. You can read my earlier blog, on the inevitable progression of Cloud Foundry, for details.

EMC extended support for its core products, VNX and VMAX, by contributing their drivers. We also open sourced the driver for our software-defined storage platform, ViPR, on GitHub. Several other platforms, namely ScaleIO, XtremIO, and Isilon, have released beta drivers and are in the middle of active proofs of concept.

Cinder plugins

Here are the details on the Cinder drivers.

SMI-S driver for VNX and VMAX:

EMC updated the SMI-S driver, which was originally introduced for Grizzly back in early 2013. The driver added support for Fibre Channel. Although EMC was one of Fibre Channel’s early advocates in OpenStack (along with Brocade) and participated in the Cinder design for it, we had not previously contributed the driver support. With Icehouse, we are in the OpenStack trunk, instead of the GitHub releases of the past. We also added support for newer Cinder features like zoning in this release, though I have heard that the Cinder zoning features have many bugs that are being worked on. VMAX is missing two features in Cinder, namely ‘create volume from snapshot’ and ‘extend volume’. Both features are being added in the next release, Juno, but if you need them in Icehouse, please contact your EMC rep.
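For reference, enabling the SMI-S driver in Icehouse is done in cinder.conf, roughly along the lines below. Treat this as an illustrative sketch: option names can vary by release, and the file path is an example, so verify against the documentation shipped with the driver.

```ini
# cinder.conf fragment (illustrative; verify option names for your release)
[DEFAULT]
# iSCSI variant of the EMC SMI-S driver; a Fibre Channel variant also exists
volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
# Array details (SMI-S provider address, credentials, pool) live in a
# separate XML file referenced here
cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml
```

The indirection through the XML file is what lets one driver binary serve both VNX and VMAX: the array-specific details stay out of cinder.conf itself.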

You can find EMC SMI-S driver at the OpenStack site at:


VNX Direct driver:


Many large-scale providers, for the sake of simplicity, prefer to make direct calls to the storage arrays, rather than going through an SMI-S layer. For this reason, this driver was added in Icehouse, based on popular demand from end users. The direct driver contributed to OpenStack is version 2.0 and supports all the minimum features required by Cinder. However, version 3.0 (released on GitHub) is a lot richer in functionality, with support for the high-availability features of Service Processor toggle and target check, plus added security features. All of the 3.0 features are being contributed to the Juno release.

By the way, if you are deploying VNX on OpenStack, my advice would be to use the VNX direct driver instead of SMI-S. The VNX team plans to use the direct driver for future releases and may discontinue releases of the SMI-S driver starting with Juno. The VMAX driver will continue to use SMI-S.
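If you do go with the direct driver, the cinder.conf setup looks roughly like this. All values shown are examples, and option names may differ between driver versions, so check the driver's own documentation before deploying.

```ini
# cinder.conf fragment for the VNX direct (CLI-based) driver -- illustrative
[DEFAULT]
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
san_ip = 192.168.1.50             # VNX storage processor IP (example value)
san_login = sysadmin              # example credentials
san_password = sysadmin
storage_vnx_pool_name = Pool_01   # example pool name
naviseccli_path = /opt/Navisphere/bin/naviseccli
```

Note the difference from the SMI-S setup: the direct driver talks to the array through the Navisphere CLI, so it needs the array address, credentials, and CLI path directly in cinder.conf rather than an SMI-S provider.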

Here are the links to direct drivers.

VNX direct driver 2.0 contributed to Icehouse:


VNX direct driver 3.0 for Icehouse on Github:


ViPR Cinder driver: The new ViPR driver primarily adds support for ViPR 1.1 and ViPR 2.0. This includes added features like support for multi-tenancy, commodity hardware, GeoScale deployments, and IPv6.

The ViPR driver can be found at the following Github repository:


That’s a wrap for now. Hopefully this gives you a summary of what’s new from EMC for the OpenStack Icehouse release. Stay tuned for the coverage on our contributions to OpenStack’s upcoming release, Juno, in the next blog.

Nikhil Sharma


Twitter: @NikhilS2000

EMC customers can choose any cloud



This is a reprint of my blog that was published in EMC executive ‘Reflections‘ blog site.

EMC leaders have time and time again emphasized a core aspect of the EMC Federation strategy and functioning model: “(we) offer best-of-breed, integrated, technology while preserving customers’ ability to choose…” In this blog, I reflect upon how this strategy applies to allowing customers the ability to choose Open Source Cloud OS – OpenStack.

Our customers are free to choose any Open Source Cloud OS

OpenStack is making rapid strides in both the private and public cloud markets. The earlier skepticism about hype is waning as we see the momentum translate into tangible deployments. Let’s go over some numbers if you need convincing. A recent OpenStack user survey showed 512 total deployments, 209 of them production deployments. This momentum seems even more impactful within the EMC customer base. A recent EMC pre-sales/field survey revealed that 50% of customers are running OpenStack today; 53% of those are in production. So it is not surprising that EMC’s strategy of ‘providing our customers with choice’ resulted in OpenStack being a viable option across our federated family. Let me explain this in more detail.

Pivotal Customers Can Choose OpenStack

Pivotal offers a comprehensive set of application and data services that run on top of a PaaS (platform-as-a-service) called Cloud Foundry®. Cloud Foundry is an open source platform that can run on any cloud infrastructure, like VMware, OpenStack, or Amazon Web Services. Even though the Pivotal distribution of Cloud Foundry runs on VMware vSphere, many prominent members of the Cloud Foundry community offer Cloud Foundry PaaS on OpenStack. Piston Cloud, which originally contributed the Cloud Foundry interface for OpenStack, now offers PaaS on Piston Cloud, which is based on OpenStack. IBM, with its BlueMix PaaS cloud, and HP, with its Cloud Application PaaS, have recently announced Cloud Foundry running on OpenStack infrastructure. Not only is it offered by many prominent vendors, you can also choose to run Pivotal applications and big data services on any of these multi-cloud platforms. Watch out for the inevitable open source progression: OpenStack to Cloud Foundry!

VMware Customers Can Choose OpenStack

Many are perplexed by the complex relationship between VMware and OpenStack. I think the relationship is largely complementary; OpenStack is an open and flexible framework to pull all the cloud IaaS components together, whereas VMware provides the best-of-breed cloud components. As we see a rising number of OpenStack production deployments, there is also a broad demand for the integration of the two for Enterprise class solutions and use cases. It is for this reason that VMware is heavily investing in the integration, giving its customers the choice of having an open framework with best-of-breed cloud components. VMware is now the fifth largest contributor of OpenStack code in the current IceHouse release.

Many immediately recognize network virtualization as an integration point, as the VMware NSX team was a founding contributor to OpenStack networking, but the integration points are far more pervasive than just networking. You can now trigger complex vCenter functions like vMotion from the OpenStack compute module, Nova. vSphere storage policies and advanced vSAN capabilities can be facilitated through the OpenStack storage module, Cinder. vCOps integration allows monitoring and troubleshooting through OpenStack. There are many more, and you can rest assured that the roadmap will continue to get richer with time.

EMC Customers Can Choose OpenStack

EMC’s strategy for OpenStack is twofold:

  1. OpenStack projects allow vendors to add capabilities through “plugin” architecture. Every storage business unit at EMC is committed to providing direct plugins for OpenStack. We currently offer plugins for VNX, VMAX, and ViPR; plugins from Isilon, ScaleIO, and XtremIO are already available to customers for beta evaluation. OpenStack’s roots were with object storage over commodity hardware. However, as deployments mature, we now see it running a broad mix of production workloads. Plugins give customers the choice of a variety of EMC storage with appropriate service levels to run their workloads.
  2. ViPR is EMC’s software-defined storage platform, and it complements OpenStack by providing rich automation features, as well as integrated management of object, block, and, later, file storage. In other words, you will get integrated access to EMC storage (what you get from direct plugins) and non-EMC storage, and on top of that you should be able to get enhanced automation features like provisioning, masking/zoning, FAST and VPLEX volumes, as well as rich data services. You can check out my blog ‘ViPR in the stack’ for more details.

Returning to my original point, EMC customers can choose any cloud. I hope to have shown you that OpenStack represents a viable choice across each of the EMC Federation’s horizontal stacks.

Introduction of OpenStack with EMC



Here is a recording of my EMC World 2014 session on OpenStack.

Short abstract of the session:

OpenStack is fast evolving as a framework for building and deploying both private and public clouds. To EMC, OpenStack represents a viable choice for cloud computing that we offer to our customers. This session covers the storage solutions available from EMC and the value proposition they offer. This session also provides a preview to enterprise class use cases EMC is working on for the future.




State of the solution from OpenStack Atlanta Summit 2014



Hello, OpenStack fans! Did you miss the summit by any chance? Don’t worry, here is my spin on it. If you don’t care for my commentary, you can always check out OpenStack.org for their coverage. As always, they cover it well with recordings of keynotes and sessions.

Adoption momentum continues

Let’s start with the highlights from the user survey. The momentum for adoption continues, although it has understandably cooled off since the October 2013 survey. Total deployments jumped from 387 in October to 506, while production deployments increased to 209. For the first time, ‘avoid vendor lock-in’ replaced ‘cost’ as the top business driver for deployments. The Nova service was the most popular (196 deployments), but Cinder (151) and Neutron (135) were close. This shows that most deployments are not Nova alone, but Nova in conjunction with Cinder and Neutron. I was surprised to see a relatively low number of Swift deployments (101). Perhaps this number does not account for Swift-compatible deployments, just the open source module.

The keynotes were bold on the adoption momentum as well, with many notable end users on stage. Jonathan Bryce, the Foundation’s Executive Director, announced the launch of two major programs geared towards aiding deployments: an OpenStack Marketplace was established with the goal of aiding solution display and selection, and Superusers were introduced for knowledge sharing and collaborative problem solving. The Icehouse theme is also largely aligned to adoption, with features focused on ease of deployment, tighter integration, and reliability.

Takeaways from the sessions

The sessions on the networking track seemed ultra-popular and were overflowing with attendees. (I don’t have any statistics to support this, just a random observation.) The user survey numbers on Neutron adoption seem to support this hypothesis: Dev/QA had 169 Neutron deployments, with 156 PoCs and 156 production deployments. Watch out for some excellent solutions from VMware NSX in this area.

The move towards enterprise-class solutions, like high availability, was a clear theme that no one could miss. Just glance through the session agenda and you will see what I mean. VMware sessions were very popular for this reason, emphasizing the message of running OpenStack on top of VMware vSphere® + NSX™ to build rich OpenStack clouds on existing IT infrastructure.

Another segment promoting solutions with higher service levels was the Telcos. They came in numbers (AT&T, Ericsson, T-Systems…) and made their presence felt, describing use cases that require Telco-grade service levels and scale. The Network Functions Virtualization (NFV) use case is gaining traction, as many Telco users and OEMs described it.

EMC sessions

EMC participated in three sessions:

Laying Cinder blocks: Jim Ruddy described the use of ViPR, EMC’s software-defined storage, with Cinder. This brought up the new role of the cloud architect: defining policies through a software-defined tool like ViPR, in a heterogeneous environment, to be consumed through the service model of OpenStack. Jim made the session hilarious through “Seinfeld” analogies. Check out Jim’s blog for the message, demo, and humor.

Storage considerations for OpenStack: a panel discussion in which Brian Gracely explained the importance of integrated data types for executing complex use cases.

Introducing Manila: an incubation proposal for a file share service. Shamail Tahir demonstrated a prototype VNX driver delivering a file share service; an Isilon driver was also mentioned.

Takeaways from design sessions

My attention in the design sessions was limited to storage, where the focus was, you guessed it, High Availability. Xing Yang led a design proposal to add ‘Consistency Groups’ to Cinder. The session was very well received, with broad support for the feature, and the discussion centered on use cases. The proposal is to implement consistency groups for snapshots in the first phase, followed by volume replication and backup next.

A separate session proposed incremental backup, which struggled with scope and feasibility. There was also a proposal on Mirroring, which has a good chance of making it into Juno.

Finally (I saved the best for last), a mention of my personal highlight of the summit. I was pleasantly surprised to see a broad emphasis on policy-based automation. Neutron had design sessions on group policies and policy abstractions. Swift has proposed policy-based implementations where availability can be configured, i.e., you can reduce the number of copies or implement an alternate method for availability. Then there was a proposal for an incubation project called Congress, a cross-module policy engine. Hallelujah!

Nikhil Sharma


Twitter: @NikhilS2000

Preview of EMC World OpenStack demo#2: block volume backed live migration



As I stated in my last blog, I am hosting a session at the upcoming EMC World on May 5th and 6th. The session gives an overview of OpenStack with EMC. Here is a sneak peek of demo #2, which I plan to showcase in the session. This one is an Enterprise-class use case: block-backed live migration using KVM and OpenStack.

Use Case

The use case for live migration that I hear about from OpenStack customers is tied to host maintenance. Users want to migrate a VM or instance to a temporary host while the source host is maintained or upgraded; after the maintenance is complete, they migrate the VM back. Enterprises are well used to this use case in the VMware world through vMotion, where the underlying VMFS file system in vSphere makes migration transparent across storage systems and protocols. KVM, by contrast, relies on external support for persistent storage, so live migration depends on the libvirt implementation; it is doable and stable over a shared file system like NFS. The good news is that in the Havana release, OpenStack Nova supports ‘block volume backed live migration’. With this feature one can, in theory, migrate a VM without moving or copying its volume. But there is a catch: it involves a careful handshake between Nova (via libvirt) and Cinder to detach the volume from one host and attach it to another, and that implementation is buggy for iSCSI and Fibre Channel. We are actively working on fixing these bugs and should have a firm time frame for the fixes in another month or so.
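In code, the migrate-out step can be sketched with python-novaclient. This is a minimal sketch, assuming an already-authenticated `nova` client object; the helper name `migrate_for_maintenance` and the host names are mine, not part of the demo:

```python
# Sketch of host-maintenance live migration via python-novaclient.
# Assumes an authenticated client object; only the live_migrate call
# reflects the Nova API, the rest is illustrative scaffolding.

def migrate_for_maintenance(nova, instance_id, target_host):
    """Live-migrate an instance so its current host can be serviced.

    For a block-volume-backed instance only the volume attachment is
    handed off between hosts; the data itself stays on the array.
    """
    server = nova.servers.get(instance_id)
    # block_migration=True covers non-shared local disks; with a
    # Cinder-backed boot volume there is no bulk data copy.
    nova.servers.live_migrate(server, target_host,
                              block_migration=True,
                              disk_over_commit=False)
    return server
```

After maintenance is done, calling the same helper with the original host name moves the instance back, which is exactly the round trip the demo walks through.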

The Demo

In spite of the bugs, we were able to pull together a concept demo in our lab through a Nova workaround. Before going into the demo, I would like to give a big shout-out to Xing Yang, Consultant Technologist in the Advanced Development team in the Office of the CTO. Thanks to Xing we have made big strides in clarifying the use case and the code, and were able to pull this demo together for you all. Thanks for your hard work, Xing!

Here is how the demo goes.


We first go through the motions in Horizon: create a volume from the standard Cirros image, then launch an instance from the newly created volume. The instance is running on the source host, …124 in the demo. We then SSH into the source host, …124, run “virsh list” to see the running VM, and start the live migration.

We then come back to the Horizon UI to watch the migration in progress. Just to make it interesting, we go to the VNX Unisphere tool and check on the volume being migrated. Interestingly, Unisphere sees our LUN associated with two storage groups: the …124 source, as expected, but also the …125 target. So while migration is in progress, the VNX tool can see both the source and target associations to the LUN. The migration ends quickly, since only the ephemeral state is transferred; the volume does not move. After migration, the Horizon UI shows the instance running on the …125 target host.

We now SSH into the target host …125 and log in to the VM without any issues. By this time the maintenance of our source host is supposedly complete, so we issue the live migration command again to move the VM back to the source host …124. Use case complete!


Host maintenance is a valuable use case requested by many of our customers. Many Stackers have had to leverage shared storage, like NFS, for this use case in the past, but that will change once these Nova bugs are fixed. It is one more choice of use case that EMC is helping enable. Enjoy the demo, and watch for future updates on the bug fixes for block-backed live migration.

Nikhil Sharma


Twitter: @NikhilS2000

Preview of ScaleIO and OpenStack demo at EMC World



I am hosting a session at the upcoming EMC World on May 5th and 6th. The session gives an overview of OpenStack with EMC; it presents the EMC-wide strategy and solutions for OpenStack. I plan to show a couple of demos in the session to showcase EMC providing a choice of solutions for both next-generation scale-out workloads and Enterprise-class workloads.

Let me give you a glimpse of the first demo in this blog. It shows ScaleIO being configured and executing a scale-out workload. But before I start, I want to give due credit to the creator of this use case: Steve Thellen, Principal Software Engineer in the EMSD (VNX/VMAX) Advanced Development team. Steve put this platform together for a project showing the integration of EMC storage platforms with OpenStack for scale-out workloads. I was able to build the demo off Steve’s setup and put my own spin on it. Let’s check it out.

The demo starts with the creation of a ScaleIO volume on commodity SSD-based local storage. I then attach the volume to a compute instance, and voilà, you are ready to roll. Of course, this assumes that OpenStack, ScaleIO, and the ScaleIO drivers have already been installed; send me a note if you are looking for installation instructions. Below is the technical implementation of such a ScaleIO setup with OpenStack Nova, through the ScaleIO Cinder driver. ScaleIO has components for both the client (SDC) and the storage server (SDS), but for this setup I only use the client (host) side.
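As a sketch, the create-and-attach steps can be expressed with python-cinderclient and python-novaclient. This assumes authenticated client objects, and the ‘scaleio’ volume type name is an illustration only; your Cinder volume type for the ScaleIO backend will depend on your configuration:

```python
# Hypothetical sketch: create a ScaleIO-backed Cinder volume and attach
# it to a Nova instance. Client objects and the "scaleio" volume type
# name are assumptions, not taken from the demo environment.

def create_and_attach(cinder, nova, server_id, size_gb=8):
    """Create a volume on the ScaleIO backend and attach it to a server."""
    volume = cinder.volumes.create(size=size_gb, volume_type="scaleio")
    # Attach the volume; Nova assigns the next free device name.
    nova.volumes.create_server_volume(server_id, volume.id)
    return volume
```

Once attached, the volume shows up in the instance as a block device ready for the workload run described below.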


I then use the vdbench tool to mimic a high-I/O workload. Such workloads are sensitive to cost ($/GB), performance (latency and aggregate IOPS), and scale-out management, making them ideal for ScaleIO. Check out my previous blog on application characterization for the detailed concepts and rationale behind such an analysis.


I run two scripts: one against the ScaleIO volumes configured to utilize locally attached SSDs, and a second against remote iSCSI capacity storage to show the contrast. The script uses a read percentage of 90%, a transfer size of 4k, and a random seek percentage of 100%. The test is a simple one that only uses a couple of OpenStack instances spread across multiple hosts, designed to exercise basic functionality. Here are the net results, since the terminal output in the Camtasia demo is not clear.
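For reference, a minimal vdbench parameter file matching those settings might look like the following sketch; the device path and run duration are placeholders, not values from the demo:

```
# Storage definition: raw device under test (path is a placeholder)
sd=sd1,lun=/dev/sdb,openflags=o_direct
# Workload: 90% reads, 4k transfers, 100% random seeks (as in the demo)
wd=wd1,sd=sd1,rdpct=90,xfersize=4k,seekpct=100
# Run: uncapped I/O rate for 60 seconds, reporting every 5 seconds
rd=run1,wd=wd1,iorate=max,elapsed=60,interval=5
```

Pointing `sd1` at the ScaleIO volume versus the iSCSI volume is all it takes to produce the two contrasting runs.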

vdbench output

As you can see, ScaleIO delivers excellent results on IOPS, throughput, and response times. Not shown in the demo, but IOPS and throughput increased drastically as more nodes were added. The key takeaways, however, go beyond the raw numbers: first, you get good results when you map your workload’s service levels to the right storage platform; second, you now have the choice of running ScaleIO, software-based storage, with OpenStack.

Enjoy the demo! I will review the second demo in my next blog.

Nikhil Sharma


Twitter: @NikhilS2000

