I am hosting a session at the upcoming EMC World on May 5th and 6th. The session gives an overview of OpenStack with EMC; it presents EMC's overall strategy and solutions for OpenStack. I plan to show a couple of demos in the session to showcase how EMC provides a choice of solutions for both next-generation scale-out workloads and Enterprise-class workloads.
Let me give you a glimpse of the first demo in this blog. It shows ScaleIO being configured and executing a scale-out workload. But before I start, I want to give due credit to the creator of this use case – Steve Thellen, Principal Software Engineer, EMSD (VNX/VMAX) Advanced Development team. Steve put this platform together for a project to show the integration of EMC storage platforms with OpenStack for scale-out workloads. I was able to pull the demo off Steve’s setup and put my own spin on it. Let’s check it out.
The demo starts with the creation of a ScaleIO volume on commodity SSD-based local storage. I then attach the volume to a compute instance. And then, voilà, you are ready to roll. Of course, this assumes that OpenStack, ScaleIO, and the ScaleIO drivers have already been installed; send me a note if you are looking for installation instructions. Below is the technical implementation of this ScaleIO setup with OpenStack Nova through the ScaleIO Cinder function. ScaleIO has components for both the client side (SDC, the ScaleIO Data Client) and the storage side (SDS, the ScaleIO Data Server), but for this setup I only use the client (host) side driver.
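As a rough sketch, the create-and-attach steps map to standard OpenStack CLI commands like the ones below. The volume type name `scaleio`, the instance name `demo-vm1`, and the device path are placeholders for illustration; your deployment's names will differ.

```shell
# Create a 100 GB Cinder volume backed by the ScaleIO driver
# (assumes a volume type named "scaleio" has been mapped to the ScaleIO backend)
cinder create --volume-type scaleio --display-name sio-vol1 100

# Attach the new volume to a running Nova instance
# ("demo-vm1" and the volume ID are placeholders from the create step above)
nova volume-attach demo-vm1 <volume-id> /dev/vdb
```

Once attached, the guest sees the volume as a local block device and you can format and mount it as usual.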
I then use the VDBench tool to mimic the configuration of a high-IO workload. Such workloads are sensitive to cost ($/GB), performance (latency and aggregate IOPS), and scale-out management, making them ideal for ScaleIO. Check out my previous blog on Application Characterization for detailed concepts and the rationale for this kind of analysis.
I run two scripts: one against the ScaleIO volumes configured to utilize locally attached SSDs, and a second against remote iSCSI capacity storage to show the contrast. The script has a read percentage of 90%, a transfer size of 4k, and a random seek percentage of 100%. The test is a simple one that only uses a couple of OpenStack instances spread across multiple hosts, designed to exercise basic functionality. Here are the net results, as the terminal output in the Camtasia demo is not clear.
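For reference, a VDBench parameter file for this workload would look roughly like the following. The device path, run duration, and names are placeholders; the workload parameters match the ones described above.

```
* Storage definition: the raw device backed by the ScaleIO volume (path is a placeholder)
sd=sd1,lun=/dev/vdb,openflags=o_direct

* Workload definition: 90% reads, 4k transfer size, 100% random seeks
wd=wd1,sd=sd1,rdpct=90,xfersize=4k,seekpct=100

* Run definition: drive the workload at the maximum rate for 60 seconds, reporting each second
rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=1
```

The same file pointed at the iSCSI-backed device gives the contrasting run.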
As you can see, ScaleIO delivers excellent results on IOPS, throughput, and response times. Not shown in the demo, but the IOPS and throughput increased drastically as more nodes were added. However, the key takeaway from the demo is more than that. The messages are, first, that you should get good results if you map your workload service levels to the storage platform; and second, that you now have the choice of running ScaleIO, software-based storage, on OpenStack.
Enjoy the demo! I will review the second demo in my next blog.