EMC and Open Data Center Alliance


A couple of weeks ago, I represented EMC at ODCA’s annual event, Forecast. The Open Data Center Alliance (ODCA) is a worldwide organization comprising enterprise end users and providers, both coming together to enable an open and interoperable cloud. I am personally a big fan of ODCA and have mentioned many times in previous entries the importance of the organization in solutionization. Here is a brief perspective on ODCA and how EMC is collaborating to come up with open and interoperable solutions.

ODCA’s working model

I’ll start with a brief overview of the working model, for those who may not be familiar with ODCA. ODCA operates at many levels of the solution adoption life cycle. It starts with the documentation of usage models, which state the requirements for solutions. The organization has already published a large catalog of usage models, and it continues to refine existing ones as well as add new ones. Member companies take the requirements and add them to their RFP processes in order to encourage providers to align their offerings. To prove newer concepts, members may work with providers on proofs of concept (PoCs). ODCA also announced the formation of a Solution Planning and Deployment work-stream, marking a higher focus on solution adoption.

Forecast ‘14

ODCA comes together at its annual event in San Francisco to share its usage models, PoCs, and partner solutions. In an earlier blog, I described the event as ‘right-sized’ for cloud technologists like me. By ‘right-sized’ I mean targeted learning + targeted business meetings + targeted networking. Targeted learning, because I get insightful discussions in my area from a mix of cloud consumers, providers, and thought leaders. Furthermore, I prefer targeted meetings and networking, as most ODCA attendees tend to be actual end users or IT technologists of cloud computing.

Forecast ’14 didn’t disappoint, even with my already heightened expectations. This year’s keynotes had higher energy than years past, with the inclusion of speakers like Jonathan Bryce, Executive Director of the OpenStack Foundation, and David Linthicum, the visionary cloud analyst from Gigaom Research. The tech talks were richer in content as well, with insights into how ODCA usage models could be used in real-life applications and in conjunction with industry standards. An interesting tech talk to reference here is the work T-Systems did with the TOSCA model to showcase cross-cloud VM migration. TOSCA is an open standard from OASIS that defines an interoperable description of services and applications hosted on the cloud. The PoC recommended a reference model for such a migration using the TOSCA specification, and published the output for members to make use of.

The NDA 1:1s were my favorite part of the event. I see others use these 1:1s for sales and marketing, but I figure sales guys already get plenty of other opportunities for that. These opportunities, in my mind, are best used to discuss strategic technology trends and directions. For example, I got a wealth of insights on my latest passion, Trusted Cloud Infrastructure. The IT Director for BMW defined for me the use case and importance of trust in ‘connected cars’. The SVP of Cloud at Capgemini highlighted the importance of trust metrics in a managed cloud environment. These are the kinds of interactions that separate ODCA from other venues, in my mind.

EMC solutions alignment to ODCA

I introduced the ODCA audience to the EMC Hybrid Cloud in my sponsored keynote at Forecast. The keynote highlighted how the EMC Hybrid Cloud addresses the requirements of interoperable cloud as stated in the usage model ‘VM Interoperability in a Hybrid Cloud Environment’. The solution supports the three ODCA usage scenarios, plus a whole lot more.

Usage scenario 1: Interoperability, which includes interconnectability and portability.

Usage scenario 2: Move or copy between private and public cloud.

Usage scenario 3: Leverage common operations across clouds.

In its current release, the EMC Hybrid Cloud supports these usage scenarios between the private cloud and the vCloud Air public cloud on the VMware stack. In future releases, however, EMC plans to open it up to multiple providers and support a multi-cloud environment, much like OpenStack.

Apart from the hybrid cloud, another usage model alignment to highlight is Scale-out Storage. Isilon 7.1 supports many of the requirements listed in the usage model. For example, where ODCA calls out requirements for multi-protocol support, Isilon supports not only block and file but also HDFS. With HDFS support, a user can run Hadoop queries directly on top of data that was stored through the other file protocols. Isilon 7.1 also adds support for various security and availability requirements. So it is not surprising that the product is popular: it meets the requirements of end users as stated in the ODCA usage model.
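To illustrate the point: because Isilon exposes the same OneFS file system over NFS, SMB, and HDFS, pointing an existing Hadoop cluster at Isilon is, to a first approximation, a core-site.xml change. Here is a hedged sketch, with a placeholder hostname; check the Isilon HDFS documentation for the authoritative settings.

    <!-- core-site.xml: point Hadoop at the Isilon cluster instead of a
         dedicated NameNode. "isilon.example.com" is a placeholder;
         8020 is the customary HDFS port. -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://isilon.example.com:8020</value>
      </property>
    </configuration>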

Stay tuned for a lot more products and solutions, aligned or guided by ODCA requirements, in the near future.

Conclusion

I feel inclined to call it a wrap for now, but look out for continued references to ODCA in my future writings. It is natural that EMC, in its quest for interoperable cloud, will continue to foster a close relationship and alignment with ODCA. I wish ODCA the very best in its cause and vision, and look forward to working together to see this vision through to completion.

Nikhil Sharma

http://SolutionizeIT.com/

Twitter: @NikhilS2000

What is in the future from EMC on OpenStack


In many of my blogs, I have put an emphasis on EMC and the EMC Federation’s continued commitment to OpenStack. However, most of those blogs and talks focused on current offerings, which are primarily drivers. This time I will give some insight into EMC’s strategy for the future. As with any discussion of the future, it is hard to unravel the specifics of the plan of action, so I will talk about the broad areas of EMC’s focus. Since we have already contributed to Juno, the next release of OpenStack, I will be able to give examples and specifics citing our Juno contributions.

Cinder drivers

Drivers will continue to be our core output, providing the gateway for OpenStack users to consume EMC storage. As stated in my previous blog, all EMC business units are committed to providing a steady cadence of driver releases. In Juno, we introduced a new member of the family to the community: we released the driver for XtremIO. This comes on top of refreshes for VMAX and VNX. The VMAX driver added the previously missing ‘create volume from snapshot’ and ‘extend volume’ functionalities, along with additional functionality for storage groups, striped volumes, and FAST policies. VNX added a slew of new capabilities on top of the core functionality required by OpenStack.
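For a flavor of what the two new VMAX capabilities look like from the consumer side, here is a minimal python-cinderclient sketch (not EMC code); the endpoint, credentials, and snapshot ID are placeholders.

    # A minimal sketch of exercising the two new features via python-cinderclient.
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'secret', 'demo',
                           'http://keystone.example.com:5000/v2.0')

    # 'Create volume from snapshot': build a new 10 GB volume from a snapshot.
    vol = cinder.volumes.create(size=10, snapshot_id='SNAPSHOT-UUID',
                                name='vol-from-snap')

    # 'Extend volume': grow the same volume from 10 GB to 20 GB.
    cinder.volumes.extend(vol, 20)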

The key to delivering drivers is not just protocol conversion, but also exposing the underlying capabilities of the storage arrays. This is the reason drivers are developed by the individual storage business units at EMC, rather than by a separate centralized organization. The VNX direct driver, for example, added the security and high availability feature support mentioned above. Expect this to remain part of our driver strategy moving forward.

Advanced Data Services

Data services are arguably where EMC has the most to offer, starting with our secret sauce in Data Protection. I believe that EMC’s leadership and contributions in this area are going to be key to the adoption of OpenStack in the Enterprise. Starting in Juno, EMC’s Xing Yang proposed and led an initiative within Cinder to develop Consistency Groups: a grouping mechanism used to track interdependencies between VMs, for disaster recovery and for the ability to capture and validate application (VM) requirements. If and when the contributions are accepted, they will immediately allow us to create dependencies for snapshots. In the future, we will be able to use the mechanism for other activities like backup, restore, and possibly replication.
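Since the design is still working its way through review, any code is necessarily speculative. Assuming the proposal lands roughly as designed, consuming it might look like the sketch below; the manager and argument names are my reading of the proposal, not a shipped API, and ‘gold’ is a hypothetical volume type.

    # Speculative sketch of the proposed Consistency Group workflow.
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'secret', 'demo',
                           'http://keystone.example.com:5000/v2.0')

    # Group volumes whose snapshots must be taken together.
    cg = cinder.consistencygroups.create('gold', name='app-cg')

    # Create the application's volumes inside the group.
    data = cinder.volumes.create(size=10, volume_type='gold',
                                 consistencygroup_id=cg.id)
    logs = cinder.volumes.create(size=5, volume_type='gold',
                                 consistencygroup_id=cg.id)

    # One call snapshots the whole group, preserving write-order consistency.
    cinder.cgsnapshots.create(cg.id, name='app-cg-snap')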

Another data service that EMC, along with NetApp, has been actively working on is a file share service called Manila. Manila is multi-tenant, secure file-share-as-a-service, with NFS and CIFS protocol support today. EMC showed a demo prototype of the service, on VNX, at the last OpenStack Summit and at EMC World, and has also contributed a Manila driver for VNX. We just got the exciting news that Manila has been accepted as an incubation project for Juno, which means that it will likely become an integrated project in the K release.
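For the curious, consuming Manila looks much like consuming Cinder. Here is a rough sketch with python-manilaclient, which was derived from the Cinder client; the API may shift while the project incubates, and all credentials are placeholders.

    # A rough sketch of requesting a file share from Manila.
    from manilaclient import client

    manila = client.Client('1', 'admin', 'secret', 'demo',
                           'http://keystone.example.com:5000/v2.0')

    # Ask for a 1 GB NFS share; the scheduler picks a backend (e.g. VNX).
    share = manila.shares.create('NFS', 1, name='demo-share')

    # Grant a tenant subnet access before mounting the export.
    manila.shares.allow(share, 'ip', '10.0.0.0/24')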

Solutions

Lastly, my personal favorite is going to be the focus on solutions. We have hired Kenneth Hui, a prominent member of the OpenStack community, to help create EMC’s enterprise-grade OpenStack solutions strategy. EMC is committed to helping our customers be successful with the deployment and operation of their OpenStack clouds. Stay tuned for information on this topic in the next few months. For a start, I mentioned in an earlier blog that EMC was working on fixing Nova (compute) bugs in the block-volume-backed live migration solution. We have fixed those bugs and contributed them back to the community, and the fixes have been reviewed and accepted. Yes, you can now do live migration using block volumes, while using standard iSCSI or Fibre Channel protocols.

That was a brief glimpse of what you can expect from EMC on OpenStack in the near future. Drop me a note if you want to discuss details. Hopefully you are as excited as I am about the solutionization of OpenStack. Go stackers!

Nikhil Sharma

http://SolutionizeIT.com/

Twitter: @NikhilS2000

OpenStack Icehouse storage solutions from EMC


This is an update to the blog I published in January, which described EMC solutions for the OpenStack Havana release; this update covers Icehouse, the current release of OpenStack. It has been an exciting ride since Havana. Many proofs of concept, deployments, and solutions (including the one on live migration mentioned in my earlier blog) have arisen, but the most exciting part has been planning a robust foundation for the future, some of which we will start to see in the Juno time frame. Stay tuned for my next blog, where I’ll elaborate on that.

EMC’s commitment

Let me start with a general update on the EMC Federation. We continue to show strong commitment to OpenStack across the board. VMware jumped from the 7th largest contributor in Havana to the 4th largest in Icehouse. The contributions span the board, but the most salient are the enhancements to enterprise usage models, like Nova and Cinder support for policy-based placement of workloads and Ceilometer collecting vCenter stats. Cloud Foundry enhanced its ecosystem with IBM, HP, and Canonical support and distributions of Cloud Foundry on OpenStack. You can read my earlier blog on the inevitable progression of Cloud Foundry for details.

EMC extended support for its core products, VNX and VMAX, by contributing the drivers. We also open sourced the driver for our software-defined storage platform, ViPR, on Github. Several other platforms, namely ScaleIO, XtremIO, and Isilon, have released beta drivers and are in the middle of active proofs of concept.

Cinder plugins

Here are the details on the Cinder drivers:

SMI-S driver for VNX and VMAX:

EMC updated the SMI-S driver, which was originally introduced for Grizzly back in early 2013. The driver added support for Fibre Channel. Although EMC was one of the early advocates of Fibre Channel in Cinder (along with Brocade) and participated in its design, we had not previously contributed the driver support to OpenStack. With Icehouse, we are in the OpenStack trunk, instead of the Github releases of the past. We also added support for newer Cinder features like zoning in this release, though I have heard that the Cinder zoning features have a number of bugs that are being worked through. VMAX is missing two features in Cinder, namely ‘create volume from snapshot’ and ‘extend volume’. Both features are being added in the next release, Juno, but if you need them in Icehouse, please contact your EMC rep.

You can find the EMC SMI-S driver in the OpenStack tree at:

https://github.com/openstack/cinder/tree/stable/icehouse/cinder/volume/drivers/emc
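If you are wondering what enabling the driver looks like, here is a minimal cinder.conf sketch. The paths and class names are as I recall them from the Icehouse documentation, so treat them as indicative and verify against the docs for your release.

    # cinder.conf sketch for the Icehouse-era SMI-S driver (iSCSI variant).
    [DEFAULT]
    volume_driver = cinder.volume.drivers.emc.emc_smis_iscsi.EMCSMISISCSIDriver
    # For Fibre Channel, use emc_smis_fc.EMCSMISFCDriver instead.
    # Array, ECOM server, and pool details live in the referenced XML file.
    cinder_emc_config_file = /etc/cinder/cinder_emc_config.xml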

VNX Direct driver:


Many large-scale providers, for the sake of simplicity, prefer to make direct calls to storage arrays rather than going through the SMI-S layer. For this reason, and based on popular demand from end users, this direct driver was added in Icehouse. The direct driver contributed to OpenStack is version 2.0 and supports all the minimum features required by Cinder. Version 3.0 (released on Github), however, is much richer in functionality, with support for high availability features such as service processor toggle and target check, plus added security features. All the 3.0 features are being contributed to the Juno release.

By the way, if you are deploying VNX on OpenStack, my advice would be to use the VNX direct driver instead of SMI-S. The VNX team plans to use the direct driver for future releases and may discontinue releases of the SMI-S driver starting with Juno. The VMAX driver will continue to use SMI-S.
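For reference, the direct driver’s cinder.conf wiring points at the array’s Navisphere CLI rather than an SMI-S provider. A rough sketch follows; credentials, IPs, and the pool name are placeholders, and the option names should be verified against the driver’s README.

    # cinder.conf sketch for the VNX direct (CLI) driver.
    [DEFAULT]
    volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
    san_ip = 10.0.0.50
    san_login = sysadmin
    san_password = sysadmin
    naviseccli_path = /opt/Navisphere/bin/naviseccli
    storage_vnx_pool_name = Pool_01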

Here are the links to direct drivers.

VNX direct driver 2.0 contributed to Icehouse:

https://github.com/openstack/cinder/tree/stable/icehouse/cinder/volume/drivers/emc

VNX direct driver 3.0 for Icehouse on Github:

https://github.com/emc-openstack/vnx-direct-driver

ViPR Cinder driver: The new ViPR driver primarily adds support for ViPR 1.1 and ViPR 2.0. This includes added features like support for multi-tenancy, commodity hardware, geo-scale deployments, and IPv6.

The ViPR driver can be found at the following Github repository:

https://github.com/emcvipr/controller-openstack-cinder

That’s a wrap for now. Hopefully this gives you a summary of what’s new from EMC for the OpenStack Icehouse release. Stay tuned for coverage of our contributions to OpenStack’s upcoming release, Juno, in the next blog.

Nikhil Sharma

http://SolutionizeIT.com/

Twitter: @NikhilS2000

EMC customers can choose any cloud


This is a reprint of my blog that was published on the EMC executive ‘Reflections’ blog site.

EMC leaders have time and time again emphasized a core aspect of the EMC Federation strategy and functioning model: “(we) offer best-of-breed, integrated, technology while preserving customers’ ability to choose…” In this blog, I reflect upon how this strategy applies to giving customers the ability to choose the open source cloud OS, OpenStack.

Our customers are free to choose any Open Source Cloud OS

OpenStack is making rapid strides in both the private and public cloud markets. The earlier skepticism about hype is waning as we see the momentum translate into tangible deployments. Let’s go over some numbers if you need convincing. A recent OpenStack user survey showed 512 total deployments, 209 of them production deployments. The momentum is even more pronounced within the EMC customer base: a recent EMC pre-sales/field survey revealed that 50% of customers are running OpenStack today, and 53% of those are in production. So it is not surprising that EMC’s strategy of providing our customers with choice has resulted in OpenStack being a viable option across our federated family. Let me explain this in more detail.

Pivotal Customers Can Choose OpenStack

Pivotal offers a comprehensive set of application and data services that run on top of a PaaS (platform as a service) called Cloud Foundry®. Cloud Foundry is an open source platform that can run on any cloud infrastructure, like VMware, OpenStack, or Amazon Web Services. Even though the Pivotal distribution of Cloud Foundry runs on VMware vSphere, many prominent members of the Cloud Foundry community offer Cloud Foundry PaaS on OpenStack. Piston Cloud, which originally contributed the Cloud Foundry interface for OpenStack, now offers PaaS on Piston Cloud, which is based on OpenStack. IBM, with its BlueMix PaaS cloud, and HP, with its Cloud Application PaaS, have recently announced Cloud Foundry running on OpenStack infrastructure. Not only is it offered by many prominent vendors; you can also choose to run Pivotal applications and big data services on any of these multi-cloud platforms. Watch out for the inevitable open source progression: OpenStack to Cloud Foundry!

VMware Customers Can Choose OpenStack

Many are perplexed by the complex relationship between VMware and OpenStack. I think the relationship is largely complementary; OpenStack is an open and flexible framework to pull all the cloud IaaS components together, whereas VMware provides best-of-breed cloud components. As we see a rising number of OpenStack production deployments, there is also broad demand for the integration of the two for Enterprise-class solutions and use cases. It is for this reason that VMware is heavily investing in the integration, giving its customers the choice of an open framework with best-of-breed cloud components. VMware is now the fifth largest contributor of OpenStack code in the current Icehouse release.

Many immediately think of network virtualization as an integration point, since the VMware NSX team was one of the founders of OpenStack networking, but the integration points are far more pervasive than just networking. You can now trigger complex vCenter functions like vMotion from the OpenStack compute module, Nova. vSphere storage policies and advanced VSAN capabilities can be facilitated through the OpenStack storage module, Cinder. vCOps integration allows monitoring and troubleshooting through OpenStack. There are many more, and you can rest assured that the roadmap will continue to get richer with time.
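To make the compute integration concrete, here is a rough sketch of the nova.conf settings that point Nova’s compute driver at vCenter. The values are placeholders and the option names reflect the Icehouse era, so check the configuration reference for your release.

    # nova.conf sketch: drive a vCenter cluster through Nova.
    [DEFAULT]
    compute_driver = vmwareapi.VMwareVCDriver

    [vmware]
    host_ip = 192.0.2.10
    host_username = administrator@vsphere.local
    host_password = secret
    cluster_name = cluster-01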

EMC Customers Can Choose OpenStack

EMC’s strategy for OpenStack is twofold:

  1. OpenStack projects allow vendors to add capabilities through a “plugin” architecture. Every storage business unit at EMC is committed to providing direct plugins for OpenStack. We currently offer plugins for VNX, VMAX, and ViPR; plugins for Isilon, ScaleIO, and XtremIO are already available to customers for beta evaluation. OpenStack’s roots were in object storage over commodity hardware; however, as deployments mature, we now see it running a broad mix of production workloads. Plugins give customers the choice of a variety of EMC storage with the appropriate service levels to run their workloads.
  2. ViPR is EMC’s software-defined storage platform and hence complements OpenStack by providing rich automation features, as well as integrated management of object, block, and, later, file storage. In other words, you get integrated access to EMC storage (what you get from direct plugins) and non-EMC storage, and on top of that you should be able to get enhanced automation features like provisioning, masking/zoning, FAST and VPLEX volumes, as well as rich data services. You can check out my blog on ‘ViPR in the stack’ for more details.

Returning to my original point, EMC customers can choose any cloud. I hope to have shown you that OpenStack represents a viable choice across each of the EMC Federation’s horizontal stacks.

Introduction of OpenStack with EMC


Here is a recording of my EMC World 2014 session on OpenStack.

Short abstract of the session:

OpenStack is fast evolving as a framework for building and deploying both private and public clouds. To EMC, OpenStack represents a viable choice for cloud computing that we offer to our customers. This session covers the storage solutions available from EMC and the value proposition they offer. This session also provides a preview to enterprise class use cases EMC is working on for the future.

http://t.co/C8plg0hz4q


State of the solution from OpenStack Atlanta Summit 2014


Hello, OpenStack fans! Did you miss the summit by any chance? Don’t worry, here is my spin on it. If you don’t care for my commentary, you can always check out OpenStack.org for their coverage. As always, they cover it well with recordings of keynotes and sessions.

Adoption momentum continues

Let’s start with the highlights from the user survey. The momentum for adoption continues, although it has understandably cooled off from the October 2013 survey. Total deployments jumped from 387 in October to 506, while production deployments increased to 209. For the first time, ‘avoid vendor lock-in’ replaced ‘cost’ as the top business driver for deployments. The Nova service was the most popular (196 deployments), but Cinder (151) and Neutron (135) were close, showing that most deployments are not Nova alone but Nova in conjunction with Cinder and Neutron. I was surprised to see a relatively low number of Swift deployments (101); perhaps this number does not account for Swift-compatible deployments, just the open source module.

The keynotes were bold on the adoption momentum as well, with many notable end users on stage. The foundation’s Executive Director, Jonathan Bryce, announced the launch of two major programs geared toward aiding deployments: an OpenStack Marketplace, established with the goal of aiding solution display and selection, and Superusers, introduced for knowledge sharing and collaborative problem solving. The Icehouse theme is also largely aligned to adoption, with features focused on ease of deployment, tighter integration, and reliability.

Takeaways from the sessions

The sessions on the networking track seemed ultra-popular and were overflowing with attendees. (I don’t have any statistics to support this, just a random observation.) The user survey numbers on Neutron adoption seem to support the hypothesis: Dev/QA had 169 Neutron deployments, PoCs 156, and production 156. Watch out for some excellent solutions from VMware NSX in this area.

The move toward Enterprise-class solutions, like high availability, was a clear theme that no one could miss. Just glance through the session agenda and you will see what I mean. VMware sessions were very popular for this reason, emphasizing the message of running OpenStack on top of VMware vSphere® + NSX™ to build rich OpenStack clouds on top of existing IT infrastructure.

Another segment promoting solutions with higher service levels was the telcos. They came in numbers (AT&T, Ericsson, T-Systems…) and showed off their presence, describing use cases that require telco-grade service levels and scale. The Network Function Virtualization (NFV) use case is gaining traction, as many telco users and OEMs described it.

EMC sessions

EMC participated in three sessions:

Laying Cinder blocks: Jim Ruddy described the use of ViPR, software-defined storage, with Cinder. This brought about the new role of cloud architects defining policies, through a software-defined tool like ViPR, in a heterogeneous environment, to be consumed through the service model of OpenStack. Jim made the session hilarious through “Seinfeld” analogies. Check out Jim’s blog for the message, demo, and humor.

Storage considerations for OpenStack: A panel discussion in which Brian Gracely described the importance of integrated data types for the execution of complex use cases.

Introducing Manila: An incubation proposal for a file share service. Shamail Tahir demonstrated a prototype of the VNX driver delivering file share service; an Isilon driver was also mentioned.

Takeaways from design sessions

My attention in the design sessions was limited to storage, where the focus was, you guessed it, high availability. Xing Yang led a design proposal for adding ‘Consistency Groups’ to Cinder. The session was very well received, with broad support for the feature, and the discussion centered on the use cases. The proposal is to implement it for snapshots in the first phase, followed by volume replication and backup in the next.

A separate session proposed incremental backup, which struggled with scope and feasibility. There was also a proposal on mirroring, which has a good chance of making it into Juno.

Finally (I saved the best for last), here is my personal highlight of the summit. I was pleasantly surprised to see a broad emphasis on policy-based automation. Neutron had design sessions on group policies and policy abstractions. Swift has proposed policy-based implementations where availability can be configured, i.e. you can reduce the number of copies or implement an alternate method for availability. Then there was a proposal for an incubation project called Congress, a cross-module policy engine. Hallelujah!

Nikhil Sharma

http://SolutionizeIT.com/

Twitter: @NikhilS2000

Preview of EMC World OpenStack demo #2: block-volume-backed live migration

Tags

,

As I stated in my last blog, I am hosting a session at the upcoming EMC World on May 5th and 6th. The session gives an overview of OpenStack with EMC. Here is a sneak peek of demo #2, which I plan to showcase in the session. This one is an Enterprise-class use case: block-volume-backed live migration using KVM and OpenStack.

Use Case

The use case for live migration that I hear from OpenStack customers is tied to host maintenance. Users want to migrate the VM or instance over to a temporary host for the duration of maintenance or an upgrade of the source host; after the maintenance is complete, they migrate the VM back. Enterprises are very much used to this in the VMware world, through vMotion, where the underlying VMFS file system in vSphere makes vMotion easy and transparent to the various storage systems and protocols. KVM, by contrast, keeps instance storage ephemeral and needs external file system support for persistence, so live migration depends on the libvirt implementation. This is doable and stable using a shared file system, like NFS. The good news is that since the Havana release, OpenStack Nova supports ‘block volume backed live migration’. With this feature, one can theoretically migrate a VM without moving or copying the volume. But this involves a careful handshake between Nova, libvirt, and Cinder to detach the volume from one host and attach it to the other, and that implementation is buggy for iSCSI and Fibre Channel. We are actively working on fixing these bugs and should have a firm time frame for the fixes in another month or so.
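To put the feature in context, the migration itself is a single Nova API call. Here is a rough Python sketch using python-novaclient; credentials, host names, and the instance name are placeholders standing in for the demo environment.

    # A sketch of the maintenance workflow with python-novaclient (Juno era).
    from novaclient.v1_1 import client

    nova = client.Client('admin', 'secret', 'demo',
                         'http://keystone.example.com:5000/v2.0')

    server = nova.servers.find(name='demo-instance')

    # Move the instance to the standby host. For a volume-backed instance the
    # boot volume stays put; only memory/ephemeral state is transferred.
    nova.servers.live_migrate(server, 'host-125',
                              block_migration=False, disk_over_commit=False)

    # ...perform host maintenance, then migrate back the same way.
    nova.servers.live_migrate(server, 'host-124',
                              block_migration=False, disk_over_commit=False)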

The Demo

In spite of the bugs, we were able to pull together a concept demo in our lab through a Nova workaround. Before going into the demo, I would like to give a big shout-out to Xing Yang, Consultant Technologist in the Advanced Development team in the Office of the CTO. Thanks to Xing, we have made big strides in driving clarity in the use case and code, and were able to pull this demo together for you all. Thanks for your hard work, Xing!

Here is how the demo goes.


We first go through the motions in Horizon to create a volume from a standard CirrOS image, then launch an instance from the newly created volume. The instance is running on the source host, …124 in the demo. We then SSH into the source host, …124, do a “virsh list” to see the running VM, and then start the live migration.

We then come back to the Horizon UI to see the migration in progress. Just to make it interesting, we go to the VNX Unisphere tool and check out the volume being migrated. Interestingly, Unisphere sees our LUN associated with two storage groups: the …124 source, as expected, but also the …125 target. So while the migration is in progress, the VNX tool can see both the source and target associations to the LUN. The migration ends quickly, as only the ephemeral state is transferred; the volume does not move. After the migration, the Horizon UI shows the instance running on the …125 target host.

We now SSH into the target host, …125, and log in to the VM without any issues. By this time, the maintenance of our source host is supposedly complete. We give the live migration command again to move the VM back to the source host, …124. Use case complete!

Conclusion

Host maintenance is a valuable use case requested by many of our customers. Many stackers have had to leverage shared storage, like NFS, for this use case in the past, but this will change soon once these Nova bugs get fixed. It is just another choice of use case that EMC is helping enable. Enjoy the demo, and watch out for future updates on the bug fixes for block-backed live migration.

Nikhil Sharma

http://SolutionizeIT.com/

Twitter: @NikhilS2000

Preview of ScaleIO and OpenStack demo at EMC World

Tags

,

I am hosting a session at the upcoming EMC World on May 5th and 6th. The session gives an overview of OpenStack with EMC; it presents the EMC-wide strategy and solutions for OpenStack. I plan to show a couple of demos in the session to showcase EMC providing the choice of solutions for both next-generation scale-out workloads and Enterprise-class workloads.

Let me give you a glimpse of the first demo in this blog. It is about ScaleIO being configured and executing a scale-out workload. But before I start, I want to give due credit to the creator of this use case: Steve Thellen, Principal Software Engineer in the EMSD (VNX/VMAX) Advanced Development team. Steve put this platform together for a project to show the integration of EMC storage platforms with OpenStack for scale-out workloads. I was able to pull the demo off Steve’s setup and put a spin on it. Let’s check it out.

The demo starts with the creation of a ScaleIO volume on commodity SSD-based local storage. I then attach the volume to a compute instance, and voilà, you are ready to roll. Of course, this assumes that OpenStack, ScaleIO, and the ScaleIO drivers have already been installed; send me a note if you are looking for installation instructions. Below is the technical implementation of such a ScaleIO setup with OpenStack Nova through the ScaleIO Cinder function. ScaleIO has a driver for both the client (SDC) and remote storage (SDS), but for this setup I only use the client (host) side driver.

[Diagram: ScaleIO client (SDC) integration with OpenStack Nova and the ScaleIO Cinder driver]
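If you prefer the API view over the Horizon clicks, the create-and-attach flow looks roughly like the sketch below with the standard Python clients. Credentials are placeholders, and ‘scaleio-ssd’ is a hypothetical volume type mapped to the ScaleIO Cinder backend.

    # The same flow as the Horizon steps, sketched in Python.
    from cinderclient import client as cinder_client
    from novaclient.v1_1 import client as nova_client

    cinder = cinder_client.Client('2', 'admin', 'secret', 'demo',
                                  'http://keystone.example.com:5000/v2.0')
    nova = nova_client.Client('admin', 'secret', 'demo',
                              'http://keystone.example.com:5000/v2.0')

    # Carve an 8 GB volume out of the SSD-backed ScaleIO pool.
    vol = cinder.volumes.create(size=8, name='scaleio-vol',
                                volume_type='scaleio-ssd')

    # Attach it to a running instance as /dev/vdb, ready for the workload.
    server = nova.servers.find(name='vdbench-host')
    nova.volumes.create_server_volume(server.id, vol.id, '/dev/vdb')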

I then use the VDBench tool to mimic the configuration of a high-IO workload. Such workloads tend to be sensitive to cost ($/GB), performance (latency and aggregate IOPS), and scale-out management, making them ideal for ScaleIO. Check out my previous blog on application characterization for the detailed concepts and rationale behind such analysis.


I run two scripts: one against the ScaleIO volumes configured to utilize locally attached SSDs, and a second against remote iSCSI capacity storage to show the contrast. The script has a read percentage of 90%, a transfer size of 4k, and a random seek percentage of 100%. The test is a simple one that only uses a couple of OpenStack instances spread across multiple hosts, designed to exercise basic functionality.
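For reference, a VDBench parameter file matching that profile might look roughly like this. The device path, thread count, and run length are placeholders; the demo’s actual scripts were not published.

    * Representative VDBench parameter file for the profile described above:
    * 90% reads, 4k transfers, 100% random seeks.
    sd=sd1,lun=/dev/vdb,openflags=o_direct
    wd=wd1,sd=sd1,rdpct=90,xfersize=4k,seekpct=100
    rd=run1,wd=wd1,iorate=max,elapsed=120,interval=5,forthreads=8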

Here are the net results, as the terminal output in the Camtasia demo is not clear: [Image: VDBench output summary showing IOPS, throughput, and response times]

As you can see, ScaleIO delivers excellent results on IOPS, throughput, and response times. Not shown in the demo, but IOPS and throughput increased drastically as more nodes were added. The key takeaway from the demo, however, is more than that. The messages are, first, that you get good results when you map your workload service levels to the storage platform, and second, that you now have the choice of running ScaleIO, software-based storage, on OpenStack.

Enjoy the demo! I will review the second demo in my next blog.

Nikhil Sharma

http://SolutionizeIT.com/

Twitter: @NikhilS2000

Application characterization


Much of today’s chatter in the cloud has been about Infrastructure as a Service and, more recently, Platform as a Service. We love the two cloud models because they are dynamic and efficient solutions to the needs of application workloads. However, we can get so enamored by the response (IaaS and PaaS) that we sometimes lose sight of the original question: what service levels do the application workloads require? Steve Todd, an EMC Fellow, writes about this in his recent blog, describing it as the biggest problem VMware is addressing. I will go further than Steve and say that this is the largest issue the EMC Federation is addressing, at all the different levels of the platform. In this blog, I will elaborate on the need for application characterization on service levels and how EMC is addressing this need.

Problem statement

Enterprises and providers have many workloads in their data centers, and these workloads have different requirements of the infrastructure they consume. However, there is a lack of semantics to define and characterize an application workload. There are times when end users are guessing at what infrastructure should be provisioned. Expensive benchmarks are needed to optimize the infrastructure, yet most need to determine the infrastructure a priori. There are times when costly re-architecture has to occur to align with the required service levels. We see this specifically with OpenStack, where users start off with commodity hardware only to revert back to reliable storage with a costly reimplementation. Another facet of this problem occurs when users move to the cloud with no clear way of defining the application workload to the provider.

This problem has become more severe today than ever before. New kinds of application workloads are emerging with mobile computing (MBaaS), scale-out and big data applications, and more. The platform and infrastructure itself is going through unprecedented evolution with the advent of what IDC describes as the third platform. Storage, for example, can be cached in flash attached to PCIe, or ephemeral at the compute, or at the hot edge of shared storage with all-flash arrays, or on hybrid or scale-out arrays, or at the cold edge of the array, or at the glacier edge in the cloud…

We see this as an NxN problem: if there are N application workloads in a data center and N types of infrastructure to provision them on, an IT administrator may have NxN possible combinations to evaluate for provisioning. The variable N is increasing every day, leading to an unsolvable number of NxN combinations. There is no common semantic to describe the problem, let alone solve it.

Service Level Objectives

The path EMC has chosen to resolve the above NxN issue is to characterize and manage applications with Service Level Objectives. Each workload can be assessed on the dials of the service level objectives, such as IOPS, latency, QoS/RAS, cost ($/GB, $/IOPS), and elasticity. Now, rather than determining and optimizing the exact infrastructure, the end user focuses only on the ratings on the service level dials, bringing the NxN problem down to a manageable number of dials. Solution implementation also becomes easier, as there are now discrete design targets to shoot for. Let us take the spider chart visualization of a few workloads to illustrate the point. The examples are derived from the EMC Product Positioning Guide and are meant to be representative, not exact.

The ERP OLTP workload tends to have critical requirements for performance and availability. For this reason, the service levels for IOPS, latency, QoS, and RAS are rated as Platinum+ (4-5 on the spider chart). Integration of the application and database layers into the overall IT stack is gaining momentum and is deemed critical for this case. Cost is not a concern, hence the service levels for $/GB and $/IOPS are rated as Silver (2 on the spider chart).

I will take Big Data Hadoop as the next example, to contrast a workload representing the newer 3rd platform. Typically, Hadoop workloads place high value on cost ($/GB) and elasticity (scale out/scale down) and lower priority on performance (IOPS) and availability (QoS, RAS). Of course, this is just an approximate depiction; I have seen some Hadoop implementations requiring higher performance and availability. We now have two distinct spider charts, obviously leading to two different storage infrastructures as the closest match. This was a simple example to prove a point; in reality, you may have thousands of workloads, making such manual selection virtually impossible.
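To make the matching step concrete, here is a toy Python sketch: rate workloads and platforms on the same service-level dials (1-5) and pick the nearest platform. The ratings echo the two examples above and are illustrative, not official EMC numbers.

    # Toy illustration of service-level matching; ratings are illustrative.
    WORKLOADS = {
        'ERP OLTP': {'IOPS': 5, 'Latency': 5, 'RAS': 5, '$/GB': 2, 'Elasticity': 2},
        'Hadoop':   {'IOPS': 2, 'Latency': 2, 'RAS': 2, '$/GB': 5, 'Elasticity': 5},
    }

    PLATFORMS = {
        'all-flash array': {'IOPS': 5, 'Latency': 5, 'RAS': 5, '$/GB': 2, 'Elasticity': 2},
        'scale-out array': {'IOPS': 3, 'Latency': 3, 'RAS': 3, '$/GB': 4, 'Elasticity': 5},
    }

    def best_match(workload):
        """Return the platform whose dial ratings are closest to the workload's."""
        def distance(platform):
            return sum((workload[d] - platform[d]) ** 2 for d in workload)
        return min(PLATFORMS, key=lambda name: distance(PLATFORMS[name]))

    for name, dials in WORKLOADS.items():
        print(name, '->', best_match(dials))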

How will the solution work?

Management by service level objectives is elegant, but unless it can be automated, it is not a solution. We need an abstraction layer and an open interface for automation. Software-defined storage, with ViPR, is a perfect fit to be the arbiter between the service levels required by the workloads and the service levels provisioned by the storage. ViPR already provides the capability of policy-based provisioning. In the future, it will incorporate an interface for service level objectives and will provision based on those objectives from a virtual pool of heterogeneous arrays. If you are wondering how you can ease your infrastructure decision making before ViPR automation comes through, you can organize your plans based on the recommendations in the EMC Product Positioning Guide at http://www.emc.com/ppg/index.htm. EMC solutions aside, coming up with an industry-accepted definition of service levels is also critical for end users to fairly assess the various cloud services offered by the industry. To that end, the Open Data Center Alliance, a global association of enterprises and providers, has made recommendations for standard definitions of service attributes for Infrastructure as a Service. The alliance definitely has the broad representation and muscle to make such an adoption successful, but only time will tell.

Conclusion

Much has been said about the EMC Federation’s cloud offerings, from storage (EMC II) to infrastructure (VMware) to platform (Pivotal). However, the key to its success lies in the fundamentals: understanding the workload and provisioning accordingly. You will hear more announcements along these lines in the months and years to come.

Nikhil Sharma

http://SolutionizeIT.com/

Twitter: @NikhilS2000

The inevitable open source progression: OpenStack to Cloud Foundry


In recent posts, I have described the EMC Federation’s strategy of providing our customers with a choice of solutions through horizontal stacks. EMC storage platforms work equally well with VMware, OpenStack, and other clouds; VMware works well on EMC storage and other platforms.

Pivotal has taken this mantra of the horizontal stack further by offering a comprehensive, multi-cloud Enterprise PaaS (Platform as a Service) comprising a set of application and data services that run on top of Pivotal CF™, the leading enterprise distribution of the Cloud Foundry® platform. Cloud Foundry is an open source platform and has been in the news lately with the announcement from Pivotal of a move to an independent governance model, similar to what OpenStack did back in 2012. In this blog, I will introduce the PaaS horizontal stack, Cloud Foundry, and how it offers a choice of platform on an OpenStack cloud.

PaaS is hot

In case you have missed it, PaaS has been on a tear lately in the press. Let me highlight a couple of salient data points from a recent cloud industry survey conducted by GigaOM Research and the VC firm North Bridge.

[Chart: GigaOM Research / North Bridge survey highlights on IaaS and PaaS adoption]

In my mind, there is a correlation between the two trends above. IT efficiency and scale are the main drivers behind the current popularity of Infrastructure as a Service, or IaaS. However, as cloud infrastructure starts to mature, the focus shifts to the business, which demands agility and innovation through the cloud. Hence, PaaS is emerging to make a major impact on both private and public cloud infrastructures, helping companies decrease time-to-market significantly for product development and create the applications that people use every day.

The progression from IaaS to PaaS seems only natural, as the success of a platform requires a flexible and scalable infrastructure. Thus, it is no surprise that the OpenStack community has lately been humming with discussions on PaaS. I am not going to get into the debates in this post; if you are interested, I would suggest Jesse Proudman’s post for a good summary. One thing is clear from the intensity of these debates: the industry is gearing up to make the progression from IaaS to PaaS.

Hottest PaaS in town – Cloud Foundry

Ready to seize and shape the trend is the darling of open source PaaS: Cloud Foundry. Cloud Foundry is showing many of the same patterns we saw in the early momentum of OpenStack. Take a look.

[Chart: Cloud Foundry growth indicators]

I would argue that on a couple of the points above, Cloud Foundry is seeing better success than OpenStack did early on. After the initial OpenStack deployments at Rackspace and NASA, there was a long lag before the next wave of noteworthy deployments. Cloud Foundry can already claim some very large-scale production deployments in both private and public clouds. It can also claim proven integration with both VMware and OpenStack infrastructure. This is a big achievement, knowing that the future is multi-cloud. In fact, I am ready to declare the progression of open source software: if OpenStack is the new Linux of open source, Cloud Foundry will be the next OpenStack of open source.

[Diagram: the open source progression, Linux to OpenStack to Cloud Foundry]

Cloud Foundry architecture and OpenStack integration

[Diagram: Cloud Foundry architecture]

The picture shows the architecture, modeled on large-scale distributed services. I will avoid the details of the architecture, which can be found on the Pivotal site, and will instead focus on the key integration points with OpenStack. Cloud Foundry deployments are done through a tool called BOSH. Stackers can think of BOSH as the equivalent of Puppet, Chef, or Juju, with added features for runtime management. The core BOSH engine (and Cloud Foundry as a whole) is abstracted from any particular IaaS through a Cloud Provider Interface, or CPI. IaaS interfaces are implemented as plugins to BOSH, and plugins are available for OpenStack, VMware vSphere, and Amazon Web Services. Plugin compatibility is maintained by following and enforcing the 14 imperatives defined in the CPI interface.
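To give a feel for those imperatives: the real CPIs are implemented in Ruby, but the shape of the contract looks roughly like the Python-flavored sketch below. The method names follow the published CPI calls; treat the list and signatures as indicative rather than exhaustive.

    # Python-flavored sketch of the BOSH CPI contract (the real CPIs are Ruby).
    import abc

    class CloudProviderInterface(abc.ABC):
        """What an IaaS plugin (OpenStack, vSphere, AWS...) implements for BOSH."""

        @abc.abstractmethod
        def create_vm(self, agent_id, stemcell_id, cloud_properties, networks): ...

        @abc.abstractmethod
        def delete_vm(self, vm_id): ...

        @abc.abstractmethod
        def create_disk(self, size_mb, cloud_properties): ...

        @abc.abstractmethod
        def attach_disk(self, vm_id, disk_id): ...

        @abc.abstractmethod
        def detach_disk(self, vm_id, disk_id): ...

        # ...plus stemcell management, snapshots, reboot, and network
        # reconfiguration, rounding out the imperatives mentioned above.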

Services beyond the infrastructure can be plugged into Cloud Foundry, and thereby into your application, through a Service Broker. Service Brokers provide an interface to all native and external third-party services. Service processes run on service nodes or with external as-a-service providers (for example, database or email), so advanced OpenStack modules, like the Trove database service, can be integrated into Cloud Foundry through a Service Broker. These services are presented and offered to applications via a Service Catalog.
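As a sketch of the idea, a broker is just a small REST service whose catalog endpoint Cloud Foundry reads to populate the Service Catalog. Here is a minimal, hypothetical Python example; the service advertised is a stand-in for something like an OpenStack Trove-backed database, and the IDs are placeholders.

    # A minimal, hypothetical service broker exposing the v2 catalog endpoint.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route('/v2/catalog')
    def catalog():
        # Cloud Foundry calls this endpoint to discover what the broker offers.
        return jsonify(services=[{
            'id': 'example-service-id',
            'name': 'trove-db',
            'description': 'Database-as-a-service backed by OpenStack Trove',
            'bindable': True,
            'plans': [{'id': 'example-plan-id', 'name': 'small',
                       'description': '1 GB database instance'}],
        }])

    if __name__ == '__main__':
        app.run(port=8080)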

I will save more detailed explanations for later blogs, but hopefully this gives stackers an idea of the interface points for OpenStack.

Cloud Foundry deployment solutions

Here are three paths to Cloud Foundry solutions; the options are very similar to those for OpenStack solutions.

  1. Open source software: You can download Cloud Foundry, along with its relevant interfaces, plugins, and services, directly from Github, and ‘do it yourself’ (DIY). You can also download the plugin for OpenStack from Github. By the way, an interesting fact for stackers: parts of the Cloud Foundry community infrastructure are run by Piston Cloud, through an integrated VMware + OpenStack solution.
  2. Cloud Foundry distributions: Pivotal offers supported Cloud Foundry software such as Pivotal CF, the leading Enterprise distribution. Canonical has announced that the next major release of Ubuntu will provide an integrated OpenStack-hosted PaaS from Cloud Foundry.
  3. Turnkey products or platforms powered by Cloud Foundry + OpenStack: These are abundant now. Examples of systems offered are Piston Cloud, the IBM BlueMix PaaS cloud, and HP Cloud Application PaaS, plus a long list of managed providers like IBM SoftLayer and Blue Box managed hosting, along with Anynines in Europe.

Wrap

And that’s a wrap! Look for deep dives on the intersections and usage of Cloud Foundry and OpenStack in the future. I will leave you with a parting thought on the progression of open source software: Cloud Foundry, the next OpenStack of the cloud platform! Noodle over it and send me your opinions.

Nikhil Sharma

http://SolutionizeIT.com/

Twitter: @NikhilS2000
