
Originally published December 2008 [ Publisher Link ]

Cloud computing standards: Deploying and scaling services without lock-in


The software as a service approach already has a series of bodies dedicated to ensuring services are interoperable with one another. There is the World Wide Web Consortium (W3C), which oversees standards like XML and WSDL, as well as OASIS, which sets the course for the WS-* standards. Initiatives like these have mitigated risk for both customers and vendors and encouraged the software as a service paradigm, since applications are not locked into a particular technology. However, until recently one area related to service applications remained unaddressed: deploying and scaling the services themselves.
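
To give a sense of what that interoperability looks like in practice, here is a minimal WSDL fragment describing a service interface in vendor-neutral terms. The service name and operation are hypothetical, but any WS-* compliant toolkit could consume a contract of this shape:

    <!-- Hypothetical WSDL 1.1 contract for a quote lookup service.
         It describes messages and operations, not the host running them. -->
    <definitions name="QuoteService"
        targetNamespace="http://example.com/quotes"
        xmlns="http://schemas.xmlsoap.org/wsdl/"
        xmlns:tns="http://example.com/quotes"
        xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <message name="GetQuoteRequest">
        <part name="symbol" type="xsd:string"/>
      </message>
      <message name="GetQuoteResponse">
        <part name="price" type="xsd:float"/>
      </message>
      <portType name="QuotePortType">
        <operation name="GetQuote">
          <input message="tns:GetQuoteRequest"/>
          <output message="tns:GetQuoteResponse"/>
        </operation>
      </portType>
    </definitions>

Because the contract says nothing about the underlying platform, the same service can be re-implemented or re-hosted without breaking its consumers.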

Once the hurdle of enabling software as a service is crossed, application interoperability becomes a non-issue. But what happens when a software service can no longer handle demand with its initial hardware provisions? That question inevitably leads to data center infrastructure, which is to say, hosting providers.

Even for non-service designs, deploying and scaling applications beyond their initial stage often entails a mix of hardware and software technology: everything from virtualized operating systems and clustered middleware products to load balancers and custom application modifications, all to accommodate increasing demand.
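
As a simplified illustration of the software side of that mix, a front-end load balancer such as Apache's mod_proxy_balancer can spread requests across identical application servers. The host names in this sketch are made up:

    # Hypothetical Apache 2.2 configuration: requests to /service are
    # balanced across three identical back-end application servers.
    <Proxy balancer://appcluster>
        BalancerMember http://app1.internal:8080
        BalancerMember http://app2.internal:8080
        BalancerMember http://app3.internal:8080
    </Proxy>
    ProxyPass /service balancer://appcluster
    ProxyPassReverse /service balancer://appcluster

Adding capacity then means standing up another back end and adding one line, but someone still has to provision, configure and pay for that server up front.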

In the software as a service model, rolling out this type of infrastructure may be prohibitive for all but the biggest organizations. But providers have emerged that allow even the smallest organizations to expand application capacity on an as-needed basis, under the utility model of 'pay as you use'. These include Amazon's EC2 service and Google's App Engine, as well as specialized software products by companies like 3Tera, RightScale and Elastra, to name a few.
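
With Amazon's EC2, for instance, expanding capacity amounts to launching additional instances of a machine image through the command-line API tools. The image ID and key pair below are placeholders:

    # Launch two small instances from an existing machine image,
    # then list the running instances. IDs here are placeholders.
    ec2-run-instances ami-12345678 -n 2 -t m1.small -k my-keypair
    ec2-describe-instances

The instances start billing by the hour when launched and stop when terminated, which is precisely the utility model described above.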

And herein lies the importance of standardization in the areas of deployment and scalability for cloud computing. A software service may benefit from all the standards developed to ensure application interoperability, but as soon as it reaches for deployment and scalability features, provider lock-in occurs.

This lock-in is currently unavoidable, since the 'pay as you use' model requires a mix of hardware and software elements operating at a different level than a standard operating system or a standard service application. The alternative 'flat fee' model, which is still the most prevalent, is standardized in the sense that hosting providers offer the same types of operating systems and hardware, leaving customers with plenty of choices for parking their software services. That is not the case with 'pay as you use' providers.

One of the first initiatives to tackle standards in this area was the Open Virtual Machine Format (OVF) specification, submitted to the Distributed Management Task Force (DMTF) in September 2007 and backed by companies like Dell, HP, IBM, Microsoft, VMware and XenSource. The standard allows developers to package pre-configured applications and easily replicate them, leading to scalable solutions, without the threat of depending on a proprietary hardware and software architecture that might be supported by only a few hosting providers.
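
In rough terms, an OVF package pairs one or more disk images with an XML descriptor stating what the appliance contains and what hardware it expects. The sketch below is heavily simplified, and element details vary across versions of the specification, but it conveys the general shape:

    <!-- Simplified OVF-style descriptor: one virtual system and the
         disk image it ships with. Not a complete, valid document. -->
    <Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
              xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
      <References>
        <File ovf:id="file1" ovf:href="appserver-disk1.vmdk"/>
      </References>
      <DiskSection>
        <Info>Virtual disk used by the appliance</Info>
        <Disk ovf:diskId="vmdisk1" ovf:fileRef="file1"
              ovf:capacity="8589934592"/>
      </DiskSection>
      <VirtualSystem ovf:id="appserver">
        <Info>A pre-configured application server</Info>
        <VirtualHardwareSection>
          <Info>One virtual CPU, 512 MB of memory</Info>
        </VirtualHardwareSection>
      </VirtualSystem>
    </Envelope>

Because the descriptor is vendor-neutral, any hosting provider that understands OVF can import the same appliance and run as many copies of it as demand requires.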

The concept of a virtual machine is deeply rooted in virtualization technology, whereby multiple operating systems can run on the same server; many of the 'pay as you use' providers use a similar concept in their architectures. Initiatives related to OVF include projects like Kensho, a set of open-source tools capable of exporting and importing virtual appliance instances based on this standard.

A more holistic approach to cloud computing standards is Cloudware, an initiative undertaken by 3Tera. Unlike OVF, which concentrates on virtual machines alone, Cloudware focuses on streamlining concerns like database integration and replication into a cloud environment.

Though Cloudware is still in its initial phases, it has garnered attention among various 'pay as you use' providers, and with good reason. A look at some of these providers sheds light on the fragmented approach each one takes. For example, Amazon's EC2 service uses the concept of an 'Amazon Machine Image' (AMI), Google's App Engine enforces hard CPU and data quotas, and the products made by 3Tera, RightScale and Elastra likewise have marked differences in the way they allow customers to deploy and scale applications.
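
The fragmentation is easy to see in the deployment artifacts themselves. An App Engine application, for example, is described by an app.yaml file in Google's own format, which no other provider can consume, just as an AMI is meaningful only inside EC2. The application name below is a placeholder:

    # Hypothetical app.yaml for a Python application on Google App Engine.
    application: my-service
    version: 1
    runtime: python
    api_version: 1

    handlers:
    - url: /.*
      script: main.py

Moving this application anywhere else means rewriting its deployment description, and often parts of the application, from scratch.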

Though none of these issues call into question the effectiveness of each provider or solution, they do raise the specter of vendor lock-in. If an application is designed around any of the current 'pay as you use' cloud computing providers, you have very little leeway for changing. Since each provider's hardware and software architecture is different, your application loses the ability to respond to changes in service levels and price; it is not as simple as switching to one of the many 'flat fee' providers that offer the same operating systems.

So if your service applications stand to gain from the deployment and scalability features of these 'pay as you use' cloud computing providers, take a close look at where each provider stands in terms of standards. Otherwise, your application and your organization will rely on a single provider, with few if any alternatives in the face of poor service or price increases.

