The way to the holy IaaS grail

In 2014, cloud computing finally arrived in Germany. A recent study by Crisp Research among 716 IT decision makers paints a representative picture of cloud adoption in the DACH market. For 19 percent of the respondents, cloud computing is already a regular part of the IT agenda and of production environments. 56 percent of the companies are planning and implementing cloud services and technologies and using them for their first projects and workloads. Crisp Research forecasts that German companies will spend around 10.9 billion EUR on cloud services, technologies as well as integration and consulting in 2015. Accordingly, more and more companies are evaluating the use of infrastructure-as-a-service (IaaS). For German IT decision makers this raises the question of which selection criteria to consider. Which deployment model is the right one? Is a US provider inherently insecure? Is a German provider mandatory? Which options remain after Snowden and co.?

Capacity Planning, Local or Global, Self-Service or Managed?

Before using IaaS there is the fundamental question of how and for what purpose the cloud infrastructure will be used. Capacity planning plays a decisive role in this context. In most cases companies know their applications and workloads and can therefore estimate how scalable the infrastructure must be in terms of performance and availability; a back-of-envelope sizing sketch follows the question list below. However, scalability must also be considered from a geographic point of view. If the company focuses mainly on the German or DACH market, a local provider with a data center in Germany is enough to serve its customers. If the company wants to expand into global markets in the mid term, a provider with a global footprint that also operates data centers in the target markets is recommended. The first questions are:

  • What is the purpose of using IaaS?
  • Which capacities are necessary for the workloads?
  • What kind of reach is required? Local or global?
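
To make the capacity question concrete, a back-of-envelope calculation is often a sufficient starting point. The following Python sketch is purely illustrative; every number in it is a hypothetical assumption that has to be replaced with measurements of the own workloads.

    import math

    # All figures are hypothetical assumptions, not measurements.
    PEAK_REQUESTS_PER_SEC = 2000   # expected peak load
    CPU_MS_PER_REQUEST = 30        # estimated CPU time per request
    CORES_PER_VM = 4               # cores of the chosen instance type
    TARGET_UTILIZATION = 0.6       # headroom so peaks do not saturate the VMs

    # CPU core-seconds needed per wall-clock second at peak load.
    busy_cores = PEAK_REQUESTS_PER_SEC * (CPU_MS_PER_REQUEST / 1000.0)

    # Usable core-seconds one VM delivers at the target utilization.
    usable_cores_per_vm = CORES_PER_VM * TARGET_UTILIZATION

    vms_needed = math.ceil(busy_cores / usable_cores_per_vm)
    print(f"{busy_cores:.0f} busy cores at peak -> {vms_needed} VMs of "
          f"{CORES_PER_VM} cores at {TARGET_UTILIZATION:.0%} target utilization")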

When scalability is discussed, the term “hyperscaler” often comes up. Hyperscalers are providers whose cloud infrastructure can, in theory, scale endlessly; Amazon Web Services, Microsoft Azure and Google belong to this group. The word “endlessly” should be treated with caution, though. Even the big players eventually hit the wall: in the end, the virtual infrastructure is based on physical systems, and hardware does not scale infinitely.

Companies with a global strategy to grow into their target markets in the mid term should concentrate on an internationally operating provider. Besides the above-named Amazon AWS, Google and Microsoft, providers like HP, IBM (SoftLayer) or Rackspace, which operate public or managed cloud offerings, also come into play. Those who bet on a “global scaler” from the beginning gain an advantage later on: the virtual infrastructure, and the applications and workloads running on top of it, can be deployed more easily to new regions, accelerating time to market.

Cloud connectivity (low latency, high throughput and availability) should not be underestimated either. Is it enough that the provider and its data centers serve only the German market, or is there a worldwide distributed infrastructure of data centers that are linked to each other? A first impression of connectivity can be gained with a simple latency measurement, as the sketch below shows.
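
A simple way to get a first latency figure is to time the TCP handshake against candidate endpoints, as in this Python sketch. The first hostname is a real AWS endpoint in Frankfurt; the second is a hypothetical placeholder for a local provider.

    import socket
    import time

    ENDPOINTS = [
        "ec2.eu-central-1.amazonaws.com",  # AWS region Frankfurt
        "cloud.example-provider.de",       # hypothetical local provider
    ]

    for host in ENDPOINTS:
        start = time.perf_counter()
        try:
            # Time only the TCP connection setup on the HTTPS port.
            with socket.create_connection((host, 443), timeout=5):
                elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"{host}: {elapsed_ms:.1f} ms TCP handshake")
        except OSError as exc:
            print(f"{host}: unreachable ({exc})")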

Two more parameters are the cloud model and the related type of service. Furthermore, hybrid and multi-cloud scenarios should be considered. The next questions are:

  • Which cloud model should be considered?
  • Self-service or managed service?
  • Hybrid and multi-cloud?

Current offerings distinguish between public, hosted and managed private clouds. Public clouds are built on a shared infrastructure and are mainly used by service providers. Customers share the same physical infrastructure and are logically separated by a virtualized security infrastructure. Web applications are an ideal use case for public clouds, since standardized infrastructure and services are sufficient for them. A hosted cloud model transfers the ideas of the public cloud into a hosted version administered by a local provider. All customers are located on the same physical infrastructure and are virtually separated from each other; in most cases the provider operates a local data center. A managed private cloud is an advanced version of a hosted cloud. It is especially attractive to companies that want to avoid the public cloud model (shared infrastructure, multi-tenancy) but have neither the financial resources nor the knowledge to run a cloud in their own IT infrastructure. In this case, the provider operates an exclusive, dedicated area on its physical infrastructure for the customer. The customer can use the managed private cloud exactly like a public cloud, but on a non-shared infrastructure located in the provider's data center. In addition, the provider offers consultancy services to help the customer transfer its applications and systems into the cloud or develop them from scratch.

The hyperscalers and global scalers named above are mainly public cloud providers. With a self-service model, the customers are responsible for building and operating the virtual infrastructure as well as the applications running on top of it. Cloud players like Amazon AWS, Microsoft Azure and Google GCE in particular offer their infrastructure services as a public cloud with self-service. Partner networks help customers build and run the virtual infrastructure, applications and workloads. Public cloud IaaS offerings with self-service are very limited in Germany; the only providers are ProfitBricks and JiffyBox by domainFactory, and JiffyBox focuses on web hosting rather than enterprise workloads. CloudSigma from Switzerland should be mentioned as a native provider in the DACH region. This German reality is also reflected in the providers' strategies: the very first German public IaaS provider, ScaleUp Technologies (2009), completely renewed its business model by focusing on managed hosting plus consultancy services.

Consultancy is the keyword in Germany, and it is the biggest differentiator from the international markets. German companies prefer hosted and managed cloud environments including extensive service and value-added services. Providers like T-Systems, Dimension Data, Cancom, Pironet NDH or Claranet are present in this area. HP has also recognized this trend and offers consultancy services in addition to its OpenStack-based HP Helion Cloud offering.

Hybrid and multi-cloud environments should not be neglected in the future. A hybrid cloud connects a private cloud with the resources of a public cloud: a company operates its own cloud and uses the scalability and economies of scale of a public cloud provider to obtain further resources like compute, storage or other services on demand. A multi-cloud concept extends the hybrid cloud idea by the number of clouds involved. More precisely, it is about n clouds being connected, integrated or used in some form. For example, cloud infrastructures are connected so that applications can use several infrastructures or services in parallel, depending on capacity utilization or current prices. Even the distributed or parallel storage of data is possible in order to ensure its availability. A company does not need to connect every cloud it uses to run a multi-cloud scenario; if more than two SaaS applications are part of the cloud environment, it is basically already a multi-cloud setup. The sketch below shows what addressing several clouds through one interface can look like.
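
Using several clouds through one interface can be sketched with Apache Libcloud, a library that puts one API in front of many IaaS providers. A minimal sketch; credentials, region and project names are placeholders, and the exact driver arguments vary between Libcloud versions.

    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    # One driver per cloud, obtained from the same factory function.
    ec2 = get_driver(Provider.EC2)("AWS_KEY", "AWS_SECRET",
                                   region="eu-central-1")
    gce = get_driver(Provider.GCE)("sa@my-project.iam.gserviceaccount.com",
                                   "key.json", project="my-project")

    # The identical call works against both providers.
    for driver in (ec2, gce):
        for node in driver.list_nodes():
            print(driver.name, node.name, node.state)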

At the application level, Amazon AWS does not offer extensive hybrid cloud functionality at present but is expanding it continuously. Google does not offer any hybrid cloud capabilities. Thanks to their public and private cloud solutions, Microsoft and HP are able to offer hybrid cloud scenarios on a global scale. In addition, Microsoft has the Cloud OS Partner Network, which enables companies to build Microsoft-based hybrid clouds together with a hosting provider. As a German provider, T-Systems has the capabilities to build hybrid clouds at a local level as well as on a global scale. Local providers like Pironet NDH offer hybrid capabilities on German ground.

Legends: Data Privacy and Data Security

Since Edward Snowden and the NSA scandal, many legends have been created around data privacy and data security. Providers, especially German ones, advertise higher security and protection against espionage and other attacks when the data is stored in a German data center. The confusion starts with terminology: when it comes to security, two different terms are frequently mixed up, data security and data privacy.

Data security means the implementation of all technical and organizational procedures needed to ensure the confidentiality, availability and integrity of all IT systems. Public cloud providers offer far better security than a small business is able to achieve on its own. This is due to the investments cloud providers make to build and maintain their cloud infrastructures, for which they spend billions of US dollars every year. In addition, they employ staff with the right mix of skills and have created appropriate organizational structures. Only a few companies outside the IT industry are able to achieve the same level of IT security.

Data privacy is about the protection of personal rights and privacy during data processing. This topic causes the biggest headaches for most companies, because the legislator leaves little leeway: a customer has to audit the cloud provider for compliance with the applicable federal data protection act. Here it is advisable to rely on the expert report of a public auditor, since it would be time- and resource-consuming for a public cloud provider to be audited by each of its customers. Data privacy is a very important topic; after all, it concerns sensitive data. In essence, however, it is a legal matter whose requirements must be enforced through data security procedures.

A German data center as protection against the espionage of friendly countries is and will remain a myth. Where there is a will, there is a way: when an attacker wants the data, it is only a question of the criminal energy he is willing to expend and the funds he is able to invest. If the technical hurdles are too high, there is still the human factor as an option – and a human is generally “purchasable”.

However, US cloud players have recognized the concerns of German companies and have announced or started offering their services from German data centers. These include Salesforce (in partnership with T-Systems), VMware, Oracle and Amazon Web Services. Nevertheless, a German data center has nothing to do with higher data security. It merely fulfills:

  • The technical requirements of cloud connectivity (low latency, high throughput and availability).
  • The regulatory requirements of the German data privacy level.

Technical Challenges

In the general technical assessment of an IaaS provider, the following characteristics should be considered:

  • Scale-up or scale-out infrastructure
  • Container support for better portability
  • OpenStack compatibility for hybrid and multi-cloud scenarios

Scalability is the ability to increase the overall performance of a system by adding resources, either complete computing units or more granular units like CPU or RAM. With this approach the system performance can grow linearly with increasing demand, so that unexpected load peaks are absorbed and the system does not break down. Two styles are distinguished: scale-up and scale-out. Scale-out (horizontal scalability) increases system performance by adding complete compute units (virtual machines) to the overall system. Scale-up (vertical scalability), in contrast, increases system performance by adding more granular resources, such as storage, CPU or RAM, to an existing system. A closer look at the top cloud applications shows that they are mainly developed by startups, are uncritical workloads, or are developments from scratch. Attention should be paid to the scale-out concept, which makes it complicated for enterprises to move their existing applications and systems into the cloud. At the end of the day, the customer has to develop everything from scratch, because a system that was not built as a distributed system will not run as it should on a distributed scale-out cloud infrastructure. The sketch below contrasts the two scaling styles at the level of a typical IaaS API.
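
The difference becomes visible directly at the IaaS API. The following sketch uses AWS EC2 via boto3 as an example; all IDs and instance types are placeholders, and comparable calls exist at other providers.

    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    # Scale-out (horizontal): add two more identical compute units.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder image
        InstanceType="m4.large",
        MinCount=2, MaxCount=2,
    )

    # Scale-up (vertical): grow a single unit. EC2 requires the
    # instance to be stopped before its type can be changed.
    instance = "i-0123456789abcdef0"       # placeholder instance
    ec2.stop_instances(InstanceIds=[instance])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance])
    ec2.modify_instance_attribute(InstanceId=instance,
                                  InstanceType={"Value": "m4.2xlarge"})
    ec2.start_instances(InstanceIds=[instance])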

IT decision makers should keep in mind that, in the future, their IT architects will detach applications from the underlying infrastructure in order to move applications and workloads across different providers without borders. Container technologies like Docker make this possible. From the IT decision maker's point of view, selecting a provider that supports Docker, for example, is therefore a strategic tool for optimizing modern application deployments. Docker helps ensure the portability of an application, which increases availability and decreases the overall risk; the sketch below shows the idea.
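
The idea in a nutshell: an image is built once and then runs unchanged on every host with a Docker engine, no matter which provider operates the underlying VM. A minimal sketch with the Docker SDK for Python; image, container name and port mapping are examples.

    import docker

    client = docker.from_env()  # connects to the local Docker daemon

    # Pull a public image and start it. Running the same two calls on a
    # VM at any other provider yields an identical runtime environment.
    client.images.pull("nginx:alpine")
    container = client.containers.run(
        "nginx:alpine",
        detach=True,
        ports={"80/tcp": 8080},  # map container port 80 to host port 8080
        name="portable-web",
    )
    print(container.name, container.status)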

Hybrid and multi-cloud scenarios are not only a trend but reflect reality. Cloud providers should act in the interest of their customers and, instead of relying on proprietary technology alone, also bet on open source technologies and de-facto standards like OpenStack. In this way they enable interoperability between cloud service providers and create the conditions for a comprehensive ecosystem in which users get better comparability as well as the means to build and manage truly multi-cloud environments. This is the groundwork that empowers IT buyers to benefit from the strengths of individual providers and the best offerings on the market. Open approaches like OpenStack foster IT buyers' future ability to act across provider and data center borders, which makes OpenStack an important cloud-sourcing driver. The sketch below illustrates how little provider-specific code such an open API requires.
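
Because every OpenStack cloud exposes the same APIs, the same client code runs against any compatible provider. A minimal sketch with the official openstacksdk; the cloud name “my-cloud” refers to an entry in a local clouds.yaml file and is a placeholder.

    import openstack

    # Credentials and endpoint come from the "my-cloud" entry in clouds.yaml.
    conn = openstack.connect(cloud="my-cloud")

    # List compute instances; the call is identical for every
    # OpenStack-based provider, which keeps the code portable.
    for server in conn.compute.servers():
        print(server.name, server.status)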

Each way is an individual path

Depending on the requirements, the way to the holy IaaS grail can become very rocky. In particular, enterprise workloads are more difficult to handle than novel web applications. Regardless of this, it must be considered that applications running on IaaS may have to be developed from scratch. Whether this applies depends on the particular provider, but in most cases it is necessary in order to use the provider's specific features. The following points help in mastering the individual path:

  • Know and understand your own applications and workloads
  • Perform a data classification
  • Don’t confuse data privacy with data security
  • Evaluate the cloud model: self-service or managed service
  • Check hybrid and multi-cloud scenarios
  • Estimate the required local and global reach
  • Don’t underestimate cloud connectivity
  • Evaluate container technologies for technological independence of applications
  • Consider OpenStack compatibility

By Rene Buest

Rene Buest is a Gartner analyst covering Infrastructure Services & Digital Operations. Prior to that he was Director of Technology Research at Arago, Senior Analyst and Cloud Practice Lead at Crisp Research, Principal Analyst at New Age Disruption and a member of the worldwide Gigaom Research analyst network. Rene is considered the top cloud computing analyst in Germany and one of the top analysts worldwide in this area. In addition, he is one of the world's top cloud computing influencers and belongs to the top 100 cloud computing experts on Twitter and Google+. Since the mid-90s he has focused on the strategic use of information technology in businesses, the impact of IT on our society and disruptive technologies.

Rene Buest is the author of numerous professional technology articles. He regularly writes for well-known IT publications like Computerwoche, CIO Magazin, LANline and Silicon.de, and is cited in German and international media – including the New York Times, Forbes Magazin, Handelsblatt, Frankfurter Allgemeine Zeitung, Wirtschaftswoche, Computerwoche, CIO, Manager Magazin and Harvard Business Manager. Furthermore, Rene Buest is a speaker at and participant in expert panels. He is the founder of CloudUser.de and writes about cloud computing, IT infrastructure, technologies, management and strategies. He holds a diploma in computer engineering from the Hochschule Bremen (Dipl.-Informatiker (FH)) as well as an M.Sc. in IT Management and Information Systems from the FHDW Paderborn.