IoT-Backend: The Evolution of Public Cloud Providers in the Internet of Things (IoT)

The Internet of Things (IoT) has reshuffled the agendas of CIOs and CTOs faster than expected and with breathtaking velocity. Only recently, cloud, big data and social topics occupied center stage. In the meantime, however, the conversation increasingly revolves around the interconnection of physical objects such as human beings, sensors, household items, cars and industrial facilities. Anyone who expects the “Big 4” to disappear from the radar is mistaken. Quite the contrary is the case: cloud infrastructure and platforms are among the central drivers behind IoT services, since they offer the perfect preconditions to serve as vital enablers and backend services.

Public Cloud Workloads: 2015 vs. 2020

The demand for public cloud services shows increasing momentum. On the one hand, this is due to CIOs' requirement to run their applications in a more agile and flexible way. On the other hand, most public cloud providers are now addressing the needs of their potential customers. Among the workload categories running on public IaaS platforms, standard web applications (42 percent) still represent the largest share, followed at some distance by mobile applications (22 percent), media streaming (17 percent) and analytics services (12 percent). Enterprise applications (4 percent) and IoT services still play a minor part.

The reason for the current segmentation: websites, backend services as well as content streaming (music, videos, etc.) are a perfect fit for the public cloud. Enterprises, by contrast, are still in the middle of their digital transformation and are evaluating providers as well as technologies for a successful change. IoT projects are still in their early stages or at the idea-generation phase. Thus, in 2015 IoT workloads represent only a small proportion of public cloud environments.

By 2020 this ratio will change significantly. Along with the increasing cloud knowledge within enterprise IT and the ever-expanding market maturity of public cloud environments for enterprise applications, the proportion of this category will increase worldwide from 4 percent to 12 percent. Accordingly, the share of web and mobile applications as well as content streaming will decrease. Instead, IoT workloads will represent almost a quarter (23 percent) of the workloads on public IaaS platforms like AWS, Azure and Co.

Public Cloud Providers: The perfect IoT-Backend

The Internet of Things will quickly become a key factor for the future competitiveness of enterprises. Thus, CIOs have to deal with the technologies necessary to support their enterprise business technology strategy. Public cloud environments, infrastructure (IaaS) as well as platforms (PaaS), offer perfect preconditions to serve as supporting backend environments for IoT services and devices. The leading public cloud providers have already prepared their environments with the key features to develop into an IoT backend. The central elements of a holistic IoT backend are characterized as follows (excerpt):

  • Global scalability
  • Connectivity/ Connectivity management
  • Service portfolio and APIs
  • Special services for specific industries
  • Platform scalability
  • Openness
  • Data analytics
  • Security & Identity management
  • Policy control
  • Device management
  • Asset and Event management
  • Central hub

Public cloud based infrastructure-as-a-service (IaaS) will mainly be used to provide compute and storage capacities for IoT deployments. IaaS provides enterprises and developers with inexpensive and virtually unlimited resources to run IoT workloads and store the generated data. Platform-as-a-service (PaaS) offerings will benefit from the IoT market as they give enterprises faster access to software development tools, frameworks and APIs. PaaS platforms can be used to develop control systems for managing IoT applications, IoT backend services and IoT frontends, as well as to integrate with third-party solutions to build a complete “IoT value chain”. Even the software-as-a-service (SaaS) market will benefit from the IoT market's growth. User-friendly SaaS solutions will enable users, executives, managers as well as end customers and partners to analyze and share the data generated by interconnected devices, sensors etc.

Use Cases in the Internet of Things

digitalSTROM + Microsoft Azure
digitalSTROM is one of the pioneers in the IoT market. As a provider of smart home technologies, the Swiss vendor has developed an intelligent solution that connects homes, allowing several devices to communicate over the power supply line and to be controlled via smartphone apps. Lego-like bricks form the foundation: each connected device is addressed via a single brick, which holds the intelligence for that device. digitalSTROM evaluated the potential of a public cloud environment for its IoT offering early on. Microsoft Azure provides the technological foundation.

General Electric (GE) + Amazon Web Services
General Electric (GE) has created its own IoT factory (platform) within the AWS GovCloud (US) region to interconnect humans, simulators, products, sensors etc. The goal is to improve collaboration, prototyping and product development. GE chose the AWS GovCloud in order to fulfill legal and compliance regulations. One customer who already profits from the IoT factory is E.ON. In the past, when the demand for energy increased, GE typically tried to sell E.ON more turbines. In the course of the digital transformation, GE started early to change its business model. GE uses operational data from the turbines to optimize energy efficiency by performing comprehensive analyses and simulations. E.ON gets real-time access to the interconnected turbines to control its energy management on demand.

ThyssenKrupp + Microsoft Azure
Together with CGI, ThyssenKrupp has developed a solution to interconnect thousands of sensors and systems within its elevators over the Microsoft Azure cloud, using the Azure IoT services. The solution provides ThyssenKrupp with a wealth of information from the elevators, allowing it to monitor engine temperature, shaft alignment, cabin velocity, door functionality and more. ThyssenKrupp records the data, transfers it to the cloud and combines it in a single dashboard based on two data types: alarm signals that indicate urgent problems, and events that are only stored for administrative reasons. Engineers get real-time access to the elevator data to make their diagnoses immediately.
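
How such a two-way split of telemetry might look in code is sketched below. This is a hypothetical illustration, not ThyssenKrupp's actual implementation; all field names and thresholds are assumptions.

    # Hypothetical sketch of the two data types described above: readings that
    # cross a critical limit become alarm signals, everything else is stored
    # as an administrative event. Field names and thresholds are assumptions.

    ALARM_LIMITS = {
        "engine_temperature_c": 80.0,  # assumed critical engine temperature
        "cabin_velocity_ms": 4.0,      # assumed maximum cabin velocity in m/s
    }

    def classify_reading(reading: dict) -> str:
        """Return 'alarm' for urgent problems, otherwise 'event'."""
        for metric, limit in ALARM_LIMITS.items():
            value = reading.get(metric)
            if value is not None and value > limit:
                return "alarm"
        if reading.get("door_functional") is False:
            return "alarm"
        return "event"

    # This reading exceeds the assumed temperature limit -> 'alarm'
    print(classify_reading({"engine_temperature_c": 92.3, "door_functional": True}))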

IoT-Backend: Service Portfolio and Development Capacities are central

The use cases above show three key developments that will shape the next five years and significantly influence the IaaS market:

  1. IoT applications are a central driver behind IaaS adoption.
  2. Development tools, APIs, and value-added services are central decision criteria for a public cloud environment.
  3. Developer and programming skills are crucial.

Thus, every public cloud provider should ask itself whether it has the potential, and the preconditions, to develop its offering further into an IoT backend. Only those who have services and development capacities (tools, SDKs, frameworks) in their portfolio will be able to play a central role in the profitable IoT market and be considered as the infrastructure base for novel enterprise and mobile workloads. Note: more and more public cloud infrastructure is used as an enabler and backend infrastructure for IoT offerings.

Various enablement services are available in the public cloud market that can be used to develop an IoT backend infrastructure.

Amazon AWS services for the Internet of Things:

  • AWS Mobile Services
  • Amazon Cognito
  • Simple Notification Service
  • Mobile Analytics
  • Mobile Push
  • Mobile SDKs
  • Amazon Kinesis

Microsoft Azure IoT-Services:

  • Azure Event Hubs
  • Azure DocumentDB
  • Azure Stream Analytics
  • Azure Notification Hubs
  • Azure Machine Learning
  • Azure HDInsight
  • Microsoft Power BI

So far, Amazon AWS has not started any noteworthy marketing for the Internet of Things. Only a sub-website explains the idea of IoT and which existing AWS cloud services should be considered for it. Even with Amazon Kinesis, which is predestined for IoT applications, AWS is taking it easy. However, looking under the hood of IoT solutions, one realizes that many cloud-based IoT offerings are delivered via the Amazon cloud.
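
How one of these services is consumed in practice can be sketched briefly: the snippet below pushes a sensor reading into an Amazon Kinesis stream with the boto3 SDK. The stream name and payload are assumptions; the stream must already exist and AWS credentials must be configured.

    # Minimal sketch: publishing a sensor reading to an Amazon Kinesis stream.
    # The stream name and payload are illustrative assumptions.
    import json

    import boto3

    kinesis = boto3.client("kinesis", region_name="eu-central-1")

    def publish_reading(sensor_id: str, payload: dict) -> None:
        kinesis.put_record(
            StreamName="iot-sensor-stream",            # hypothetical stream
            Data=json.dumps(payload).encode("utf-8"),  # record payload as bytes
            PartitionKey=sensor_id,  # readings of one sensor stay ordered
        )

    publish_reading("turbine-42", {"temperature": 71.4, "rpm": 3600})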

Microsoft considers the Internet of Things a strategic growth market and has created the Microsoft Azure IoT Services, a dedicated area within the Azure portfolio. So far, however, this is only a best-of of existing Azure cloud services, each encapsulating a specific functionality for the Internet of Things.
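
Azure Event Hubs, the ingestion service in this list, is addressed via HTTPS. The sketch below sends a device event over the documented REST interface; namespace, hub name and key are placeholders, and the shared access signature follows the standard Service Bus scheme (HMAC-SHA256 over the URL-encoded resource URI plus an expiry timestamp).

    # Hedged sketch: sending a device event to Azure Event Hubs via REST.
    # Namespace, hub and key are placeholders taken from the Azure portal.
    import base64, hashlib, hmac, json, time, urllib.parse

    import requests

    NAMESPACE, HUB = "my-namespace", "sensor-events"      # hypothetical
    KEY_NAME, KEY = "send-policy", "key-from-the-portal"  # hypothetical

    def sas_token(uri: str, key_name: str, key: str, ttl: int = 3600) -> str:
        """Build a Service Bus shared access signature for the given URI."""
        expiry = str(int(time.time()) + ttl)
        encoded_uri = urllib.parse.quote_plus(uri)
        to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
        sig = base64.b64encode(
            hmac.new(key.encode("utf-8"), to_sign, hashlib.sha256).digest())
        return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
            encoded_uri, urllib.parse.quote_plus(sig), expiry, key_name)

    uri = "https://{}.servicebus.windows.net/{}".format(NAMESPACE, HUB)
    resp = requests.post(
        uri + "/messages",
        headers={"Authorization": sas_token(uri, KEY_NAME, KEY),
                 "Content-Type": "application/atom+xml;type=entry;charset=utf-8"},
        data=json.dumps({"deviceId": "sensor-7", "temperature": 21.5}),
    )
    resp.raise_for_status()  # Event Hubs answers 201 Created on success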

Public Cloud Providers continuously need to expand their Portfolio

From a strategy perspective, IoT use cases follow the top-down cloud strategy approach: the potential of the cloud is considered and, based on that, a new use case is created. In the coming years this will significantly shift the ratio from bottom-up towards top-down use cases (today's ratio is about 10 percent top-down to 90 percent bottom-up). More and more enterprises will start to identify and evaluate IoT use cases to enhance their products with sensors and machine-to-machine communication. The market behavior we see today for fitness wearables (wristbands and devices people use to quantify themselves) will spread exponentially to other industries.

So the majority of cloud providers are under pressure and cannot rest on their existing portfolios. Instead, they need to increase their attractiveness by serving their existing customer base as well as potential new customers with IoT enablement services in the form of microservices and cloud modules, because the growth of the cloud and the progress of the Internet of Things are closely bound together.


Cloud Marketplace: A means to execute the Bottom-Up Cloud Strategy

Worldwide, many CIOs are still looking for the right way to let their companies benefit from cloud computing capabilities. Basically, all kinds of organizations are candidates for a bottom-up cloud strategy, that is, the migration of existing applications and workloads into the cloud. This approach is not particularly innovative, but it offers a relatively high value proposition at low risk. An analysis of market-ready cloud marketplaces shows promising capabilities for implementing the bottom-up cloud strategy in the near term.

The Cloud drives the evolution of IT purchasing

In the blog post “Top-Down vs. Bottom-Up Cloud Strategy: Two Ways – One Goal”, two strategy approaches are discussed that can be used to benefit from public cloud infrastructure. One conclusion was that top-down strategies remain reserved for innovators, while bottom-up strategies are mainly pursued to move existing workloads into the cloud. CIOs of prestigious companies worldwide are still searching for best practices to lift their legacy enterprise workloads into the cloud.

A look at the general purchasing behavior for IT resources reveals a disruptive change. Besides CIOs, IT infrastructure managers and IT buyers, department managers are also demanding a say or going their own way. The driver behind this development is the public cloud. Its self-service model erodes the significance of classical IT purchasing. Obtaining hardware and software licenses from distributors and resellers will become less important in the future, since self-service makes access to infrastructure resources and software both convenient and easy. Distributors and resellers therefore have to systematically rethink their business models. System houses and system integrators that still have not started their cloud transformation risk disappearing from the market within the next three to five years. So long!

Besides self-service, the public cloud primarily offers one thing: choice! More than ever before. On the one hand, there is the ever-growing variety of supply sources and provider solutions. On the other hand, there are the different deployment models the public cloud can be combined with. Hybrid and multi cloud scenarios are the reality.

The next evolutionary step is well underway: cloud marketplaces. Implemented the right way by their operators, they offer IT buyers an ideal central platform for purchasing IT resources. At the same time, they support CIOs in pushing their cloud transformation with a bottom-up strategy.

Bottom-Up Cloud Strategy: Cloud Provider Marketplaces support the implementation

The bottom-up cloud strategy helps companies move existing legacy or enterprise applications into the cloud to benefit from the cloud's capabilities without having to think about innovation or changing the business model. It is mainly about efficiency, cost and flexibility.

In this strategy approach, the infrastructure is not the central point but rather a means to an end; after all, the software has to be operated somewhere. In most cases the purpose is to continue using the existing software in the cloud. At the application level, cloud marketplaces can help achieve short-term success. More importantly, they address current challenges and requirements of companies. These are:

  • The distributed purchase of software across the entire organization is difficult.
  • The demand for accessing software in the short term – e.g. for testing purposes – increases.
  • Individual employees and departments are asking for a catalog of categorized and approved software solutions.
  • A centrally organized cloud marketplace helps to counter shadow IT.

The fact that a vast number of valid software licenses are still used in on-premise infrastructure underlines the importance of Bring Your Own License (BYOL). BYOL is a concept by which a company legally continues to use its existing software licenses on the cloud provider's infrastructure.

When it comes to supporting the bottom-up cloud strategy with a cloud marketplace, experience has shown that marketplaces owned and operated by the cloud providers themselves, like the AWS Marketplace or the Microsoft Azure Marketplace, play an outstanding role. Both offer the necessary technology and maturity to make it easy for customers and partners to decide to run applications in the cloud.

Technical advantage, simplicity and above all extensive choice are the key success factors of the public cloud providers' marketplaces. The AWS Marketplace already offers more than 2,100 solutions, the Azure Marketplace even more than 3,000. A few mouse clicks and the applications are deployed on the cloud infrastructure, including BYOL. Thus the infrastructure becomes easily usable for all users.
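
The “few mouse clicks” can equally be scripted against the provider's API. As a hedged example, the boto3 call below launches an EC2 instance from a marketplace image; the AMI ID and instance type are placeholders, and the marketplace terms for the product must have been accepted for the account beforehand.

    # Hedged sketch: launching a marketplace solution programmatically.
    # The AMI ID is a placeholder for the marketplace product's image.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder marketplace AMI
        InstanceType="t2.medium",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", response["Instances"][0]["InstanceId"])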

The providers are investing a lot in the development of their marketplaces, with good reason: Amazon AWS' and Microsoft Azure's own cloud marketplaces are of strategic importance. They are ideal tools to steer new customers onto the infrastructure and to increase revenue with existing customers.

Cloud marketplaces operated by public cloud providers are steadily becoming more popular, and the numbers prove it. The marketplaces have reached considerable market maturity and offer a wide variety of solutions. Against this backdrop, CIOs who plan to migrate their existing applications into the cloud should look closely at cloud marketplaces, because they are more than just an add-on: they can help accelerate the cloud migration.


Top-Down vs. Bottom-Up Cloud Strategy: Two Ways – One Goal

Along with the steady growth of the cloud, the question of appropriate cloud use cases arises. After companies like Pinterest, Airbnb, Foursquare, Wooga, Netflix and many others have shown how cloud infrastructure and platforms can be used to create new or even disruptive business models, more and more CEOs and CIOs would like to benefit from the cloud's characteristics. The issue: established companies run many legacy enterprise applications that cannot be moved into the cloud in their existing form. For many decision makers this raises the question of whether they should follow a top-down or a bottom-up strategy.

Cloud Strategy: Top-Down vs. Bottom-Up

An IT strategy has the duty to support the corporate strategy as well as possible. In line with the increasing digitalization of society and the economy, the value proposition and significance of IT rise considerably, which means that the impact of IT on the corporate strategy will become more important in the future. Assuming that cloud infrastructure, platforms and services are the technological foundation of the digital transformation, it follows that the cloud strategy has a direct impact on the IT strategy.

This raises the question of how far cloud services are able to support the corporate strategy, whether directly or indirectly. This does not necessarily have to be reflected in numbers. If a company, for instance, enables its employees to work more flexibly based on a software-as-a-service (SaaS) solution, it has done something for productivity, which has a positive effect on the company. However, it is important to understand that cloud infrastructure and platforms merely serve as a foundation on which companies gain the capabilities to create innovation. The cloud is just a vehicle.

Two approaches can be used to get a better understanding of the impact of cloud computing on the corporate strategy:

  • Top-Down Cloud Strategy
    In the top-down approach, the possibilities of cloud computing are analyzed and a concrete use case is defined. An innovation or idea is created that is enabled by cloud computing, and the cloud strategy is derived on this basis.
  • Bottom-Up Cloud Strategy
    In the bottom-up approach, an existing use case is examined against the possibilities of cloud computing, i.e. it is analyzed how the cloud can help to support the needs of the use case. The respective cloud strategy is derived from that.

The top-down approach mainly produces new business models or disruptive ideas. Development happens on a greenfield within the cloud and mostly belongs to innovators. The bottom-up approach pursues the goal of moving an existing system or application into the cloud or redeveloping it there. In this case it is mostly about keeping an existing IT resource alive or, in the best case, optimizing it.

Bottom-Up: Migration of Enterprise Applications

Established companies prefer to follow a bottom-up strategy in order to benefit quickly from cloud capabilities. However, the devil is in the details. Legacy or classical enterprise applications were not developed to run on a distributed infrastructure, i.e. a cloud infrastructure. Scaling out is not natural for them; at most they can scale up, e.g. by using several Java threads on one single system. If this system fails, the application is no longer available. Every application that is supposed to run in the cloud therefore has to follow the characteristics of the cloud and must be developed for this purpose. The challenge: companies still lack staff with the right cloud skills. In addition, companies are intensively discussing “Data Gravity”, the inertia or difficulty of moving data, be it because of the sheer data volume or because of legal conditions that require storing the data in one's own environment.

Vendors have recognized the lack of knowledge as well as the “Data Gravity” problem and try to support the bottom-up strategy with new solutions. With “NetApp Private Storage”, NetApp allows companies to balance “Data Gravity” between public cloud services and their own sphere of control. Companies have to fulfill various governance and compliance policies and thus have to keep their data under control. One solution is to let cloud services access the data in a hybrid cloud model without moving it. In this scenario the data is not stored directly in the provider's cloud; instead, the cloud services access the data via a direct connection when processing it. NetApp enables this scenario in cooperation with Amazon Web Services. For example, Amazon EC2 instances can be used to process data that is stored in an Equinix colocation data center, with the connection established via AWS Direct Connect.

Another challenge with enterprise applications that are not cloud-ready lies at the data level, namely when the data is supposed to leave a provider's cloud. The reason is that cloud-native storage types (object storage, block storage) are not compatible with common on-premise storage communication protocols (iSCSI, NFS, CIFS). In cooperation with Amazon Web Services, NetApp Cloud ONTAP tries to remedy this. As a kind of NAS storage, the data is stored on Amazon Elastic Block Store (EBS) SSDs. Cloud ONTAP serves as a storage controller and gives enterprise applications that are not cloud-ready access to the data. Thanks to the compatibility with common communication protocols, the data can be moved more easily.

VMware vCloud Air targets companies with existing enterprise applications. The vCloud Air public cloud platform is based on vSphere technology and is compatible with on-premise vSphere environments, so existing workloads and virtual machines can be moved back and forth between VMware's public cloud and a virtualized on-premise infrastructure.

ProfitBricks tries to support companies with its Live Vertical Scaling concept. A single server can be extended vertically with further resources, such as CPU cores or RAM, without rebooting it. Thus the performance of a running virtual server can be increased without making changes to the application. The best foundation for this is a LAMP stack (Linux, Apache, MySQL, PHP), since e.g. a MySQL database recognizes new resources without any adjustments or a reboot of the host system and can use the added capacity immediately. To make this possible, ProfitBricks made modifications at the operating system and hypervisor (KVM) level that are transparent to the user. Customers only have to use the provided reference operating system image that includes the Live Vertical Scaling functionality.
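
Why this is transparent for software can be illustrated with a small sketch: a process simply re-reads the available resources at runtime. On a guest with CPU hot-plugging, Python's os.cpu_count() reflects newly added cores without a reboot; the polling loop and interval are assumptions for demonstration purposes.

    # Illustrative sketch: detecting vertically added CPU cores at runtime.
    import os
    import time

    def watch_cpu_cores(interval_s: int = 10) -> None:
        last = os.cpu_count()
        print("Starting with", last, "cores")
        while True:
            time.sleep(interval_s)
            current = os.cpu_count()
            if current != last:
                # React to the added capacity, e.g. resize a worker pool.
                print("Core count changed:", last, "->", current)
                last = current

    if __name__ == "__main__":
        watch_cpu_cores()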

An exemplary bottom-up use case of enterprise applications in the public cloud can be found at Amazon Web Services. The example also shows that the importance of system integrators in the cloud is rising.

  • Amazon Web Services & Kempinski Hotels
    The hotel chain Kempinski Hotels has migrated the majority of its core applications and departments, among others finance, accounting and training, to the Amazon cloud infrastructure. Together with the system integrator Cloudreach, a VPN connection was established between the company's own data center in Geneva and the Amazon cloud, over which the 81 hotels worldwide are now served. Furthermore, Kempinski plans to shut down its own data center completely and move 100 percent of its IT infrastructure to the public cloud.

Top-Down: Greenfield Approach

In contrast to merely preserving enterprise applications, the greenfield approach follows the top-down strategy. An application or a business model is developed from scratch, and the system is matched to the requirements and characteristics of the cloud. We are talking about a cloud-native application that considers scalability and high availability from the beginning. The application independently starts additional virtual machines when more performance is necessary (scalability) and shuts them down when they are no longer needed. The same applies when a virtual machine fails: the application independently ensures that another virtual machine is started as a substitute (high availability). The application must therefore be able to run on any virtual machine of a cloud infrastructure, not least because any machine can fail at any time and a substitute needs to be started. In addition, the data processed by the application is no longer stored in a single location but distributed across the cloud.
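
A minimal sketch of this self-healing idea is shown below, assuming AWS and boto3. The AMI ID and instance type are placeholders, and a real implementation would add monitoring, retries and termination of the failed machine.

    # Minimal self-healing sketch: if a worker VM is no longer healthy,
    # start a substitute. AMI ID and instance type are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")

    def is_healthy(instance_id: str) -> bool:
        result = ec2.describe_instance_status(InstanceIds=[instance_id])
        for status in result.get("InstanceStatuses", []):
            return (status["InstanceState"]["Name"] == "running"
                    and status["InstanceStatus"]["Status"] == "ok")
        return False  # no status reported -> treat the machine as failed

    def replace_if_failed(instance_id: str) -> None:
        if not is_healthy(instance_id):
            substitute = ec2.run_instances(
                ImageId="ami-0123456789abcdef0",  # placeholder worker image
                InstanceType="t2.micro",
                MinCount=1, MaxCount=1,
            )
            print("Started substitute:",
                  substitute["Instances"][0]["InstanceId"])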

Startups and innovative companies are aware of this complexity, as the following use cases show.

  • Amazon Web Services & Netflix
    Netflix is one of the lighthouse projects on the Amazon cloud. The streaming provider uses every characteristic of the cloud, which is reflected in the high availability as well as the performance of the platform. As one of the pioneers on the Amazon infrastructure, Netflix has developed its own tools, the Netflix Simian Army, from the very beginning to master this complexity.

However, the recent past has shown that innovative business models do not necessarily have to be implemented on a public cloud and that the greenfield approach is not reserved for startups.

  • T-Systems & Runtastic
    Runtastic is an app provider for endurance, strength & toning, health & wellness and fitness that helps users reach their health and fitness goals. The company is growing massively: after 100,000 downloads in 2010 and 50 million downloads in 2013, the number of downloads has now passed 110 million, and Runtastic counts 50 million users worldwide. Basically, the numbers suggest an ideal public cloud scenario. However, for technical reasons Runtastic decided for T-Systems and runs its infrastructure in two data centers in a colocation IaaS hybrid model.
  • Claranet & Leica
    Last year, camera manufacturer Leica launched “Leica Fotopark”, an online photo service for managing, editing, printing and sharing photos. Managed cloud provider Claranet is responsible for the development and operation of the infrastructure. “Leica Fotopark” runs on a scale-out environment based on a converged infrastructure and software-defined storage. The agile operations model follows the DevOps concept.

Greenfield vs. Enterprise Applications: The Bottom Line

Whether a company opts for the top-down or the bottom-up cloud strategy depends on its individual situation and current state of knowledge. The fact is that both variants help the IT infrastructure, the IT organization and the whole company become more agile and scalable. However, only a top-down approach leads to innovation and new business models. Nevertheless, one has to consider that the development and operation of a platform like Netflix requires an excellent understanding of cloud architectures, which is still few and far between in the current market.

Regardless of their strategy, and especially with regard to the Internet of Things (IoT) and the essential Digital Infrastructure Fabric (DIF), companies should focus on a cloud infrastructure. It offers the ideal preconditions for the backend operation of IoT solutions as well as for the data exchange of sensors, embedded systems and mobile applications. In addition, a few providers offer ready-made micro services that simplify development and accelerate time to market. Furthermore, a worldwide infrastructure of data centers offers global scalability and helps companies expand into new countries quickly.


Hybrid and Multi Cloud: The real value of Public Cloud Infrastructure

Since the beginning of cloud computing, the hybrid cloud has been on everyone's lips. Praised as a universal remedy by vendors, consultants and analysts alike, the combination of various cloud deployment models is a permanent focus of discussions, panels and conversations with CIOs and IT infrastructure managers. The core questions that need to be clarified: what are the benefits, and do credible hybrid use cases actually exist that can serve as best-practice guidance? This analysis answers these questions and also describes the ideas behind multi cloud scenarios.

Hybrid Cloud: Driver behind the Public Cloud

Many developers and startups embrace the public cloud to escape high and incalculable upfront investments in infrastructure resources (servers, storage, software). Examples like Pinterest or Netflix show real use cases and confirm the true benefit. Without the public cloud, Pinterest would never have experienced such growth in such a short time. Netflix, too, benefits from scalable access to public cloud infrastructure: in the fourth quarter of 2014, Netflix delivered 7.8 billion hours of video, which corresponds to a data traffic of 24,021,900 terabytes.

However, what these prime examples hide is that all of them are greenfield approaches, like almost every workload developed as a native web application on public cloud infrastructure, and they represent just the tip of the iceberg. The reality in the corporate world reveals a completely different truth. Inside the iceberg you find plenty of legacy applications that are not ready to be operated in the public cloud at the present stage. Furthermore, there are requirements and scenarios for which the use of the public cloud is unsuitable. In addition, most infrastructure managers and architects know their workloads and their demand very well. Providers should finally accept this and admit that the public cloud is in most cases too expensive for static workloads and that other deployment models are more attractive.

By definition, the hybrid cloud's sphere of activity is limited to connecting a private cloud with the resources of a public cloud. In this case, a company runs its own cloud infrastructure and uses the scalability of a public cloud provider to obtain further resources like compute, storage or other services on demand. With the rise of further cloud deployment models, other hybrid cloud scenarios have developed that include hosted private and managed private clouds. In particular, for most static workloads, those where the average infrastructure requirements are known, an externally hosted static infrastructure fits very well. Periodically occurring variations caused by marketing campaigns or the Christmas season can be compensated by dynamically adding further resources from a public cloud.
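
The arithmetic behind such peak compensation is simple and can be sketched as follows; the capacity figures and the provisioning decision are illustrative assumptions, not a provider API.

    # Illustrative burst logic: the static hosted capacity covers the known
    # average load, public cloud instances are added only for the peaks.
    BASE_CAPACITY = 20  # assumed number of statically hosted servers

    def instances_to_burst(current_load: int, per_instance_capacity: int = 1) -> int:
        """How many public cloud instances are needed on top of the base."""
        excess = current_load - BASE_CAPACITY * per_instance_capacity
        return max(0, -(-excess // per_instance_capacity))  # ceiling division

    # Christmas peak: demand temporarily exceeds the static capacity.
    print(instances_to_burst(current_load=27))  # -> 7 additional cloud VMs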

This approach can be mapped to many other scenarios. Not only pure infrastructure resources like virtual machines, storage or databases have to be in the foreground; the hybrid use of the public cloud providers' value-added services within self-developed applications should also be considered, in order to use a ready-made function instead of redeveloping it oneself, or to benefit from external innovations immediately. With this approach the public cloud offers companies real value without them having to outsource the whole IT environment.

Real hybrid cloud use cases can be found at Microsoft, Rackspace, VMware and Pironet NDH:

  • Microsoft Azure + Lufthansa Systems
    To expand its internal private cloud and its worldwide data center capacities, Lufthansa relies on Microsoft Azure. One of the first hybrid cloud scenarios was a disaster recovery concept whereby Microsoft SQL Server databases are mirrored to Microsoft Azure in a Microsoft data center. In case of a failure within the Lufthansa environment, the databases continue to operate in a Microsoft data center without interruption. Furthermore, the company's own infrastructure resources are extended by Microsoft's worldwide data centers to deliver customers a consistent service offering without building up its own infrastructure resources globally.
  • Rackspace + CERN
    As part of its CERN openlab partnership, CERN is using public cloud infrastructure from Rackspace to get compute resources on demand. This typically happens when physicists need more compute than the local OpenStack infrastructure can deliver. CERN experiences this regularly during scientific conferences, when the latest data from the LHC and its experiments are being analyzed. Applications with a low I/O rate are well suited to being outsourced to Rackspace's public cloud infrastructure.
  • Pironet NDH + Malteser
    As part of its “Smart.IT” project, Malteser Deutschland relies on a hybrid cloud approach. Applications in the company's own data center are combined with communication services like Microsoft Office 365, SharePoint, Lync and Exchange from a public cloud. Applications that are critical in terms of data protection law, like the electronic patient record, are used from a private cloud in a Pironet data center.
  • VMware + Colt + Sega Europe
    Since the beginning of 2012, gaming manufacturer Sega Europe has relied on a hybrid cloud to give external testers access to new games; previously this was realized via a VPN connection into the company's own network. Meanwhile, Sega runs its own private cloud to provide development and test systems for internal projects. This private cloud is directly connected with a VMware-based infrastructure in a Colt data center. On the one hand, Sega can obtain further resources from the public cloud to compensate for peak loads. On the other hand, the game testers get a dedicated testing area: they no longer have to access the Sega corporate network but test on servers within the public cloud. When the tests are finished, the servers that are no longer needed are shut down by Sega IT without any intervention by Colt.

Multi Cloud: Automotive Industry as the role model

With the continuing propagation of the hybrid cloud, multi cloud scenarios are also moving into focus. For a better understanding of the multi cloud, it helps to consider the supply chain model of the automotive industry. An automaker relies on various (sometimes redundant) suppliers, which provide it with single components, assemblies or ready systems. In the end the automaker assembles the just-in-time delivered parts in its own assembly plant.

The multi cloud, and likewise the hybrid cloud, adopts this idea from the automotive industry by working with more than one cloud provider (cloud supplier) and, in the end, integrating everything with one's own cloud application or cloud infrastructure.

Three delivery tiers exist within the cloud supply chain that can be used to develop one's own cloud application or to build one's own cloud infrastructure:

  • Micro Service: Micro Services are granular services like Microsoft Azure DocumentDB and Microsoft Azure Scheduler or Amazon Route 53 and Amazon SQS that can be used to develop one's own cloud-native application. Micro Services can also be integrated into an existing application running on one's own infrastructure, which is thereby extended by the function of the Micro Service.
  • Module: A Module encapsulates a scenario for a specific use case and thus provides a ready-to-use building block for an application. Examples include Microsoft Azure Machine Learning and Microsoft Azure IoT. Modules can be used like Micro Services for development purposes or for integration into applications, but compared to Micro Services they provide greater functionality.
  • Complete System: A Complete System is a SaaS service, i.e. an entire application that can be used directly within the company. However, it still needs to be integrated with other existing systems.

In a multi cloud model, an enterprise cloud infrastructure or cloud application can draw on more than one cloud supplier and thus integrate various Micro Services, Modules and Complete Systems from different providers. In this model a company develops most of the infrastructure/application on its own and extends the architecture with additional external services that would take far too much effort to redevelop itself.

However, this leads to higher costs at the cloud management level (supplier management) as well as at the integration level. Solutions like SixSq SlipStream or Flexiant Concerto specialize in multi cloud management and support the use and management of cloud infrastructure across providers. Elastic.io, by contrast, works on several cloud layers across various providers and acts as a central connector to make cloud integration easier.
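
Conceptually, such cross-provider management boils down to hiding every cloud supplier behind one interface. The sketch below illustrates the idea with stub classes; real implementations would wrap each vendor's SDK, and the class and method names are assumptions.

    # Conceptual sketch of the cloud supply chain: several suppliers behind
    # one interface. The classes are stubs with assumed names.
    from abc import ABC, abstractmethod

    class CloudSupplier(ABC):
        @abstractmethod
        def provision_vm(self, size: str) -> str:
            """Provision a virtual machine and return its identifier."""

    class AwsSupplier(CloudSupplier):
        def provision_vm(self, size: str) -> str:
            return "aws-vm-" + size    # stub: would call ec2.run_instances()

    class AzureSupplier(CloudSupplier):
        def provision_vm(self, size: str) -> str:
            return "azure-vm-" + size  # stub: would call the Azure API

    def provision_fleet(suppliers, size: str):
        # Spread the fleet over all suppliers, mirroring the (sometimes
        # redundant) suppliers in the automotive analogy above.
        return [s.provision_vm(size) for s in suppliers]

    print(provision_fleet([AwsSupplier(), AzureSupplier()], "small"))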

The cloud supply chain is an important part of the Digital Infrastructure Fabric (DIF) and should definitely be considered in order to benefit from the variety of cloud infrastructures, platforms and applications. The only disadvantage is that the value-added services (Micro Services, Modules) named above are still only available in the portfolios of Amazon Web Services and Microsoft Azure. With the rapid development of use cases for the Internet of Things (IoT), IoT platforms and mobile backend infrastructure are gaining ever-growing significance. Ready-made solutions (Cloud Modules) help potential customers reduce the development effort and provide impulses for new ideas.

Infrastructure providers whose portfolios still focus on pure infrastructure resources like servers (virtual machines, bare metal), storage and some databases will disappear from the radar in the mid term. Only those who enhance their infrastructure with enablement services for web, mobile and IoT applications will remain competitive.


Top 10 Cloud Trends for 2015

In 2015, German companies will invest around 10.9 billion euros in cloud services, technologies as well as integration and consulting. The German market has developed quite slowly compared to international standards, but in 2015 this market, too, will mature. The reasons can be found in this article: Crisp Research has identified the drivers behind this development and derived the top 10 trends of the cloud market for 2015.

1. Cloud Ecosystems and Marketplaces

This year, cloud ecosystems and marketplaces are becoming more popular. Offerings like the Deutsche Telekom Business Marketplace, the Deutsche Börse Cloud Exchange or the German Business have been around for some time, and service providers are offering marketplaces to increase the scope of their services. However, the buyer side is still not keen, for several reasons; the lack of integration and low demand are just two of them. Along with the maturing of the cloud market, demand could rise. Cloud marketplaces are part of the logical development of the cloud, giving IT buyers more convenient access to categorized IT resources. Distributors have also understood the importance of cloud marketplaces and are moving to offer their own marketplaces in order to preserve their attractiveness in the channel. Vendors like the startup Basaas offer a “Business App Store as a Service” concept, which can be used to create multi-tenant public cloud marketplaces or internal business app stores.

Integration is a technical challenge and not easy to solve. However, with a powerful ecosystem of providers under the lead of a neutral marketplace operator, the necessary strengths could be bundled to ensure a holistic integration of services and take the biggest burden off the buyer side.

2. Secret Winner: Consultants and Integrators

Complexity is something IaaS providers keep secret, and for some customers this has already ended in catastrophe. IaaS looks quite simple on paper, but starting a virtual machine with an application on it has basically nothing to do with a cloud architecture. Running a scalable and failure-resistant IT infrastructure in the cloud requires more than administration know-how: developer skills and an understanding of the cloud concept are basic requirements. Modern IT infrastructures for cloud services are developed like an application. For this purpose, providers like Amazon Web Services, Microsoft Azure, Rackspace or HP provide building blocks of higher-value services precisely for achieving scalability and failure resistance, since this is the responsibility of the customer and not of the cloud provider. ProfitBricks offers “Live Vertical Scaling”, which relies on a scale-up principle and can be used without special cloud developer skills.

The challenge for the majority of CIOs is that their IT teams lack the necessary cloud skills or that not enough has yet been invested in advanced training. This means that a big market (2.9 billion EUR in 2015) is opening up for consultants and system integrators. Classical system houses and managed services providers can also benefit from this knowledge gap if they manage to transform themselves fast enough. The direkt gruppe and TecRacer are two cloud system integrators from Germany that have impressively shown they can handle public cloud projects.

3. Multi Cloud as a long runner

The multi cloud is an abiding theme; after all, its importance has been proclaimed for years. Besides the growing demand for cloud services on the buyer side and the increasing maturity level on the vendor side, the field of use for cloud-spanning deployments is constantly expanding. This is not only due to offerings like the Equinix Cloud Exchange, which enables direct connections between several cloud providers and a company's own enterprise IT infrastructure. Based on APIs, a central portal can be developed that offers IT buyers consistent access to the IT resources of various providers.

In the multi cloud context, OpenStack and technologies like SaltStack and Docker play a central role. The worldwide propagation of OpenStack is rising continuously: already 46 percent of all deployments are in production environments, of which 45 percent are on-premise private clouds. In Germany, too, almost one third (29.8 percent) of companies using the cloud are already actively dealing with OpenStack. In parallel with the increasing importance of OpenStack, its relevance for cloud sourcing in the context of multi cloud infrastructure is growing, in order to ensure interoperability between various cloud providers.

To support DevOps strategies and avoid writing comprehensive Puppet or Chef scripts, SaltStack is increasingly used for the configuration management of big, distributed cloud infrastructures. In this context, the Docker container wave will keep growing in 2015. By December 2014 the Docker Engine had already been downloaded 102.5 million times, a growth of 18.8 percent within a year. In addition, the team announced multi-container extensions to support the orchestration of applications across several infrastructures. In the context of container technologies, it is worth taking a look at GiantSwarm from Germany, which has developed a container-based micro service infrastructure.

4. Public Clouds are on the rise

In the past, public cloud providers faced a barrage of criticism. In 2015, however, they will gain a significant number of new customers. One reason is the groundwork they did in the recent past to fulfill the requirements needed to address enterprise customers as well. Another reason is the strategic change of managed cloud providers and of already cloud-transformed system houses with their own data centers.

Public cloud players like Amazon AWS or Microsoft Azure have made prices spiral massively downwards, and of course the customer side has recognized this. Local managed cloud providers (MCPs) are increasingly drawn into price discussions with their customers, an unpleasant situation: virtual machines and storage are sold at competitive prices that a small provider cannot keep up with.

Strategies are changing such that, in certain situations, MCPs fall back on public cloud infrastructure to offer their customers lower costs at the infrastructure level. For that they have to create partnerships and build up knowledge of the respective cloud infrastructure in order to run and maintain the virtual infrastructure, and not only offer consulting services. At the same time, they benefit from the public cloud providers' new functions and global reach. A provider with a data center in a local market can serve exactly this market; customers, however, demand to enter new target markets at short notice without big additional effort. Public cloud providers' data centers are present in many regions worldwide and offer exactly these capabilities. MCPs still keep their local data centers to offer customers services that address local requirements (e.g. legal matters). In this context, hybrid scenarios play a major role, with the multi cloud taking priority.

5. Cloud Connectivity and Performance

The continuous shift of mission-critical data, applications and processes to external cloud infrastructure means that CIOs not only rethink their operational IT concepts (public, private, hybrid) but also have to change their network architectures and connection strategies. Selecting the right location is a crucial competitive advantage. Modern business applications are already delivered over cloud infrastructures, and from today's CIO point of view a stable and performant connection to systems and services is essential. This trend will keep strengthening. Direct connections like AWS Direct Connect or Microsoft ExpressRoute make this easier to handle: direct network connections are established between a public cloud provider and an enterprise IT infrastructure in a colocation provider's data center.

The ever-increasing data traffic requires reliable and, in particular, stable connectivity in order to access data and information at all times. This becomes even more important when business-critical processes and applications are outsourced to cloud infrastructure: access has to be ensured at any time and with low latency, otherwise substantial financial and reputational damage may result. The quality of a cloud service depends significantly on its connectivity and backend performance. An essential characteristic here is the connectivity of the data center, to guarantee the customer stable and reliable access to the cloud services at all times. Data centers are the logistics centers of the future and, as logistical data hubs, are experiencing their heyday.

6. Mobile Backend Development

The digital transformation affects every part of our lives. Around 95 percent of all smartphone applications are connected to services running on servers distributed across data centers worldwide. Without a direct and mostly constant connection, these apps are not functional.

This means that modern mobile applications no longer work without a stable and globally oriented backend infrastructure. The same applies to services in the Internet of Things (IoT). A mix of distributed intelligence on the device and in the backend infrastructure ensures holistic communication, and the backend infrastructure ensures the connection among all devices.

A public cloud infrastructure provides the ideal foundation for this. On the one hand, the leading providers offer global reach. On the other hand, they already have ready-made micro services in their portfolios, representing specific functionalities that don't need to be developed from scratch and can be used within one's own backend service. Other providers of mobile-backend-as-a-service (MBaaS) or IoT platforms have specialized in the enablement of mobile backend or IoT services; examples are Apinauten, Parse (now part of Facebook) and Kinvey.

7. Cloud goes Vertical

In the first phase of the cloud, providers of software-as-a-service (SaaS) applications concentrated on general, horizontal solutions like productivity suites or CRM systems. The needs of individual industries weren't considered very much. One reason was the lack of cloud-ready ISVs (Independent Software Vendors) that had found their way into the cloud.

With the emerging cloud transformation of ISVs and the continuous entrance of new vendors, the SaaS market is growing, and with it the offering of vertical solutions tailored to specific industries. Examples are Opower and Enercast in the area of smart energy, Hope Cloud for the hotel industry and trecker.com in the agricultural sector.

One example of the importance of verticals is Salesforce. Besides investing in further horizontal offerings, Salesforce is trying to make its platform more attractive specifically for individual industries like the financial sector or the automotive industry.

8. The Channel needs to step on the gas

The majority of the channel has recognized that it needs to demonstrate its abilities in the cloud era. First of all, the big distributors started initiatives to preserve or increase their attractiveness to their customers (resellers like system houses). 2015 can mark a watershed; in any event, it will be a practical test.

The success of distributors is directly connected to the successful cloud transformation of system houses. Many system houses cannot make this journey on their own and need help from the distributors. Different cloud scenarios will show which services are still purchased from the distributors and which are sourced directly from the cloud providers.

The whole channel needs to rethink itself and its business model and align it with the cloud. Apart from hardware and software for building private or managed private clouds, access to public clouds via self-service is a cakewalk. For some target groups, the system house, and thus the distributor, won't have any relevance anymore; other customers still need help on their way to the cloud. If the channel cannot provide that help, someone else will.

9. Price vs. Feature War

In the past, price reductions for virtual machines (VMs) and storage hit the headlines. Amazon AWS went first, and after a short time Microsoft and Google followed. Microsoft even announced it would match each price reduction by Amazon.

It seems that the providers have reached their economic limits and that the price war is over for now. Instead, features and new services are coming to the fore to ensure differentiation, including more powerful VMs and an expanded portfolio of value-added services. For good reason: pure infrastructure like VMs or storage is no longer a differentiator in the IaaS market. Vertical services are the future of IaaS in the cloud.

Although the IaaS market is only now picking up real pace, infrastructure is a commodity and doesn't have much potential for innovation. We have reached a point in the cloud where it is about using a cloud infrastructure to create services on top of it. Besides virtual compute and storage, enterprises and developers need value-added services like Amazon SWF or Azure Machine Learning in order to run their own offerings with speed, scale and failure resistance, and to use them for mobile and IoT products.

10. Cloud Security

The attacks on JP Morgan, Xbox and Sony last year showed that every company is a potential target for cyber attacks. Whether for fun (“lulz”), financial interest or political motives, the threat potential increases constantly. It shouldn't be forgotten that mostly the big cases make it into the media; attacks on SMEs go unmentioned or, worse, the victims don't notice them at all or only when it is too late.

One doesn't need to be on the Sony executive board to realize that a successful attack is a big threat! Whether it is about reputation, stolen customer data or sensitive company information, digital data has become a precious good that needs to be protected. It is just a matter of time until one gets into the crosshairs of hackers, politically motivated extremists or intelligence agencies. Companies must not let it come to that in 2015. Yet the ongoing digitalization leads to ever higher connectivity, which hackers exploit to plan their attacks.

Beyond standard security solutions like firewalls or email security, Crisp Research expects more investments in higher-value security services like data leak prevention (DLP) in 2015. In addition, CISOs have to address strategies to avert DDoS attacks.


Study: OpenStack in the Enterprise (DACH Market)

OpenStack is making big headlines these days. The open source cloud management framework is no longer an infant technology only suited to proofs of concept for service providers, academic institutions and other “early users”.

Over the last 12 months OpenStack has gained serious momentum among CTOs and experienced cloud architects. But what about the typical corporate CIO? What are the key use cases, potential benefits and main challenges when it comes to implementing OpenStack within a complex enterprise IT environment? How far have CIOs and data center managers in the DACH region pushed their evaluations and proofs of concept around the new cloud technology? Where can we find the first real-world implementations of OpenStack in the German-speaking market?

This survey presents the first empirical findings and answers to the questions raised above regarding enterprise adoption of OpenStack. In cooperation with HP Germany, Crisp Research conducted 716 interviews with CIOs from the DACH region across various industries. The interviews were collected and analyzed between July and October 2014.

If you are interested in the executive version of the OpenStack DACH study, get in touch with me via my Crisp Research contact details.


Cloud Market 2015: The Hunger Games are over.

Last year, the cloud market delighted us with a lot of thrilling news. Plenty of new data centers and innovative services show that the topic has established itself in the market. The hunger games are finally over. Although the German market has developed quite slowly compared to international standards, an adoption rate of almost 75 percent shows a positive trend, supported by two credible reasons: providers are finally addressing the needs and requirements of their potential customers, and at the same time more and more users are jumping on the cloud bandwagon.

Cloud providers at a glance

In 2015, cloud providers will enjoy a large clientele in Germany. To this end, the majority of providers have strategically positioned themselves with a German data center, enabling local customers to store their data physically in Germany and to fulfill the requirements of the German Federal Data Protection Act (BDSG).

  • Amazon Web Services made the biggest step of all US American providers. A region especially for the German market shows the IaaS market leader's commitment to Germany. At the same time, Amazon has positioned itself strategically in central Europe and enhanced its attractiveness for customers in adjoining countries. From a technological point of view (reduction of latency etc.) this is not a negligible step. Services especially for enterprises (AWS Directory Service, AWS CloudTrail, AWS Config, AWS Key Management Service, AWS CloudHSM) show that Amazon has developed from a startup enabler into a real alternative for enterprises, underlined by significant German enterprise reference customers (like Talanx, Kärcher and Software AG). However, Amazon still lacks powerful hybrid cloud functionality at the application level and needs to improve, since enterprises won't go for a pure public cloud approach in the future.
  • Microsoft's “cloud-first” strategy is paying off. In particular, the introduction of Azure IaaS resources was an important step. Besides an existing customer base in Germany, Microsoft has the advantage of supporting all cloud operating models. Alongside the Azure public cloud, hosted models (Cloud OS Partner Network, Azure Pack) as well as private cloud solutions (Windows Server, System Center, Azure Pack) are available, which customers can use to build hybrid scenarios. In addition, rumors from 2013 are growing stronger that Microsoft will open a German data center in 2015 to offer cloud services under German law.
  • ProfitBricks, one of the few IaaS public cloud providers originally from Germany, is growing and thriving. Besides a new data center location in Germany (Frankfurt), several new employees in 2014 show that the startup is developing well. An update of its Data Center Designer (a WYSIWYG editor) underlines the technological progress. Compared to other IaaS providers like Amazon or Microsoft, however, a portfolio of value-added services is still lacking; this has to be compensated by a convincing and powerful network of partners.
  • Last year Rackspace started to refocus from public IaaS to managed cloud services, returning to one of its strengths – the “Fanatical Support”. When it comes to trends like OpenStack or DevOps, Rackspace is pressing forward. After all, no company can afford to ignore these technologies and services in the future if it wants to give its developers more freedom to create new digital applications faster and more efficiently.
  • At the end of 2014, IBM announced an official Softlayer data center in Frankfurt. As part of the global data center strategy, this happened in cooperation with the colocation provider Equinix. The Softlayer cloud offers the benefit of providing bare metal resources (physical servers) in the same way as virtual machines.

Even if the market and the providers have made good progress, Crisp Research has identified the following challenges that still need to be addressed (excerpt):

  • The importance of hybrid capabilities and interfaces (APIs) for multi cloud approaches is growing.
  • Standards like OpenStack and OpenDaylight must be supported.
  • Advanced functionalities for enterprise IT (end-to-end security, governance, compliance etc.) are needed.
  • There is a big need for more cloud connectivity based on cloud hubs within colocation data centers.
  • Price transparency has to improve significantly.
  • Ease of use needs to have a high priority.
  • Enablement services for the Internet of Things (IoT) are required. Only providers with an appropriate service portfolio will be in the vanguard in the long term.

Depending on the provider’s portfolio, the above requirements are fulfilled partly or predominantly. What all providers have in common, however, is the question of how to address their target groups. Historically, direct sales have been difficult in the German IT market. New potential customers are mainly reached with the aid of partners or distributors.

View of the users

More than 74 percent of German companies are planning, implementing or already using cloud services and technologies in their production environments. This is a strong signal that cloud computing has finally arrived in Germany. For 19 percent of German IT decision makers, cloud computing is an integral part of their agenda and operations. Another 56 percent of the companies are planning, implementing or using cloud as part of first projects or workloads. Here, hybrid and multi cloud infrastructures play a central role in ensuring integration at the data, application and process level.

This leads to the question: why now, after more than 10 years? After all, Amazon AWS started in 2006 and Salesforce was already founded in 1999. One reason is the fundamentally slow adoption of new technologies, which arises from caution and German efficiency. The majority of German companies usually wait until new technologies have settled and proven themselves in successful use. Traditionally, early adopters are very few in Germany.

But this is not the main reason. The cloud market itself had to mature. When Salesforce and later Amazon AWS entered the market, few services were available that fulfilled the requirements or were an equal substitute for existing on-premise solutions. For this reason, IT decision makers still relied on well-tried solutions at that time. In addition, there was no pressure to change anything, which was down to the fact that the benefits of the cloud weren’t clear – or rather, the providers didn’t communicate them well enough. Another reason is the fact that sustainable changes in the IT industry happen over decades, not within a couple of years or months. For all those IT decision makers who bet on classical IT solutions during the first two cloud phases, the amortization periods and IT lifecycles are ending now. Those who have to renew hardware and software solutions today have cloud services on their shortlist for their IT environments.

Essential reasons that deferred the cloud transformation were (excerpt):

  • Uncertainty due to misinformation from many providers that sold virtualization as cloud.
  • Legal topics had to be clarified.
  • The providers had to build trust.
  • Cloud knowledge was few and far between. Lack of knowledge, complexity and integration problems are still the core issues.
  • Applications and systems had to be developed in and for the cloud from scratch.
  • There were no competitive cloud services from German providers.
  • There were no data centers in Germany to fulfill the German Federal Data Protection Act (BDSG) and other laws.

German companies are halfway through their cloud transformation process. Meanwhile, they are looking at multi cloud environments based on infrastructure, platforms and services from various providers. This part of the Digital Infrastructure Fabric (DIF) is the foundation of their individual digital strategy, on which new business models and digital products, e.g. for the Internet of Things, can be operated.

Categories
Cloud Computing

The way to the holy IaaS grail

In 2014, cloud computing finally arrived in Germany. A current study by Crisp Research among 716 IT decision makers shows a representative picture of cloud adoption in the DACH market. For 19 percent of the sample, cloud computing is a regular part of the IT agenda and production environments. 56 percent of the companies are already planning and implementing cloud services and technologies and using them for first projects and workloads. Crisp Research forecasts that German companies will spend around 10.9 billion EUR on cloud services, technologies as well as integration and consulting in 2015. Accordingly, more and more companies are evaluating the use of infrastructure-as-a-service (IaaS). For German IT decision makers this raises the question of which selection criteria they must consider. Which deployment model is the right one? Is a US provider insecure per se? Is a German provider mandatory? Which options remain after Snowden and Co.?

Capacity Planning, Local or Global, Service?

Before using IaaS, there is the fundamental question of how and for which purpose the cloud infrastructure will be used. In this context, capacity planning plays a decisive role. In most cases companies know their applications and workloads and can thus estimate how scalable the infrastructure must be in terms of performance and availability. However, scalability must also be considered from a global point of view. If the company focuses mainly on the German or DACH market, a local provider with a data center in Germany is enough to serve the customers. If the company wants to expand into global markets in the midterm, a provider with a global footprint that also operates data centers in the target markets is recommended. The following questions arise:

  • What is the purpose of using IaaS?
  • Which capacities are necessary for the workloads?
  • What kind of reach is required? Local or global?

When talking about scalability, the term “hyper scaler” is often used. These are providers whose cloud infrastructure is theoretically capable of scaling endlessly. Amazon Web Services, Microsoft Azure and Google belong to this group. The term “endlessly” should be treated with caution, though. Even the big players hit the wall. In the end, the virtual infrastructure is based on physical systems, and hardware doesn’t scale.

Companies with a global strategy that want to grow into their target markets in the midterm should concentrate on an internationally operating provider. Besides the above-named Amazon AWS, Google and Microsoft, HP, IBM (Softlayer) or Rackspace also come into play, all of which operate a public or managed cloud offering. Those who bet on a “global scaler” from the beginning gain an advantage later on: the virtual infrastructure and the applications and workloads running on top of it can be deployed more easily to accelerate the time to market.

Cloud connectivity (low latency, high throughput and availability) should also not be underestimated. Is it sufficient that the provider and its data centers serve only the German market, or is there a worldwide distributed infrastructure of data centers that are linked to each other?

Two more parameters are the cloud model and the related type of service. Furthermore, hybrid and multi cloud scenarios should be considered. The following questions arise:

  • Which cloud model should be considered?
  • Self-service or managed service?
  • Hybrid and multi cloud?

Current offerings distinguish between public, hosted and managed private clouds. Public clouds are built on a shared infrastructure and are mainly operated by service providers. Customers share the same physical infrastructure and are logically separated by means of a virtualized security infrastructure. Web applications are an ideal use case for public clouds, since standardized infrastructure and services are sufficient. A hosted cloud model transfers the ideas of the public cloud into a hosted version administered by a local provider. All customers are located on the same physical infrastructure and are virtually separated from each other. In most cases the cloud provider operates a local data center. A managed private cloud is an advanced version of a hosted cloud. It is especially attractive to companies that want to avoid the public cloud model (shared infrastructure, multi-tenancy) but have neither the financial resources nor the knowledge to run a cloud in their own IT infrastructure. In this case, the provider operates an exclusive and dedicated area on its physical infrastructure for the customer. The customer can use the managed private cloud exactly like a public cloud, but on a non-shared infrastructure located in the provider’s data center. In addition, the provider offers consultancy services to help the customer transfer its applications and systems to the cloud or develop them from scratch.

The hyper scalers or global scalers named above are mainly public cloud providers. In a self-service model, the customers themselves are responsible for building and operating the virtual infrastructure and the applications running on top of it. In particular, cloud players like Amazon AWS, Microsoft Azure and Google GCE offer their infrastructure services based on a public cloud model with self-service. Partner networks help customers to build and run the virtual infrastructure, applications and workloads. Public cloud IaaS offerings with self-service are very limited in Germany. The only providers are ProfitBricks and JiffyBox by domainFactory; however, JiffyBox’s focus is on webhosting, not enterprise workloads. CloudSigma from Switzerland should be named as a native provider in DACH. This German reality is also reflected in the providers’ strategies: the very first German public IaaS provider, ScaleUp Technologies (2009), completely renewed its business model by focusing on managed hosting plus consultancy services.

Consultancy is the keyword in Germany. This is the biggest differentiator to the international markets. German companies prefer hosted and managed cloud environments including extensive service and value-added services. In this area, providers like T-Systems, Dimension Data, Cancom, Pironet NDH or Claranet are present. HP has also recognized this trend and offers consultancy services in addition to its OpenStack-based HP Helion cloud offering.

Hybrid and multi cloud environments shouldn’t be neglected in the future. A hybrid cloud connects a private cloud with the resources of a public cloud. In this case, a company operates its own cloud and uses the scalability and economies of scale of a public cloud provider to obtain further resources like compute, storage or other services on demand. A multi-cloud concept extends the hybrid cloud idea by the number of clouds that are involved. More precisely, it is about n clouds that are connected, integrated or used in any form. For example, cloud infrastructures are connected so that applications can use several infrastructures or services in parallel, depending on capacity utilization or current prices. Even the distributed or parallel storage of data is possible in order to ensure its availability. It is not necessary for a company to connect every cloud it uses in order to run a multi-cloud scenario: if more than two SaaS applications are part of the cloud environment, it is basically already a multi-cloud setup.
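
To make this concrete, here is a minimal Python sketch of the placement logic described above: picking the cheapest of several connected clouds for a new workload, subject to a region constraint. Provider names, regions and prices are hypothetical placeholders, not real market data.

    # Minimal multi-cloud placement sketch. Providers and prices are
    # hypothetical placeholders.
    CLOUDS = {
        "provider_a": {"price_per_hour": 0.085, "region": "eu-central"},
        "provider_b": {"price_per_hour": 0.072, "region": "eu-west"},
        "provider_c": {"price_per_hour": 0.090, "region": "us-east"},
    }

    def place_workload(clouds, required_region_prefix=""):
        """Return the cheapest cloud whose region satisfies the requirement."""
        candidates = {
            name: info for name, info in clouds.items()
            if info["region"].startswith(required_region_prefix)
        }
        if not candidates:
            raise ValueError("no cloud satisfies the region requirement")
        return min(candidates, key=lambda name: candidates[name]["price_per_hour"])

    # Keep the data in Europe, but use the cheapest European cloud:
    print(place_workload(CLOUDS, required_region_prefix="eu"))  # -> provider_b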

At the application level, Amazon AWS doesn’t offer extensive hybrid cloud functionality at present, but is expanding steadily. Google doesn’t offer any hybrid cloud capabilities. Thanks to their public and private cloud solutions, Microsoft and HP are able to offer hybrid cloud scenarios on a global scale. In addition, Microsoft has the Cloud OS Partner Network, which enables companies to build Microsoft-based hybrid clouds together with a hosting provider. As a German provider, T-Systems has the capabilities to build hybrid clouds on a local as well as a global scale. Local providers like Pironet NDH offer hybrid capabilities on German ground.

Myths: Data Privacy and Data Security

Since Edward Snowden and the NSA scandal, many myths have been created around data privacy and data security. Providers, especially from Germany, advertise with higher security and protection against espionage and other attacks when the data is stored in a German data center. The confusion: when it comes to security, two different terms are frequently mixed up – data security and data privacy.

Data security means the implementation of all technical and organizational procedures in order to ensure confidentiality, availability and integrity of all IT systems. Public cloud providers by far offer better security than a small business is able to achieve. This is due to the investments that cloud providers are making to build and maintain their cloud infrastructures. In addition, they employ staff with the right mix of skills and have created appropriate organizational structures. For this purpose, they invest billions of US dollars annually. There are only a few companies outside the IT industry that are able to achieve the same level of IT security.

Data privacy is about the protection of personal rights and privacy during data processing. This topic causes the biggest headaches for most companies, since the legislator doesn’t make it easy for them. A customer has to audit the cloud provider in compliance with the local federal data protection act. In this case, it is advisable to rely on the expert report of a public auditor, since it is time and resource consuming for a public cloud provider to be audited by each of its customers. Data privacy is a very important topic; after all, it is about sensitive data. However, it is essentially a legal matter whose requirements must be ensured by data security procedures.

A German data center as protection against espionage from friendly countries is and will remain a myth. Where there’s a will, there’s a way. When an attacker wants to get at the data, it is only a question of the criminal energy he is willing to expend and the funds he is able to invest. If the technical hurdles are too high, there is still the human factor as an option – and a human is generally “purchasable”.

However, US American cloud players have recognized the concerns of German companies and have announced or started to offer their services from German data centers – among others Salesforce (partnership with T-Systems), VMware, Oracle and Amazon Web Services. Nevertheless, a German data center has nothing to do with higher data security. It just fulfills:

  • The technical requirements for cloud connectivity (low latency, high throughput and availability).
  • The regulatory framework of the German data privacy level.

Technical Challenge

During the general technical assessment of an IaaS provider the following characteristics should be considered:

  • Scale-up or scale-out infrastructure
  • Container support for better portability
  • OpenStack compatibility for hybrid and multi cloud scenarios

Scalability is the ability to increase the overall performance of a system by adding more resources – complete computing units or granular units like CPU and RAM. Using this approach, the system performance can grow linearly with increasing demand, so unexpected load peaks can be absorbed and the system doesn’t break down. Scalability distinguishes between scale-up and scale-out. Scale-out (horizontal scalability) increases the system performance by adding complete compute units (virtual machines) to the overall system. In contrast, scale-up (vertical scalability) increases the system performance by adding further granular resources to the overall system, such as storage, CPU or RAM. Taking a closer look at the top cloud applications, these are mainly developments by startups, uncritical workloads or developments from scratch. Attention should be paid to the scale-out concept, which makes it complicated for enterprises to move their existing applications and systems into the cloud. At the end of the day, the customer has to develop everything from scratch, since a system that wasn’t developed in a distributed fashion won’t work as it should on a distributed scale-out cloud infrastructure. The sketch below illustrates the point.
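
The following minimal Python sketch, built on simplified assumptions (a round-robin load balancer, a plain dict standing in for an external session store), shows why: state kept locally on one node disappears as soon as the next request lands on another node, while state held in a shared store survives scale-out.

    # Why scale-out forces a redesign: two web nodes behind a round-robin
    # load balancer. Node-local session state gets lost between requests;
    # a shared external store (simulated by a plain dict) does not.
    import itertools

    class Node:
        def __init__(self, name):
            self.name = name
            self.local_sessions = {}  # scale-up thinking: state lives on the node

    SHARED_STORE = {}                 # scale-out thinking: state lives outside

    nodes = [Node("node-1"), Node("node-2")]
    round_robin = itertools.cycle(nodes)

    def handle_login(user):
        node = next(round_robin)
        node.local_sessions[user] = "logged-in"
        SHARED_STORE[user] = "logged-in"

    def handle_next_request(user):
        node = next(round_robin)  # very likely a different node
        return node.name, node.local_sessions.get(user), SHARED_STORE.get(user)

    handle_login("alice")
    print(handle_next_request("alice"))  # ('node-2', None, 'logged-in')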

IT decision makers should consider that their IT architects will detach from the underlying infrastructure in the future in order to move applications and workloads across different providers without borders. Container technologies like Docker make this possible. From the IT decision maker’s point of view, the selection of a provider that supports e.g. Docker is thus a strategic tool to optimize modern application deployments. Docker helps to ensure the portability of an application, to increase availability and to decrease the overall risk.
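
As a small illustration of this portability, the following sketch uses the Docker SDK for Python (pip install docker). It assumes a local Docker daemon and uses the public “python:3” image merely as an example; the very same artifact runs unchanged on a laptop, an on-premise host or any Docker-capable IaaS provider.

    # Portability sketch with the Docker SDK for Python.
    import docker

    client = docker.from_env()  # connects to the local Docker daemon

    # Pull and run the identical artifact anywhere Docker is available:
    output = client.containers.run("python:3", ["python", "-c", "print('portable')"])
    print(output.decode().strip())  # -> portable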

Hybrid and multi cloud scenarios are not only a trend but reflect reality. Cloud providers should act in the interest of their customers and, instead of using proprietary technology, also bet on open source technologies or a de-facto standard like OpenStack. In this way they enable interoperability between cloud service providers and create the conditions for a comprehensive ecosystem in which users get better comparability as well as the ability to build and manage truly multi cloud environments. This is the groundwork that empowers IT buyers to benefit from the strengths of individual providers and the best offerings on the market. Open approaches like OpenStack foster the future freedom of action of IT buyers across provider and data center borders. This makes OpenStack an important cloud-sourcing driver.
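
A short sketch of what this interoperability looks like in practice, using the openstacksdk Python library (pip install openstacksdk): the identical code can talk to any OpenStack-compatible provider. The cloud names are assumed entries in a local clouds.yaml, not real providers.

    # Interoperability sketch: the same code lists servers on any
    # OpenStack-compatible cloud. "provider-a" and "provider-b" are
    # placeholder entries in the user's clouds.yaml.
    import openstack

    for cloud_name in ("provider-a", "provider-b"):
        conn = openstack.connect(cloud=cloud_name)
        print("Servers at", cloud_name)
        for server in conn.compute.servers():
            print(" -", server.name, server.status)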

Each way is an individual path

Depending on the requirements, the way to the holy IaaS grail can become very rocky. In particular, enterprise workloads are more difficult to handle than novel web applications. Regardless of this, it must be considered that applications running on IaaS usually have to be developed from scratch. This depends on the particular provider, but in most cases it is necessary in order to exploit the provider’s specific characteristics. The following points can help in mastering the individual path:

  • Know and understand your own applications and workloads
  • Perform a data classification
  • Don’t confuse data privacy with data security
  • Evaluate the cloud model: Self-service or managed service
  • Check hybrid and multi cloud scenarios
  • Estimate the required local and global reach
  • Don’t underestimate cloud connectivity
  • Evaluate container technologies for the technological independence of applications
  • Consider OpenStack compatibility

Categories
Cloud Computing

Amazon WorkMail: Amazon AWS is moving up the cloud stack

For a long time, the Amazon Web Services portfolio was the place to go for developers and startups that used the public cloud infrastructure to launch test balloons or to chase their dream of becoming the next billion dollar company. Over the years, Amazon understood that startups don’t have the deepest pockets and that the real money comes from established companies. New SaaS applications have been released to address enterprises that still haven’t found their way to the Amazon cloud. The next coup is Amazon WorkMail, a managed e-mail and calendar service.

Overview: Amazon WorkMail

Amazon WorkMail is a fully managed e-mail and calendar service like Google Apps for Work or Microsoft Office 365/Microsoft Exchange. This means that a customer doesn’t have to administer the e-mail infrastructure including the necessary servers and software, and only needs to take responsibility for managing the users, email addresses and security policies at user level.

Amazon WorkMail offers access via a web interface and supports Outlook clients as well as mobile devices via the Exchange ActiveSync protocol. The administration of all users is handled with the recently released AWS Directory Service.
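
For illustration: user administration can meanwhile also be scripted via boto3’s workmail client, an admin API that postdates the original WorkMail announcement. The organization ID below is a placeholder.

    # Sketch: list the users of a WorkMail organization with boto3
    # (pip install boto3). The organization ID is a placeholder.
    import boto3

    workmail = boto3.client("workmail", region_name="eu-west-1")

    response = workmail.list_users(OrganizationId="m-0000000000000000000000000000000000")
    for user in response["Users"]:
        print(user["Name"], "-", user["State"])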

Amazon WorkMail is integrated with several existing AWS services like AWS Directory Service, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS) and Amazon Simple Email Service (SES). The integration with Amazon WorkDocs (formerly Amazon Zocalo) allows sending and sharing documents within an email workflow.

E-Mail as a starter drug

E-Mail. A no-brainer, you’d think. However, as IBM’s and Microsoft’s recent investments in this topic show, e-mail is still an evergreen. E-mail belongs to the category of “low hanging fruits”, i.e. those products with which success can be gained quickly without much effort.

In the case of Amazon WorkMail, the portfolio extension is a logical step. The development of the service catalogue with services like Amazon WorkSpaces (desktop-as-a-service) and Amazon WorkDocs (file sync and share) especially targets enterprise customers for whom the Amazon cloud hasn’t been a point of contact so far. This has several reasons. The main reason is that the Amazon cloud infrastructure is a programmable building block, primarily attractive for those who want to develop their own web-based applications on it. With the help of value-added services or enablement services, additional value can be created out of the pure infrastructure resources like Amazon EC2 (compute) or Amazon S3 (object storage). Because at the end of the day, an EC2 instance is just a virtual machine and offers no additional value on its own.
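
The following minimal boto3 sketch (pip install boto3) underlines this building-block character: a single API call returns a bare virtual machine, and everything on top of it remains the customer’s job. The AMI ID is a placeholder, not a real image.

    # "Programmable building block": launching a bare EC2 instance.
    import boto3

    ec2 = boto3.resource("ec2", region_name="eu-central-1")  # Frankfurt region

    instances = ec2.create_instances(
        ImageId="ami-00000000",  # placeholder AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", instances[0].id)  # scaling, HA etc. remain the user's job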

Most companies that want to deal with low complexity and little effort at the infrastructure level and achieve success in the short run are overwhelmed by the self-service mode of the Amazon cloud. Most of them lack the necessary cloud knowledge and developer skills to ensure scalability and high availability of the virtual infrastructure. The AWS service offering has meanwhile become versatile, but it still addresses real infrastructure professionals and developers.

The continuous portfolio development lets AWS move up the cloud stack. After infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS), WorkSpaces, WorkDocs and WorkMail show that Amazon has finally arrived in the software-as-a-service (SaaS) market. Amazon has started to use its own cloud infrastructure to offer higher-value services. Oracle is doing exactly the opposite: after starting at the SaaS layer, the database giant is now moving down the cloud stack to IaaS.

E-mail is still an important business process in the enterprise world. Thus, it is just a logical step for Amazon to be part of the game as well. At the same time, WorkMail can act as a starter drug for potential new customers to explore the Amazon cloud and discover other benefits. Furthermore, the partner network of system integrators can use Amazon WorkMail to offer their customers a managed e-mail solution. How successful Amazon WorkMail will be remains to be seen. Google Apps, Microsoft Hosted Exchange, Zoho or Mailbox.org (powered by Open-Xchange) are only some of the mature solutions on the market.

In the end, one important point must be considered: IaaS the Amazon way is the ideal approach for developing own web applications and services. Managed cloud services and SaaS help to adopt new technologies in the short run. Amazon WorkSpaces, WorkDocs and WorkMail belong to the latter category.

Categories
Cloud Computing Open Source

Signature Project: SAP Monsoon

Until now, SAP hasn’t made a strong impression in the cloud. The Business ByDesign disaster and the regular changes in the cloud unit’s leadership are only two examples that reveal the desolate situation of the German flagship corporation from Walldorf in this market segment. At the same time, the powerful user group DSAG is up in arms. The complexity of SAP’s cloud ERP as well as the lack of HANA business cases are some of the issues. The lack of transparency of prices and licenses, as well as a sinking appreciation for the maintenance agreements – since the support doesn’t justify the value of the support fees – lead to uncertainty and irritation on the customer side. On top of this, a historically grown and complex IT infrastructure causes a significant efficiency bottleneck in operations. However, a promising internal cloud project might set the course for the future if it is thoroughly implemented: Monsoon.

Over the years, the internal SAP cloud landscape has evolved into a massive but very heterogeneous infrastructure boasting an army of physical and virtual machines, petabytes of RAM and petabytes of cloud storage. New functional requirements, changes in technology as well as a number of M&As have resulted in various technology silos, greatly complicating any migration efforts. Highly diverse technology approaches and a mix of VMware vSphere and XEN/KVM distributed over several datacenters worldwide lead to increasingly high complexity in SAP’s infrastructure operations and maintenance.

Application lifecycle management is the icing on the cake, as installations and upgrades are manual, semi-automated or automated, depending on the age of the respective cloud. This unstructured environment is by far not an SAP-specific problem, but rather represents the reality in mid-size to large cloud infrastructures whose growth has not been controlled over the last years.

„Monsoon“ in focus – a standardized and automated cloud infrastructure stack

Even if SAP is in good company with this challenge, the situation leads to vast disadvantages at the infrastructure, application, development and maintenance layers:

  • The time developers wait for new infrastructure resources is too long, leading to delays in the development and support process.
  • Only entire releases can be rolled out, a stumbling block which results in higher expenditures in the upgrade/ update process.
  • IT operations keep their hands on the IT resources and wait for resource allocation approvals by the responsible instances. This affects work performance and leads to poor efficiency.
  • A variety of individual solutions makes a largely standardized infrastructure landscape impossible and leads to poor scalability.
  • Technology silos distribute the necessary knowledge across too many heads and exacerbate the difficulties in collaboration during the troubleshooting and optimization of the infrastructure.

SAP addresses these challenges proactively with its project “Monsoon”. Under the leadership of Jens Fuchs, VP Cloud Platform Services Cloud Infrastructure and Delivery, the various heterogeneous cloud environments are intended to become a single homogeneous cloud infrastructure, which should be extended to all SAP datacenters worldwide. A harmonized cloud architecture, widely supported uniform IaaS management, as well as automated end-to-end application lifecycle management form the foundation of the “One Cloud”.

As a start, SAP will improve the situation of its in-house developers. The foundation of a more efficient development process is laid upon standardized infrastructure, streamlining future customer application deployments. For this purpose, “Monsoon” is implemented in DevOps mode: development and operations of “Monsoon” are split into two teams who work hand in hand to reach a common goal. Developers get access to the required standardized and on-demand IT resources (virtual machines, developer tools, services) through a self-service portal. Furthermore, this mode enables the introduction of so-called continuous delivery. This means that parts of “Monsoon” have already been implemented and are actively used in production while other parts are still in development. After passing through development and testing, components are transferred directly into the production environment without waiting for a separate release cycle. As a result, innovation growth is fostered.

Open Source and OpenStack are the imperatives

The open source automation solution Chef is the cornerstone of “Monsoon’s” self-service portal, enabling SAP’s developers to deploy and automatically configure the needed infrastructure resources themselves. This also applies to self-developed applications. In general, the “Monsoon” project makes intensive use of open source technologies. In addition to the hypervisors XEN and KVM, other solutions like the container virtualization technology Docker or the platform-as-a-service (PaaS) Cloud Foundry are being utilized.

The anchor of this software-defined infrastructure is OpenStack. The open source project, which can be used to build complex and massively scalable cloud computing infrastructures, supports IT architects in the orchestration and management of their cloud environments. Meanwhile, a powerful conglomerate of vendors stands behind the open source solution, trying to position OpenStack and their own services built on OpenStack prominently in the market. Another wave of influence emerges through a range of developers and other interested parties who contribute to the project. At present, around 19,000 individuals from 144 countries participate in OpenStack, signifying that the open source project is also an interest group and a community. The broad support can be verified by a range of service providers and independent software vendors who have developed their services and solutions compatible with the OpenStack APIs. Since its inception, OpenStack has continuously evolved into an industry standard and is destined to become the de facto standard for cloud infrastructure.

At the cloud service broker and cloud integration layer, SAP “Monsoon” relies on OpenStack Nova (Compute), Cinder (Block Storage), Neutron (Networking) and Ironic (Bare Metal). OpenStack Ironic enables “Monsoon” to deploy physical hosts as easily as virtual machines. Among other things, OpenStack as the cloud service management platform is responsible for authentication, metering, as well as billing and orchestration. OpenStack’s infrastructure and automation APIs help developers to create their applications for “Monsoon” and deploy them on top. In addition, external APIs like Amazon EC2 can be used in order to distribute workloads over several cloud infrastructures (multi cloud).
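
As a sketch of what this looks like from a developer’s perspective, the following openstacksdk snippet requests a host; whether Nova schedules it as a virtual machine or, via Ironic, as a physical host is decided solely by the chosen flavor. Cloud name, image, flavor and network IDs are placeholders for illustration, not SAP’s actual configuration.

    # Provisioning sketch with openstacksdk: a bare-metal host is requested
    # exactly like a virtual machine - only the flavor differs.
    import openstack

    conn = openstack.connect(cloud="monsoon")  # assumed clouds.yaml entry

    server = conn.compute.create_server(
        name="hana-node-01",
        image_id="IMAGE_ID",              # placeholder
        flavor_id="BAREMETAL_FLAVOR_ID",  # bare-metal flavor -> Ironic backend
        networks=[{"uuid": "NETWORK_ID"}],
    )
    conn.compute.wait_for_server(server)
    print(server.name, "is", server.status)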

On the one hand, this open approach gives SAP the ability to build a standardized infrastructure that supports VMware vSphere alongside OpenStack. On the other hand, it is also possible to execute hybrid deployments for both internal and external customers. The on-demand provisioning of virtual as well as physical hosts completes the hybrid approach. Compared to virtual machines, the higher performance of physical machines shouldn’t be underestimated – HANA will appreciate it.

Study: SAP regarded as a powerful OpenStack partner

SAP’s open source focus on OpenStack is nothing new. First announcements were already made in July 2014 and show the increasing importance of open source technologies to well-established industry giants.

In the meantime, SAP’s OpenStack engagement has also gotten around on the user side. In the context of the very first empirical OpenStack study in the DACH market, “OpenStack in the Enterprise”, Crisp Research asked 700+ CIOs about their interests, plans and the operational status of OpenStack.

The study concluded that cloud computing has finally arrived in Germany. For 19 percent of the sampled IT decision makers, cloud computing is an inherent part of their IT agenda and IT production environments. 56 percent of German companies are in the planning or implementation phase and are already using cloud as part of first projects and workloads. One can also say that the OpenStack wave arrived in Germany in 2014. Almost every second cloud user (47 percent) has heard of OpenStack, and 29 percent of the cloud users are already actively dealing with the new technology. While 9 percent of the cloud users are still in the information phase, one in five (19 percent) have already started planning and implementing their OpenStack project. However, only two percent of the cloud users are using OpenStack in their production environments. For now, OpenStack is therefore a topic for pioneers.

On the subject of the performance ability of OpenStack partners, the study has shown that cloud users supportive of OpenStack appreciate SAP’s OpenStack engagement and accordingly expect a lot from SAP. Almost half of the sampled IT decision makers attribute “a very strong” performance ability to SAP, IBM and HP.

„Monsoon“ – Implications for SAP and the (internal) Customer

In view of the complexity associated with “Monsoon”, the project rather deserves the name “Mammoth”. Moving a tanker like SAP into calm waters is not an easy task, and encouraging standardization within a very dynamic company will raise the anticipated barriers. In particular, when further acquisitions are pending, the main challenge is to integrate them into the existing infrastructure. However, “Monsoon” seems to be on the right way to building a foundation for stable and consistent cloud infrastructure operations.

As a start, SAP will benefit organizationally from the project. The company from Walldorf promises its developers time savings of up to 80 percent for the deployment of infrastructure resources. Virtualized HANA databases, for example, can be provided in a completely automated way, decreasing the wait time from about one month to one hour.

In addition to the time advantage, “Monsoon” also helps developers to focus on their core competency (software development). In former times, developers were involved in further processes such as the configuration and provisioning of the needed infrastructure; now they can independently deploy virtual machines, storage or load balancers in a fully automated way. Besides fostering the adoption of a cost-effective and transparent pay-per-use model, where used resources are charged by the hour, standardized infrastructure building blocks also support cost optimization. For this purpose, infrastructure resources are combined into standardized building blocks and provided across SAP’s worldwide datacenters.

The continuous delivery approach introduced by “Monsoon” is well positioned to gain momentum at SAP. The “Monsoon” cloud platform is regularly extended during operations, and SAP is saying good-bye to fixed release cycles.

External customers will benefit from “Monsoon” in the mid-term, as SAP is applying the experience gained in the project in its work with customers and will also let it flow into future product deployments (e.g. continuous delivery).

SAP is burning too many executives in the cloud

SAP will not fail at the technical implementation of “Monsoon”. The company employs too many highly qualified people who are equipped with the necessary knowledge. However, the ERP giant is incessantly showing signs of weakness on the organizational level. This vehemently raises the question why ambitious employees are never allowed to see their visions through to the end. SAP has burned several of its cloud senior managers (Lars Dalgaard is a prime example). For some reason, committed and talented executives who try to drive something forward within the company seem to have a hard time.

SAP should start to act on its customers’ terms. This means not only thinking about the shareholders, but also following a long-term vision (greetings from Amazon’s Jeff Bezos). The “Monsoon” project could be a beginning. Hopefully, Jens Fuchs and his team will be allowed to implement the ambitious goal – the internal SAP cloud transformation – successfully to the end.

– – –
Image source: Christian heinze / pixelio.de