Categories
Cloud Computing

Microservice: Cloud and IoT applications force the CIO to create novel IT architectures

The digital transformation challenges CIOs to remodel their existing IT architectures, providing their internal customers with a dynamic platform that delivers better agility and fosters the company's capacity to innovate. This change calls for a complete rethink of historically grown architecture concepts. Even if most current efforts focus on migrating existing enterprise applications into the cloud, CIOs have to empower their IT teams to consider novel development architectures, because modern applications and IoT services are innovative and cloud based.

Microservice: Background and Meaning

Typical application architectures, metaphorically speaking, resemble a “monolith”: a big, massive stone made of one piece. The characteristics are the same in both cases: heavy, inflexible and difficult or impossible to modify.

Over the last decades, most applications have been developed as monoliths. This means that an application includes all modules, libraries and dependencies that are necessary to ensure smooth functionality. This architecture concept has a significant drawback: if only a small piece of the application needs to change, the whole application has to be compiled, tested and deployed again – including all parts that don't change at all. This costs manpower, time and IT resources, and in most cases leads to delays. In addition, a monolith makes it difficult to ensure:

  • Scalability
  • Availability
  • Agility
  • Continuous Delivery

CIOs can meet these challenges by changing the application architecture from one big object to a design composed of small, independent objects. All parts are integrated with each other and together provide the overall functionality of the application. Changing one part doesn't change the characteristics and functionality of the other parts. This means that each part runs as an independent process, or service. This concept is known as microservice architecture.

What is a Microservice?

A microservice is an encapsulated piece of functionality that is developed and operated independently. It is a small, autonomous software component (service) that provides a sub-function within a big distributed software application. Thus, a microservice can be developed and delivered independently and scales autonomously.
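The idea can be illustrated with a minimal sketch: a single, self-contained HTTP service that encapsulates exactly one sub-function. The service name, route and data below are hypothetical, not taken from any particular provider.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sub-function of a bigger application: a price lookup
# service, exposed over HTTP so other services can consume it.
PRICES = {"basic": 9, "pro": 29}

class PriceServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route /price/<plan> returns the price of one plan as JSON.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "price" and parts[1] in PRICES:
            body = json.dumps({"plan": parts[1], "price": PRICES[parts[1]]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def start_service(port=0):
    """Run the service in a background thread; returns (server, port)."""
    server = HTTPServer(("127.0.0.1", port), PriceServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

In a real microservice architecture, each such service would live in its own codebase, be deployed and scaled on its own, and be reachable by other services only through its API.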

Application architectures based on microservices are modularized and can thus be extended with new functionality more easily and faster, and be better maintained during the application lifecycle.

Unlike traditional application architectures, modern cloud based architectures follow a microservice approach. This is because cloud native application architectures have to be adapted to the characteristics of the cloud: issues like scalability and high availability have to be considered from the very beginning. The benefits of microservice architectures are related to the following characteristics:

  • Better scalability: A sub-service of an application can scale autonomously if its functionality experiences higher demand, without affecting the remaining parts of the application.
  • Higher availability of the entire application: A sub-service that experiences an error doesn't take down the entire application but only the functionality it represents. This means that a sub-failure doesn't necessarily affect customer-facing functionality, e.g. if the failing service is a backend service.
  • Better agility: Changes, improvements and extensions can be implemented independently of the overall application functionality, without affecting other sub-services.
  • Continuous delivery: These changes, improvements and extensions can be rolled out on a regular basis without updating the whole application and without a major maintenance window.

Another benefit of microservice architectures: a microservice can be used in more than one application. Developed once, it can serve its functionality in several application architectures.

Which Providers work with Microservices?

Today, a number of providers have already understood the significance of microservice architectures. However, the big infrastructure players in particular have their difficulties with this transformation. Startups and cloud native companies show how it works:

  • Amazon Web Services
    From the very beginning Amazon AWS has built its cloud infrastructure out of microservices (building blocks). Examples: Amazon S3, Amazon SNS, Amazon ELB, Amazon Kinesis, Amazon DynamoDB, Amazon Redshift
  • Microsoft Azure
    From the very beginning the cloud platform has consisted of microservices. The recently introduced Azure Service Fabric offers capabilities for developing one's own microservices. Examples: Stream Analytics, Batch, Logic App, Event Hubs, Machine Learning, DocumentDB
  • OpenStack in general
    The community extends the OpenStack portfolio with new microservices with each release mainly for infrastructure operations. Examples: Object Storage, Identity Service, Image Service, Telemetry, Elastic Map Reduce, Cloud Messaging
  • IBM Bluemix
    IBM's PaaS Bluemix provides a range of microservices, offered directly by IBM or via external partners. Examples: Business Rules, MQ Light, Session Cache, Push, Cloudant NoSQL, Cognitive Insights
  • Heroku/Salesforce
    Heroku's PaaS offers “Elements”, a marketplace of ready-made external services that can be integrated as microservices into one's own application. Examples: Redis, RabbitMQ, Sendgrid, Raygun.io, cine.io, StatusHub
  • Giant Swarm
    Giant Swarm offers developers an infrastructure for the development, deployment and operation of microservice based application architectures. For this purpose, Giant Swarm uses technologies like Docker and CoreOS.
  • cloudControl
    cloudControl’s PaaS offers “Add-ons”, a marketplace to extend self-developed applications with services from external partners. Examples: ElephantSQL, CloudAMQP, Loader.io, Searchify, Mailgun, Cloudinary

Based on their microservice portfolios, these providers offer a programmable construction kit of ready-made services that accelerate the development of an application. These are ready-made building blocks (see hybrid and multi-cloud architectures) whose functionality doesn't have to be developed again. Instead they can be used directly as a “brick” within one's own source code.

Example of a Microservice Architecture

Netflix, the video on demand provider, is not only a cloud computing pioneer but also one of the absolute role models for IT architects. Under the direction of Adrian Cockcroft (now Battery Ventures), Netflix has developed its own powerful microservice architecture to operate its video platform in a highly scalable and highly available way. Services include:

  • Hystrix = Latency and fault tolerance
  • Simian Army = Resilience testing for high-availability (Chaos Monkey and friends)
  • Asgard = Application deployment
  • Exhibitor = Monitoring, backup and recovery
  • Ribbon = Inter process communication (RPC)
  • Eureka = Load balancing and failover
  • Zuul = Dynamic routing and monitoring
  • Archaius = Configuration management
  • Security Monkey = Security and tracking services
  • Zeno = In-memory framework

Netflix bundles all these microservices in its “Netflix OSS” stack, which can be downloaded as open source from GitHub.
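The latency and fault-tolerance idea behind a component like Hystrix can be sketched as a circuit breaker. This is a simplified illustration of the pattern, not Netflix's implementation; all parameters are invented:

```python
import time

class CircuitBreaker:
    """Simplified circuit breaker: after `max_failures` consecutive
    failures the circuit 'opens' and calls fail fast for `reset_after`
    seconds instead of hitting the broken downstream service again."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure counter
        return result
```

The point of the pattern is exactly the availability benefit described above: a failing sub-service is isolated quickly instead of dragging down its callers through timeouts.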

An example from Germany is Autoscout24. The automotive portal faces the challenge of replacing its 2,000 servers, distributed over two data centers, and the currently used technologies based on Microsoft, VMware and Oracle. The goal: a microservice architecture supported by a DevOps model to implement a continuous delivery approach. Autoscout24 wants to stop its monthly releases and instead deliver improvements and extensions on a regular basis. Autoscout24 decided on the Amazon AWS cloud infrastructure and has already started the migration phase.

Microservice: The Challenges

Despite the benefits, microservice architectures come along with several challenges. Besides the necessary cloud computing knowledge (concepts, technologies, etc.), these are:

  • Higher operational complexity, since the services are very agile and movable.
  • Additional complexity from developing a massively distributed system, including latency, availability and fault tolerance.
  • Developers need operational knowledge (DevOps).
  • API management and integration play a major role.
  • A complete end-to-end test is mandatory.
  • Ensuring holistic availability and consistency of the distributed data.
  • Avoiding high latency of the individual services.

The Bottom Line: What CIOs should consider

Today, standard web applications (42 percent) still represent the major part of the workloads on public IaaS platforms. They are followed at a distance by mobile applications (22 percent), media streaming (17 percent) and analytics services (12 percent). Enterprise applications (4 percent) and Internet of Things (IoT) services (3 percent) still play a minor part. The reason for the current segmentation: websites, backend services and content streaming (music, videos, etc.) are perfect for the public cloud. Enterprises, on the other hand, are still in the middle of their digital transformation, evaluating providers and technologies for a successful change, while IoT projects are still at the beginning or in the idea stage. Thus, in 2015 IoT workloads make up only a small proportion of public cloud environments.

By 2020 this ratio will change significantly. Along with the increasing cloud knowledge within enterprise IT and the ever-expanding market maturity of public cloud environments for enterprise applications, the proportion of this category will increase worldwide from 4 percent to 12 percent. Accordingly, the proportion of web and mobile applications as well as content streaming will decrease. Instead, IoT workloads will represent almost a quarter (23 percent) of the workloads on public IaaS platforms worldwide.

These influences challenge CIOs to rethink their technical agenda and to think about a strategy that enables their company to keep up with the shift in the market. They have to react to the end of the application lifecycle early enough, replacing old applications and looking at modern application architectures. However, a competitive advantage only exists if things are done differently from the competition, not merely better (operational excellence). This means that CIOs have to contribute significantly by helping to develop new business models and new products, acting like an IT factory. The wave of new services and applications in the context of the Internet of Things (IoT) is just one opportunity.

Microservice Architecture: The Impact

Microservice architectures help IT departments respond faster to the requirements of business departments and thus ensure a faster time-to-market. In doing so, independent silos need to be broken down and a digital umbrella should be stretched over the entire organization. This includes the introduction of the DevOps model to develop microservices in small, distributed teams. Modern development and collaboration tools enable this approach for globally distributed teams. This helps to mitigate the shortage of skilled labor in certain countries by recruiting specialists from all over the world. So a microservice team with the roles product manager, UX designer, developer, QA engineer and DB admin could be established across the world, accessing the cloud platform via pre-defined APIs. Another team, composed of system, network and storage administrators, operates the cloud platform.

Decision criteria for microservice architectures are:

  • Better scalability of autonomously acting services.
  • Faster response time to new technologies.
  • Each microservice is a single product.
  • The functionality of a single microservice can be used in several other applications.
  • Employment of several distributed teams.
  • Introduction of the continuous delivery model.
  • Faster onboarding of new developers and employees.
  • Microservices can be developed more easily and faster for a specific business purpose.
  • Integration complexity can be reduced since a single service contains less functionality and thus less complexity.
  • Errors can be isolated easier.
  • Small things can be tested easier.

However, the introduction of microservice architectures is not only a change on the technical agenda. Rethinking the enterprise culture and interdisciplinary communication is essential. This means that the existing IT and development teams also need to change, either through internal training or external recruiting.

Categories
Cloud Computing Open Source

Round 11: OpenStack Kilo

The OpenStack community hits round 11. Last week the newest OpenStack release, “Kilo”, was announced – with remarkable numbers. Almost 1,500 developers and 169 organizations contributed source code, patches etc. Top supporting companies for OpenStack Kilo include Red Hat, HP, IBM, Mirantis, Rackspace, Yahoo!, NEC, Huawei and SUSE. OpenStack Kilo is characterized by better interoperability for external drivers and support for new technologies like containers as well as bare-metal concepts.

OpenStack Kilo: New Functions

According to the OpenStack Foundation, almost half of all OpenStack deployments (46 percent) are production environments. Network function virtualization (NFV), i.e. running individual virtual network components, is the fastest-growing use case for OpenStack. One of the lighthouse projects is eBay, which operates OpenStack at large scale.

Essential new functions of OpenStack Kilo

  • OpenStack Kilo is the first release that fully supports the bare-metal service “Ironic” to run workloads directly on physical machines.
  • The OpenStack object storage service “Swift” supports “Erasure Coding (EC)” to fragment data and store it at distributed locations.
  • The “Keystone” identity service was enhanced with identity federation to support hybrid and multi-cloud scenarios.
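To illustrate the idea behind erasure coding: data is fragmented and stored with redundancy so that lost fragments can be rebuilt from the surviving ones. Swift's EC uses far more capable codes; the sketch below uses a single XOR parity fragment, which can recover exactly one lost fragment, purely as an illustration of the principle:

```python
def fragment(data: bytes, k: int):
    """Split data into k chunks plus one XOR parity chunk.
    Returns (fragments, padding) where fragments has k + 1 entries."""
    pad = (-len(data)) % k          # pad so the data splits evenly
    data += b"\x00" * pad
    size = len(data) // k
    chunks = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = bytearray(size)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b          # bytewise XOR across all chunks
    return chunks + [bytes(parity)], pad

def reconstruct(fragments, lost_index):
    """Rebuild the fragment at lost_index by XOR-ing all the others
    (works because XOR-ing every fragment, parity included, gives 0)."""
    size = len(next(f for f in fragments if f is not None))
    rebuilt = bytearray(size)
    for idx, frag in enumerate(fragments):
        if idx == lost_index:
            continue
        for i, b in enumerate(frag):
            rebuilt[i] ^= b
    return bytes(rebuilt)
```

Production systems use Reed-Solomon-style codes that tolerate several simultaneous fragment losses with configurable storage overhead, which is exactly why EC is cheaper than full replication for cold data.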

New features of the OpenStack Core Projects (excerpts)

  • OpenStack Nova Compute
    Improvements for live upgrades when a database schema is changed, and support for changing the resources of a running virtual machine.
  • OpenStack Swift Object Storage
    Support of “Erasure Coding”, temporary access to objects via a URL and improvements for global cluster replication.
  • OpenStack Cinder Block Storage
    Enhancement to attach a volume to multiple virtual machines to implement high-availability and migration scenarios.
  • OpenStack Neutron Networking
    Extension of network function virtualization (NFV) like port security for OpenVSwitch and VLAN transparency.
  • OpenStack Ironic Bare-Metal
    Ironic supports existing virtual machine workloads as well as new technologies like containers (Docker), PaaS and NFV.
  • OpenStack Keystone Identity Service
    The extensions around identity federation help to distribute workloads across public and private clouds to build OpenStack based hybrid and multi-cloud environments.

OpenStack Kilo: Short Analysis and Impact

OpenStack is still growing, even if the high ratio of NFV use cases shows that OpenStack is mainly used in service provider networks to operate individual network components more flexibly and cost-effectively. However, the new Kilo functions for “federated identity”, “erasure coding” and “bare-metal” will move OpenStack up to the top of the CIO agenda.

The support of “erasure coding” is a long overdue function for Swift Object Storage – initial discussions already started for the “Havana” release in 2013. All big public cloud providers have been working with this distribution strategy for years to ensure high availability of data. The introduction of bare-metal comes at the right time. Workload migrations to cloud based infrastructure show with increasing frequency that virtual machines are not suitable for all use cases. Database servers and performance-intensive workloads ideally run on physical machines, whereas distributed workloads like application and web servers are good candidates for virtual machines. Finally, identity federation will help CIOs build seamless OpenStack based hybrid and multi-cloud environments. Users only need a single login to authenticate across multiple providers and get access to servers, data and applications in private and public clouds at once.

This raises the question of how easily and quickly CIOs can benefit from these new functions. The last five years have shown that using OpenStack entails high complexity, mainly because OpenStack is organized as one big project composed of several sub-projects. Only the close interaction of all the sub-projects necessary to support a specific use case is promising. The majority of CIOs who work with OpenStack consider a professional distribution instead of building their own OpenStack version from the source code of the community trunk. In Germany these are 75 percent of the OpenStack users.

Categories
Cloud Computing Internet of Things

API Economy as a competitive factor: iPaaS in the Age of the Internet of Things (IoT) and Multi-Cloud Environments

What do APIs, integration and complexity have in common? All three are inseparable during the growth of an IT project. Integration projects between two or more IT systems often lead to a delay or even the failure of the whole project. Depending on the company size, on-premise environments mostly consist of a relatively manageable number of applications. However, the use of multiple cloud services and the rise of the Internet of Things (IoT) scale this into an excess of integration complexity.

The ever-growing use of cloud services and infrastructure across several providers (multi-cloud) makes a central approach necessary to keep an overview. In addition, it is essential to ensure seamless integration between all cloud resources and the on-premise environment to avoid system and data silos. The variety of cloud services is rising incessantly.

The cloud supports the Internet of Things and its industrial offshoot, the Industrial Internet. Cloud infrastructure and platforms provide the perfect foundation for IoT services and IoT platforms and will lead to a phenomenal rise of IoT business models. The result will be a market of ever new devices, sensors and IoT solutions whose variety and potential cannot be foreseen. However, the demand for integration increases as well. After all, only the connection of various IoT services and devices creates actual value. At the same time, analytics services need access to the collected data from different sources for analysis and correlation purposes.

This access typically happens via the APIs of the cloud and IoT services. As a consequence, the term API economy comes into the spotlight. Integration Platform-as-a-Service (iPaaS) offerings have emerged as good candidates to ensure access, integration, control and management in the cloud and in the Internet of Things.

iPaaS and API Economy: It’s all about the API

Enterprise Application Integration (EAI) was the central anchor in the age of client-server communication to ensure business process integration across the whole value chain. The focus is on the tight interaction of a variety of applications distributed over several independently operated platforms. The goal: the uniform and integrated mapping of all business processes in IT applications, thus avoiding data silos.

However, the transition into the cloud age shifts the usage behavior from on-premise interfaces to the consumption of web APIs (Application Programming Interfaces). After all, almost every cloud and web service provider offers a REST or SOAP based API that makes it possible to integrate the services into one's own application and thus benefit directly from external functions. Along with the increasing consumption of cloud services and the ever-growing momentum of the Internet of Things, the importance of APIs will rise significantly.

API Economy

The cloud native Internet companies reflect this trend. APIs are a central competitive factor for players like Salesforce, Twitter, Google, Amazon and Amazon Web Services, and represent the lifeline of their success. All of these providers have created their own API ecosystems, which their customers and partners use to develop their own offerings.

In this context the term “API economy” is used. The API economy describes the increasing economic potential of APIs. Thanks to mobile, social media and cloud services, APIs are no longer popular only among developers but also find their way onto the memos of CEOs and CIOs who have identified their financial impact. Providers typically benefit from APIs by:

  • Selling (premium) functions within a free of charge service.
  • Charging for the sharing of content through a partner's application or service.

CIOs benefit from the API economy by getting access to a virtually endless choice of applications and services they can use to expand their websites, applications and systems without developing, operating or maintaining these functionalities on their own. Furthermore, APIs give partners, customers and communities easy access to a company's own applications, data and systems, letting the CIO's company become part of the API economy.

Everything works via supposedly “simple” API calls. However, the devil is in the details. Integration and API management are of great significance in the API economy.

iPaaS = Integration Platform-as-a-Service

Over the last years many vendors have emerged that specialize in API management and the integration of different services. These so-called Integration Platform-as-a-Service (iPaaS) offerings are cloud based integration solutions (known in pre-cloud times as “middleware”) that support the interaction between several cloud services. Thus, developers and enterprises get the opportunity to create their own “integration flows” that connect multiple cloud services with each other, but also with on-premise applications.
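What such an “integration flow” does can be sketched in a few lines: extract records from one service, transform the schema, and load them into another. All service names, field names and mappings below are hypothetical, standing in for the connectors an iPaaS would provide:

```python
# Minimal extract-transform-load sketch of an integration flow between
# a (mocked) CRM source and a (mocked) helpdesk sink.

def transform_contact(crm_record: dict) -> dict:
    """Map a CRM contact schema onto a helpdesk requester schema."""
    return {
        "requester_name": f"{crm_record['first_name']} {crm_record['last_name']}",
        "requester_email": crm_record["email"].lower(),
        "organization": crm_record.get("company", "unknown"),
    }

def run_flow(source_records, sink):
    """Push every transformed record into the sink (a list standing in
    for the target service's API); returns the number of records moved."""
    for record in source_records:
        sink.append(transform_contact(record))
    return len(source_records)
```

In a real iPaaS both ends would be API calls to the respective services, and the platform would add scheduling, retries and monitoring around exactly this kind of mapping logic.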

The iPaaS market splits into two camps: the young startups, and the IT majors who have developed or rebuilt their portfolios. iPaaS vendors to watch include (excerpt):

  • 3scale
    The 3scale platform consists of two areas. API Program Management gives an overview of and information about the APIs in use. API Performance Management analyzes the API traffic in the cloud as well as in on-premise infrastructure. Together they make it possible to control and manage the API traffic within one's own system and application architecture.
  • elastic.io
    The elastic.io iPaaS is offered as a cloud service as well as an on-premise installation in one's own infrastructure. Based on Node.js, Java and JSON, elastic.io provides a development framework that can be used to integrate several CRM, financial, ERP and ecommerce cloud services and ensure data integrity. The necessary connectors are provided, e.g. for SAP, SugarCRM, Zendesk, Microsoft Dynamics, Hybris and Salesforce.
  • SnapLogic
    The SnapLogic iPaaS is provided as a SaaS solution and helps to integrate data from cloud services as well as to let SaaS applications interact with each other and with on-premise applications. For this purpose SnapLogic provides ready-made connectors (Snaps and Snaplex) that can be used for integration and data processing. The iPaaS provider primarily focuses on the Internet of Things, connecting data, applications and devices with each other.
  • Software AG
    The central parts of Software AG's iPaaS portfolio are webMethods Integration and webMethods API Management. The webMethods Integration Backbone integrates several cloud, mobile, social and big data services as well as partner solutions via a B2B gateway. webMethods API Management covers all the tasks needed to get an overview and keep control of one's own and externally used APIs. Among other things, the functional range includes design, development, cataloging and version management.
  • Informatica
    The Informatica cloud integration portfolio contains a large service offering specifically for enterprise customers. This includes the Informatica Cloud iPaaS, which is responsible for the bidirectional synchronization of objects between cloud and on-premise applications as well as the replication of cloud data and business process automation. The Integration Services support the consolidation of different cloud and on-premise applications to integrate, process and analyze operational data in real time.
  • Unify Circuit
    Unify Circuit is a SaaS based collaboration suite that combines voice, video, messaging and screen and file sharing – everything organized in “conversations”. Beyond that, Unify has introduced a new PaaS category, cPaaS (Collaborative Platform-as-a-Service): an iPaaS that consolidates PBX, SIP as well as external cloud services like Box.com, Salesforce or Open-Xchange into a uniform collaboration platform. All data is stored at the external partners and is consolidated on the Unify Circuit platform at runtime.

IoT and Multi-Cloud: The future belongs to open platforms

Openness is a hotly debated topic in IT and especially on the Internet. The past, or rather Google, has taught us: the future belongs to open platforms. This is not about openness in the sense of open standards – even, or especially, Google runs diverse proprietary implementations, e.g. Google App Engine.

However, Google understood from the very beginning how to position itself as an open platform – open in the sense of providing access to its own services via APIs. Jeff Jarvis illustrates in his book “What Would Google Do?” how Google, based on its platform, enables other companies to build their own business models and mashups. Not without self-interest, of course. This kind of openness and the right use of the API economy quickly led to widespread adoption and turned Google into a cash cow – via advertising.

Companies like Unify are still far away from being comparable with the Google platform. However, the decision makers at Unify have apparently realized that only an open architecture approach can turn the company from a provider of integrated communication solutions into a cloud integration provider and thus part of the API economy. For this purpose Unify Circuit doesn't only consolidate external cloud services on its collaboration platform, but also enables developers to integrate Circuit's core functions like voice or video as mashups in their own web applications.

From a CIO perspective, integration is crucial to avoid system and data silos. A non-holistic integration of multiple independent systems can harm the overall process. Therefore it is vital that cloud, IoT and Industrial Internet services are seamlessly integrated with each other and with existing systems to fully support all business processes.

Categories
Cloud Computing @de

Analyst Study Report: Cloud Price Performance Evaluation

The current market situation makes it difficult for CIOs and CTOs to differentiate between individual public Infrastructure-as-a-Service (IaaS) providers. An almost endless number of packages, different billing models and the underlying infrastructures can only be compared with great effort. IT decision makers need a reliable basis for comparing the costs and performance of individual providers. Before decision makers outsource larger workloads, they should inform themselves about the implications of price and performance.

The price-performance test “Cloud Price Performance Evaluation” compares the four relevant public cloud providers Amazon Web Services, Google Cloud Platform, Microsoft Azure and ProfitBricks. For the performance measurement, the average response time was taken into account. To ensure an objective test scenario, a real and standardized workload was used that covers the requirements of a majority of users.

Thanks to above-average performance and a strong price, ProfitBricks (93 percent of the reference value) is the price-performance winner. Google (87 percent) follows, ahead of Microsoft and Amazon AWS (73 percent each). This shows that size is not a decisive factor for a good price-performance ratio. Compared to the big players, small and local providers promise a good combination of price and performance.

The study report can be downloaded free of charge at “Cloud Price Performance Evaluation”.

Categories
Cloud Computing Internet of Things

IoT-Backend: The Evolution of Public Cloud Providers in the Internet of Things (IoT)

The Internet of Things (IoT) has jumbled the agendas of CIOs and CTOs faster than expected and with breathtaking velocity. Until recently, cloud, big data and social topics occupied center stage; in the meantime, however, we are talking more and more about the interconnection of physical objects like human beings, sensors, household items, cars, industrial facilities etc. Anyone who thinks that the “Big 4” will now disappear from the radar is wrong. Quite the contrary: cloud infrastructure and platforms are among the central drivers behind IoT services, since they offer the perfect preconditions to serve as vital enablers and backend services.

Public Cloud Workloads: 2015 vs. 2020

The demand for public cloud services shows increasing momentum. On the one hand this is due to the requirement of CIOs to run their applications in a more agile and flexible way. On the other hand, most public cloud providers are addressing the needs of their potential customers. Among the workload categories running on public IaaS platforms, standard web applications (42 percent) still represent the major part. They are followed at a distance by mobile applications (22 percent), media streaming (17 percent) and analytics services (12 percent). Enterprise applications (4 percent) and IoT services (3 percent) still play a minor part.

The reason for the current segmentation: websites, backend services and content streaming (music, videos, etc.) are perfect for the public cloud. Enterprises, on the other hand, are still in the middle of their digital transformation, evaluating providers and technologies for a successful change, while IoT projects are still at the beginning or in the idea stage. Thus, in 2015 IoT workloads make up only a small proportion of public cloud environments.

By 2020 this ratio will change significantly. Along with the increasing cloud knowledge within enterprise IT and the ever-expanding market maturity of public cloud environments for enterprise applications, the proportion of this category will increase worldwide from 4 percent to 12 percent. Accordingly, the proportion of web and mobile applications as well as content streaming will decrease. Instead, IoT workloads will represent almost a quarter (23 percent) of the workloads on public IaaS platforms like AWS, Azure and Co.

Public Cloud Provider: The perfect IoT-Backend

The Internet of Things will quickly become a key factor for the future competitiveness of enterprises. Thus, CIOs have to deal with the technologies necessary to support their enterprise business technology strategy. Public cloud environments – infrastructure (IaaS) as well as platforms (PaaS) – offer perfect preconditions to serve as supporting backend environments for IoT services and devices. The leading public cloud providers have already equipped their environments with the key features to develop into an IoT backend. The central elements of a holistic IoT backend are characterized as follows (excerpt):

  • Global scalability
  • Connectivity/ Connectivity management
  • Service portfolio and APIs
  • Special services for specific industries
  • Platform scalability
  • Openness
  • Data analytics
  • Security & Identity management
  • Policy control
  • Device management
  • Asset and Event management
  • Central hub
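The interplay of two of these elements – device management and asset/event management – can be sketched as a minimal, purely illustrative backend. All names and data structures below are assumptions made for illustration, not any provider's actual API:

```python
import time
import uuid

class IoTBackend:
    """Toy sketch of two IoT backend elements: device management
    (enrolling devices) and event management (accepting telemetry
    only from known devices). Purely illustrative."""

    def __init__(self):
        self.devices = {}   # device_id -> metadata
        self.events = []    # recorded telemetry events

    def register_device(self, name, device_type):
        """Device management: enroll a device and issue an ID."""
        device_id = str(uuid.uuid4())
        self.devices[device_id] = {"name": name, "type": device_type}
        return device_id

    def ingest_event(self, device_id, payload):
        """Event management: accept telemetry from registered devices only."""
        if device_id not in self.devices:
            raise KeyError("unknown device - register it first")
        event = {"device_id": device_id, "payload": payload, "ts": time.time()}
        self.events.append(event)
        return event

backend = IoTBackend()
sensor = backend.register_device("elevator-42", "temperature-sensor")
backend.ingest_event(sensor, {"temp_c": 61.5})
print(len(backend.events))  # prints 1
```

A real backend would add the remaining elements from the list above (identity management, policy control, analytics) around exactly this kind of core.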

Public cloud based infrastructure-as-a-service (IaaS) will mainly be used to provide compute and storage capacity for IoT deployments. IaaS provides enterprises and developers with inexpensive and almost infinite resources to run IoT workloads and store the generated data. Platform-as-a-service (PaaS) offerings will benefit from the IoT market as they provide enterprises with faster access to software development tools, frameworks and APIs. PaaS platforms can be used to develop control systems for managing IoT applications, IoT backend services and IoT frontends, as well as to integrate with third-party solutions to build a complete "IoT value chain". Even the software-as-a-service (SaaS) market will benefit from the IoT market's growth. User-friendly SaaS solutions will enable users, executives, managers, end customers and partners to analyze and share the data generated by interconnected devices, sensors, etc.

Use Cases in the Internet of Things

digitalSTROM + Microsoft Azure
digitalSTROM is one of the pioneers in the IoT market. As a provider of smart home technologies, the Swiss vendor has developed an intelligent solution for connected homes that communicates with several devices over the power supply line via smartphone apps. Lego-like bricks form the foundation: each connected device can be addressed via a single brick, which holds the intelligence of the device. digitalSTROM evaluated the potential of a public cloud environment for its IoT offering early on. Microsoft Azure provides the technological foundation.

General Electric (GE) + Amazon Web Services
General Electric (GE) has created its own IoT factory (platform) within the AWS GovCloud (US) region to interconnect people, simulators, products, sensors, etc. The goal is to improve collaboration, prototyping and product development. GE chose the AWS GovCloud in order to fulfill legal and compliance regulations. One customer already profiting from the IoT factory is E.ON. In the past, when the demand for energy increased, GE typically tried to sell E.ON more turbines. In the course of the digital transformation, GE started early to change its business model. GE now uses operational data from the turbines to optimize their energy efficiency by performing comprehensive analyses and simulations. E.ON gets real-time access to the interconnected turbines to control energy management on demand.

ThyssenKrupp + Microsoft Azure
Together with CGI, ThyssenKrupp has developed a solution to interconnect thousands of sensors and systems within its elevators via the Microsoft Azure cloud, using the Azure IoT services. The solution provides ThyssenKrupp with a range of information from the elevators for monitoring the engine temperature, the shaft calibration, the cabin velocity, the door functionality and more. ThyssenKrupp records the data, transfers it to the cloud and combines it in a single dashboard based on two data types: alarm signals that indicate urgent problems, and events that are stored only for administrative reasons. Engineers get real-time access to the elevator data to make their diagnostics immediately.
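Splitting telemetry into those two data types could, in the simplest case, be a threshold rule. A hedged sketch, where the metric names and threshold values are invented for illustration and are not ThyssenKrupp's actual parameters:

```python
def classify_reading(metric, value, thresholds):
    """Sort a telemetry reading into one of two data types:
    'alarm' for urgent problems (value exceeds its threshold),
    'event' for readings kept only for administrative reasons."""
    limit = thresholds.get(metric)
    if limit is not None and value > limit:
        return "alarm"
    return "event"

# Illustrative thresholds, not real elevator parameters.
THRESHOLDS = {"engine_temp_c": 80.0, "cabin_velocity_ms": 3.0}

print(classify_reading("engine_temp_c", 95.2, THRESHOLDS))  # alarm
print(classify_reading("door_cycles", 1410, THRESHOLDS))    # event
```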

IoT-Backend: Service Portfolio and Development Capacities are central

All of the use cases above show three key developments that will determine the next five years and significantly influence the IaaS market:

  1. IoT applications are a central driver behind IaaS adoption.
  2. Development tools, APIs, and value added services are central decision criteria for a public cloud environment.
  3. Developer and programming skills are crucial.

Thus, public cloud providers should ask themselves whether they have the potential and the preconditions to develop their offering further into an IoT backend. Only those who provide services and development capacities (tools, SDKs, frameworks) in their portfolio will be able to play a central role in the profitable IoT market and be considered as the infrastructure base for novel enterprise and mobile workloads. Note: more and more public cloud infrastructure is being used as an enabler and backend infrastructure for IoT offerings.

Various enablement services are available in the public cloud market that can be used to develop an IoT backend infrastructure.

Amazon AWS services for the Internet of Things:

  • AWS Mobile Services
  • Amazon Cognito
  • Simple Notification Service
  • Mobile Analytics
  • Mobile Push
  • Mobile SDKs
  • Amazon Kinesis

Microsoft Azure IoT-Services:

  • Azure Event Hubs
  • Azure DocumentDB
  • Azure Stream Analytics
  • Azure Notification Hubs
  • Azure Machine Learning
  • Azure HDInsight
  • Microsoft Power BI

So far, Amazon AWS has not undertaken any noteworthy marketing for the Internet of Things. Only a sub-website explains the idea of IoT and which existing AWS cloud services should be considered. Even with Amazon Kinesis – predestined for IoT applications – AWS is taking it easy. However, a look under the hood of IoT solutions reveals that many cloud based IoT solutions are delivered via the Amazon cloud.

Microsoft considers the Internet of Things a strategic growth market and has created Microsoft Azure IoT Services, a specific area within the Azure portfolio. So far, however, this is only a best-of of existing Azure cloud services that encapsulate specific functionality for the Internet of Things.

Public Cloud Providers continuously need to expand their Portfolio

From a strategy perspective, IoT use cases follow the top-down cloud strategy approach: the potential of the cloud is considered and, based on that, a new use case is created. This will significantly shift the ratio from bottom-up to more top-down use cases in the next years. (Today's ratio is about 10 percent top-down to 90 percent bottom-up.) More and more enterprises will start to identify and evaluate IoT use cases to enhance their products with sensors and machine-to-machine communication. The market behavior we see today for fitness wearables (wristbands and devices people use to quantify themselves) will escalate exponentially to other industries.

So, the majority of cloud providers are under pressure and cannot rest on their existing portfolio. Instead, they need to increase their attractiveness by serving their existing customer base as well as potential new customers with IoT enablement services in the form of microservices and cloud modules, because the growth of the cloud and the progress of the Internet of Things are closely bound together.

Categories
Cloud Computing @de

Analyst Strategy Paper: Generation Cloud – Der Markt für MSPs und Systemintegratoren im Wandel

The market for system integrators and managed services providers is in the midst of a lasting transformation. Only those who start their cloud transformation as quickly as possible will survive in the market in the long term. The reason for this development is the changing purchasing behavior of IT decision makers, who are looking for greater flexibility in the use of IT resources.

System integrators and managed services providers are thus facing a fundamental change to their core business and must bring their employees' skillset to "cloud-ready" status as quickly as possible. Public cloud infrastructure offers ideal conditions in terms of price-performance ratio for operating customer systems and applications in a managed services model.

This allows faster reactions to changing customer requirements and shifting market conditions. System integrators and managed services providers can benefit from the high availability, the scalability and the high security standards of public cloud infrastructure. In doing so they can escape their capital-intensive business (shifting from a CapEx to an OpEx model) and make their pricing and sales models more flexible.

Against this background, Crisp Research, in cooperation with ProfitBricks GmbH, has created a strategy paper that deals with the current and future situation of managed services providers and system integrators in the cloud.

The analyst strategy paper can be downloaded for free at "Generation Cloud: Der Markt für MSPs und Systemintegratoren im Wandel".

Categories
Cloud Computing

Cloud Marketplace: A means to execute the Bottom-Up Cloud Strategy

Worldwide, many CIOs are still looking for the right answer to let their companies benefit from cloud computing capabilities. Basically, all kinds of organizations are candidates for a bottom-up cloud strategy – the migration of existing applications and workloads into the cloud. This approach is not particularly innovative, but it offers a relatively high value proposition at low risk. An analysis of market-ready cloud marketplaces shows promising capabilities for implementing the bottom-up cloud strategy in the near term.

The Cloud drives the evolution of IT purchasing

In the blog post "Top-Down vs. Bottom-Up Cloud Strategy: Two Ways – One Goal", two strategy approaches are discussed that can be used to benefit from public cloud infrastructure. One conclusion was that top-down strategies remain reserved for innovators, while bottom-up strategies are mainly realized in the context of moving existing workloads into the cloud. CIOs of prestigious companies worldwide are still searching for best practices to lift their legacy enterprise workloads into the cloud.

A look at the general purchasing behavior for IT resources reveals a disruptive change. Besides CIOs, IT infrastructure managers and IT buyers, department managers are now also demanding a say or going their own way. The driver behind this development: the public cloud. Its self-service model erodes the significance of classical IT purchasing. Obtaining hardware and software licenses from distributors and resellers will become less important in the future. Self-service makes it both convenient and easy to get access to infrastructure resources as well as software. Thus, distributors and resellers systematically have to rethink their business models. System houses and system integrators that still have not started their cloud transformation risk disappearing from the market within the next three to five years. So long!

Besides self-service, the public cloud primarily offers one thing: choice! More than ever before. On the one hand there is the ever-growing variety of supply sources – the providers' solutions. On the other hand there are the different deployment models the public cloud can be combined with. Hybrid and multi-cloud scenarios are the reality.

The next step in this evolution is well underway: cloud marketplaces. Implemented the right way by their operators, they offer IT buyers an ideal central platform for purchasing IT resources. At the same time, they support CIOs in pushing their cloud transformation with a bottom-up strategy.

Bottom-Up Cloud Strategy: Cloud Provider Marketplaces support the implementation

The bottom-up cloud strategy helps companies move existing legacy or enterprise applications into the cloud and benefit from the cloud's capabilities without having to think about innovation or changing the business model. It is mainly about efficiency, costs and flexibility.

In this strategy approach the infrastructure is not the central point but rather a means to an end – the software has to be operated somewhere, and in most cases the purpose is to continue using the existing software in the cloud. At the application level, cloud marketplaces can help companies succeed in the short term. More importantly, they address current challenges and requirements of companies. These are:

  • The distributed purchase of software across the entire organization is difficult.
  • The demand for accessing software in the short term – e.g. for testing purposes – increases.
  • Individual employees and departments are asking for a catalog of categorized and approved software solutions.
  • A centrally organized cloud marketplace helps to work against shadow IT.

The fact that a vast number of valid software licenses are still used on on-premise infrastructure underlines the importance of Bring Your Own License (BYOL). BYOL is a concept by which a company legally continues using its existing software licenses on the cloud provider's infrastructure.

When it comes to supporting the bottom-up cloud strategy with a cloud marketplace, experience has shown that marketplaces owned and operated by cloud providers, such as the Amazon AWS Marketplace or the Microsoft Azure Marketplace, play an outstanding role. Both offer the necessary technology and excellence to make it easy for customers and partners to decide to run applications in the cloud.

Technical advantage, simplicity and, most of all, extensive choice are the key success factors of the cloud marketplaces of public cloud providers. The AWS Marketplace already offers 2,100+ solutions, the Azure Marketplace even 3,000+ solutions. A few mouse clicks and the applications are deployed on the cloud infrastructure – including BYOL. Thus, the infrastructure becomes easily usable for all users.

The providers are investing a lot in the development of their marketplaces – with good reason. Amazon AWS' and Microsoft Azure's own cloud marketplaces have strategic importance: they are the ideal tools to maneuver new customers onto the infrastructure and to increase revenue with existing customers.

Cloud marketplaces operated by public cloud providers are constantly becoming more popular – with verifiable numbers. The marketplaces have reached considerable market maturity and offer a large and wide variety of solutions. Against this backdrop, CIOs who are planning to migrate their existing applications into the cloud should look intensively at cloud marketplaces. These marketplaces are more than just an add-on – they can help accelerate the cloud migration.

Categories
Strategie

Analyst Strategy Report: Managing OpenStack – Heimwerker vs. Smarte Cloudsourcer

OpenStack has quickly become a significant force in the market for cloud infrastructure services. The open-source cloud management framework receives serious support from large IT vendors, service providers and developers. However, cloud and OpenStack expertise remains a scarce commodity in the IT market. Finding suitable employees or training existing staff is very costly and time-consuming. Building and maintaining an own OpenStack environment in a do-it-yourself (DIY) approach with internal resources can also become a highly complex and cost-intensive affair. As a result, companies have in some cases developed their own OpenStack versions that are no longer compatible with the current official releases.

OpenStack distributions, however, offer pre-packaged and pre-configured OpenStack versions backed by professional support from the distribution vendor. This reduces the technological and financial risk of an OpenStack implementation. In addition, a managed OpenStack variant can deliver immediate value, enabling enterprise IT to concentrate on the essential topics and to compensate for missing OpenStack expertise.

Against this background, Crisp Research, in cooperation with Host Europe Solutions GmbH, has created a report comparing the advantages and disadvantages of a managed OpenStack infrastructure with those of a self-operated OpenStack infrastructure.

The analyst strategy report can be downloaded for free at "Managing OpenStack: Heimwerker vs. Smarte Cloudsourcer".

Categories
Cloud Computing

Top-Down vs. Bottom-Up Cloud Strategy: Two Ways – One Goal

Along with the steady growth of the cloud, the question of appropriate cloud use cases arises. After companies like Pinterest, Airbnb, Foursquare, Wooga, Netflix and many others have shown how cloud infrastructure and platforms can be used to create new or even disruptive business models, more and more CEOs and CIOs would like to benefit from the cloud's characteristics. The issue: established companies run many legacy enterprise applications that cannot be moved into the cloud in their existing form. For many decision makers this raises the question of whether they should follow a top-down or a bottom-up strategy.

Cloud Strategy: Top-Down vs. Bottom-Up

An IT strategy has the duty to support the corporate strategy as well as possible. Thus, in line with the increasing digitalization of society and the economy, the value proposition and significance of IT rise considerably. This means that the impact of IT on the corporate strategy will become more important in the future. Assuming that cloud infrastructure, platforms and services are the technological fundament of the digital transformation, it follows that the cloud strategy has a direct impact on the IT strategy.

This raises the question of how far cloud services are able to support the corporate strategy, whether directly or indirectly. This does not necessarily have to be reflected in numbers. If a company, for instance, is able to let its employees work more flexibly – based on a software-as-a-service (SaaS) solution – then it has done something for productivity, which has a positive effect on the company. However, it is important to understand that cloud infrastructure and platforms just serve as a foundation on which companies gain the capabilities to create innovation. The cloud is just a vehicle.

Two approaches can be used to get a better understanding about the impact of cloud computing on the corporate strategy:

  • Top-Down Cloud Strategy
    In the top-down approach, the possibilities of cloud computing are analyzed and a concrete use case is defined. An innovation or idea is created that is enabled by cloud computing. On this basis the cloud strategy is created.
  • Bottom-Up Cloud Strategy
    In the bottom-up approach, an existing use case is examined in light of the possibilities of cloud computing. This means it is analyzed how the cloud can help support the needs of the use case. From this, the respective cloud strategy is derived.

It is mainly the top-down approach that produces new business models or disruptive ideas. Development happens on the green field within the cloud and mostly belongs to innovators. The bottom-up approach follows the goal of moving an existing system or application into the cloud, or redeveloping it there. In this case it is mostly about keeping an existing IT resource alive or, in the best case, optimizing it.

Bottom-Up: Migration of Enterprise Applications

Established companies prefer to follow a bottom-up strategy in order to quickly benefit from the cloud's capabilities. However, the devil is in the details. Legacy or classical enterprise applications were not developed to run on a distributed infrastructure – that is, a cloud infrastructure. This means that it is not natural for them to scale out; at most they are able to scale up, e.g. by using several Java threads on one single system. If this system fails, the application is no longer available either. Every application that is supposed to run in the cloud thus has to follow the characteristics of the cloud and needs to be developed for this purpose. The challenge: companies still lack appropriate staff with the right cloud skills. In addition, companies are intensively discussing "data gravity" – the inertia, i.e. the difficulty of moving data, whether because of the sheer size of the data volume or because of a legal condition that requires storing the data in the company's own environment.

Vendors have recognized the lack of knowledge as well as the data gravity problem and are trying to support the bottom-up strategy with new solutions. With "NetApp Private Storage", NetApp allows companies to balance data gravity between public cloud services and their own sphere of control. Companies have to fulfill various governance and compliance policies and thus have to keep their data under control. One solution is to let the cloud services access the data in a hybrid cloud model without moving it. In this scenario the data is not stored directly in the provider's cloud; instead, the cloud services access the data via a direct connection when processing it. NetApp enables this scenario in cooperation with Amazon Web Services: for example, Amazon EC2 instances can be used to process data that is stored in an Equinix colocation data center, with the connection established via AWS Direct Connect.

Another challenge with enterprise applications that are not cloud-ready arises at the data level in the public cloud – when the data is supposed to leave a provider's cloud. The reason is that cloud native storage types (object storage, block storage) are not compatible with common on-premise storage communication protocols (iSCSI, NFS, CIFS). In cooperation with Amazon Web Services, NetApp Cloud ONTAP tries to remedy this. Acting as a kind of NAS, it stores the data on Amazon Elastic Block Store (EBS) SSDs. In this case Cloud ONTAP serves as a storage controller and ensures that enterprise applications that are not cloud-ready can access the data. Thanks to the compatibility with common communication protocols, the data can be moved more easily.

VMware vCloud Air targets companies with existing enterprise applications. The vCloud Air public cloud platform is based on the vSphere technology and is compatible with on-premise vSphere environments, so existing workloads and virtual machines can be moved back and forth between VMware's public cloud and a virtualized on-premise infrastructure.

ProfitBricks tries to support companies with its Live Vertical Scaling concept. Here, a single server can be vertically extended with further resources – such as additional CPU cores or RAM – without rebooting the server. Thus the performance of a running virtual server can be enhanced without making changes to the application. The best foundation for this is a LAMP stack (Linux, Apache, MySQL, PHP), since e.g. a MySQL database recognizes new resources without any adjustments or a reboot of the host system and is able to use the added performance immediately. To make this possible, ProfitBricks made modifications at the operating system and hypervisor (KVM) level that are transparent for the user. Customers only have to use the provided reference operating system image, which includes the Live Vertical Scaling functionality.
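The effect can be illustrated conceptually: a long-running process that periodically re-reads how many CPU cores the operating system reports. On an environment with live vertical scaling, the reported count could grow between polls without a restart. This is an illustration of the idea only, not ProfitBricks code; on a normal machine the count will simply stay constant:

```python
import os
import time

def watch_cpu_count(poll_seconds, iterations):
    """Poll the CPU core count the OS currently reports. With
    hot-plugged vCPUs (live vertical scaling), successive readings
    could increase while the process keeps running."""
    seen = []
    for _ in range(iterations):
        seen.append(os.cpu_count())
        time.sleep(poll_seconds)
    return seen

counts = watch_cpu_count(poll_seconds=0.1, iterations=3)
print(counts)
```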

An exemplary bottom-up use case of enterprise applications in the public cloud can be found at Amazon Web Services. This example also shows that the importance of system integrators in the cloud is rising.

  • Amazon Web Services & Kempinski Hotels
    The hotel chain Kempinski Hotels has migrated the majority of its core applications and departments – among others finance, accounting and training – to the Amazon cloud infrastructure. Together with the system integrator Cloudreach, a VPN connection was established between the company's own data center in Geneva and the Amazon cloud, through which its 81 hotels worldwide are now served. Furthermore, Kempinski plans to completely shut down its own data center and move 100 percent of its IT infrastructure to the public cloud.

Top-Down: Greenfield Approach

In contrast to simply preserving enterprise applications, the greenfield approach follows the top-down strategy. Here, an application or a business model is developed from scratch and the system is matched to the requirements and characteristics of the cloud. In this context we are talking about a cloud native application, which considers scalability and high availability from the beginning. The application is able to independently start additional virtual machines if more performance is necessary (scalability) and to shut them down when they are no longer needed. The same applies when a virtual machine fails: the application independently ensures that another virtual machine is started as a substitute (high availability). Thus, the application is able to run on any virtual machine of a cloud infrastructure – not least because any machine can fail at any time and a substitute needs to be started. In addition, the data processed by the application is no longer stored at a single location but is distributed across the cloud.
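The self-healing behavior described above can be sketched as a small reconciliation loop: keep a target number of workers running and replace any that fail. Everything below is an illustrative toy model (the "virtual machines" are plain objects), not production code:

```python
class CloudNativeSupervisor:
    """Toy sketch of cloud native self-healing: maintain a target
    number of instances and start substitutes for failed ones."""

    def __init__(self, target_instances):
        self.target = target_instances
        self.instances = {f"vm-{i}": "healthy" for i in range(target_instances)}
        self._next_id = target_instances

    def mark_failed(self, instance_id):
        """Simulate a health check reporting a failed virtual machine."""
        self.instances[instance_id] = "failed"

    def reconcile(self):
        """Remove failed instances and start substitutes until the
        target count is reached again (high availability)."""
        for vm, state in list(self.instances.items()):
            if state == "failed":
                del self.instances[vm]
        while len(self.instances) < self.target:
            self.instances[f"vm-{self._next_id}"] = "healthy"
            self._next_id += 1

sup = CloudNativeSupervisor(target_instances=3)
sup.mark_failed("vm-1")
sup.reconcile()
print(len(sup.instances))  # prints 3 - the failed VM was replaced
```

Scaling out works the same way in this model: raising the target and reconciling starts additional instances, lowering it shuts them down.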

Startups and innovative companies are aware of this complexity, as the following use cases show.

  • Amazon Web Services & Netflix
    Netflix is one of the lighthouse projects on the Amazon cloud. The streaming provider uses every characteristic of the cloud, which is reflected in the high availability as well as the performance of the platform. As one of the pioneers on the Amazon infrastructure, Netflix developed its own tools – the Netflix Simian Army – from the very beginning to master this complexity.

However, the recent past has shown that innovative business models do not necessarily have to be implemented on a public cloud and that the greenfield approach does not only belong to startups.

  • T-Systems & Runtastic
    Runtastic is a provider of apps for endurance, strength & toning, health & wellness and fitness that helps users reach their health and fitness goals. The company is growing massively. After 100,000 downloads in 2010 and 50 million downloads in 2013, the number of downloads has reached over 110 million to date. Furthermore, Runtastic counts 50 million users worldwide. Basically, these numbers speak for an ideal public cloud scenario. However, for technical reasons Runtastic decided on T-Systems and runs its infrastructure in two data centers in a colocation IaaS hybrid model.
  • Claranet & Leica
    Last year, camera manufacturer Leica launched "Leica Fotopark", an online photo service for managing, editing, printing and sharing photos. Managed cloud provider Claranet is responsible for the development and operation of the infrastructure. "Leica Fotopark" runs on a scale-out environment based on a converged infrastructure and software defined storage. The agile operations model is based on the DevOps concept.

Greenfield vs. Enterprise Applications: The Bottom-Line

Whether a company decides on the top-down or the bottom-up cloud strategy depends on its individual situation and current state of knowledge. The fact is that both variants help the IT infrastructure, the IT organization and the whole company become more agile and scalable. However, only a top-down approach leads to innovation and new business models. Nevertheless, one has to consider that, for example, the development and operation of the Netflix platform requires an excellent understanding of cloud architectures, which is still few and far between in the current market.

Regardless of their strategy, and especially with regard to the Internet of Things (IoT) and the essential Digital Infrastructure Fabric (DIF), companies should focus on a cloud infrastructure. These offer ideal preconditions for operating IoT backends as well as for the exchange with sensors, embedded systems and mobile applications. In addition, a few providers offer ready-made microservices that simplify development and accelerate time to market. Furthermore, a worldwide spanning network of data centers offers global scalability and helps companies expand quickly into new countries.

Categories
Cloud Computing @de

Analyst Strategy Paper: Datenkontrolle und globale Skalierbarkeit mit Hybrid- und Multi-Cloud-Architekturen sicherstellen

With the ongoing digitalization of business models and processes, business leaders and IT decision makers need to engage with new sourcing and infrastructure concepts. The public cloud plays a central role here. In the digital age the importance of data is steadily increasing. Due to compliance policies, legal requirements, technical limitations and individual security classifications, however, data has a certain inertia and must be assigned to different data classes.

This so-called "data gravity" makes data mobile to varying degrees. To allow this hard-to-move data to be processed outside the company's own IT infrastructure without losing control over it, new storage concepts are required. Hybrid and multi-cloud storage architectures provide implementation strategies and robust application scenarios for this. Within these architecture concepts the data remains in an area controlled by the company, and the data owner alone determines which parts are stored in the public cloud.

Accordingly, all the advantages of public cloud infrastructure can be used without losing control over one's own data, thereby meeting the required compliance policies and legal requirements.

Against this background, Crisp Research, on behalf of NetApp Deutschland GmbH, has created a strategy paper that addresses the challenges of data control and global scalability in the context of hybrid and multi-cloud architectures and presents solution scenarios.

The analyst strategy paper can be downloaded for free at "Datenkontrolle und globale Skalierbarkeit mit Hybrid- und Multi-Cloud-Architekturen sicherstellen".