
Contribution to the book “SAP on the Cloud”

At the end of 2014, Dr. Michael Missbach got in touch with me to tell me about his plans to update his book "SAP on the Cloud". The book gives a general overview of SAP's strategy and products around cloud computing and also covers (internal) cloud projects and ideas at SAP. One of his interests was the SAP Monsoon project, which was established and driven forward by Jens Fuchs. I cover the project as part of my OpenStack and open source cloud research at Crisp.

… and that is how I came to contribute the "SAP Monsoon" chapter to the book "SAP on the Cloud".

The book can be ordered at http://www.springer.com/br/book/9783642436048.


Analyst Strategy Paper: Generation Cloud – The Market for MSPs and System Integrators in Transition

The market for system integrators and managed services providers is undergoing a substantial change. Only those who start their cloud transformation as early as possible will be able to survive in the market in the long run. The reason for this development is the changing purchasing behavior of IT decision makers, who are looking for more flexibility in the use of IT resources. System integrators and managed service providers are thus faced with a fundamental change to their core business and need to bring the skill set of their employees to cloud-ready status as quickly as possible. In this context, public cloud infrastructures offer ideal conditions, in terms of price-performance ratio, for running customer systems and applications in a managed services model. This enables a faster response to changing customer requirements and varying market situations. System integrators and managed services providers can benefit from the high availability, scalability and high security standards of public cloud infrastructures. As a consequence, they can free themselves from their capital-intensive business (shifting from a CAPEX to an OPEX model) and design their pricing and marketing models more flexibly.

In the strategy paper “Generation Cloud – The Market for MSPs and System Integrators in Transition”, Crisp Research analyses the new role of MSPs and System Integrators in the age of the cloud.

The strategy paper can be downloaded free of charge at "Generation Cloud – The Market for MSPs and System Integrators in Transition".


The Big Misunderstanding: Shared Responsibility in the Public Cloud

Responsibility in the public cloud is a story of several misunderstandings. Advisory sessions and conversations with companies interested in the public cloud reveal that the classical outsourcing mindset is still widespread among IT decision makers. Public cloud providers are seen as full service providers. That complicates negotiations at eye level and blocks the quick adoption of public cloud services. "Shared responsibility" is the key concept that needs to be internalized. This research note clears up the wrong assumptions and describes the concept.

Self-Responsibility: The Big Misunderstanding

In the past 10 years, cloud computing was often defined, for the sake of convenience, as "Outsourcing 2.0". What was meant to create a better understanding on the user side, however, did public cloud providers a disservice. With this understanding in mind – an external service provider takes over responsibility for (in some cases all of) IT operations – IT decision makers developed the expectation that public cloud providers are full service providers and that the IT department just coordinates and controls the external service provider.

What is true for a software-as-a-service (SaaS) provider as a vendor of low-hanging fruit is completely different at the platform-as-a-service (PaaS) and in particular the infrastructure-as-a-service (IaaS) level. SaaS providers deliver fully developed, ready-to-use applications. The complexity, for example with solutions from Salesforce and SAP, comes with the configuration, customization and, if necessary, the integration with other SaaS providers. So, the SaaS provider is responsible for the deployment and the entire operations of the software and the necessary infrastructure/platform; the customer consumes the application. PaaS providers deploy environments for the development and operations of applications. Via APIs, the customer gets access to the platform and can develop and operate his own applications and provide those to his own customers. Thus, the provider is responsible for the deployment and the operations of the infrastructure and the platform. The customer is 100 percent responsible for his application but doesn't have any influence on the platform or the infrastructure. IaaS providers only take responsibility at the infrastructure level. Everything that happens at higher levels is in the customer's area of responsibility.

Thus, it is wrong to see public cloud providers such as Amazon Web Services, Microsoft Azure or VMware (vCloud Air) as full service providers who take full responsibility for the entire stack – from the infrastructure up to the application level. Self-responsibility is required instead!

Shared Responsibility: This is how IaaS Management works in the Public Cloud

A decisive public cloud detail that clearly distinguishes this deployment model from outsourcing is self-service. Depending on their DNA, providers only take responsibility for specific areas. The customer is responsible for the rest.

In the public cloud, it is furthermore about sharing responsibilities – referred to as shared responsibility. The provider and its customer divide the field of duties among themselves, and the customer's self-responsibility plays a major role. In the context of IaaS usage, the provider is responsible for the operations and security of the physical environment. He takes care of:

  • Setup and maintenance of the entire data center infrastructure.
  • Deployment of compute power, storage and networking, as well as managed services (like databases) and other microservices.
  • Provisioning of the virtualization layer customers use to request virtual resources at any time.
  • Deployment of services and tools customers can use to manage their areas of responsibility.

The customer is responsible for the operations and security of the logical environment. This includes:

  • Setup of the virtual infrastructure.
  • Installation of operating systems.
  • Configuration of networks and firewall settings.
  • Operations of one's own applications and self-developed (micro)services.

A very important part is security. The customer is 100 percent responsible for securing his own environment. This includes:

  • Security of operating systems, applications and one's own services.
  • Encryption of data and data connections, as well as ensuring the integrity of systems through authentication mechanisms and identity and access controls at system and application level.

Thus, the customer is responsible for the operations and security of his own infrastructure environment and of the systems, applications, services and stored data on top of it. However, providers like Amazon Web Services, Microsoft Azure or VMware vCloud Air provide comprehensive tools and services customers can use, e.g. to encrypt their data and to enforce identity and access controls. In addition, enablement services (microservices) exist that customers can adopt to develop their own applications more quickly and easily.
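
To make the division of duties concrete, here is a minimal sketch (Python with AWS's boto3 SDK; the bucket name and object key are hypothetical): the provider supplies the encryption machinery, but requesting it for each object remains the customer's job.

    # Minimal sketch of customer-side duties under shared responsibility.
    # Assumes configured AWS credentials; bucket/key names are hypothetical.
    import boto3

    s3 = boto3.client("s3")

    # Encrypting data at rest: the provider offers server-side encryption,
    # but the customer has to request it for his objects.
    s3.put_object(
        Bucket="example-customer-bucket",
        Key="reports/q2.csv",
        Body=b"confidential content",
        ServerSideEncryption="AES256",
    )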

Within his area of responsibility, the customer is thus on his own and has to act self-responsibly. However, constantly growing partner networks are helping customers to set up virtual infrastructures in a secure way and to run applications and workloads on top of public clouds.

@CIO: Public Cloud Means Breaking with Antiquated Traditions

In addition to requiring an understanding of the shared responsibility concept, using public cloud infrastructure also makes it imperative to rethink the infrastructure design as well as the architecture of the corresponding applications and services.

On the way to public cloud infrastructure, self-service initially looks simple. However, the devil is in the details and hides in complexity that is not obvious at first. That is why CIOs should focus on the following topics from the start:

  1. Understand the respective provider portfolio and the characteristics of the platform/infrastructure. This sounds easy. However, public cloud infrastructure environments are developing at enormous speed. It is therefore necessary to know the range of functions and the availability of all services on the infrastructure platform, and to train employees on a rolling basis to exploit the full potential.
  2. Focus on a greenfield approach including a microservice architecture. Public cloud infrastructures follow completely different architecture and design concepts than those taught and implemented just a few years ago. Instead of developing monolithic applications, cloud infrastructure builds on so-called microservice architectures, in which independent, loosely coupled and individually scalable services are integrated to form the entire application. This ensures better scalability and leads to a higher availability of the entire application.
  3. Consider "design for failure". "Everything fails, all the time" (Werner Vogels, CTO Amazon.com). The design of a cloud application has to follow the rules and characteristics of cloud computing and take high availability into account. One has to avoid any single point of failure and assume that something can go wrong at any time. The goal is thus an application that keeps working even if the provider's underlying physical infrastructure starts having issues; the providers offer the necessary tools and services for this (a minimal sketch of the pattern follows this list).
  4. Use existing best practices and operational excellence guidelines for the virtual environment. Leading cloud users like Netflix impressively show how to handle self-responsibility, or rather shared responsibility, in the public cloud. Netflix has developed its "Simian Army" for this, a huge set of tools and services it uses to ensure highly available operations of the virtual Netflix infrastructure on top of the Amazon Web Services cloud. Zalando is taking similar steps with its own STUPS.io framework.
  5. Consider managed public cloud providers. The complexity of the public cloud shouldn't be underestimated. This applies to setting up the necessary virtual infrastructure, the development of applications as well as the operations and the holistic implementation of all security mechanisms. More and more system integrators like Direkt Gruppe, TecRacer or Beck et al. Services specialize in the operations of public clouds. In addition, former web hosting providers and MSPs like Rackspace (whose Fanatical Support is now available for Microsoft Azure) are transforming into managed public cloud providers. And many more will follow!
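
The sketch referenced in point 3, in Python: a generic retry wrapper with exponential backoff, one of the simplest "design for failure" building blocks. The fetch_prices() function is a hypothetical stand-in for any network call to a cloud service.

    # "Design for failure": never assume a call to the infrastructure
    # succeeds; retry transient failures with exponential backoff.
    import random
    import time

    def call_with_backoff(func, max_attempts=5):
        for attempt in range(max_attempts):
            try:
                return func()
            except ConnectionError:
                if attempt == max_attempts - 1:
                    raise  # give up after the last attempt
                # backoff grows exponentially; jitter avoids retry storms
                time.sleep(2 ** attempt + random.random())

    def fetch_prices():
        # hypothetical stand-in for a flaky network call
        if random.random() < 0.5:
            raise ConnectionError("transient infrastructure failure")
        return {"EUR/USD": 1.09}

    print(call_with_backoff(fetch_prices))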

The growing number of cloud migration projects at large medium-sized companies and enterprises indicates that public cloud infrastructure platforms are becoming the new norm, while old architecture, design and security concepts are being replaced. After being ignored for several years, this deployment model is now also making its way onto the digital infrastructure agenda of IT decision makers. However, only CIOs with a changed mindset who take the shared responsibility concept for granted will successfully make use of the public cloud.


Shared-Responsibility in the Public Cloud (Webcast Recording in German)

Public cloud and responsibility is a difficult topic. Most companies still have to understand that self-responsibility is a central point of using the public cloud. The provider does his homework up to a certain level and delivers services and tools the customer has to use to do his own homework. The complexity mostly lies in the architecture of the infrastructure and of the applications running on it. A customer cannot simply drop the full responsibility into the hands of the provider.

"Shared responsibility" is key in the public cloud. How this works, and how companies have to deal with it in terms of operations and security, is what I discussed with Amazon Web Services Germany country manager Martin Geier and Zalando STUPS hacker Henning Jacobs during the Computerwoche live webcast "Security First in the Public Cloud".

The recording of the webcast is now online and can be watched for free (in German) at "Webcast mit Zalando und AWS: Security First in der Public Cloud".


Picture source & credits: Amazon Web Services Germany


Cloud Market Update Germany 2015: Cloudy with a Chance of Digital Enterprises

The hard numbers speak for themselves. Those who still assume that the cloud is just a playground for developers must learn a lesson from the $1.8bn revenue ($391m net profit) of Amazon Web Services. Such numbers cannot be achieved with a couple of small workloads; they are a clear indication of increased relevance for corporate customers. The strengths AWS has proven on the international stage are increasingly taking hold in Germany and having a positive impact on the German cloud market.

"First they ignore you, then they laugh at you, then they fight you, then you win." The evergreen quote attributed to Mahatma Gandhi is also relevant to the cloud story. In the beginning, the cloud was laughingly dismissed as "the hype"; then came the fight, with partially erroneous arguments, among them the ever-recurring, media-covered topics of data protection and data security, which are wrongly used interchangeably. By the end of 2014, however, the cloud had spread rapidly in Germany, and this shift continues in 2015.

An analogy can be drawn between the cloud's growth and the triumphant spread of WLAN in the enterprise, and it provides evidence as to why the cloud is finally prevailing. Earlier, CIOs were ridiculed for daring to introduce WLAN infrastructure and open their businesses to the outside world. Psychology and technological developments have since made it possible to dismiss the claim that WLAN is insecure.

The Cloud is the foundation for Digital Transformation

For close to 75 percent of German enterprises, cloud computing occupies a significant place on the IT agenda. The cloud is either an active component of ongoing IT operations, deployed as part of projects and workloads, or is being used for planning and implementation purposes. We see a clear trend toward hybrid cloud solutions. More than half of the survey respondents (57 percent) rely on this operational cloud model. Moreover, in the context of multi-cloud strategies, managed private cloud environments (57 percent) are gaining importance.

The hybrid cloud stays in focus, especially because enterprises are intensely occupied with the topic of "data gravity", which comes down to data immobility: either the data volume is too large to store in the cloud, or legal frameworks require keeping specific data on premise. A hybrid cloud architecture is well positioned to address the challenges of data gravity. In this case, data mobility is no longer necessary. Instead, the data are kept on private storage systems (for example, in a managed private cloud) and public cloud services (compute power, applications, etc.) access the data for processing. In the course of processing, the data may well be modified and new data may be generated; however, the data never leave the storage system in the direction of the public cloud.

Only around 25 percent of German decision makers find no place for the cloud on their IT agenda. It remains to be seen how long this stays the case. The pressure from business departments for more flexibility and a shorter time-to-market for new applications is growing ever stronger. In the end, it boils down to CIOs not being able to move forward without using cloud services in one form or another. And who wants to be reproached for negligently squandering the digital transformation opportunity?

Among those businesses that actively dedicate themselves to the digital transformation, and accordingly build a Digital Infrastructure Fabric (DIF), cloud platforms and infrastructure are a central part of the IT strategy supporting the digital efforts. This is indicated by the presence of infrastructure-as-a-service (63 percent) and platform-as-a-service (70 percent) on CIOs' agendas, and by their use as a foundation for the development and operations of new applications.

Furthermore, the Internet of Things (IoT) will spur the German cloud market forward and quickly become a decisive factor for the future competitive advantage of businesses, which must start grappling with the necessary technologies soon. Public cloud environments – especially infrastructure (IaaS) and platforms (PaaS) – offer the ideal prerequisites for the backend support of IoT services and end devices. The critical attributes for this were placed in the cradles of the leading public cloud providers, to be further developed into IoT backends. One can therefore rightfully hold the view that cloud growth in Germany and the advancement of the Internet of Things are closely interrelated.

Provider Overview

An overview of the behavior of the most important public cloud providers is presented in the current research note: „Public Cloud Providers in Germany: Frankfurt is the Stronghold of Cloud Data Centers“.

Halfway through 2015: Six Cloud Trends in Germany Are Already Confirmed

At the beginning of the year, Crisp Research outlined ten trends in the cloud computing market in Germany. Six of them are already a reality.

  • The Public Cloud has arrived in Germany
    The public cloud model is gaining massive ground in Germany. At the AWS Summit in Berlin, strong and comprehensive projects from companies like Audi, Zalando and Zanox spoke the same distinct language. The public cloud enjoys towering popularity as IT users employ it to bring agility and flexibility into application management. Moreover, by this time almost all relevant public cloud providers boast data centers on German soil.
  • Consultants and System Integrators benefit from the cloud boom
    The complexity associated with public cloud infrastructure as well as the missing cloud expertise and development skills in German enterprises bring cloud system integrators into the game. This has been confirmed through conversations with decision makers from companies like Direkt Gruppe, TecRacer or Beck et al. The roughly EUR 2.9bn market is divided among the system integrators and consultants who are truly committed to the cloud and make a significant contribution to the digital transformation in Germany.
  • Multi Cloud is now a reality
    The use of cloud deployments spanning several providers has experienced strong growth. As part of their cloud strategies, companies now consider evaluating and operating on at least two cloud providers. One reason for this is risk management. Another, main reason is that not every provider is capable of handling all applications – e.g. bare metal or legacy IT. At present, customers administer workloads individually on each cloud provider's platform. In the mid-term, management will shift to more centralized platforms that simplify deployment.
  • Frankfurt ensures cloud connectivity and performance
    Cloud connectivity is of paramount importance for both users and providers, as technical challenges (low latency, high availability and throughput) must be tackled to ensure high-performing, stable and secure operations of applications and services. Frankfurt is the core of the German and European cloud markets. A look at the currently leading public cloud providers in the German market reveals that half of them have already selected Frankfurt as a data center location, affirming the relevance of cloud connectivity and performance. Amazon Web Services, ProfitBricks, SoftLayer and VMware are already present in Frankfurt, to be followed by Salesforce in August.
  • The Internet of Things drives Mobile Backend Development
    In the course of digitalization, the development of mobile apps in the context of the Internet of Things is mounting. The data gathered from end devices is transferred to backend infrastructure, where backend applications analyze it and link it with other data to prepare it for visualization on a front end. Mobile backends are becoming IoT backends and an important part of the IoT value chain. What is now known in the market as fitness wearables, which many people use to self-measure certain indicators, will spread to other industries. A lot of movement can be observed in the smart home space. Providers like Tado (intelligent heating and air conditioning management) or Netatmo (weather stations) use backend applications based on cloud infrastructure that take care of integrating end devices and controlling them securely.
  • More Services – Pricing rises
    Price reductions belong to the past; now the price barometer points in the other direction. Microsoft, for one, will increase prices for Azure in the European region from August 1st. The focus is shifting ever more toward innovation. Amazon AWS released 220 new services and functions this year, Microsoft Azure about 110. ProfitBricks has also recognized that pure infrastructure has become a commodity and offers very little potential for innovation. Instead, customers require enablement on top of the cloud infrastructure, on which to build their own products and services. ProfitBricks presented DevOps Central, a portal to inspire developers to use its infrastructure environment. Furthermore, proprietary SDKs for Java and Go were introduced to help developers manage ProfitBricks infrastructure components via programmable REST APIs.

Marketing Gimmick: "Cloud Made in Germany"

What is "the German cloud" doing? Nothing! The marketing around "the German cloud" really needs to stop. German customers have never asked for a German cloud. The name is the creative idea of a couple of German marketing managers.

Instead of applauding "the German cloud", German providers should rather use their strengths to develop and bring to market innovative and appealing cloud services. The market leadership and the clear innovation advantage of the US providers over the German ones present a grave problem. In the end, the relevance and competitiveness of German cloud services keep shrinking, and many enterprises are snapping up the offerings of the US cloud providers.

The fact is that a large proportion of German CIOs ask for German data centers in order to guarantee the required level of data protection and fulfil the legal requirements. At the same time, enterprises should not neglect the significance of global scalability for expanding into other geographies. Having a provider with a global footprint is imperative. This is one reason why IT and cloud strategies must be aligned from the beginning.

Bottom Line: Digitalization drives Cloud Computing

German businesses find themselves in the middle of their cloud transformation processes. Step by step, they lay out a multi-cloud strategy based on existing infrastructure, platforms and services from different providers. In the course of this, CIOs assess technologies and providers for the creation of their Digital Infrastructure Fabric (DIF), which will drive the technological implementation of their individual digital strategies and lay the foundation for novel business models and agile business processes.

The cloud has acquired leading status as a vehicle for the digital transformation. Only by means of deploying dynamic and globally scalable platforms and infrastructure can the IT strategy adequately address the evolving market conditions and support the business strategy in an agile way from a technical perspective.


Public Cloud Providers in Germany: Frankfurt is the Stronghold of Cloud Data Centers

In the beginning, the US public cloud players didn't care much about Germany. In the meantime, however, more and more providers are bustling about the German market. Above all, Frankfurt has emerged as the Mecca of cloud data centers.

The top 5 reasons for Germany's attractiveness are:

  • a stable political situation.
  • a central location in Europe.
  • strict data privacy laws.
  • a geographically stable situation.
  • strong economic performance (the fourth-largest economy in the world and number one in Europe).

Frankfurt is the Cloud Hub in Germany

Data centers are experiencing their heyday. These "logistics centers of the future" are coming to the fore as never before and provide the backbone of the digital transformation – with good reason. Over the course of the last decade, more and more data and applications have been moved to the IT infrastructure of globally distributed data centers. The significance of data centers and IT infrastructure as a logistic vehicle for data is no accident. German companies have recognized this, too: more than two-thirds (68 percent) see data centers as the most important building block of their digital transformation.

Over the past 20 years, a cluster of infrastructure providers has been established in Frankfurt that helps companies of the digital economy position their products and services in the market. These providers have shaped Frankfurt and its economy and deliver integration services for IT and networks as well as data center services. More and more public cloud providers have recognized that they have to be on site in the local markets and thus near their customers – despite the inherently global character of a public cloud infrastructure. This is an important insight. No provider that wants to do serious business in Germany can do without a local data center location.

The research paper "The significance of Frankfurt as a location for Cloud Connectivity" deals with the question of why Frankfurt is the preferred location for cloud data centers.

Public Cloud Providers and their Data Centers in Germany

A look at the most important public cloud providers for the German and European market shows that half of them have already decided on Frankfurt as their preferred data center location. That a German data center pays off is proven by Amazon Web Services (AWS). According to Germany country manager Martin Geier, the cloud region in Frankfurt (consisting of two data centers), which AWS opened in October 2014, is the fastest-growing international AWS region ever. In addition, the AWS region has helped to accelerate the German cloud market. On the one hand, AWS customers welcome the opportunity to physically store their data in their own country. On the other hand, AWS's efforts also help local providers like ProfitBricks, who indirectly profit by constantly gaining new customers.

  • Despite being a vigorous follower of Amazon Web Services, Microsoft has not yet managed to open a datacenter in Germany. This is hard to explain, considering that Microsoft has had its own legal entity in the German market for a long time and should be well familiar with the concerns and requirements of German enterprise customers. Microsoft's efforts in the cloud are considerable and display a clear trajectory. Still, only by establishing a local datacenter would Microsoft finally convince its German customers and meet their requirements. Rumors are spreading that Microsoft will open one in Q2 2016. One possible strategy could be to partner with a large local player, similar to the joint efforts of Salesforce and T-Systems. At the moment, Microsoft relies on technology partnerships (Cloud OS Partner Network) with local managed service providers, who build their own Azure-based cloud environments based on the Azure Pack.
  • At CeBIT, VMware officially announced the General Availability (GA) of its German datacenter. The outlook for the technology provider is generally good. On the one hand, a large part of on-premise infrastructure is already virtualized with VMware technologies. On the other hand, businesses are searching for ways to migrate their existing workloads (applications, systems) to the cloud without facing too much hassle and making many modifications. However, even though VMware's own public cloud offering focuses on standardized workloads, it still competes directly with a range of its partners (cloud hosting providers, managed service providers) who have built their offerings on VMware technologies.
  • The American provider Digital Ocean has had its own datacenter in Germany since April 2015. Digital Ocean is a niche provider to keep a close watch on. It targets mostly developers and not so much enterprise customers. However, if Digital Ocean seriously wants to step into the ring against Amazon Web Services, the company must offer its customers more than plain SSD cloud servers and a couple of applications.
  • Rackspace is not yet represented by a local datacenter in the German market; yet its business is expanding into the DACH market, where Germany is of strategic importance. A local datacenter would certainly underline this commitment. Rackspace could hold winning cards as a managed cloud service provider, because the majority of German businesses are already engaging with managed cloud services.
  • On account of its partnership with T-Systems, Salesforce will likely establish a presence in Frankfurt. The datacenter's opening is announced for August 2015.
  • At present, one should not expect Google to open a datacenter in Germany. This reflects Google's attitude of determining the rules of the game and concentrating on itself rather than on the needs, challenges and concerns of its customers. This is widely reflected in the Google Cloud Platform: the requirements of corporate customers have to date not been addressed by Google.

Moreover, it is worth noting that the lower costs and the innovation capabilities of public cloud providers put continuously rising pressure on managed service providers with their own datacenters. These providers are now forced to change their strategies and become managed service providers who operate infrastructure on public cloud environments. This means that the providers manage their customers' applications and systems on cloud infrastructure such as that of Amazon's AWS. Only specific, mission-critical workloads remain in the providers' own datacenters.

Frankfurt leads in the density of datacenters and interconnected Internet exchange points, not only in Germany but Europe-wide. The continuous movement of data and applications onto the infrastructure of external providers has made Frankfurt the citadel of cloud computing in Europe. Strategic investments by colocation providers such as Equinix and Interxion have underlined the significance of the location.

As most of the relevant public cloud providers have already found their place in Frankfurt, Crisp Research envisions an important trend for the next couple of years: an increasing number of international providers will build their public cloud platforms in Frankfurt and continue to expand and develop them there.


AWS Summit Berlin 2015: Germany embraces the Public Cloud

The spell is apparently broken. The public cloud model is massively gaining ground in Germany. Without using the public cloud, most German companies will struggle to play a significant role in the future market of the Internet of Things (IoT) and to kick off their digital transformation. However, it looks like the topic has arrived on some German executive floors. The AWS Summit Berlin 2015 was a good indicator of this.

It is October 7th, 2010. Amazon CTO Werner Vogels welcomes 150 developers at the "Kalkscheune" in Berlin. A family atmosphere, no partners, no booths, some snacks and drinks. It is the first AWS cloud computing event in Germany and effectively the very first AWS Summit in Germany. Speakers from Moviepilot, Cellular, Plinga and Schnee von Morgen talk about their experiences with Amazon Web Services.

On June 30th, 2015, almost five years later, Werner Vogels is again on stage, again in Berlin, this time at the "CityCube", in front of over 2,000 attendees – developers and decision makers. Big booths, conference food, 32 partners and 45 sessions distributed over 9 tracks. All this shows the enormous growth of the AWS Summit Berlin and reflects the interest of German IT users in the public cloud and Amazon Web Services (AWS).

Germany has become one of the growth engines for AWS. According to Germany country manager Martin Geier, the cloud region in Frankfurt (consisting of two data centers), which AWS opened in October 2014, is the fastest-growing international AWS region ever!

Innovation: 1,170 New Functions and Services in 7 Years

The growth in the German market is symbolic of AWS's global growth. According to AWS, over 1 million active customers of different sizes and from various industries are already using the public cloud infrastructure. This includes 3,600 customers from the education sector and over 11,200 non-profit organizations. In order to expand the customer base in the German startup scene, a partnership with Rocket Internet was established: Rocket has committed to recommending that all its prospective startups run their infrastructure and applications on AWS, and existing startups are advised to think about moving to AWS. AWS's next top target customer group is the public sector (schools and public authorities). For this purpose, the Summit hosted a public sector track for the first time. This is an important strategic step for AWS: if AWS gains a foothold in one German government authority, this would set a precedent that could encourage other public customers to follow.

The growth on the customer side can also be seen at the technical level. The incoming and outgoing data transfer of Amazon S3 increased by 102 percent over the last year. The usage of Amazon EC2 instances increased by 93 percent.

In addition, AWS operates 11 cloud regions consisting of 30 Availability Zones (AZs). A region consists of at least two AZs; an AZ consists of one or several data centers. Furthermore, 53 edge locations exist to deliver data to customers in local markets more quickly. The 12th cloud region opens in India in 2016.
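
The region and AZ layout is directly visible through the AWS API. A minimal sketch (Python with the boto3 SDK, assuming configured AWS credentials) that enumerates the regions and the Availability Zones of the Frankfurt region:

    # Sketch: list AWS regions and the AZs of the Frankfurt region (boto3).
    import boto3

    ec2 = boto3.client("ec2", region_name="eu-central-1")  # Frankfurt

    for region in ec2.describe_regions()["Regions"]:
        print(region["RegionName"])

    for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
        print(zone["ZoneName"], zone["State"])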

Besides the advantage of being the first infrastructure-as-a-service (IaaS) provider on the market, two factors in particular explain the enormous head start: the service portfolio and the speed of innovation.

  1. Instead of just providing pure infrastructure, AWS has a huge portfolio of microservices that helps customers use the infrastructure gainfully by developing web applications and backend solutions on top of it. At the same time, the infrastructure platform serves as a technical enabler for new business models.
  2. AWS is more innovative than any other cloud provider. In the last seven years, 1,170 new functions and services have been released, 516 of them in 2014 alone. AWS has more functionality than any other infrastructure provider.

AWS stands for the public cloud. The provider, and CTO Werner Vogels in particular, again made this very clear. The private cloud has no place in the world of AWS, and the hybrid cloud is considered merely a stage of the journey, not the ultimate destination. Accordingly, customers are only provided with elementary solutions and services (Amazon VPC, AWS Direct Connect) for hybrid cloud scenarios, and this is likely to remain the case. Customers who have embraced this include Netflix, Kempinski Hotels, GPT, University of Notre Dame, Emdeon, Intuit, Infor, Splunk, Tibco and Informatica. These companies are "all-in" with the public cloud and AWS.

Corporate Customers discover the Public Cloud

Companies of all sizes are able to benefit from using the public cloud. Startups have the advantage of starting with a greenfield approach and don't drag any legacy along. They can grow gradually without making heavy investments in IT resources at the very beginning. Established companies need one particular thing in this day and age: speed, in order to keep pace with fast-changing market conditions. An idea is just the beginning; a fast go-to-market mostly fails at the technical execution, for lack of IT resources like modern tools and services that significantly help with development.

AWS counts the who's who of the startup scene among its customers. Now it is about navigating more corporate customers to the infrastructure. Unilever, Qantas, Dole, Netflix, Novartis and Nasdaq are already big international customers on the list. In Germany, after Talanx and Kärcher, a heavyweight was finally presented at the AWS Summit 2015: Audi.

Audi

Audi decided on AWS in the context of its new mobility program "Audi on Demand" to provide customers with individual services. The reasons: requirements for frictionless 24/7 operations as well as global scaling capabilities, in order to reach global customers quickly and store data in the local markets. Audi is using several AWS services and functions such as Virtual Private Cloud, a multi-availability-zone concept and Amazon EC2. One decisive detail: Audi has transformed its organizational structure from a hierarchical model to a fully meshed model and built everything around IT.

Zalando

Zalando said goodbye to its own data center and moved the IT infrastructure of its ecommerce shops into the AWS cloud. This was done for strategic reasons, to promote the innovation and creativity of the company. Zalando empowers its employees to act autonomously and provision the necessary IT infrastructure faster than before; this way, Zalando can deliver new functions to its customers more quickly. Zalando's example also shows how important a cultural change is if innovation is to be promoted with the support of the cloud. The company therefore orients itself by something it calls "Radical Agility". This is about, for example, the organization and architecture required to give teams more freedom to learn, and in this context to allow them to make mistakes. In the end, this is the only way to understand how to develop massively complex applications.

Zanox

In a personal briefing, Sascha Möllering, lead engineer at Zanox AG, talked about his experiences with the AWS cloud. Möllering is largely responsible for building the virtual IT infrastructure and developing the backend service used by the Zanox affiliate marketing network. Zanox operates its own data center infrastructure in Berlin to serve the European market. However, the number of customers in Brazil is constantly increasing, and Zanox depends on global scale to provide the customers in this market with good performance. The challenge is latency: data has to be delivered as fast as possible. This is the main reason Zanox decided on AWS. The cloud region "Sao Paulo" offers Zanox three availability zones and four edge locations in Brazil. Möllering developed a native AWS application but focused on core services only, trying to avoid an AWS service lock-in, because Zanox doesn't want to give up its own data center infrastructure in Berlin and connects to AWS in a hybrid model. In addition, Möllering plans for the case of an AWS failure and has considered a multi-region and even a multi-cloud scenario. He therefore developed his own module that implements the APIs of Amazon Kinesis, Microsoft Azure Service Bus as well as Apache Kafka, to make sure the incoming data stream – and thus data – is never lost.
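
Zanox's module itself is not public, but the general pattern is straightforward: hide the provider-specific streaming APIs behind one interface and fail over between them. A hypothetical Python sketch of that pattern (class and event names are illustrative, and the provider calls are stubbed out):

    # Illustrative failover pattern across streaming backends. In a real
    # implementation the stubs would call Amazon Kinesis (put_record), an
    # Apache Kafka producer, or Azure Service Bus respectively.
    from abc import ABC, abstractmethod

    class EventSink(ABC):
        @abstractmethod
        def publish(self, event: bytes) -> None: ...

    class KinesisSink(EventSink):
        def publish(self, event: bytes) -> None:
            pass  # stub: would call the Amazon Kinesis API

    class KafkaSink(EventSink):
        def publish(self, event: bytes) -> None:
            pass  # stub: would call an Apache Kafka producer

    class FailoverProducer:
        """Try each sink in order; an event is lost only if all sinks fail."""
        def __init__(self, sinks):
            self.sinks = sinks

        def publish(self, event: bytes) -> None:
            for sink in self.sinks:
                try:
                    sink.publish(event)
                    return
                except Exception:
                    continue  # fall through to the next provider
            raise RuntimeError("all sinks failed; event not delivered")

    producer = FailoverProducer([KinesisSink(), KafkaSink()])
    producer.publish(b'{"click": "campaign-42"}')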

Public Cloud has arrived in Germany

The AWS Summit in Berlin was a good indicator that more and more German companies are discovering the public cloud. Conversations with customers, partners and system integrators show good progress. Audi, one of the leading German car manufacturers from one of the oldest industries, has shown that it has realized the potential and necessity of the public cloud.

The trust in the public cloud is growing, and time is on the side of the public cloud providers and working against German companies. Those who do not transform digitally will sooner or later disappear from the market. Those who want to keep pace with the digital transformation (e.g. Internet of Things, smart products, ecommerce etc.) cannot avoid using the public cloud, its services, modern development tools and its global scalability.

AWS is one of the public cloud providers that can help during this transformation process. However, AWS is complex! It is complex during the setup as well as during the operations and administration of the virtual infrastructure, and thus also with regard to the development of web applications and backend services. No discussion! The conversation with Zanox made this clear once again and confirmed several conversations with other parties interested in AWS. Moreover, the necessary cloud knowledge is still scarce, and it will take some time until it is broadly available.


Microservice: Cloud and IoT applications force the CIO to create novel IT architectures

The digital transformation challenges CIOs to remodel their existing IT architectures and provide their internal customers with a dynamic platform that enables better agility and fosters the company's innovation capacity. This change calls for a complete rethink of historically grown architecture concepts. Even if most current efforts aim to migrate existing enterprise applications into the cloud, CIOs have to empower their IT teams to consider novel development architectures, because modern applications and IoT services are innovative and cloud based.

Microservice: Background and Meaning

Typical application architectures are, metaphorically speaking, reminiscent of a "monolith", a big massive stone made of one piece. The characteristics of both are the same: heavy, inflexible, and difficult or impossible to modify.

Over the last decades, many mostly monolithic applications have been developed. This means that an application includes all the modules, libraries and dependencies that are necessary to ensure smooth functionality. This architecture concept carries a significant drawback: if only a small piece of the application needs to change, the whole application has to be compiled, tested and deployed again – including all parts of the application that didn't change at all. This comes at a big cost in manpower, time and IT resources and in most cases leads to delays. In addition, a monolith makes it difficult to ensure:

  • Scalability
  • Availability
  • Agility
  • Continuous Delivery

CIOs can meet these challenges by changing the application architecture from one big object to an architecture composed of small independent objects. All parts are integrated with each other to provide the overall functionality of the application, yet changing one part doesn't change the characteristics and functionality of the other parts. Each part runs as an independent process, or rather service. This concept is known as microservice architecture.

What is a Microservice?

A microservice is an encapsulated piece of functionality that is developed and operated independently. It is a small autonomous software component (service) that provides a sub-function within a big distributed software application. Thus, a microservice can be developed and delivered independently and scales autonomously.

Application architectures based on microservices are modular and can thus be extended with new functionality more easily and quickly, as well as better maintained during the application lifecycle.

Compared to traditional application architectures, modern cloud based architectures follow a microservice approach. This is because cloud native application architectures have to be adapted to the characteristics of the cloud, which means that issues like scalability and high availability have to be considered from the very beginning. The benefits of microservice architectures stem from the following characteristics:

  • Better scalability: A sub-service of an application can scale autonomously if its functionality experiences higher demand, without affecting the remaining parts of the application.
  • Higher availability of the entire application: A sub-service that experiences an error doesn't take down the entire application but only the functionality it represents. A failure of a backend service thus doesn't necessarily affect customer-facing functionality.
  • Better agility: Changes, improvements and extensions can be implemented independently of the overall application functionality without affecting other sub-services.
  • Continuous delivery: These changes, improvements and extensions can be rolled out on a regular basis without updating the whole application or entering a major maintenance window.

Another benefit of microservice architectures: a microservice can be used in more than one application. Developed once, it can serve its functionality in several application architectures.
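
To make the term tangible, here is a minimal sketch of a microservice in Python (standard library only; the service name, port and payload are illustrative): one small, independently deployable process that exposes a single sub-function over HTTP.

    # Minimal microservice sketch: one process, one responsibility.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PriceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # the service's single sub-function: return a price quote
            body = json.dumps({"sku": "example-123", "price_eur": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # the service runs, scales and is deployed on its own
        HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()

Several such services, each behind its own API, are then integrated to form the entire application.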

Which Providers work with Microservices?

Today, a number of providers have already understood the importance of microservice architectures. However, the big infrastructure players in particular have their difficulties with this transformation. Startups and cloud native companies show how it works:

  • Amazon Web Services
    From the very beginning, Amazon AWS has aligned its cloud infrastructure around providing microservices (building blocks). Examples: Amazon S3, Amazon SNS, Amazon ELB, Amazon Kinesis, Amazon DynamoDB, Amazon Redshift
  • Microsoft Azure
    From the very beginning, the cloud platform has consisted of microservices. The recently introduced Azure Service Fabric offers capabilities for developing one's own microservices. Examples: Stream Analytics, Batch, Logic App, Event Hubs, Machine Learning, DocumentDB
  • OpenStack in general
    The community extends the OpenStack portfolio with new microservices with each release, mainly for infrastructure operations. Examples: Object Storage, Identity Service, Image Service, Telemetry, Elastic Map Reduce, Cloud Messaging
  • IBM Bluemix
    IBM's PaaS Bluemix provides a number of microservices, offered directly by IBM or via external partners. Examples: Business Rules, MQ Light, Session Cache, Push, Cloudant NoSQL, Cognitive Insights
  • Heroku/Salesforce
    Heroku's PaaS offers "Elements", a marketplace of ready-made external services that can be integrated as microservices into one's own application. Examples: Redis, RabbitMQ, Sendgrid, Raygun.io, cine.io, StatusHub
  • Giant Swarm
    Giant Swarm offers developers an infrastructure for the development, deployment and operations of microservice based application architectures. For this purpose, Giant Swarm is using technologies like Docker and CoreOS.
  • cloudControl
    cloudControl’s PaaS offers “Add-ons”, a marketplace to extend self-developed applications with services from external partners. Examples: ElephantSQL, CloudAMQP, Loader.io, Searchify, Mailgun, Cloudinary

Based on their microservice portfolios, the providers offer a programmable modular construction kit of ready-made services that accelerate the development of an application. These are ready building blocks (see hybrid and multi-cloud architectures) whose functionality doesn't have to be developed again. Instead, they can be used directly as a "brick" within one's own source code.
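
As a concrete illustration of such a "brick", the following sketch (Python with AWS's boto3 SDK; the table name and item are hypothetical) persists data through the managed Amazon DynamoDB service without the application operating any database infrastructure of its own:

    # Sketch: consuming a provider building block directly from app code.
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("orders")  # hypothetical, pre-created table

    # persist an order; the database itself is the provider's concern
    table.put_item(Item={"order_id": "A-1001", "status": "paid"})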

Example of a Microservice Architecture

Netflix, the video on demand provider, is a cloud computing pioneer and one of the absolute role models for IT architects. Under the direction of Adrian Cockcroft (now Battery Ventures), Netflix developed its own powerful microservice architecture to operate its video platform in a highly scalable and highly available way. Services include:

  • Hystrix = Latency and fault tolerance
  • Simian Army = High-availability
  • Asgard = Application deployment
  • Exhibitor = Monitoring, backup and recovery
  • Ribbon = Inter process communication (RPC)
  • Eureka = Load balancing and failover
  • Zuul = Dynamic routing and monitoring
  • Archaius = Configuration management
  • Security_Monkey = Security and tracking services
  • Zeno = In-memory framework

Netflix bundles all of these microservices in its "Netflix OSS" suite, which can be downloaded as open source from GitHub.
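
The core idea behind Hystrix, for example, is the circuit breaker pattern. A simplified, illustrative Python sketch of that pattern (not the real Java implementation): after repeated failures the breaker "opens" and calls fail fast instead of piling up latency; after a cool-down it lets a trial call through.

    # Simplified circuit breaker, the pattern Hystrix popularized.
    import time

    class CircuitBreaker:
        def __init__(self, failure_threshold=3, reset_timeout=30.0):
            self.failure_threshold = failure_threshold
            self.reset_timeout = reset_timeout
            self.failures = 0
            self.opened_at = None

        def call(self, func, *args, **kwargs):
            if self.opened_at is not None:
                if time.monotonic() - self.opened_at < self.reset_timeout:
                    raise RuntimeError("circuit open: failing fast")
                self.opened_at = None  # half-open: allow one trial call
            try:
                result = func(*args, **kwargs)
            except Exception:
                self.failures += 1
                if self.failures >= self.failure_threshold:
                    self.opened_at = time.monotonic()  # trip the breaker
                raise
            self.failures = 0  # a success closes the circuit again
            return result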

An example from Germany is Autoscout24. The automotive portal faces the challenge of replacing its 2,000 servers, distributed over two data centers, and its currently used technology stack based on Microsoft, VMware and Oracle. The goal: a microservice architecture supported by a DevOps model to implement a continuous delivery approach. Autoscout24 thus wants to stop its monthly releases and instead deliver improvements and extensions on a rolling basis. Autoscout24 decided on the Amazon AWS cloud infrastructure and has already started the migration phase.

Microservice: The Challenges

Despite the benefits, microservice architectures come with several challenges. Besides the necessary cloud computing knowledge (concepts, technologies, etc.), these are:

  • A higher operational complexity since the services are very agile and movable.
  • An additional complexity because of the development of a massive distributed system. This includes latency, availability and fault tolerance.
  • Developers need operational knowledge (DevOps).
  • API management and integration play a major role.
  • A complete end-to-end test is mandatory.
  • Ensuring a holistic availability and consistency of the distributed data.
  • Avoiding high latency in the individual services.

The Bottom Line: What CIOs should consider

Today, standard web applications (42 percent) still represent the major part of workloads on public IaaS platforms. They are followed, at some distance, by mobile applications (22 percent), media streaming (17 percent) and analytics services (12 percent). Enterprise applications (4 percent) and Internet of Things (IoT) services (3 percent) still play a minor part. The reason for the current segmentation: websites, backend services and content streaming (music, videos, etc.) are perfect for the public cloud, while enterprises are still stuck in the middle of their digital transformation and are evaluating providers and technologies for a successful change. IoT projects are still at the beginning or in the idea generation stage. Thus, in 2015, IoT workloads make up only a small proportion of public cloud environments.

By 2020, this ratio will change significantly. Along with the increasing cloud knowledge within enterprise IT and the ever-expanding market maturity of public cloud environments for enterprise applications, the proportion of this category will increase worldwide from 4 percent to 12 percent. Accordingly, the proportion of web and mobile applications as well as content streaming will decrease. Worldwide, IoT workloads will instead represent almost a quarter (23 percent) of workloads on public IaaS platforms.

These influences challenge CIOs to rethink their technical agenda and to think about a strategy that enables their company to keep up with this market shift. They have to react to the end of the application lifecycle early enough by replacing old applications and looking toward modern application architectures. However, a competitive advantage only exists if things are done differently from the competition, not merely better (operational excellence). This means that CIOs have to contribute significantly by supporting new business models and developing new products, like an IT factory. The wave of new services and applications in the context of the Internet of Things (IoT) is just one opportunity.

Microservice Architecture: The Impact

Microservice architectures help IT departments respond faster to the requirements of business departments and thus ensure a faster time-to-market. For this, independent silos need to be torn down and a digital umbrella stretched over the entire organization. This includes the introduction of the DevOps model to develop microservices in small, distributed teams. Modern development and collaboration tools enable this approach for globally distributed teams, which helps to avoid shortages of skilled labor in certain countries by recruiting specialists from all over the world. A microservice team with the roles product manager, UX designer, developer, QA engineer and DB admin could thus be assembled across the world, accessing the cloud platform via pre-defined APIs, while another team composed of system, network and storage administrators operates the cloud platform.

Decision criteria for microservice architectures are:

  • Better scalability of autonomously acting services.
  • Faster response time to new technologies.
  • Each microservice is a single product.
  • The functionality of a single microservice can be used in several other applications.
  • Employment of several distributed teams.
  • Introduction of the continuous delivery model.
  • Faster onboarding of new developers and employees.
  • Microservices can be developed more easily and faster for a specific business purpose.
  • Integration complexity can be reduced since a single service contains less functionality and thus less complexity.
  • Errors can be isolated more easily.
  • Small things can be tested more easily.

However, the introduction of microservice architectures is not only a change on the technical agenda. Rethinking the enterprise culture and interdisciplinary communication is essential. This means that the existing IT and development teams also need to change, either through internal training or external recruiting.


Round 11: OpenStack Kilo

The OpenStack community hits round 11. Last week the newest OpenStack release, "Kilo", was announced – with remarkable numbers. Almost 1,500 developers and 169 organizations contributed source code, patches etc. Top supporting companies of OpenStack Kilo include Red Hat, HP, IBM, Mirantis, Rackspace, Yahoo!, NEC, Huawei and SUSE. OpenStack Kilo is characterized by better interoperability for external drivers and support for new technologies like containers as well as bare-metal concepts.

OpenStack Kilo: New Functions

According to the OpenStack Foundation, almost half of all OpenStack deployments (46 percent) are production environments. Network function virtualization (NFV), for running individual virtual network components, is the fastest-growing use case for OpenStack. One of the lighthouse projects is eBay, which operates OpenStack at large scale.

Essential new functions of OpenStack Kilo

  • OpenStack Kilo is the first release that fully supports the bare-metal service “Ironic” to run workloads directly on physical machines.
  • The OpenStack object storage service “Swift” supports “Erasure Coding (EC)” to fragment data and store it at distributed locations.
  • The “Keystone” identity service was enhanced with identity federation to support hybrid and multi-cloud scenarios.

New features of the OpenStack Core Projects (excerpts)

  • OpenStack Nova Compute
    Improvements to live updates when a database schema is changed, and support for changing the resources of a running virtual machine.
  • OpenStack Swift Object Storage
    Support for "Erasure Coding". Temporary access to objects via a URL and improvements to global cluster replication.
  • OpenStack Cinder Block Storage
    Enhancements to attach a volume to multiple virtual machines, to implement high-availability and migration scenarios.
  • OpenStack Neutron Networking
    Extensions for network function virtualization (NFV), like port security for Open vSwitch and VLAN transparency.
  • OpenStack Ironic Bare-Metal
    Ironic supports existing virtual machine workloads as well as new technologies like container (Docker), PaaS and NFV.
  • OpenStack Keystone Identity Service
    The extensions around identity federation help to distribute workloads across public and private clouds to build OpenStack based hybrid and multi-cloud environments.
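For illustration, a hedged sketch of how these services are driven through their APIs, using the community's Python openstacksdk library. The cloud profile, image, flavor and network names below are assumptions for the example, not part of the Kilo release notes.

```python
import openstack

# Credentials and endpoints are read from a local clouds.yaml;
# "mycloud" is a placeholder profile name.
conn = openstack.connect(cloud="mycloud")

# List existing servers via the Nova compute API.
for server in conn.compute.servers():
    print(server.name, server.status)

# Boot a workload. Whether it lands on a VM or, via Ironic, on a
# physical machine depends on the flavor the operator exposes;
# image, flavor and network names here are assumed examples.
image = conn.compute.find_image("ubuntu-20.04")
flavor = conn.compute.find_flavor("baremetal.general")
network = conn.network.find_network("private")
conn.compute.create_server(
    name="kilo-demo",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
```

The client code stays the same for virtual and physical machines – the bare-metal decision is encapsulated in the flavor, which keeps the consuming teams decoupled from the hardware.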

OpenStack Kilo: Short Analysis and Impact

OpenStack is still growing, even if the high ratio of NFV use cases shows that OpenStack is mainly used in service provider networks to operate individual network components more flexibly and cost-effectively. However, the new Kilo functions for "federated identity", "erasure coding" and "bare metal" will move OpenStack up to the top of the CIO agenda.

The support of "erasure coding" is a long overdue function for Swift Object Storage – even though initial discussions already started for the "Havana" release in 2013. All big public cloud providers have been working with this distribution strategy for years to ensure high availability of data. The introduction of bare metal comes at the right time. Workload migrations to cloud-based infrastructure increasingly show that virtual machines are not suitable for all use cases. Thus, database servers and performance-intensive workloads ideally run on physical machines, whereas distributed workloads like application and web servers are good candidates for virtual machines. On a final note, identity federation will help CIOs build seamless OpenStack-based hybrid and multi-cloud environments. Users only need a single login to authenticate across multiple providers and get access to servers, data and applications in private and public clouds at once.
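To illustrate the principle behind erasure coding, here is a deliberately simplified Python sketch using a single XOR parity fragment. Swift itself relies on more general erasure codes, so this is a toy model of the idea, not Swift's implementation.

```python
# Toy illustration of erasure coding: split an object into k data
# fragments plus one XOR parity fragment, so any single lost fragment
# can be reconstructed. Real systems use more general codes
# (e.g. Reed-Solomon) that tolerate several simultaneous losses.
from functools import reduce

def encode(data: bytes, k: int) -> list:
    """Split data into k equal fragments and append one parity fragment."""
    size = -(-len(data) // k)                # ceiling division
    padded = data.ljust(size * k, b"\x00")   # pad so fragments align
    fragments = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments)
    return fragments + [parity]

def recover(fragments: list) -> list:
    """Reconstruct one missing fragment by XOR-ing the surviving ones."""
    missing = fragments.index(None)
    survivors = [f for f in fragments if f is not None]
    rebuilt = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)
    fragments[missing] = rebuilt
    return fragments

parts = encode(b"object payload", k=4)   # 4 data fragments + 1 parity
parts[2] = None                          # simulate a lost storage node
restored = recover(parts)                # fragment 2 is rebuilt via XOR
```

The storage overhead is one extra fragment instead of a full replica, which is exactly why erasure coding is attractive for large object stores.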

This raises the question of how easily and quickly CIOs can benefit from these new functions. The last five years have revealed that using OpenStack entails high complexity, mainly because OpenStack is organized as one big project composed of several sub-projects. Only the close interaction of all sub-projects required for a specific use case is promising. The majority of CIOs working with OpenStack therefore consider a professional distribution instead of building their own OpenStack version from the source code of the community trunk. In Germany, this applies to 75 percent of OpenStack users.

Categories
Cloud Computing Internet of Things

API Economy as a competitive factor: iPaaS in the Age of the Internet of Things (IoT) and Multi-Cloud Environments

What do APIs, integration and complexity have in common? All three are inseparable during the growth process of an IT project. Integration projects between two or more IT systems often lead to delays or even the failure of the whole project. Depending on the company size, on-premise environments mostly consist of a relatively manageable number of applications. However, the use of multiple cloud services and the rise of the Internet of Things (IoT) lets integration complexity grow excessively.

The ever-growing use of cloud services and infrastructure across several providers (multi-cloud) makes a central approach necessary to maintain an overview. In addition, it is essential to ensure a seamless integration among all cloud resources and the on-premise environment to avoid system and data silos. The variety of cloud services is rising incessantly.

The cloud supports the Internet of Things and its industrial offshoot – the Industrial Internet. Cloud infrastructure and platforms provide the perfect foundation for IoT services and IoT platforms and will lead to a phenomenal rise of IoT business models. This will result in a market with a constant stream of new devices, sensors and IoT solutions whose variety and potential cannot be foreseen yet. However, the demand for integration also increases. After all, only the connection of various IoT services and devices creates actual value. At the same time, analytics services need access to the data collected from different sources in order to analyze and correlate it.

This access typically happens via the cloud and IoT service APIs. As a consequence, the term API economy comes into the spotlight. Integration Platforms-as-a-Service (iPaaS) have emerged as good candidates to ensure access, integration, control and management in the cloud and in the Internet of Things.

iPaaS and API Economy: It’s all about the API

Enterprise Application Integration (EAI) was the central anchor in the age of client-server communication to ensure business process integration across the whole value chain. The focus is on the tight interaction of a variety of applications that are distributed over several independently operated platforms. The goal: the uniform and integrated mapping of all business processes in IT applications, thus avoiding data silos.

However, the transition into the cloud age leads to a shift away from on-premise interfaces towards the consumption of web APIs (Application Programming Interfaces). After all, almost every cloud and web service provider offers a REST or SOAP based API that allows customers to integrate the services into their own applications and thus benefit directly from external functions, as the sketch below shows. Along with the increasing consumption of cloud services and the ever-growing momentum of the Internet of Things, the importance of APIs will rise significantly.
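As a hedged sketch of what this API consumption typically looks like, the following Python snippet reads a resource from a hypothetical REST endpoint with token authentication. The endpoint, token and response fields are made up for illustration and do not belong to any specific provider.

```python
# Hedged sketch of typical web API consumption: authenticate with a
# token and read a resource over REST. Endpoint, token and JSON shape
# are hypothetical examples.
import requests

API_BASE = "https://api.example-cloud.com/v1"   # hypothetical endpoint
TOKEN = "..."                                   # issued by the provider

response = requests.get(
    f"{API_BASE}/customers/42",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
response.raise_for_status()
customer = response.json()
print(customer["name"])
```

A handful of HTTP calls replaces what used to require an installed client library and a dedicated on-premise interface.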

API Economy

The cloud-native Internet companies are reflecting this trend. APIs are a central competitive factor for players like Salesforce, Twitter, Google, Amazon and Amazon Web Services and represent the lifeline of their success. Each of these providers has created its own API ecosystem, which customers and partners use to develop their own offerings.

In this context the term "API economy" is used. The API economy describes the increasing economic potential of APIs. Thanks to mobile, social media and cloud services, APIs are no longer popular only among developers but also find their way onto the agendas of CEOs and CIOs who have identified their financial impact. Providers typically benefit from APIs by:

  • Selling (premium) functions within an otherwise free service.
  • Charging for the sharing of content through a partner's application or service.

CIOs benefit from the API economy by getting access to a quasi-endless choice of applications and services they can use to expand their websites, applications and systems without developing, operating or maintaining these functionalities on their own. Furthermore, APIs enable partners, customers and communities to easily access a company's own applications, data and systems, letting the CIO's company become part of the API economy.

Everything works via supposedly "simple" API calls. However, the devil is in the details. Integration and API management are of great significance in the API economy.

iPaaS = Integration Platform-as-a-Service

Over the last years many vendors have emerged that specialize in API management and the integration of different services. These so-called Integration Platforms-as-a-Service (iPaaS) are cloud based integration solutions (known as "middleware" in pre-cloud times) that support the interaction between several cloud services. Thus, developers and enterprises get the opportunity to create their own "integration flows" that connect multiple cloud services with each other but also with on-premise applications. A minimal sketch of such a flow follows below.
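The following is a minimal sketch of such an integration flow, assuming two hypothetical REST APIs and a simple field mapping. A real iPaaS wraps scheduling, error handling, retries and monitoring around this core.

```python
# Sketch of a minimal "integration flow" as an iPaaS would run it:
# read records from a source API, map fields, write to a target API.
# Both endpoints and the field mapping are hypothetical examples.
import requests

SOURCE = "https://crm.example.com/api/contacts"      # hypothetical CRM API
TARGET = "https://helpdesk.example.com/api/users"    # hypothetical helpdesk API

def transform(contact: dict) -> dict:
    """Map the source schema onto the target schema."""
    return {
        "full_name": f"{contact['first_name']} {contact['last_name']}",
        "email": contact["email"],
    }

contacts = requests.get(SOURCE, timeout=10).json()
for contact in contacts:
    requests.post(TARGET, json=transform(contact), timeout=10).raise_for_status()
```

The value of an iPaaS lies in providing such flows as configurable building blocks ("connectors") instead of hand-written glue code.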

The iPaaS market splits into two camps: the wild startups and the IT majors who have developed or rebuilt their portfolios accordingly. iPaaS vendors to watch include (excerpt):

  • 3scale
    The 3scale platform consists of two areas. API Program Management provides an overview of and information about the APIs in use. API Performance Management analyzes the API traffic in the cloud as well as in on-premise infrastructure. Together they make it possible to control and manage the API traffic within a company's own system and application architecture.
  • elastic.io
    The elastic.io iPaaS is offered as a cloud service as well as an on-premise installation in a company's own infrastructure. Based on Node.js, Java and JSON, elastic.io provides a development framework that can be used to integrate several CRM, financial, ERP and ecommerce cloud services to ensure data integrity. The necessary connectors are provided, e.g. for SAP, SugarCRM, Zendesk, Microsoft Dynamics, Hybris and Salesforce.
  • SnapLogic
    The SnapLogic iPaaS is provided as a SaaS solution and helps to integrate data from cloud services as well as to let SaaS applications interact with each other and with on-premise applications. For this purpose SnapLogic provides ready-made connectors (Snaps and Snaplex) for integration and data processing. The iPaaS provider primarily focuses on the Internet of Things, connecting data, applications and devices with each other.
  • Software AG
    The central parts of Software AG's iPaaS portfolio are webMethods Integration and webMethods API Management. The webMethods Integration Backbone integrates several cloud, mobile, social and big data services as well as partner solutions via a B2B gateway. webMethods API Management covers all tasks needed to gain an overview and control of a company's own and externally used APIs. Among other things, the functional range includes design, development, cataloging and version management.
  • Informatica
    The Informatica cloud integration portfolio contains a large service offering specifically for enterprise customers. This includes Informatica Cloud iPaaS, which handles the bidirectional synchronization of objects between cloud and on-premise applications as well as the replication of cloud data and business process automation. The Integration Services support the consolidation of different cloud and on-premise applications to integrate, process and analyze operational data in real time.
  • Unify Circuit
    Unify Circuit is a SaaS-based collaboration suite that combines voice, video, messaging and screen and file sharing – everything organized in "conversations". With it, Unify introduced a new PaaS category – cPaaS (Collaborative Platform-as-a-Service). This is an iPaaS that consolidates PBX, SIP as well as external cloud services like Box.com, Salesforce or Open-Xchange into a uniform collaboration platform. All data is stored at the external partners and consolidated on the Unify Circuit platform at runtime.

IoT and Multi-Cloud: The future belongs to open platforms

Openness is a highly discussed topic in IT and especially on the Internet. The past, or rather Google, has taught us: the future belongs to open platforms. This is not about openness in the sense of open standards – even, or especially, Google runs diverse proprietary implementations, e.g. Google App Engine.

However, Google understood from the very beginning how to position itself as an open platform. Important: openness in the sense of providing access to its own services via APIs. Jeff Jarvis illustrates in his book "What Would Google Do?" how Google – based on its platform – enables other companies to build their own business models and mashups. Not without self-interest, of course. This kind of openness and the right use of the API economy quickly led to widespread adoption and turned Google into a cash cow – via advertising.

Companies like Unify are still far away from being comparable with the Google platform. However, the decision makers at Unify apparently realized that only an open architecture approach can turn the company from a provider of integrated communication solutions into a cloud integration provider and thus a part of the API economy. For this purpose, Unify Circuit not only consolidates external cloud services on its collaboration platform but also enables developers to integrate Circuit's core functions like voice or video as mashups into their own web applications.

From a CIO perspective, integration is crucial to avoid system and data silos. A non-holistic integration of multiple independent systems can harm the overall process. It is therefore vital that cloud, IoT and Industrial Internet services are seamlessly integrated with each other and with existing systems to fully support all business processes.