Categories
Cloud Computing @de

Analyst Strategy Paper: Open Cloud Alliance – Offenheit als Imperativ

In the future, enterprise customers will draw on a mix of their own on-premises IT, hosted cloud services from locally approachable providers, and globally operating cloud service providers. This is a big opportunity for the market and all participants, in particular for smaller hosters with existing infrastructures as well as systems integrators with the relevant know-how and established customer relationships.

Against this background, Crisp Research examines in this strategy paper the challenges that arise for both provider groups from this perspective and addresses the most important aspects and their solutions.

The strategy paper is available for download under “Open Cloud Alliance – Offenheit als Imperativ”.

Categories
Analyst Cast @de

Q & A Panel with Media and Analysts Covering OpenStack

Categories
Cloud Computing

The impact of OpenStack on cloud sourcing

In 2014, German companies will invest around 6.1 billion euros in cloud technologies, meaning cloud sourcing already accounts for seven percent of the total IT budget. As a result, cloud ecosystems and cloud marketplaces will gain in significance in the future.

Crisp Research predicts that by 2018 around 22 percent of cloud services will be traded via cloud marketplaces, platforms and ecosystems. However, the basic requirement for this is to eliminate the current weak spots:

  • A lack of comparability,
  • Low transparency,
  • Poor integration.

All three are elementary factors for successful cloud sourcing.

Openness vs. Comparability, Transparency and Integration

In larger companies, the cloud sourcing process and the cloud buying center face a particular complexity, because the cloud environment is based on several operating models, technologies and vendors. On average, smaller companies use five vendors (e.g. for SaaS), while large, globally distributed companies deal with over 20 different cloud providers. On the one hand, this shows that hybrid and multi-cloud sourcing is not a trend but reality; on the other hand, that data and system silos remain an important topic even in the cloud era. But how should IT buyers deal with this difficult situation? How can a dynamically growing portfolio be planned and developed for the long term, and how can it be future-proofed? These challenges should not be underestimated. The reason is obvious: over the last years, neither cloud providers nor organizations or standardization bodies have been able to create binding and viable cloud standards.

Without such standards, clouds cannot be compared with one another, so IT buyers lack comparability on both the technical and the organizational level. In this context, contracts and SLAs are one issue; it becomes even more difficult and riskier on the technical side. Each cloud infrastructure provider has its own magic formula for how the performance of a single virtual machine is composed. This lack of transparency creates additional overhead for IT buyers and increases the cost of planning and tendering processes. The IaaS providers are fighting out their competition on the backs of their customers. Brave new cloud world.
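To illustrate the comparability problem: a buyer can only rank infrastructure offerings after normalizing price against an independently measured benchmark. The sketch below uses entirely hypothetical provider names, prices and benchmark scores:

```python
# Hypothetical VM offerings: each provider describes "performance" differently,
# so we normalize price against an independently measured benchmark score.
offers = {
    "provider_a": {"price_per_hour": 0.12, "benchmark_score": 480},
    "provider_b": {"price_per_hour": 0.10, "benchmark_score": 350},
    "provider_c": {"price_per_hour": 0.15, "benchmark_score": 640},
}

def price_per_unit(offer):
    """Euros per hour per benchmark point; lower is better."""
    return offer["price_per_hour"] / offer["benchmark_score"]

# Rank providers by normalized price rather than by their own marketing labels.
ranked = sorted(offers, key=lambda name: price_per_unit(offers[name]))
for name in ranked:
    print(f"{name}: {price_per_unit(offers[name]):.6f} EUR/h per point")
```

Note that the cheapest offer per hour (provider_b) is the most expensive per benchmark point, which is exactly the distortion the missing transparency hides from buyers.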

Another problem is the poor integration of common cloud marketplaces and cloud ecosystems. The variety of services on these platforms is growing, but the direct interaction between the different services within a platform has been neglected. The complexity increases further when services are to be integrated across infrastructures, platforms or marketplaces. Today, deep process integration is not possible without considerable effort, mostly because each closed ecosystem does its own thing.

Standardization: OpenStack to set the agenda

Proprietary infrastructure foundations can be a USP for the provider, but at the same time they lead to poor interoperability. This causes enormous problems when services are used across providers and increases complexity for the user; as a result, comparing offerings becomes impossible.

Open source technologies are putting things right in this situation. Thanks to the open approach, several providers take part in such projects in order to advance the solution and, of course, to represent their own interests. An audit authority therefore turns out to be necessary to increase distribution and adoption. The benefit: if more than one provider uses the technology, interoperability across providers improves and users gain better comparability. In addition, the complexity for the user decreases, and with it the effort of working across providers, for example when setting up hybrid and multi-cloud scenarios.

A big community of interests, in which well-known members push the technology and use it for their own purposes, leads to a de-facto standard over time. This is a technical standard that “[…] may be developed privately or unilaterally, for example by a corporation, regulatory body, military, etc.”

How this works is impressively demonstrated by the open source project OpenStack. Since its start in 2010, the framework for building public and private cloud infrastructures has received a lot of attention and maintains a big, constant momentum. By now, OpenStack is the foundation of several public cloud infrastructures and product portfolios, among others at Rackspace, HP, IBM, Cisco and Oracle. Many enterprises have also discovered OpenStack for their private cloud environments, e.g. Wells Fargo, Paypal, Bloomberg, Best Buy and Walt Disney.

Thanks to its open approach and the continuous development by a huge and potent community (a new version is released every six months), OpenStack is a reliable and trustworthy partner for IT infrastructure managers. Professional distributions help to increase the footprint on the user side and ensure that more and more IT decision makers in bigger companies will build their cloud infrastructure on OpenStack in the future.

This positive development has also arrived in Germany. The results of a current Crisp Research study (“OpenStack im Unternehmenseinsatz”, German) show that almost 50 percent of cloud users know OpenStack, and 29 percent of cloud users already engage actively with it.

The OpenStack ecosystem continues to grow and is thus pushing standardization in the cloud. IT buyers therefore gain more room to maneuver when purchasing cloud resources from several providers. They should keep in mind, however, that their IT architects will decouple applications ever further from the underlying infrastructure in the future, in order to move applications and workloads on demand across providers. Container technologies like Docker – supported by OpenStack – are pushing this trend.

Think across marketplaces

Cloud marketplace providers should act in the interest of their customers and, instead of using proprietary technology, also bet on open source technologies or a de-facto standard like OpenStack. This enables interoperability between cloud service providers as well as between marketplaces and creates the prerequisites for a comprehensive ecosystem in which users gain better comparability as well as the ability to build and manage truly multi-cloud environments. This is the groundwork that empowers IT buyers to benefit from the strengths of individual providers and the best offerings on the market.

Open approaches like OpenStack foster the future ability of IT buyers to act across provider and data center borders. This makes OpenStack an important cloud-sourcing driver – provided all involved parties commit to a common standard. In the interest of the users.

Categories
Cloud Computing @de

Study: OpenStack in the Enterprise (DACH Market)

Sometimes it takes time for new trends to become visible. Sometimes it only takes the right market context for new technologies to make their way into the mainstream.

The open source cloud management framework “OpenStack” currently has incredible momentum. There is no product announcement, no cloud event and no keynote without a reference to OpenStack – even though OpenStack has been available under an open source license for more than three years. What has changed since then?

Companies have started to implement real, large cloud projects. In the process, many CIOs, CTOs and data center managers have had to realize that integration across cloud provider borders is not yet in good shape. Building and operating flexible “multi-cloud” environments requires considerably more standardization and control than providers have so far been able to deliver with their own products. That is why the open, vendor-neutral OpenStack infrastructure is currently receiving so much attention from users as well as providers. Its broad range of use cases and its large ecosystem have turned the formerly small open source project into one of the most powerful cloud infrastructure solutions for building and operating private, public and hybrid clouds in large enterprises. OpenStack is already more than a mere technology trend on the IT hype curve; it has landed on the roadmap of many CIOs.

Since there have been no empirically grounded insights into the use of OpenStack so far, Crisp Research has conducted a study in cooperation with HP Deutschland that for the first time provides detailed insights into how IT decision makers in the DACH region deal with OpenStack.

The study is available for download under “OpenStack im Unternehmenseinsatz”.

Categories
Cloud Computing

Data Center: Hello Amazon Web Services. Welcome to Germany!

Analysts had already written about it, and now the much-anticipated but unconfirmed announcements have finally become reality. Amazon Web Services (AWS) has opened a data center in Germany (region: eu-central-1). It is aimed specifically at the German market and is AWS’ eleventh region worldwide.

Data centers in Germany are booming

These days it is fun to be a data center operator in Germany. Not only are the “logistics centers of the future” moving into focus because of the cloud and digital transformation; the big players of the IT industry are also increasingly reaching out to German companies. After Salesforce announced its arrival for March 2015 (in partnership with T-Systems), Oracle and VMware followed.

Contrary to the widespread opinion that data is more secure on German soil, these strategic decisions have nothing to do with data security for customers. A German data center by itself offers no higher data security; it merely provides the benefit of fulfilling the legal requirements of German data privacy law.

However, from a technical perspective, locality is of great importance. Due to the continuous relocation of business-critical data, applications and processes to external cloud infrastructure, IT operating concepts (public, private, hybrid) as well as network architectures and connectivity strategies are changing significantly for CIOs. On the one hand, modern technology is required to deliver applications in a performant, stable and secure manner; on the other hand, the location is decisive for optimal “cloud connectivity”. It is therefore important to understand that the quality of a cloud service depends significantly on its connectivity and on the performance of the backend. A cloud service is only as good as the connection that delivers it. Cloud connectivity – low latency as well as high throughput and availability – is becoming a critical competitive advantage for cloud providers.

Now Amazon Web Services, too

Despite concrete technical hints, AWS executives had cloaked themselves in a mantle of secrecy. After all the denials, it is now official: AWS has opened a new region, “eu-central-1”, in Frankfurt. The region consists of two availability zones (two separate data center locations) and offers the full functionality of the Amazon cloud. The new cloud region is already operational and can be used by customers. With the Frankfurt location, Amazon opens its second region in Europe (besides Ireland). This enables customers to build a multi-region setup in Europe to ensure higher availability of their virtual infrastructure, from which the uptime of their applications also benefits.
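The availability gain from spreading workloads across two availability zones can be sketched with a simple probability calculation; the per-site figure below is an assumption for illustration, not an AWS SLA value:

```python
def combined_availability(per_site: float, sites: int) -> float:
    """Probability that at least one of `sites` independent locations is
    reachable, assuming failures are uncorrelated."""
    return 1 - (1 - per_site) ** sites

# Assumed 99.5% availability per single location (illustrative only).
single = 0.995
print(f"one zone:  {combined_availability(single, 1):.4%}")
print(f"two zones: {combined_availability(single, 2):.4%}")
```

Under these assumptions, two independent zones push availability from 99.5 percent to roughly 99.9975 percent, which is why a second European region matters for uptime.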

Frankfurt is not an unusual choice of location for cloud providers. On the infrastructure side, Frankfurt am Main is the backbone of digital business in Germany. In terms of data center density and connectivity to central internet hubs, Frankfurt leads throughout Germany and Europe. The continuous relocation of data and applications to external cloud provider infrastructures has made Frankfurt the stronghold of cloud computing in Europe.

With its decision for the German data center landscape, Amazon helps its customers both with data privacy requirements and on the technical side (cloud connectivity). A first move in this direction was already made in May this year, when the partnership with NetApp for setting up hybrid cloud storage scenarios was announced.

Serious German workloads on the Amazon Cloud

A good indicator of the appeal of a provider’s infrastructure is its reference customers. From the beginning, AWS focused on startups, but in the meantime the company is trying everything to attract enterprise customers as well. Talanx and Kärcher are two well-known customers from the German business landscape.

The insurance company Talanx has shifted the reporting and calculation of its risk scenarios (developed from scratch) into the Amazon cloud. Talanx is thereby able to move its risk management out of its own data center while remaining Solvency II compliant. According to Talanx, it achieves a time advantage as well as annual savings of eight million euros. The corporation and its CIO Achim Heidebrecht are already evaluating further applications to move into the Amazon cloud.

Kärcher, the world’s leading manufacturer of cleaning systems, is using the Internet of Things (machine-to-machine communication) to improve its business model. To optimize the utilization of the worldwide cleaning fleet, Kärcher relies on the global footprint of Amazon’s cloud infrastructure. Kärcher’s machines regularly send information into the Amazon cloud, where it is processed. In addition, Kärcher provides information to its worldwide partners and customers through the Amazon cloud.

Strategic vehicle: AWS Marketplace

Software AG is the first well-known traditional ISV (independent software vendor) on the Amazon cloud. Its popular BPM tool Aris is now available as “Aris-as-a-Service” (SaaS) and scales on the Amazon cloud infrastructure.

Software AG is only one example; several German ISVs could follow. The global scalability of Amazon’s cloud infrastructure makes it an especially attractive partner, capable of delivering SaaS applications soon to target customers beyond the German market. The AWS Marketplace plays a key role in this context: AWS owns a marketplace infrastructure that ISVs can use to provide their solutions to a broad, global audience. The benefits for an ISV are being able to:

  • Develop directly on the Amazon cloud without the need for their own (global) infrastructure.
  • Deliver solutions “as-a-service” and distribute them via the AWS Marketplace.
  • Leverage the popularity and reach of the marketplace.

This scenario means one thing for AWS: the cloud giant wins in any case. As long as the infrastructure resources are used, the money keeps rolling in.

Challenges of the German market

Despite their innovation leadership, public cloud providers like AWS are having a hard time with German companies, especially when it comes to the powerful Mittelstand. For the Mittelstand, self-service and the complexity of using cloud infrastructure are among the main reasons to avoid the public cloud. In addition, even though it has nothing to do with the cloud itself, the NSA scandal has left psychological scars on German companies; data privacy concerns connected with US providers are the icing on the cake.

Nevertheless, with the data center in Frankfurt AWS has done its duty. However, to be successful in the German market there are still things left to do:

  • Build a potent partner network to reach the mass of German enterprise customers.
  • Reduce complexity by simplifying the use of the scale-out concept.
  • Strengthen the AWS Marketplace for easy use of scalable standard workloads and applications.
  • Increase the attractiveness for German ISVs.

Categories
Cloud Computing

Top 15 Open Source Cloud Computing Technologies 2014

Open source technologies have a long history. Linux, MySQL and the Apache Web Server are among the most popular and successful technologies brought forth by the community. Over the years, open source experienced a big hype which, driven by developers, moved into corporate IT. Today, IT environments are no longer conceivable without open source technologies. Driven by cloud computing, open source presently gains strong momentum.

Several projects launched in the recent past have significantly influenced the cloud computing market, especially when it comes to the development, setup and operations of cloud infrastructure, platforms and applications. What are the hottest and most important open source technologies in the cloud computing market today? Crisp Research has examined and classified the “Top 15 Open Source Cloud Computing Technologies 2014” in order of significance.

OpenStack to win

Openness and flexibility are among the top five reasons for CIOs during their selection of open source cloud computing technologies. At the same time, standardization becomes increasingly important and serves as one of the biggest drivers for the deployment of open source cloud technologies. It is for a reason that OpenStack qualifies as the upcoming de-facto standard for cloud infrastructure software. Crisp Research advises to build modern and sustainable cloud environments based on the principles of openness, reliability and efficiency. Especially in the areas of openness and efficiency, open source makes a significant contribution. With this in mind, CIOs set the stage for the implementation of multi-cloud and hybrid cloud/infrastructure scenarios and assist the IT department in the introduction and enforcement of a holistic DevOps strategy. DevOps, in particular, plays a crucial role in the adaptation of Platform-as-a-Service and the development of applications for the cloud and leads to significant speed advantages, which also affect the competitive strength of the business.

The criteria for assessing the top 15 open source cloud computing technologies include:

  • Innovation and release velocity
  • Development of the community including support of large suppliers
  • Adoption rate of innovative developers and users

In consulting projects, Crisp Research specifically identifies leading users who deploy modern open source technologies to run their own IT environments efficiently and in a future-oriented way across different scenarios.

The “Top 5 Open Source Cloud Computing Technologies 2014”:

  1. OpenStack
    In 2014, OpenStack is already the most important open source technology for enterprises and developers. Over 190 000 individuals in over 144 countries support the infrastructure software, and its popularity among IT manufacturers and vendors increases steadily. OpenStack serves a continuously growing number of IT environments as a foundation for public, private and managed infrastructure. Organizations in particular have utilized OpenStack to build their own private clouds, and IT providers like Deutsche Telekom (Business Marketplace) use it to build their cloud platforms. Today, only a few developers have direct contact with OpenStack; nevertheless, the solution is highly relevant for them, since platforms like Cloud Foundry or access to container technologies like Docker are often delivered via OpenStack. In other cases, they use the OpenStack APIs to develop their applications directly on top of the infrastructure.
  2. Cloud Foundry
    In the growing platform-as-a-service (PaaS) market, Cloud Foundry has taken a leading position. The project was initiated by Pivotal, a spin-off of EMC/VMware. Cloud Foundry is mostly used by organizations to deploy a private PaaS environment for internal developers, while managed service providers use it to offer PaaS in a hosted environment. The PaaS project works perfectly together with OpenStack to build highly available and scalable PaaS platforms.
  3. KVM
    KVM (Kernel-based Virtual Machine) is the preferred hypervisor of infrastructure solutions like OpenStack or openQRM and enjoys a high priority within the open source community. KVM represents a cost-efficient and especially powerful alternative to commercial offerings like VMware ESX or Microsoft Hyper-V. KVM has a market share of about 12 percent, not least because Red Hat uses this hypervisor as the foundation of its virtualization solutions. Over time, the de-facto standard hypervisor KVM will interact closely with OpenStack, as CIOs are currently searching for cost-effective capabilities to virtualize their infrastructure.
  4. Docker
    This year’s shooting star is Docker. The container technology, which was created as a byproduct during the development of the platform-as-a-service dotCloud, currently experiences strong momentum and receives support from large players like Google, Amazon Web Services and Microsoft – for good reason. Docker enables the loosely coupled movement of applications, bundled in containers, across several Linux servers, thus improving application portability. At first glance, Docker looks like a pure developer tool; from the point of view of an IT decision maker, however, it is definitely a strategic tool for optimizing modern application deployments. Docker helps to ensure the portability of an application, to increase availability and to decrease overall risk.
  5. Apache Mesos
    Mesos has risen to become a top-level project of the Apache Software Foundation. Conceived at the University of California, Berkeley, it helps to run applications in isolation from one another while dynamically distributing them across several nodes within a cluster. Mesos can be used with OpenStack and Docker. Popular users include Twitter and Airbnb. One of the driving forces behind Mesos is the German developer Florian Leibert, who was also jointly responsible for implementing the cluster technology at Twitter.

Open Source is eating the license-based world

Generally, proprietary players such as IBM, HP and VMware are cozying up to open source technologies. HP’s first cloud offering, “HP Cloud”, was already based on OpenStack; with HP Helion Cloud, the whole cloud portfolio (public, private) has been harmonized on OpenStack. In addition, HP has become the biggest code contributor to the upcoming OpenStack “Juno” release, due in October. IBM contributes to OpenStack and uses Cloud Foundry as the foundation for its PaaS “Bluemix”. At VMworld in San Francisco, VMware announced tighter cooperation with both OpenStack and Docker. VMware will present its own OpenStack distribution, VMware Integrated OpenStack (VIO), in Q1/2015, which enables setting up an OpenStack implementation on top of VMware vSphere. The Docker partnership is intended to ensure that the Docker engine runs on VMware Fusion and on servers with VMware vSphere and vCloud Air.

Open source solutions like OpenStack are attractive not only for technical reasons. From a financial perspective, OpenStack also makes an essential contribution, as the open source framework significantly reduces the cost of building and operating a cloud infrastructure. License costs for current cloud management and virtualization solutions amount to around 30 percent of the overall cloud TCO. This means that numerous start-ups as well as large, well-respected software vendors like Microsoft and VMware make a good chunk of their business by selling licenses for their solutions. With OpenStack, CIOs gain the opportunity to provision and manage their virtual machines and cloud infrastructure using open source technologies. Both free community editions and professional, enterprise-ready distributions including support are available; in both cases, these are options to significantly reduce the license costs of operating cloud infrastructures. OpenStack thus gives CIOs a valuable tool to exercise pressure on Microsoft and VMware.
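The order of magnitude of those savings can be sketched with simple arithmetic. Only the 30 percent license share comes from the text above; the absolute TCO figure and the residual support share for an enterprise OpenStack distribution are hypothetical:

```python
def license_savings(total_tco: float, license_share: float = 0.30,
                    residual_share: float = 0.10) -> float:
    """Annual savings when license fees (license_share of TCO) are replaced
    by open source support contracts (residual_share of TCO, assumed)."""
    return total_tco * (license_share - residual_share)

# Hypothetical 1,000,000 EUR annual cloud TCO; 30% license share per the text,
# 10% assumed residual cost for a supported OpenStack distribution.
print(f"{license_savings(1_000_000):,.0f} EUR saved per year")
```

Even with a generously assumed support contract, the license share shrinks by two thirds in this sketch, which is the lever CIOs can use in negotiations.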

Categories
Strategy

Disaster Recovery for SMEs: No apologies!

Disaster Recovery currently plays a minor role in the strategic planning and daily routine of SMEs. Yet this negligent attitude can cause harm to the business in a fraction of a second. With the advent of new operational models in the cloud age, there is no longer room for excuses.

Categories
IT-Infrastructure

The pitfalls of cloud connectivity

The continuous shift of business-critical data, applications and processes to external cloud infrastructures is not only changing the IT operating concepts (public, private versus hybrid) for CIOs but also the network architectures and integration strategies deployed. As a result, the selection of the location to host your IT infrastructure has become a strategic decision and a potential source for competitive advantage. Frankfurt is playing a crucial role as one of the leading European cloud connectivity hubs.

The digital transformation lashes out

The digital transformation is playing an important part in our lives today. For example, an estimated 95 percent of all smartphone apps are connected to services hosted on servers in data centers located around the world. Without a direct and uninterrupted connection to these services, metadata or other information, apps do not function properly. In addition, most of the production data needed by the apps is stored on systems in a data center, with only a small percentage cached locally on the smartphone.

Many business applications are being delivered from cloud infrastructures today. From the perspective of a CIO, a reliable, high-performance connection to systems and services is therefore essential, and this trend will only continue to strengthen. Crisp Research estimates that in the next five years around a quarter of all business applications will be consumed as cloud services. At the same time, hybrid IT infrastructure solutions, mixing local IT infrastructure with IT infrastructure located in cloud data centers, are becoming increasingly popular.

Data volumes: bigger pipelines required

The ever-increasing data volumes further increase the requirement for reliable, high-performance connectivity to access and store data and information any place, any time – especially when business-critical processes and applications are located on cloud infrastructure. For many companies today, failure to offer their customers reliable, low-latency access to applications and services can lead to significant financial and reputational damage and represents a considerable business risk. Considering that the quality of a cloud service depends predominantly on the connectivity to and performance of the back end, cloud connectivity is becoming the new currency.

Cloud connectivity is the new currency

“Cloud connectivity” can be defined technically in terms of latency, throughput and availability.

In simple terms, cloud connectivity can be defined as the enabler of real-time access to cloud services any place, any time. As a result, connectivity has become the most important feature of today’s data centres. Connectivity means the presence of many different network providers (carrier neutrality) as well as a redundant infrastructure of routers, switches, cabling and network topology. CIOs are therefore increasingly looking at carrier neutrality as a prerequisite, as it facilitates a choice between many different connectivity options.
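Two of the three dimensions named above, availability and latency, can be quantified from simple probe measurements (throughput is omitted here). The sample data below is hypothetical, with None marking a failed probe:

```python
# Hypothetical round-trip probes against a cloud endpoint, in milliseconds.
samples_ms = [21.4, 19.8, 250.0, 20.1, None, 22.7, 19.5, 20.9, None, 21.0]

succeeded = [s for s in samples_ms if s is not None]

availability = len(succeeded) / len(samples_ms)          # share of answered probes
median_latency = sorted(succeeded)[len(succeeded) // 2]  # robust against the outlier

print(f"availability:   {availability:.0%}")
print(f"median latency: {median_latency} ms")
```

Using the median rather than the mean keeps one congested probe (the 250 ms outlier) from distorting the picture, which mirrors how buyers should read provider latency claims.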

Frankfurt is the perfect example of a cloud connectivity hub

In the past 20 years, a cluster of infrastructure providers for the digital economy has formed in Frankfurt, enabling companies to effectively and efficiently distribute their digital products and services to their customers. These providers have made Frankfurt the German capital of the digital economy, delivering a wide range of integration services for IT and networks as well as IT infrastructure and data centre services. More and more service providers have understood that despite the global nature of cloud infrastructure, a local presence is crucial. This is an important finding: no service provider that is seriously looking to do business in Germany can do so without a local data center. Crisp Research predicts that all major international service providers will build or expand their cloud platforms in Frankfurt within the next two to three years.

Against this backdrop, Crisp Research has examined the unique features of Frankfurt as an international hotspot for data centres and cloud connectivity. The whitepaper, titled “The Importance of Frankfurt as a Cloud Connectivity Hub”, is available for download now: http://www.interxion.com/de/branchen/cloud/die-bedeutung-des-standorts-frankfurt-fur-die-cloud-connectivity/download/

Categories
Analysen

Cloud Connectivity: A Smooth Ride into the Cloud

The continuous relocation of business-critical data, applications and processes to external cloud infrastructures means that for CIOs not only the IT operating concepts (public, private, hybrid) but also the network architectures and connectivity strategies are changing significantly. In this context, the choice of the right location is a decisive competitive advantage, and Frankfurt plays a central role here already today, but above all in the future.

The digital transformation lashes out

Digital change no longer stops at any area of our lives. An estimated 95 percent of all smartphone apps are connected to services that reside on servers in globally distributed data centers. At the same time, the apps are not functional without a direct and mostly constant connection to these services; access to metadata or other information is indispensable for smooth operation. In addition, the majority of the production data needed by the apps is stored on systems in the data centers, with only a small selection cached locally on the smartphone when needed.

Many modern business applications are already delivered via cloud infrastructures today. From today’s perspective of a CIO, a stable and performant connection to systems and services is therefore essential. This trend will intensify further: Crisp Research expects that in the next five years around a quarter of all business applications will be consumed as cloud services. At the same time, hybrid scenarios, in which local company-owned IT infrastructures are connected with infrastructures in cloud data centers, are becoming increasingly important.

Data volumes: the new oil requires bigger pipelines

The continuously growing data volumes in particular require reliable and above all stable connectivity in order to access data and information at any time and store them dependably. This becomes even more important when business-critical processes and applications are outsourced to a cloud infrastructure. Access must be guaranteed at all times and with high performance, i.e. with low latency; in case of failure, this can lead to substantial financial damage as well as damage to the company’s image, which represents a high business risk. The quality of a cloud service thus depends significantly on its connectivity and backend performance. A cloud service is only as good as the connection through which it is delivered.

Cloud connectivity is the new currency

To deliver applications and services in a performant, stable and secure manner, modern technologies are required on the one hand; on the other hand, the location is decisive for optimal “cloud connectivity”. Technically, “cloud connectivity” can be defined in terms of latency, throughput and availability.

A decisive feature in this context is the connectivity of the data center, which guarantees customers and their customers in turn stable and reliable access to cloud services at any time. This includes high fault tolerance by means of multiple carriers (network providers) and a redundant infrastructure of routers, switches, cabling and network topology. Carrier-neutral connections are therefore an important feature for customers, allowing them to choose the provider that suits them best from several options.

Frankfurt is the role model for cloud connectivity

Over the past 20 years, a cluster of infrastructure providers for the digital economy has formed in Frankfurt, helping companies position their products and services in the market. These providers have shaped Frankfurt and its economy, delivering integration services for IT and networks as well as data center services. More and more service providers have understood that despite the global nature of a cloud infrastructure, they need to be locally present in their customers’ countries. This is an important insight: no provider that wants to do serious business in Germany will be able to do without a local data center location. Crisp Research sees an important trend in many international providers building or further expanding their cloud platforms in Frankfurt over the coming two to three years.

Against this backdrop, Crisp Research has examined the role of Frankfurt as a data center location and connectivity hub in a white paper. The white paper “Die Bedeutung des Standorts Frankfurt für die Cloud Connectivity” is available for download at http://www.interxion.com/de/branchen/cloud/die-bedeutung-des-standorts-frankfurt-fur-die-cloud-connectivity/download/.