Categories
Strategie

Analyst Strategy Paper: Die dunkle Seite der digitalen Transformation

As digital transformation progresses, the challenges and responsibilities at board level change as well. On the one hand, business leaders have to unlock the potential of digitalization for their company. On the other hand, they are obliged to ensure holistic protection of corporate assets. And this does not apply to the CIO alone! The digital world, gleaming with innovative business models, also has its dark side. A high degree of connectivity corresponds with a high degree of vulnerability, as cybercriminals relentlessly exploit the weak spots of complex infrastructures.

It should be noted that data takes on an entirely new significance and corporate value in the digital age. The loss or misuse of data caused by targeted attacks inflicts considerable damage on companies. As the value of corporate data rises, so does the responsibility at board and executive level (“C-level”).

Responsible business leaders who want to succeed in the long term have already developed a keen sense for the influence and impact of digitalization on their company. This includes a strategic and economic assessment of the risks of digital business and of advancing connectivity.

The Analyst Strategy Paper “Die dunkle Seite der digitalen Transformation” can be downloaded free of charge.

Categories
Analyst Cast

Video Interview: Benefits of the Hybrid Cloud

At CeBIT 2015, VMware took the chance to catch up with me to discuss why the new German vCloud Air data center will benefit companies across Europe. I also talk about the three golden rules that should be considered when choosing hybrid cloud services: compatibility, integration and ease of use.

Categories
Cloud Computing

Hybrid and Multi Cloud: The real value of Public Cloud Infrastructure

Since the early days of cloud computing, the hybrid cloud has been on everyone’s lips. Praised as a universal remedy by vendors, consultants and analysts alike, the combination of different cloud deployment models is a constant focus of discussions, panels and conversations with CIOs and IT infrastructure managers. The core questions that need to be clarified: What are the benefits, and do credible hybrid use cases actually exist that can serve as best-practice guidance? This analysis answers these questions and also describes the ideas behind multi cloud scenarios.

Hybrid Cloud: Driver behind the Public Cloud

Many developers and startups praise the public cloud for letting them escape high and incalculable upfront investments in infrastructure resources (servers, storage, software). Examples like Pinterest and Netflix show real use cases and confirm the benefit. Without the public cloud, Pinterest would never have grown so fast in such a short time. Netflix also benefits from scalable access to public cloud infrastructure: in the fourth quarter of 2014, Netflix delivered 7.8 billion hours of video, which corresponds to data traffic of 24,021,900 terabytes.
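A quick plausibility check of these figures (a back-of-the-envelope sketch in Python; the derived per-hour volume and average bitrate are my own arithmetic, not numbers reported by Netflix):

```python
# Back-of-the-envelope check of the Netflix Q4/2014 figures quoted above.
hours = 7.8e9             # delivered viewing hours
traffic_tb = 24_021_900   # reported data traffic in terabytes

gb_per_hour = traffic_tb * 1000 / hours        # ~3.1 GB per viewing hour
avg_mbit_per_s = gb_per_hour * 8000 / 3600     # ~6.8 Mbit/s average stream rate

print(f"{gb_per_hour:.2f} GB per hour, {avg_mbit_per_s:.1f} Mbit/s on average")
```

That works out to roughly 3 GB per viewing hour, or a stream of just under 7 Mbit/s on average.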

However, what these prime examples hide is that all of them are greenfield approaches – like almost every workload developed as a native web application on public cloud infrastructure – and they represent just the tip of the iceberg. The reality in the corporate world reveals a completely different picture. Below the waterline you find plenty of legacy applications that are not ready to be operated in the public cloud at the present stage. Furthermore, there are requirements and scenarios for which the public cloud is unsuitable. In addition, most infrastructure managers and architects know their workloads and their demand very well. Providers should finally accept this and admit that the public cloud is in most cases too expensive for static workloads and that other deployment models are more attractive.

By definition, the hybrid cloud’s sphere of activity is limited to connecting a private cloud with the resources of a public cloud. In this case, a company runs its own cloud infrastructure and uses the scalability of a public cloud provider to obtain further resources such as compute, storage or other services on demand. With the rise of further cloud deployment models, other hybrid cloud scenarios have evolved that include hosted private and managed private clouds. In particular, for most static workloads – those whose average infrastructure requirements are known – an externally hosted static infrastructure fits very well. Variations caused by marketing campaigns or the Christmas season – which occur periodically – can be compensated by dynamically adding further resources from a public cloud.

This approach can be mapped to many other scenarios. Not only pure infrastructure resources such as virtual machines, storage or databases should be in the foreground here. The hybrid use of value-added services from public cloud providers within self-developed applications should also be considered, in order to use a ready-made function instead of redeveloping it, or to benefit immediately from external innovations. With this approach the public cloud offers companies real value without outsourcing the entire IT environment.
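As a minimal illustration of this pattern, the sketch below lets an application that otherwise runs on-premise hand events to a managed queue in the public cloud (Amazon SQS via boto3). The queue URL, region and credentials setup are assumptions for the example, not part of any of the cases described here.

```python
import json
import boto3

# Assumed: AWS credentials are configured and the queue already exists.
QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/order-events"  # hypothetical

sqs = boto3.client("sqs", region_name="eu-central-1")

def publish_order_event(order_id: str, status: str) -> None:
    """Push an event from the on-premise application into the public cloud queue."""
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"order_id": order_id, "status": status}),
    )

publish_order_event("4711", "shipped")
```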

Real hybrid cloud use cases can be found at Microsoft, Rackspace, VMware and Pironet NDH:

  • Microsoft Azure + Lufthansa Systems
    To expand its internal private cloud and its worldwide data center capacity, Lufthansa relies on Microsoft Azure. One of the first hybrid cloud scenarios was a disaster recovery concept in which Microsoft SQL Server databases are mirrored to Microsoft Azure. In case of a failure within the Lufthansa environment, the databases continue to operate in a Microsoft data center without interruption. Furthermore, the company’s own infrastructure is extended by Microsoft’s worldwide data centers to deliver customers a consistent service offering without having to build infrastructure globally.
  • Rackspace + CERN
    As part of its CERN openlab partnership, CERN uses public cloud infrastructure from Rackspace to obtain compute resources on demand. This typically happens when physicists need more compute than the local OpenStack infrastructure can deliver. CERN experiences this regularly during scientific conferences, when the latest data from the LHC and its experiments are being analyzed. Applications with a low I/O rate are well suited to being offloaded to Rackspace’s public cloud infrastructure.
  • Pironet NDH + Malteser
    As part of the “Smart.IT” project, Malteser Deutschland relies on a hybrid cloud approach. Applications in the company’s own data center are combined with communication services such as Microsoft Office 365, SharePoint, Lync and Exchange from a public cloud. Applications that are critical in terms of data protection law – such as the electronic patient record – are used from a private cloud in a Pironet data center.
  • VMware + Colt + Sega Europe
    Since the beginning of 2012, games publisher Sega Europe has relied on a hybrid cloud to give external testers access to new games. Previously this was realized via a VPN connection into the company’s own network. Meanwhile Sega runs its own private cloud to provide development and test systems for internal projects. This private cloud is directly connected to a VMware-based infrastructure in a Colt data center. On the one hand, Sega can thus obtain further resources from a public cloud to compensate for peak loads. On the other hand, the game testers get a dedicated testing area: they no longer have to access the Sega corporate network but test on servers within the public cloud. When the tests are finished, the servers that are no longer needed are shut down by Sega IT without the intervention of Colt.

Multi Cloud: The Automotive Industry as a Role Model

As the hybrid cloud continues to spread, multi cloud scenarios are also moving into focus. For a better understanding of the multi cloud, it helps to consider the supply chain model of the automotive industry. The automaker relies on various (sometimes redundant) suppliers, which provide it with individual components, assemblies or complete systems. In the end the automaker assembles the just-in-time delivered parts in its own assembly plant.

The multi cloud, like the hybrid cloud, adopts this idea from the automotive industry by working with more than one cloud provider (cloud supplier) and, in the end, integrating everything with the company’s own cloud application or cloud infrastructure.

As part of the cloud supply chain, three delivery tiers exist that can be used to develop a company’s own cloud application or to build its own cloud infrastructure:

  • Micro Service: Micro Services are granular services such as Microsoft Azure DocumentDB and Microsoft Azure Scheduler or Amazon Route 53 and Amazon SQS that can be used to develop a cloud-native application. Micro Services can also be integrated into an existing application running on a company’s own infrastructure, which is thereby extended by the function of the Micro Service.
  • Module: A Module encapsulates a scenario for a specific use case and thus provides a ready-to-use building block for an application. Examples include Microsoft Azure Machine Learning and Microsoft Azure IoT. Modules can be used like Micro Services for development purposes or for integration into applications. However, compared to Micro Services they provide greater functionality.
  • Complete System: A Complete System is a SaaS service, i.e. an entire application that can be used directly within the company. However, it still needs to be integrated with other existing systems.

In a multi cloud model an enterprise cloud infrastructure or a cloud application can draw on more than one cloud supplier and thus integrate various Micro Services, Modules and Complete Systems from different providers. In this model a company develops most of the infrastructure/application on its own and extends the architecture with additional external services whose redevelopment would require far too much effort.
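A minimal sketch of this composition idea – one self-developed core application that pulls in a Micro Service and a Module from two different cloud suppliers. The wrapper classes and supplier names are illustrative assumptions, not concrete SDK calls:

```python
from dataclasses import dataclass

@dataclass
class QueueService:
    """Micro Service, e.g. a managed message queue sourced from supplier A."""
    provider: str

    def send(self, message: str) -> None:
        print(f"[{self.provider}] queued: {message}")   # stands in for the remote API call


@dataclass
class ScoringModule:
    """Module, e.g. a managed machine-learning endpoint sourced from supplier B."""
    provider: str

    def score(self, features: list) -> float:
        return sum(features) / len(features)            # stands in for the remote API call


class OrderApplication:
    """Self-developed core application that integrates services from several suppliers."""

    def __init__(self, queue: QueueService, scoring: ScoringModule) -> None:
        self.queue = queue
        self.scoring = scoring

    def process(self, order_id: str, features: list) -> None:
        risk = self.scoring.score(features)
        self.queue.send(f"{order_id}: risk={risk:.2f}")


app = OrderApplication(QueueService("Supplier A"), ScoringModule("Supplier B"))
app.process("4711", [0.2, 0.6, 0.1])
```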

However, this leads to higher costs at the cloud management level (supplier management) as well as at the integration level. Solutions like SixSq Slipstream or Flexiant Concerto specialize in multi cloud management and support the use and management of cloud infrastructure across providers. Elastic.io, by contrast, works across several cloud layers and various providers and acts as a central connector to make cloud integration easier.

The cloud supply chain is an important part of the Digital Infrastructure Fabric (DIF) and should definitely be considered in order to benefit from the variety of different cloud infrastructures, platforms and applications. The only disadvantage is that the value-added services (Micro Services, Modules) named above are still only available in the portfolios of Amazon Web Services and Microsoft Azure. In the course of the rapid development of use cases for the Internet of Things (IoT), IoT platforms and mobile backend infrastructure are becoming ever more significant. Ready-made solutions (Cloud Modules) help potential customers to reduce development effort and provide impulses for new ideas.

Infrastructure providers whose portfolios still focus on pure infrastructure resources such as servers (virtual machines, bare metal), storage and a few databases will disappear from the radar in the medium term. Only those who enhance their infrastructure with enablement services for web, mobile and IoT applications will remain competitive.

Categories
Analyst Cast @de

Video Interview: Digital Transformation

Digital transformation is on everyone’s lips, and it is changing markets and companies at breakneck speed. René Büst is Senior Analyst and Cloud Practice Lead at Crisp Research AG in Kassel. He gives an insight into different perspectives on the market and explains how companies can avoid stumbling on their way to digital transformation.

Categories
Cloud Computing

Top 10 Cloud Trends for 2015

In 2015, German companies will invest around 10.9 billion euros in cloud services, technologies, and integration and consulting. Admittedly, the German market has developed rather slowly compared to international standards, but in 2015 this market, too, will mature. The reasons can be found in this article. Crisp Research has identified the drivers behind this development and derived the top 10 trends of the cloud market for 2015.

1. Cloud Ecosystems and Marketplaces

This year cloud ecosystems and marketplaces are becoming more popular. The Deutsche Telekom Business Marketplace, the Deutsche Börse Cloud Exchange or the German Business have been available for some time. Service providers offer marketplaces to increase the reach of their services. However, the buyer side is still not keen. This has several reasons; the lack of integration and low demand are just two of them. As the cloud market matures, demand could rise. Cloud marketplaces are part of the logical development of the cloud, giving IT buyers more convenient access to categorized IT resources. Distributors have also understood the importance of cloud marketplaces and are moving to offer their own marketplaces in order to preserve their attractiveness in the channel. Vendors like the startup Basaas offer a „Business App Store as a Service“ concept, which can be used to create multi-tenant public cloud marketplaces or internal business app stores.

Integration is a technical challenge and is not easy to solve. However, with a powerful ecosystem of providers under the lead of a neutral marketplace operator, the necessary strengths could be bundled to ensure a holistic integration of services and take the biggest burden off the buyer side.

2. Secret Winner: Consultants and Integrators

Complexity. IaaS providers keep it a secret. However, for some customers this has already ended in a catastrophe. IaaS looks quite simple on paper. But starting a virtual machine with an application on it has basically nothing to do with a cloud architecture. In order to run a scalable and failure-resistant IT infrastructure in the cloud, more than administration know-how is necessary. Developer skills and an understanding of the cloud concept are basic requirements. Modern IT infrastructures for cloud services are developed like an application. For this purpose, providers like Amazon Web Services, Microsoft Azure, Rackspace or HP offer building blocks of higher-value services to achieve exactly this scalability and failure-resistance, since this is the responsibility of the customer and not of the cloud provider. ProfitBricks provides “Live Vertical Scaling”, based on a scale-up principle, which can be used without special cloud developer skills.
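As an example of what this customer-side responsibility for scalability and failure-resistance looks like in code, the sketch below creates a small auto scaling group with boto3 (using the current API; the group name, launch configuration and zones are placeholders):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-central-1")

# Assumed: a launch configuration "web-lc" referencing a prepared AMI already exists.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",    # hypothetical name
    LaunchConfigurationName="web-lc",
    MinSize=2,                         # keep two instances to survive a single failure
    MaxSize=10,                        # allow scale-out for peak loads
    AvailabilityZones=["eu-central-1a", "eu-central-1b"],
)

# Scale out and back in automatically based on average CPU utilization.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0,
    },
)
```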

The challenge for a majority of CIOs is that their IT teams lack the necessary cloud skills or that not enough has been invested in advanced training yet. However, this means that a big market (2.9 billion EUR in 2015) is opening up for consultants and system integrators. But classical system houses and managed services providers can also benefit from this knowledge gap if they are able to transform themselves fast enough. The direkt gruppe and TecRacer are two cloud system integrators from Germany that have impressively shown that they are able to handle public cloud projects.

3. Multi Cloud as a long runner

The multi cloud is an abiding theme; after all, its importance has been proclaimed for years. However, with the growing demand for cloud services on the buyer side and the increasing maturity on the vendor side, the scope for cloud-spanning deployments is constantly increasing. This is not only due to offerings like Equinix Cloud Exchange, which enables direct connections between several cloud providers and a company’s own enterprise IT infrastructure. Based on APIs, a central portal can be developed that offers IT buyers consistent access to IT resources of various providers.

Within the multi cloud context, OpenStack and technologies like SaltStack and Docker play a central role. The worldwide adoption of OpenStack rises continuously. Already 46 percent of all deployments are in production environments – of these, 45 percent are on-premise private clouds. In Germany, too, almost one third (29.8 percent) of companies using the cloud are already actively dealing with OpenStack. In parallel with the increasing importance of OpenStack, its relevance for cloud sourcing in the context of multi cloud infrastructure is growing, ensuring interoperability between a variety of cloud providers.

To support DevOps strategies and to avoid writing comprehensive Puppet or Chef scripts, SaltStack is used more and more often for the configuration management of large and distributed cloud infrastructures. In this context the Docker container wave will continue to grow in 2015. By December 2014 the Docker Engine had already been downloaded 102.5 million times. This is a growth of 18.8 percent within a year. In addition, the team announced multi-container extensions to support the orchestration of applications across several infrastructures. In the context of container technologies it is worth taking a look at Giant Swarm from Germany, which has developed a microservice infrastructure based on containers.

4. Public Clouds are on the rise

In the past, public cloud providers faced a barrage of criticism. In 2015, however, they will see a distinct increase in new customers. One reason is the groundwork they did in the recent past to meet the requirements of enterprise customers as well. Another reason is the strategic change of managed cloud providers and of system houses that have already transformed towards the cloud and operate their own data centers.

Public cloud players like Amazon AWS or Microsoft Azure have made prices spiral downwards massively. Of course the customer side has recognized this as well. Local managed cloud providers (MCP) are increasingly drawn into price discussions with their customers – an unpleasant situation. Virtual machines and storage are sold at a competitive price that a small provider is not able to match.

Strategies are changing such that – in certain situations – MCPs fall back on public cloud infrastructure to offer their customers lower costs at the infrastructure level. To do so, they have to create partnerships and build knowledge of the respective cloud infrastructure in order to run and maintain the virtual infrastructure and not only offer consulting services. At the same time they also benefit from new functions provided by the public cloud providers and from their global reach. A provider with a data center in a local market can only serve exactly this market. However, customers want to enter new target markets at short notice without big additional effort. The data centers of public cloud providers are represented in many regions worldwide and offer exactly these capabilities. MCPs still keep their local data centers to offer customers services tied to local requirements (e.g. legal matters). In this context hybrid scenarios play a major role, with the multi cloud taking priority.

5. Cloud Connectivity and Performance

The continuous shift of mission-critical data, applications and processes to external cloud infrastructure means that CIOs not only have to rethink their operational IT concepts (public, private, hybrid) but also have to change their network architectures and connection strategies. A crucial competitive advantage is the selection of the right location. Modern business applications are already delivered from cloud infrastructures. From today’s CIO point of view, a stable and performant connection to systems and services is essential. This trend will keep strengthening. Direct connections like AWS Direct Connect or Microsoft ExpressRoute make this easier to handle: direct network connections are established between a public cloud provider and an enterprise IT infrastructure in the data center of a colocation provider.

The ever increasing data traffic requires reliable and, in particular, stable connectivity in order to access data and information at all times. This becomes even more important when business-critical processes and applications are outsourced to cloud infrastructure. Access has to be ensured at any time and with low latency; otherwise substantial financial and reputational damage can result. The quality of a cloud service significantly depends on its connectivity and backend performance. An essential characteristic here is the connectivity of the data center, which guarantees the customer stable and reliable access to the cloud services at all times. Data centers are the logistics centers of the future and, as logistical data hubs, are experiencing their heyday.

6. Mobile Backend Development

The digital transformation is affecting every part of our lives. Around 95 percent of all smartphone applications are connected to services running on servers that are distributed across data centers worldwide. Without a direct and mostly constant connection these apps are not functional.

This means that modern mobile applications no longer work without a stable and globally oriented backend infrastructure. The same applies to services in the Internet of Things (IoT). A mix of distributed intelligence on the device and in the backend infrastructure ensures holistic communication. In addition, the backend infrastructure ensures the connection among all devices.

A public cloud infrastructure provides the ideal foundation for this. On the one hand, the leading providers offer global reach. On the other hand, they already have ready-made micro services in their portfolios, which represent specific functionalities that don’t need to be developed from scratch. These services can be used within the company’s own backend service. Other providers of mobile-backend-as-a-service (MBaaS) or IoT platforms have specialized in the enablement of mobile backend or IoT services. Examples are Apinauten, Parse (now part of Facebook) and Kinvey.

7. Cloud goes Vertical

In the first phase of the cloud, providers of software-as-a-service (SaaS) applications concentrated on general, i.e. horizontal, solutions like productivity suites or CRM systems. The needs of individual industries weren’t considered very much. One reason was the lack of cloud-ready ISVs (Independent Software Vendors), which hadn’t found their way into the cloud yet.

With the ongoing cloud transformation of ISVs and the continuous entrance of new vendors, the SaaS market is growing and with it the offering of vertical solutions tailored to specific industries. Examples are Opower and Enercast in the area of smart energy, Hope Cloud for the hotel industry and trecker.com in the agricultural sector.

One example of the importance of verticals is Salesforce. Besides investments in further horizontal offerings, Salesforce is trying to make its platform more attractive specifically for individual industries like the financial sector or the automotive industry.

8. The Channel has to step on the gas

The majority of the channel has recognized that it needs to demonstrate its abilities in cloud times. First of all, the big distributors started initiatives to preserve or even increase their attractiveness on the customer side (resellers such as system houses). 2015 can mark a watershed – in any case, a practical test.

The success of the distributors is directly connected with the successful cloud transformation of the system houses. Many system houses are not able to make this journey on their own and need help from the distributors. Different cloud scenarios will show which services are still purchased from the distributors and which services are sourced directly from the cloud providers.

The whole channel needs to rethink itself and its business model and align it to the cloud. Apart from hardware and software to build private or managed private clouds, access to public clouds via self-service is a cakewalk. For some target groups the system house, and thus the distributor, won’t have any relevance anymore. Other customers still need help on their way to the cloud. If the channel is not able to help, someone else will.

9. Price vs. Feature War

In the past, price reductions for virtual machines (VMs) and storage hit the headlines. Amazon AWS went first, and shortly afterwards Microsoft and Google followed. Microsoft even announced it would match each price reduction by Amazon.

It seems that the providers have reached their economic limits and that the price war is over for now. Instead, features and new services are coming to the fore to ensure differentiation. These include more powerful VMs or the expansion of the portfolio of value-added services. For a good reason – pure infrastructure like VMs or storage is no longer a differentiator in the IaaS market. Vertical services are the future of IaaS in the cloud.

Although the IaaS market is only now picking up its real pace, infrastructure is a commodity and doesn’t have much potential for innovation. We have reached a point in the cloud where it is about using cloud infrastructure to create services on top of it. Thus, besides virtual compute and storage, enterprises and developers need value-added services like Amazon SWF or Azure Machine Learning in order to run their own offerings with speed, scale and failure-resistance – and to use them for mobile and IoT products.

10. Cloud Security

The attacks on JP Morgan, Xbox and Sony last year have shown that every company is a potential target for cyber attacks. Whether for fun (“lulz”), financial interests or political motives, the threat potential increases constantly. It shouldn’t be neglected that mostly the big cases appear in the media. Attacks on SMEs go unmentioned or, worse, the victims didn’t notice them at all or only too late.

One doesn’t need to be on the Sony executive board to realize that a successful attack is a big threat! Whether it’s about reputation because of stolen customer data or about sensitive company information – digital data has become a precious good that needs to be protected. It is just a matter of time until a company gets into the crosshairs of hackers, politically motivated extremists or intelligence agencies. It doesn’t have to come to that in 2015. However, the ongoing digitalization leads to a higher degree of connectivity, which attackers exploit to plan their attacks.

Compared to standard security solutions like firewalls or email security, Crisp Research expects more investments in higher-value security services like data leak prevention (DLP) in 2015. In addition, CISOs have to develop strategies to avert DDoS attacks.

Categories
Cloud Computing

Study: OpenStack in the Enterprise (DACH Market)

OpenStack is making big headlines these days. The open source cloud management framework is no longer an infant technology suited only to proofs of concept for service providers, academic institutions and other “early users”.

Over the last 12 months OpenStack has gained serious momentum among CTOs and experienced cloud architects. But what about the typical corporate CIO? What are the key use cases, potential benefits and main challenges when it comes to implementing OpenStack within a complex enterprise IT environment? How far have CIOs and data center managers in the DACH region pushed their evaluations and proofs of concept around the new cloud technology? Where can we find the first real-world implementations of OpenStack in the German-speaking market?

This survey presents the first empirical findings and answers to the questions raised above regarding the enterprise adoption of OpenStack. In cooperation with HP Germany, Crisp Research conducted 716 interviews with CIOs from the DACH region across various industries. The interviews were collected and analyzed between July and October 2014.

If you are interested in the executive version of the OpenStack DACH study get in touch with me via my Crisp Research contact details.

Categories
Strategie

Study: Digital Business Readiness

“Digital” – the term is on everyone’s lips, whether in politics or among business decision makers. And it is both a threat and a promise. It offers the economy and society great opportunities for a long-term realignment and readjustment of existing economic models. Conversely, digital transformation also has the potential to quickly turn countries and economic regions that have so far been the winners of the post-industrial service society into losers.

Recognizing the depth of the changes ahead in the coming years and drawing the right conclusions from it is therefore critical to success. Society and the economy must be prepared and, where possible, actively help shape this transformation. Digital transformation is not a request concert, but a development that companies can only meet actively and positively if they want to remain among the world’s best in the future.

This raises the question of how well prepared German companies consider themselves with regard to the challenges of digital transformation. Modern IT infrastructures, external data center capacity and partners play a central role here.

Against this background, Crisp Research AG conducted this study on behalf of Dimension Data to paint a picture of how German companies assess the current state of their digital transformation. The empirical results provide an overview of their self-assessment, concrete planning, challenges and future investment plans associated with digital change.

The study “Digital Business Readiness – Wie deutsche Unternehmen die Digitale Transformation angehen” can be downloaded free of charge.

Categories
Cloud Computing

Cloud Market 2015: The Hunger Games are over.

Last year, the cloud market gave us great pleasure with a lot of thrilling news. Many new data centers and innovative services show that the topic has become established in the market. The hunger games are finally over. Admittedly, the German market has developed rather slowly compared to international standards. However, an adoption rate of almost 75 percent shows a positive trend – underpinned by two credible reasons: the providers are finally addressing the needs and requirements of their potential customers, and at the same time more and more users are jumping on the cloud bandwagon.

Cloud providers at a glance

In 2015, cloud providers will enjoy a large clientele in Germany. To this end, the majority of providers have strategically positioned themselves with a German data center to enable local customers to physically store their data in the country and to fulfill the requirements of the German Federal Data Protection Act (BDSG).

  • Amazon Web Services made the biggest step of all US American providers. A region dedicated to the German market is a clear commitment by the IaaS market leader to Germany. At the same time Amazon has strategically positioned itself in central Europe and increased its attractiveness for customers in adjoining countries. From a technological point of view (reduction of latency etc.) this is not a negligible step. Services especially for enterprises (AWS Directory Service, AWS CloudTrail, AWS Config, AWS Key Management Service, AWS CloudHSM) show that Amazon has developed from a startup enabler into a real alternative for enterprises. Amazon has underpinned this with significant German enterprise reference customers (such as Talanx, Kärcher and Software AG). However, Amazon still lacks powerful hybrid cloud functionality at the application level and needs to improve here. After all, enterprises won’t go for a pure public cloud approach in the future.
  • Microsoft’s “Cloud-First” strategy is paying off. In particular, the introduction of Azure IaaS resources was an important step. Besides an existing customer base in Germany, Microsoft has the advantage of supporting all cloud operating models. Alongside the Azure public cloud, hosted models (Cloud OS Partner Network, Azure Pack) as well as private cloud solutions (Windows Server, System Center, Azure Pack) are available, which customers can use to build hybrid scenarios. In addition, rumors from 2013 are growing stronger that Microsoft will open a German data center in 2015 to offer cloud services under German law.
  • ProfitBricks, one of the few IaaS public cloud providers originally from Germany, is growing and thriving. Besides a new data center location in Frankfurt, a number of new hires in 2014 show that the startup is developing well. An update of its Data Center Designer (a WYSIWYG editor) underlines its technological progress. Compared to other IaaS providers like Amazon or Microsoft, a portfolio of value-added services is still missing. This has to be compensated by a convincing and powerful partner network.
  • Last year Rackspace started to refocus from public IaaS to managed cloud services and to concentrate again on one of its strengths – its “Fanatical Support”. When it comes to trends like OpenStack or DevOps, Rackspace keeps pressing forward. After all, no company can afford to ignore these technologies and services in the future if it wants to give its developers more freedom to create new digital applications faster and more efficiently.
  • At the end of 2014, IBM announced an official SoftLayer data center in Frankfurt. As part of its global data center strategy, this happened in cooperation with colocation provider Equinix. The SoftLayer cloud offers the benefit of providing bare metal resources (physical servers) as well as virtual machines.

Even if the market and the providers have made good progress, Crisp Research has identified the following challenges that still need to be addressed (excerpt):

  • The importance of hybrid capabilities and interfaces (APIs) for multi cloud approaches is growing.
  • Standards like OpenStack and OpenDaylight must be supported.
  • Advanced functionalities for enterprise IT (end-to-end security, governance, compliance, etc.) are needed.
  • There is a big need for more cloud connectivity based on cloud hubs within colocation data centers.
  • Price transparency has to improve significantly.
  • Ease of use needs to have a high priority.
  • Enablement services for the Internet of Things (IoT) are needed. Only providers with an appropriate service portfolio will be in the vanguard in the long term.

Depending on the provider’s portfolio, the requirements above are fulfilled partly or predominantly. However, what all providers have in common is the question of how to address their target groups. Historically, direct sales are difficult in the German IT market. New potential customers are mainly addressed with the aid of partners or distributors.

View of the users

More than 74 percent of German companies are planning, implementing or using cloud services and technologies in their production environments. This is a strong signal that cloud computing has finally arrived in Germany. For 19 percent of German IT decision makers, cloud computing is an integral part of their agenda and operations. Another 56 percent of companies are planning, implementing or using cloud as part of first projects or workloads. Here, hybrid and multi cloud infrastructures play a central role in ensuring integration at data, application and process level.

This raises the question: why now – after more than 10 years? After all, Amazon AWS started in 2006 and Salesforce was already founded in 1999. One reason is the fundamentally slow adoption of new technologies, which arises from caution and German efficiency. The majority of German companies usually wait until new technologies have settled and their successful use has been proven. Traditionally, early adopters are very few in Germany.

But this is not the main reason. The cloud market first had to develop. When Salesforce and later Amazon AWS entered the market, few services were available that fulfilled the requirements or were an equal substitute for existing on-premise solutions. For this reason IT decision makers still relied on well-tried solutions at that time. In addition, there was no need to change anything, which was also down to the fact that the benefits of the cloud weren’t clear – or the providers didn’t explain them well enough. Another reason is the fact that sustainable changes in the IT industry happen in decades and not in a couple of years or months. For all those IT decision makers who opted for classical IT solutions during the first two cloud phases, the amortization periods and IT lifecycles are ending now. Those who have to renew hardware and software today have cloud services on their shortlist for their IT environments.

Essential reasons that deferred the cloud transformation are (excerpt):

  • Uncertainty due to misinformation from many providers that sold virtualization as cloud.
  • Legal topics had to be clarified.
  • The providers had to build trust.
  • Cloud knowledge was few and far between. Lack of knowledge, complexity and integration problems are still the core issues.
  • Applications and systems have to be developed in and for the cloud from scratch.
  • There were no competitive cloud services from German providers.
  • There were no data centers in Germany to fulfill the German Federal Data Protection Act (BDSG) and other laws.

German companies are halfway through their cloud transformation process. Meanwhile they are looking at multi cloud environments based on infrastructure, platforms and services from various providers. This part of the Digital Infrastructure Fabric (DIF) is the foundation of their individual digital strategy, on which new business models and digital products, e.g. for the Internet of Things, can be operated.

Categories
Cloud Computing

The way to the holy IaaS grail

In 2014, cloud computing finally arrived in Germany. A current study by Crisp Research among 716 IT decision makers shows a representative picture of cloud adoption in the DACH market. For 19 percent of the sample, cloud computing is a regular part of the IT agenda and production environments. 56 percent of the companies are already planning and implementing cloud services and technologies and using them for first projects and workloads. Crisp Research forecasts that German companies will spend around 10.9 billion EUR on cloud services, technologies as well as integration and consulting in 2015. Consequently, more and more companies are evaluating the use of infrastructure-as-a-service (IaaS). For German IT decision makers this raises the question of which selection criteria they must consider. Which deployment model is the right one? Is a US provider insecure per se? Is a German provider mandatory? Which possibilities remain after Snowden and co.?

Capacity Planning, Local or Global, Service?

Before using IaaS there is the fundamental question of how and for which purpose the cloud infrastructure will be used. In this context, capacity planning plays a decisive role. In most cases companies know their applications and workloads and thus can estimate how scalable the infrastructure must be regarding performance and availability. However, scalability must also be considered from a global point of view. If a company focuses mainly on the German or DACH market, a local provider with a data center in Germany is enough to serve its customers. If the company wants to expand into global markets in the medium term, a provider with a global footprint that also operates data centers in the target markets is recommended. The following questions arise:

  • What is the purpose of using IaaS?
  • Which capacities are necessary for the workloads?
  • What kind of reach is required? Local or global?

When talking about scalability, the term “hyper scaler” is often used. These are providers whose cloud infrastructure can theoretically scale endlessly. Among them are Amazon Web Services, Microsoft Azure and Google. The term “endlessly” should be treated with caution. Even the big players hit the wall at some point. In the end the virtual infrastructure is based on physical systems, and hardware doesn’t scale.

Companies with a global strategy to grow into their target markets in the medium term should concentrate on an internationally operating provider. Besides the above-named Amazon AWS, Google and Microsoft, also HP, IBM (SoftLayer) or Rackspace come into play, all of which operate public or managed cloud offerings. Those who bet on a “global scaler” from the beginning gain an advantage later on: the virtual infrastructure and the applications and workloads running on top of it can be deployed more easily to accelerate time to market.

Cloud connectivity (low latency, high throughput and availability) should also not be underestimated. Is it enough that the provider and its data centers can serve only the German market, or does a worldwide distributed infrastructure of interconnected data centers exist?

Two more parameters are the cloud model and the related type of service. Furthermore, hybrid and multi cloud scenarios should be considered. The following questions arise:

  • Which cloud model should be considered?
  • Self-service or managed service?
  • Hybrid and multi cloud?

Current offerings can be divided into public, hosted and managed private clouds. Public clouds are built on a shared infrastructure and are typically operated by large service providers. Customers share the same physical infrastructure and are logically separated by a virtualized security infrastructure. Web applications are an ideal use case for public clouds, since standardized infrastructure and services are sufficient. A hosted cloud model transfers the ideas of the public cloud into a hosted version administered by a local provider. All customers are located on the same physical infrastructure and are virtually separated from each other. In most cases the cloud provider operates a local data center. A managed private cloud is an advanced version of a hosted cloud. It is especially attractive to companies that want to avoid the public cloud model (shared infrastructure, multi-tenancy) but lack the financial resources and the knowledge to run a cloud in their own IT infrastructure. In this case, the provider operates an exclusive and dedicated area on its physical infrastructure for the customer. The customer can use the managed private cloud exactly like a public cloud, but on a non-shared infrastructure located in the provider’s data center. In addition, the provider offers consulting services to help the customer transfer applications and systems to the cloud or develop them from scratch.

The hyper scalers or global scalers named above are mainly public cloud providers. In a self-service model the customers are responsible for building and operating the virtual infrastructure and the applications running on top of it. In particular, cloud players like Amazon AWS, Microsoft Azure and Google GCE offer their infrastructure services based on a public cloud model and self-service. Partner networks help the customers to build and run the virtual infrastructure, applications and workloads. Public cloud IaaS offerings with self-service are very limited in Germany. The only providers are ProfitBricks and JiffyBox by domainFactory. However, JiffyBox’s focus is on web hosting rather than enterprise workloads. CloudSigma from Switzerland should be named as a native provider in DACH. This German reality is also reflected in the providers’ strategies: the very first German public IaaS provider, ScaleUp Technologies (2009), completely renewed its business model by focusing on managed hosting plus consulting services.

Consulting is the keyword in Germany. This is the biggest differentiator from international markets. German companies prefer hosted and managed cloud environments including extensive service and value-added services. In this area providers like T-Systems, Dimension Data, Cancom, Pironet NDH or Claranet are present. HP has also recognized this trend and offers consulting services in addition to its OpenStack-based HP Helion cloud offering.

Hybrid and multi cloud environments shouldn’t be neglected in the future. A hybrid cloud connects a private cloud with the resources of a public cloud. In this case, a company operates its own cloud and uses the scalability and economies of scale of a public cloud provider to obtain further resources like compute, storage or other services on demand. A multi cloud concept extends the hybrid cloud idea by the number of clouds that are connected. More precisely, it is about n clouds that are connected, integrated or used in any form. For example, cloud infrastructures are connected so that applications can use several infrastructures or services in parallel, depending on capacity utilization or current prices. Even the distributed or parallel storage of data is possible in order to ensure its availability. It is not necessary for a company to connect every cloud it uses to speak of a multi cloud scenario. If more than two SaaS applications are part of the cloud environment, it is basically already a multi cloud setup.

At the application level, Amazon AWS currently doesn’t offer extensive hybrid cloud functionality but is expanding continuously. Google doesn’t offer any hybrid cloud capabilities. Thanks to their public and private cloud solutions, Microsoft and HP are able to offer hybrid cloud scenarios on a global scale. In addition, Microsoft has the Cloud OS Partner Network, which enables companies to build Microsoft-based hybrid clouds together with a hosting provider. As a German provider, T-Systems has the capabilities to build hybrid clouds at a local level as well as on a global scale. Local providers like Pironet NDH offer hybrid capabilities on German ground.

Legends: Data Privacy and Data Security

Since Edward Snowden and the NSA scandal, many legends have been created around data privacy and data security. Providers, especially from Germany, advertise with higher security and protection against espionage and other attacks when the data is stored in a German data center. The confusion: when it comes to security, two different terms are frequently mixed up – data security and data privacy.

Data security means the implementation of all technical and organizational measures in order to ensure the confidentiality, availability and integrity of all IT systems. Public cloud providers offer far better security than a small business is able to achieve. This is due to the investments that cloud providers make to build and maintain their cloud infrastructures. In addition, they employ staff with the right mix of skills and have created appropriate organizational structures. For this they invest billions of US dollars annually. There are only few companies outside the IT industry that are able to achieve the same level of IT security.

Data privacy is about the protection of personal rights and privacy during data processing. This topic causes the biggest headaches for most companies, because the legislative authorities don’t make it easy. Ultimately a customer has to audit the cloud provider for compliance with the local federal data protection act. In this case, it is advisable to rely on the expert report of a public auditor, since it is time- and resource-consuming for a public cloud provider to be audited by each of its customers. Data privacy is a very important topic; after all, it concerns sensitive data. However, it is essentially a matter of legal interest that has to be ensured by data security measures.

A German data center as protection against espionage by friendly countries is and will remain a myth. Where there’s a will, there’s a way. When an attacker wants to get at the data, it is only a question of the criminal energy he is willing to apply and the funds he is able to invest. If the technical hurdles are too high, there is still the human factor as an option – and a human is generally “purchasable”.

However, US American cloud players have recognized the concerns of German companies and have announced or started to offer their services from German data centers – among others Salesforce (in partnership with T-Systems), VMware, Oracle and Amazon Web Services. Nevertheless, a German data center has nothing to do with higher data security. It merely addresses

  • The technical challenges of cloud connectivity (low latency, high throughput and availability).
  • The regulatory framework of the German data privacy level.

Technical Challenge

During the general technical assessment of an IaaS provider the following characteristics should be considered:

  • Scale-up or scale-out infrastructure
  • Container support for better portability
  • OpenStack compatibility for hybrid and multi cloud scenarios

Scalability is the ability to increase the overall performance of a system by adding more resources, be it complete compute units or granular resources like CPU or RAM. Using this approach, the system performance can grow linearly with increasing demand, so that unexpected load peaks can be absorbed and the system doesn’t break down. Scalability is divided into scale-up and scale-out. Scale-out (horizontal scalability) increases the system performance by adding complete compute units (virtual machines) to the overall system. In contrast, scale-up (vertical scalability) increases the system performance by adding further granular resources, such as storage, CPU or RAM, to an existing unit. Taking a closer look at the top cloud applications, these are mainly developments by startups, uncritical workloads or developments from scratch. Attention should be paid to the scale-out concept, which makes it complicated for enterprises to move their existing applications and systems into the cloud. At the end of the day, the customer has to develop everything from scratch, since a system that wasn’t built as a distributed system won’t work as it should on a distributed scale-out cloud infrastructure.
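A minimal sketch of both approaches against the EC2 API with boto3 (the AMI ID, instance ID and instance types are placeholders, and configured credentials are assumed):

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Scale-out (horizontal): add two more complete compute units to the pool.
ec2.run_instances(
    ImageId="ami-12345678",      # hypothetical, prepared application image
    InstanceType="t3.medium",
    MinCount=2,
    MaxCount=2,
)

# Scale-up (vertical): give one existing unit more CPU/RAM by changing its type.
instance_id = "i-0123456789abcdef0"  # hypothetical
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.modify_instance_attribute(InstanceId=instance_id, InstanceType={"Value": "m5.2xlarge"})
ec2.start_instances(InstanceIds=[instance_id])
```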

IT decision makers should bear in mind that their IT architects will detach from the underlying infrastructure in the future in order to move applications and workloads across different providers without borders. Container technologies like Docker make this possible. From the IT decision maker’s point of view, the selection of a provider that supports e.g. Docker is thus a strategic tool to optimize modern application deployments. Docker helps to ensure the portability of an application, increasing availability and decreasing the overall risk.
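As a small illustration of that portability argument, the following sketch uses the Python Docker SDK (the `docker` package) to start the same packaged application image on any host – on-premise or at a provider – that exposes a Docker engine. The host URLs and image name are assumptions, and the unencrypted TCP endpoints are used purely for brevity.

```python
import docker

# The same application image can be started on any Docker-capable host,
# regardless of which provider operates the underlying infrastructure.
HOSTS = {
    "on-premise": "tcp://10.0.0.5:2375",         # hypothetical internal Docker host
    "public-cloud": "tcp://203.0.113.10:2375",   # hypothetical VM at a cloud provider
}

for name, url in HOSTS.items():
    client = docker.DockerClient(base_url=url)
    container = client.containers.run("example/web-app:1.0", detach=True, ports={"8000/tcp": 80})
    print(f"{name}: started container {container.short_id}")
```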

Hybrid and multi cloud scenarios are not just a trend but reflect reality. Cloud providers should act in the interest of their customers and, instead of using proprietary technology, also rely on open source technologies or a de-facto standard like OpenStack. In this way they enable interoperability between cloud service providers and create the prerequisites for a comprehensive ecosystem in which users get better comparability as well as the capabilities to build and manage truly multi cloud environments. This is the groundwork that empowers IT buyers to benefit from the strengths of individual providers and the best offerings on the market. Open approaches like OpenStack foster the future ability of IT buyers to act across provider and data center borders. This makes OpenStack an important cloud-sourcing driver.

Each way is an individual path

Depending on the requirements, the way to the holy IaaS grail can become very rocky. In particular, enterprise workloads are more difficult to handle than novel web applications. Regardless of this, it must be considered that applications which are supposed to run on IaaS often have to be developed from scratch. This depends on the particular provider, but in most cases it is necessary in order to use the provider’s specific characteristics. The following points can help in mastering the individual path:

  • Know and understand your own applications and workloads
  • Perform a data classification
  • Don’t confuse data privacy with data security
  • Evaluate the cloud model: self-service or managed service
  • Check hybrid and multi cloud scenarios
  • Estimate the required local and global reach
  • Don’t underestimate cloud connectivity
  • Evaluate container technologies for technological freedom of applications
  • Consider OpenStack compatibility

Categories
Cloud Computing

Amazon WorkMail: Amazon AWS is moving up the cloud stack

For a long time the Amazon Web Services portfolio was the place to go for developers and startups, which used the public cloud infrastructure to launch test balloons or to try to make their dream of becoming the next billion-dollar company come true. Over the years Amazon understood that startups don’t have the deepest pockets and that the real money comes from established companies. New SaaS applications have been released to address enterprises that still haven’t found their way to the Amazon cloud. The next coup is Amazon WorkMail, a managed e-mail and calendar service.

Overview: Amazon WorkMail

Amazon WorkMail is a fully managed e-mail and calendar service like Google Apps for Work or Microsoft Office 365/Microsoft Exchange. This means that a customer doesn’t have to administer the e-mail infrastructure including the necessary servers and software, and only needs to take responsibility for managing the users, email addresses and security policies at user level.
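To make “managing only the users” concrete, here is a short sketch using boto3’s WorkMail client. The organization ID, user details and region are placeholders, and this management API was published after WorkMail’s initial launch, so treat it as illustrative rather than part of the original announcement:

```python
import boto3

workmail = boto3.client("workmail", region_name="eu-west-1")

ORG_ID = "m-1234567890abcdef"   # hypothetical WorkMail organization

# Create a mailbox user and activate it with an email address.
user = workmail.create_user(
    OrganizationId=ORG_ID,
    Name="jdoe",
    DisplayName="Jane Doe",
    Password="S3cure-Example!",   # placeholder only
)
workmail.register_to_work_mail(
    OrganizationId=ORG_ID,
    EntityId=user["UserId"],
    Email="jane.doe@example.com",
)

# List all users of the organization.
for u in workmail.list_users(OrganizationId=ORG_ID)["Users"]:
    print(u["Name"], u.get("State"))
```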

Amazon WorkMail offers access via a web interface and supports Outlook clients and mobile devices via the Exchange ActiveSync protocol. The administration of all users is handled with the recently released AWS Directory Service.

Amazon WorkMail is integrated with several existing AWS services like AWS Directory Service, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS) and Amazon Simple Email Service (SES). The integration with Amazon WorkDocs (formerly Amazon Zocalo) allows sending and sharing documents within an email workflow.

E-Mail as a starter drug

E-mail. A no-brainer? You’d think so. However, given that IBM and Microsoft recently invested in this topic, e-mail apparently remains a perennial theme. E-mail belongs to the category of “low hanging fruits”, i.e. those products with which it is possible to achieve success quickly without much effort.

In the case of Amazon WorkMail the portfolio extension is a logical step. The development of the service catalogue with services like Amazon WorkSpaces (desktop-as-a-service) and Amazon WorkDocs (file sync and share) specifically targets enterprise customers for whom the Amazon cloud hasn’t been a point of contact so far. This has different reasons. The main reason is that the Amazon cloud infrastructure is a programmable building block and primarily attractive for those who want to develop their own web-based applications on it. With the help of “value-added services” or “enablement services”, additional value can be created out of the pure infrastructure resources like Amazon EC2 (compute) or Amazon S3 (object storage), because at the end of the day an EC2 instance is just a virtual machine and offers no additional value of its own accord.

Most companies that want to deal with low complexity and little effort at the infrastructure level and achieve success in the short run are overstrained by the self-service mode of the Amazon cloud. Most of them lack the necessary cloud knowledge and developer skills to ensure scalability and high availability of the virtual infrastructure. The AWS service offering has meanwhile become versatile but still addresses real infrastructure professionals and developers.

The continuous portfolio development is moving AWS up the cloud stack. After infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS), WorkSpaces, WorkDocs and WorkMail make sure that Amazon has finally arrived in the software-as-a-service (SaaS) market. Amazon has started to use its own cloud infrastructure to offer higher-value services. Oracle is doing exactly the opposite: after the database giant started at the SaaS layer, it is now moving down the cloud stack to IaaS.

E-mail is still an important business process in the enterprise world. Thus it is just a logical step for Amazon to be part of the game as well. At the same time WorkMail can act as a starter drug for potential new customers to explore the Amazon cloud and discover other benefits. Furthermore, the partner network of system integrators can use Amazon WorkMail to offer their customers a managed e-mail solution. How successful Amazon WorkMail will be remains to be seen. Google Apps, Microsoft Hosted Exchange, Zoho or Mailbox.org (powered by Open-Xchange) are only some of the mature solutions on the market.

In the end one important point must be considered: IaaS the Amazon way is the ideal route to developing own web applications and services. Managed cloud services and SaaS help to adopt new technologies in the short run. Amazon WorkSpaces, WorkDocs and WorkMail belong to the latter category.