Categories
Open Source

Open Source and OpenStack: Complexity and lack of knowledge raise risks

Open source solutions offer a cost advantage thanks to free or low-cost licenses. However, there are associated costs that should not be neglected. On the one hand, specialized knowledge is necessary to build and develop an open source cloud infrastructure. On the other hand, administrators have to ensure proper infrastructure operations, a task that requires extensive skills in administering and maintaining the solution. In most cases, these skills are bought in as external expertise, for example advisory or developer resources.

Furthermore, users who decide to build their cloud on pure open source software depend on the open source project itself for support. This can be tough and painful, since support is generally provided via forums, chats, Q&A systems and bug trackers. In addition, users are expected to participate actively and contribute to the project, a behavior generally not required in the commercial software world. Fortunately, commercially oriented vendors of open source cloud software have identified this service gap and provide support through professional services as part of special license plans for enterprises.

Despite a high level of flexibility and openness as well as lower license costs, it is important to understand that OpenStack is not just a piece of software. Instead, the open source cloud management solution is a collection of specific components, among others for compute (Nova), storage (Swift, Cinder) and networking (Neutron) capabilities, which must be tightly integrated to build a complete and powerful OpenStack based cloud environment. The project gains new functionality with each release. In addition, a community of developers and vendors contributes further add-ons and source code for maintenance and other improvements. The use of OpenStack can therefore lead to unpredictable complexity and an overall increase in risk.
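
To get a feel for this component character, consider that even a trivial inventory query already touches four services. Below is a minimal sketch using the openstacksdk Python client; it assumes a configured clouds.yaml entry named "mycloud" (a placeholder) and is purely illustrative:

    import openstack

    # One connection object, but four independently operated services behind it.
    conn = openstack.connect(cloud="mycloud")  # "mycloud" is a placeholder entry

    for server in conn.compute.servers():             # Nova
        print("server:", server.name)
    for volume in conn.block_storage.volumes():       # Cinder
        print("volume:", volume.name)
    for network in conn.network.networks():           # Neutron
        print("network:", network.name)
    for container in conn.object_store.containers():  # Swift
        print("container:", container.name)

Each of these services is installed, configured and upgraded separately, which is exactly where the integration effort described above originates.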

IT organizations that attempt to deal with this complexity on their own, integrating all components from scratch and keeping them up to date at all times, run the risk of creating their own unmanageable cloud solution instead of following an industry standard. Precisely customizing OpenStack to individual company requirements can easily lead to an OpenStack environment that is incompatible with external OpenStack based cloud infrastructure. Connecting internal and external cloud infrastructure in a hybrid scenario then becomes quite tricky.

The increasing relevance of OpenStack as a central technology component within cloud environments leads to higher demand for specialized consultancy, integration and support services. This market is still nascent, and the big IT vendors are currently training their staff. For now, the supply of readily trained, skilled and experienced OpenStack administrators, architects and cloud service brokers is negligible. CIOs should immediately plan how to build basic OpenStack skills within their IT organizations. Even if specialized service contractors can help during the implementation and operation of OpenStack based clouds, IT architects and managers should retain overall responsibility and know what is happening. OpenStack is not an instant meal that just needs to be warmed up, but a complex technology platform composed of several individual components whose configuration resembles the preparation of a multi-course gourmet dinner. Skills and passion are in high demand.

Categories
Cloud Computing Open Source

Signature Project: SAP Monsoon

Until now, SAP has not made a strong impression in the cloud. The Business ByDesign disaster and the regular changes in the cloud unit’s leadership are only two examples that reveal the desolate situation of the German flagship corporation from Walldorf in this market segment. At the same time, the powerful user group DSAG is up in arms. The complexity of SAP’s cloud ERP and the lack of HANA business cases are among the issues. Intransparent prices and licenses, as well as a sinking appreciation for the maintenance agreements, since the support does not justify the fees, are causing uncertainty and irritation on the customer side. On top of this, a historically grown and complex IT infrastructure causes a significant efficiency bottleneck in operations. However, a promising internal cloud project might set the course for the future if it is thoroughly implemented: Monsoon.

Over the years, the internal SAP cloud landscape has evolved into a massive but very heterogeneous infrastructure boasting an army of physical and virtual machines, petabytes of RAM and petabytes of cloud storage. New functional requirements, changes in technology and a number of M&As have resulted in various technology silos, greatly complicating any migration efforts. Highly diverse technology approaches and a mix of VMware vSphere and XEN/KVM distributed over several datacenters worldwide make SAP’s infrastructure operations and maintenance increasingly complex.

Application lifecycle management is the icing on the cake: depending on the age of the respective cloud, installations and upgrades are manual, semi-automated or automated. This unstructured environment is by no means an SAP-specific problem; it is rather the reality in mid-sized to large cloud infrastructures whose growth has not been controlled over the last years.

“Monsoon” in focus – a standardized and automated cloud infrastructure stack

Even if SAP is in good company facing this challenge, the situation leads to vast disadvantages at the infrastructure, application, development and maintenance layers:

  • The time developers wait for new infrastructure resources is too long, leading to delays in the development and support process.
  • Only entire releases can be rolled out, a stumbling block that results in higher expenditures in the upgrade/update process.
  • IT operations keep their hands on the IT resources and wait for resource allocation approvals by the responsible bodies. This affects work performance and leads to poor efficiency.
  • A variety of individual solutions makes a largely standardized infrastructure landscape impossible and leads to poor scalability.
  • Technology silos distribute the necessary knowledge across too many heads and exacerbate the difficulties in collaborating on troubleshooting and optimizing the infrastructure.

SAP addresses these challenges proactively with its project “Monsoon”. Under the command of Jens Fuchs, VP Cloud Platform Services Cloud Infrastructure and Delivery, the various heterogeneous cloud environments are to be consolidated into a single homogeneous cloud infrastructure, which is to be extended to all SAP datacenters worldwide. A harmonized cloud architecture, largely uniform IaaS management and automated end-to-end application lifecycle management form the foundation of the “One Cloud”.

As a start, SAP will improve the situation of its in-house developers. A standardized infrastructure lays the foundation for a more efficient development process and streamlines future customer application deployments. For this purpose, “Monsoon” is implemented in DevOps mode: development and operations of “Monsoon” are split into two teams that work hand in hand towards a common goal. Developers get access to the required standardized IT resources (virtual machines, developer tools, services) on demand through a self-service portal. Furthermore, this mode enables the introduction of so-called continuous delivery. This means that parts of “Monsoon” are already implemented and actively used in production while other parts are still in development. After passing through development and testing, components are transferred directly into the production environment without waiting for a separate release cycle. As a result, innovation is fostered.

Open Source and OpenStack are the imperatives

The open source automation solution Chef is the cornerstone of “Monsoon’s” self-service portal, enabling SAP’s developers to deploy and automatically configure the needed infrastructure resources themselves. This also applies to self-developed applications. In general, the “Monsoon” project makes intensive use of open source technologies. In addition to the hypervisors XEN and KVM, other solutions such as the container virtualization technology Docker and the platform-as-a-service (PaaS) framework Cloud Foundry are being utilized.

The anchor of this software-defined infrastructure is OpenStack. The open source project, which can be used to build complex and massively scalable cloud computing infrastructure, supports IT architects in orchestrating and managing their cloud environments. Meanwhile, a powerful conglomerate of vendors stands behind the open source solution, trying to position OpenStack and their own OpenStack based services prominently in the market. A further wave of influence comes from the many developers and other interested parties who contribute to the project. At present, around 19,000 individuals from 144 countries participate in OpenStack, which makes the open source project an interest group and a community as well. The broad support is evidenced by a range of service providers and independent software vendors who have developed their services and solutions to be compatible with the OpenStack APIs. OpenStack has continuously evolved into an industry standard and is destined to become the de facto standard for cloud infrastructure.

At the cloud service brokerage and cloud integration layer, SAP “Monsoon” relies on OpenStack Nova (Compute), Cinder (Block Storage), Neutron (Networking) and Ironic (Bare Metal). OpenStack Ironic enables “Monsoon” to deploy physical hosts as easily as virtual machines. Among other things, the cloud service management platform OpenStack is responsible for authentication, metering, billing and orchestration. OpenStack’s infrastructure and automation API helps developers create their applications for “Monsoon” and deploy them on top. In addition, external APIs like Amazon EC2 can be used to distribute workloads over several cloud infrastructures (multi cloud).
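
To illustrate what such an automation API means in practice, the following sketch provisions a host programmatically via the openstacksdk Python client. All names (cloud entry, image, flavor, network) are hypothetical and not taken from “Monsoon”:

    import openstack

    conn = openstack.connect(cloud="dev-cloud")  # hypothetical clouds.yaml entry

    image = conn.compute.find_image("sles-12")      # hypothetical image name
    flavor = conn.compute.find_flavor("m1.large")   # a bare-metal flavor would land on Ironic
    network = conn.network.find_network("dev-net")  # hypothetical network name

    server = conn.compute.create_server(
        name="dev-box-01",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)  # blocks until Nova reports ACTIVE

With Ironic in place, the same request pattern can resolve to a physical host instead of a virtual machine – which is what makes the uniform on-demand provisioning of both possible.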

On the one hand, this open approach gives SAP the ability to build a standardized infrastructure that supports VMware vSphere alongside OpenStack. On the other hand, it also makes hybrid deployments possible for both internal and external customers. On-demand provisioning of virtual as well as physical hosts completes the hybrid approach. Compared to virtual machines, the higher performance of physical machines should not be underestimated. HANA will appreciate it.

Study: SAP is seen as a powerful OpenStack partner

SAP’s open source focus on OpenStack is nothing new. The first announcements were already made in July 2014 and show the increasing importance of open source technologies to well-established industry giants.

In the meantime, SAP’s OpenStack engagement has also registered on the user side. For the very first empirical OpenStack study in the DACH market, “OpenStack in the Enterprise”, Crisp Research asked 700+ CIOs about their interests, plans and the operational status of OpenStack.

The study concluded that cloud computing has finally arrived in Germany. For 19 percent of the sampled IT decision makers, cloud computing is an inherent part of their IT agenda and IT production environments. 56 percent of German companies are in the planning or implementation phase and are already using the cloud in first projects and workloads. In 2014, the OpenStack wave also reached Germany. Almost every second cloud user (47 percent) has heard of OpenStack, and 29 percent of cloud users are already actively dealing with the technology: 9 percent are still in the information phase, while one in five (19 percent) have started planning and implementing their OpenStack project. However, only two percent of cloud users run OpenStack in their production environments. For now, OpenStack is thus a topic for pioneers.

On the subject of the performance ability of OpenStack partners, the study showed that cloud users in favor of OpenStack appreciate SAP’s OpenStack engagement and accordingly expect a lot from SAP. Almost half of the sampled IT decision makers attribute “a very strong” performance ability to SAP, IBM and HP.

“Monsoon” – Implications for SAP and the (internal) Customer

In view of the complexity associated with “Monsoon”, the project rather deserves the name “Mammoth”. Steering a tanker like SAP into calm waters is not an easy task. Encouraging standardization within a very dynamic company will predictably raise barriers, particularly when further acquisitions that must be integrated into the existing infrastructure are pending. However, “Monsoon” seems to be on the right track to building a foundation for stable and consistent cloud infrastructure operations.

As a start, SAP will benefit organizationally from the project. The company from Walldorf promises its developers time savings of up to 80 percent for the deployment of infrastructure resources. Virtualized HANA databases, for example, can be provisioned in a fully automated way, decreasing the wait time from about one month to one hour.

In addition to the time advantage, “Monsoon” also helps developers focus on their core competency (software development). Previously, developers were involved in further processes such as configuring and provisioning the needed infrastructure; now they can deploy virtual machines, storage or load balancers independently and in a fully automated way. Besides fostering the adoption of a cost-effective and transparent pay-per-use model in which used resources are charged by the hour, standardized infrastructure building blocks also support cost optimization. For this purpose, infrastructure resources are combined into standardized building blocks and provided across SAP’s worldwide datacenters.

The continuous delivery approach introduced with “Monsoon” is well positioned to gain momentum at SAP. The “Monsoon” cloud platform is regularly extended during operations, and SAP is saying good-bye to fixed release cycles.

External customers will benefit from “Monsoon” in the mid-term, as SAP applies the experience collected in the project to its work with customers and lets it flow into future product delivery (e.g. continuous delivery).

SAP is burning too many executives in the cloud

SAP will not fail at the technical implementation of “Monsoon”. The company employs more than enough highly qualified staff equipped with the necessary knowledge. However, the ERP giant incessantly shows signs of weakness on the organizational level. This urgently raises the question of why ambitious employees are never allowed to see their visions through to the end. SAP has burned several of its senior cloud managers (Lars Dalgaard is a prime example). For some reason, committed and talented executives who try to drive something forward within the company seem to have a hard time.

SAP should start to act on its customers’ terms. This means not only thinking about the shareholders, but also following a long-term vision (greetings from Amazon’s Jeff Bezos). The “Monsoon” project could be a beginning. Hopefully, Jens Fuchs and his team will therefore be allowed to carry the ambitious goal – the internal SAP cloud transformation – successfully through to the end.

– – –
Image source: Christian heinze / pixelio.de

Categories
Cloud Computing

Plain-speaking: Data Privacy vs. Data Security – Espionage in the Cloud Age

When it comes to data privacy, data security and espionage, myths and obscurities abound. False claims circulate over and over, especially in the context of the cloud. Vendors shamelessly exploit customer insecurities by using incorrect information in their PR and marketing activities.

Causing confusion and spreading misinformation

Headlines like “Oracle invests in Germany for data security reasons” are just one of many examples of information misinterpreted by the media. However, the ones who should know better – the vendors – are doing nothing to provide clarity. On the contrary, the fears and concerns of users are exploited without mercy to do business. For example, Oracle’s country manager Jürgen Kunz justifies the two new German data centers by stating that “In Germany data security is a particularly sensitive topic.” The NSA card is easy to play these days by simply adding “… that Oracle, as a US company, stays connected to the German market.”

However, the location has nothing to do with data security and the NSA scandal. Whether an intelligence agency gains access to the data in a data center in Germany, the US, Switzerland or Australia has very little to do with the country itself. If the cloud provider sticks to its own global policies for data center security on the physical as well as the virtual level, a data center should provide the same level of security regardless of its location. Storing data in Germany is no guarantee of a higher level of security. A data center in the US, UK or Spain is just as secure as a data center in Germany.

The source of the confusion: when it comes to security, two different terms are frequently mixed up – data security and data privacy.

What is data security

Data security means the implementation of all technical and organizational measures to ensure the confidentiality, availability and integrity of all IT systems.

Public cloud providers offer far better security than a small business is able to achieve. This is due to the investments that cloud providers make to build and maintain their cloud infrastructures: they invest billions of US dollars annually, employ staff with the right mix of skills and have created appropriate organizational structures. Only a few companies outside the IT industry are able to achieve the same level of IT security.

What is data privacy

Data privacy is about the protection of personal rights and privacy during data processing.

This topic causes the biggest headaches for most companies, because the legislator grants no leeway. A customer has to audit its cloud provider for compliance with the local federal data protection act. Here, it is advisable to rely on the expert report of a public auditor, since it is time- and resource-consuming for a public cloud provider to be audited by each of its customers.

Data privacy is a very important topic; after all, it concerns sensitive data. However, it is essentially a legal matter whose requirements must be ensured by data security measures.

The NSA is a pretext. Espionage is ubiquitous.

Espionage is ubiquitous. Yes, even in countries like Germany. One should not forget that every company could have a potential Edward Snowden in its own ranks. An employee may still feel comfortable, but what happens when he receives a more attractive offer or the culture in the team or the company changes? Insider threats present a much greater danger than external attackers or intelligence agencies. The former hacker Kevin Mitnick describes in his book “The Art of Deception” how he obtained all the information needed to prepare his attacks simply by browsing the trash of his victims and using social engineering techniques. In his cases it was more about manipulating people and extensive research than about compromising IT systems.

A German data center as protection against the espionage of friendly countries is and will remain a myth. Where there’s a will, there’s a way. When an attacker wants the data, it is only a question of the criminal energy he is willing to expend and the funds he is able to invest. If the technical challenges are too high, there is still the human factor as an option – and humans can generally be bought.

The cloud is the scapegoat!

The cloud is not the issue. Using espionage as an excuse for not using cloud services is too easy. Bottom line: in the times before the cloud, the age of outsourcing, spying was also possible. And intelligence agencies did spy. Despite the contracts with their customers, providers were able to secretly hand data over to intelligence agencies.

If espionage had been in focus during the age of outsourcing the way it is today, outsourcing would have been demonized long ago. Today’s discussions are largely a product of the political situation, in which a lack of trust characterizes the formerly established economic, military and intelligence partnerships.

Due to the amount of data that cloud providers hoard and merge, they have become more attractive targets today. Nevertheless, it takes a lot of effort for an outsider to get access to a data center. Andrew Blum describes in his book “Tubes: Behind the Scenes at the Internet” how, because of its high connectivity to other countries (e.g. data from Tokyo to Stockholm or from London to Paris), one of the first Internet hubs, “MAE-East” (1992), quickly became a target of US espionage. No wonder, since MAE-East was the de facto way into the Internet. The bottom line: an intelligence agency does not need to set foot in a single provider’s data center – it simply needs to tap a connectivity hub to eavesdrop on the data lines.

The so-called “Schengen-Routing” is discussed in this context. The idea is to keep data traffic within Europe by transferring data only between hosts in Europe. In theory, this sounds interesting. In practice, it is totally unfeasible. When cloud services from US providers are used, the data are routed through the US. If an email from a German provider’s account is sent to an account managed by a US provider, the data have to leave Europe. Moreover, we have been living in a fully interconnected world in which data are exchanged globally for many years – and there is no way back.

A more serious issue is the market power and clear innovation leadership of the US compared to Europe and Germany. The availability and competitiveness of German and other European cloud services is still limited, so many companies have to use cloud services from US providers. Searching for a solution only on the network layer is useless as long as no competitive cloud services, infrastructure and platforms from European providers are available. Until then, only data encryption helps to prevent intelligence agencies and other criminals from accessing the data.
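
What such client-side encryption can look like in its simplest form is sketched below with the Python cryptography package (Fernet, i.e. AES in CBC mode plus an HMAC). This is a minimal illustration; key management, the genuinely hard part, is deliberately left out:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # must be stored outside the provider's reach
    cipher = Fernet(key)

    plaintext = b"customer record the provider must not be able to read"
    token = cipher.encrypt(plaintext)  # only this ciphertext gets uploaded

    # Without the key, neither the provider nor an eavesdropper on the
    # data lines can recover the plaintext.
    assert cipher.decrypt(token) == plaintext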

It is imperative that European and German providers develop and market innovative and attractive cloud services, because a German or European data center on its own contributes little to higher data security. It merely offers the benefit of the German/European data privacy standard for fulfilling the regulatory framework.

– – –
Picture source: lichtkunst.73 / pixelio.de

Categories
Cloud Computing

Cloud Computing Adoption in Germany in 2014

A current study by Crisp Research among 716 IT decision makers paints a representative picture of cloud adoption in the German-speaking (DACH) market. In 2014, the cloud was embraced by IT departments. More than 74 percent of the companies are already planning and implementing cloud services and technologies in production environments. Only about one in four (26 percent) of those surveyed said that the cloud is not an option for their company at present or in the future.

The cloud naysayers are primarily small and medium-sized enterprises (more than 50 percent of all cloud naysayers come from companies with up to 500 employees). The number of companies in this segment that are actively planning or implementing cloud technologies is also significantly low. One reason for this is the low affinity for IT at enterprises of this size. Many small and medium-sized enterprises still do not understand the different IaaS, PaaS and SaaS options and are not familiar with the cloud services on offer. In addition, compared to bigger companies, the smaller ones attach more weight to the risks associated with using cloud services. IT departments of larger companies are better equipped with protection measures and IT security knowledge, and so feel less exposed to the risk of data misuse or loss.

Cloud naysayers are a dying species in large businesses with at least 5000 employees (fewer than 20 percent). IT managers there are already implementing cloud operation processes and are looking for the most intelligent architectures and the most secure systems. For them the question is not “if” but “how” they can use cloud technologies.

According to the survey, companies with 5000 – 10000 employees are the frontrunners. At around 39.5 percent, the share of companies that describe the cloud as a solid element of their IT strategy and IT operations is the highest in the study. For companies of this size, the share of cloud naysayers is also the lowest – only 16.3 percent.

In very large companies with more than 10000 employees, cloud usage flattens a little. There are various reasons for this. On the one hand, increasing IT and organizational complexity leads to longer planning and implementation processes. Discussions of security and governance also take correspondingly more time and naturally lead to demands for higher standards, which sometimes cannot be fulfilled by the cloud provider. On the other hand, the tendency to develop applications in-house for individual software solutions and processes is higher in this enterprise category. Stronger financial resources and comprehensive in-house data center capacity allow additional demands to be handled internally, even if the implementation is not as flexible and innovative as the internal users and departments require.

Categories
Analyst Cast @de

Tasks of Enterprise IT in the Cloud

Categories
Cloud Computing

Platform-as-a-Service: Strategies, technologies and providers – A compendium for IT decision-makers

“Software is eating the world” – With this sentence in a 2011 article for the Wall Street Journal, Netscape co-founder and famous venture capitalist Marc Andreessen described a trend which is more than apparent today: software will provide the foundation for transforming virtually all industries, business models and customer relationships. Software is more than just a component for controlling hardware: it has become an integral part of the value added in a large number of services and products. From the smartphone app to the car computer. From music streaming to intelligent building control. From tracking individual training data to automatic monitoring of power grids.

60 years after the start of the computer revolution, 40 years after the invention of the microprocessor and 20 years after the launch of the modern internet, the requirements for developing, operating and above all distributing software have changed fundamentally.

But not just the visionaries from Silicon Valley have recognised this structural change. European industrial concerns such as Bosch are in the meantime also investing a large percentage of their research and development budgets in a new generation of software. So we find Bosch CEO Volkmar Denner announcing that by 2020 all equipment supplied by Bosch will be internet-enabled. Because in the Internet of Things, the benefit of products is only partially determined by their design and hardware specification. The networked services provide much of the benefit – or value added – of the product. And these services are software-based.

The implications of this structural shift have been clearly apparent in the IT industry for a few years now. Large IT concerns such as IBM and HP are trying to reduce their dependency on their hardware business and are investing heavily in the software sector.

In the meantime, the cloud pioneer Salesforce, with sales of four billion USD, has developed into one of the heavyweights in the software business. Start-ups like WhatsApp offer simple but extremely user-friendly mobile apps that have gained hundreds of millions of users worldwide within a few years.

State-of-the-art software can cause so-called disruptive changes nowadays. In companies and markets. In politics and society. In culture and in private life. But how are all the new applications created? Who writes all the millions of lines of code? Which are the new platforms upon which the thousands upon thousands of applications are developed and operated?

This compendium examines these questions. It focuses particularly on the role that “Platform as a Service” offerings play for developers and companies today. Because although after almost a decade of cloud computing the terms IaaS and SaaS are widely known and many companies use these cloud services, only a few companies have so far gathered experience with “Platform as a Service”.

The aim is to provide IT decision-makers and developers with an overview of the various types and potential applications of Platform as a Service. Because there is a wide range of these, extending from designing business processes (aPaaS) right through to the complex integration of different cloud services (iPaaS).

With this compendium, the authors and the initiator, PIRONET NDH, wish to make a small contribution to the better understanding of state-of-the-art cloud-based software development processes and platforms. This compendium is designed to support all entrepreneurs, managers and IT experts who will have to decide on the development and use of new applications and software-based business processes in the coming years. The authors see Platform as a Service becoming one of the cornerstones for implementing the digital transformation in companies, because the majority of the new digital applications will be developed and operated on PaaS platforms.

The compendium can be downloaded free of charge at “Platform-as-a-Service: Strategies, technologies and providers – A compendium for IT decision-makers“.


Categories
IT-Infrastructure

OpenStack Deployments Q4/2014: On-Premise Private Clouds continue to lead the pack

Around 4,600 attendees at the OpenStack Summit in Paris made a clear statement: OpenStack is the hottest open source project for IT infrastructure and cloud environments in 2014. The momentum, driven by an ever-growing community, is reflected in the technical details. Juno, the current OpenStack release, includes 342 new features, 97 new drivers and plugins, and 3,219 fixed bugs. 1,419 contributors supported Juno with code and innovations – an increase of 16 percent over the previous Icehouse release. According to the OpenStack Foundation, the share of production environments has increased from 33 percent to 46 percent over the last six months. Most of the users come from the US (47 percent), followed by Russia (27 percent) and Europe (21 percent).

Even though the total number of new projects in Q4 went up by only 13 percent compared to Q3, the appeal is still unabated. On-premise private clouds are by far the preferred deployment model. In Q3 2014, the OpenStack Foundation registered 114 private cloud installations worldwide; in Q4, the number grew to 130. By comparison, the number of worldwide OpenStack public clouds grew by 13 percent, while hosted private cloud projects show the smallest growth, at 3 percent.


Note: The numbers are based on the official statistics of the OpenStack Foundation.

Over the full year, the worldwide number of OpenStack projects grew by 128 percent in 2014, from 105 in Q1 to 239 in Q4. Regarding the total number of deployments, on-premise private clouds are by far the preferred model: in Q1 2014, the OpenStack Foundation counted 55 private cloud installations worldwide; by Q4, the number had grown to 130. However, at an increase of 140 percent, hybrid clouds show the biggest growth.
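
The cited growth rates follow directly from the quarterly counts published by the OpenStack Foundation; a quick check in Python:

    # Quarterly project counts as cited above
    q1_total, q4_total = 105, 239
    print(f"overall 2014 growth: {(q4_total - q1_total) / q1_total:.0%}")  # -> 128%

    q1_private, q4_private = 55, 130
    print(f"private cloud growth: {(q4_private - q1_private) / q1_private:.0%}")  # -> 136%

At 136 percent, private clouds thus grow slightly more slowly than hybrid clouds, despite their larger absolute numbers.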

Private clouds clearly have the biggest appeal, because with them cloud architects have found an answer to the question of how to build cloud environments tailored to specific needs. OpenStack supports the features necessary to build modern and sustainable cloud environments based on the principles of openness, reliability and efficiency, and it makes a significant contribution especially in the areas of openness and efficiency. After years of “trial and error” approaches and cloud infrastructure with an exploratory character, OpenStack is the answer when it comes to implementing large-volume projects within production environments.

OpenStack on-premise private cloud lighthouse projects are run by Wells Fargo, Time Warner, Overstock, Expedia, Tapjoy and CERN (also hybrid cloud). CERN’s OpenStack project in particular is an impressive example and shows OpenStack’s capability to serve as the foundation of infrastructure at massive scale. Some facts about CERN’s OpenStack project, presented by CERN Infrastructure Manager Tim Bell at the OpenStack Summit in Paris:

  • 40 million pictures taken per second
  • 1 PB of data stored per second
  • 100 PB data archive (growing by 27 PB per year)
  • 400 PB per year estimated by 2023
  • 11,000 servers
  • 75,000 disk drives
  • 45,000 tapes

CERN operates a total of four OpenStack based clouds. The largest cloud (Icehouse release) runs around 75,000 cores on more than 3,000 servers. The three other clouds comprise a total of 45,000 cores. CERN expects to pass 150,000 cores by Q1 2015.

Further interesting OpenStack projects can be found at http://superuser.openstack.org.

Categories
IT-Infrastructure

Build or Buy? – The CIO’s OpenStack Dilemma

OpenStack has become the most important open source project for cloud infrastructure solutions. Since 2010, hundreds of companies have been participating in the development of an open, standardized and versatile technology framework that can be used to manage compute, storage and networking resources in public, private and hybrid cloud environments. Even though OpenStack is an open source solution, this does not imply that setup, operation and maintenance are easy to handle. OpenStack can behave like a true beast. A number of CIOs who run self-developed OpenStack infrastructure report significant increases in cost and complexity. They have fine-tuned OpenStack to their individual needs and ended up with implementations that are no longer compatible with current releases. This raises the question of whether a build or a buy strategy is the right approach to deploying OpenStack in the captive IT environment.

OpenStack gathers pace

OpenStack has quickly become an essential factor in the cloud infrastructure business. Started in 2010 as a small open source project, the solution is meanwhile used by hundreds of enterprises and organizations, including several big companies (PayPal, Wells Fargo, Deutsche Telekom) as well as innovative startups and developers. In the early days, its initiators used OpenStack to build partly proprietary cloud environments. More than 850 companies now support the project, among them IBM, Oracle, Red Hat, Cisco, Dell, Canonical, HP and Ericsson.


Note: The numbers are based on the official statistics of the OpenStack Foundation.

As the technology continuously improves, the adoption rate increases accordingly. This can be seen in the worldwide growth of OpenStack projects (up 128 percent in 2014, from 105 in Q1 to 239 in Q4). On-premise private clouds are by far the preferred deployment model: in Q1 2014, the OpenStack Foundation counted 55 private cloud installations worldwide; by Q4, the number had grown to 130. For the next 12 months, Crisp Research expects 25 percent growth for OpenStack based enterprise private clouds.

OpenStack: Build or Buy?

OpenStack makes it possible to operate environments in synergy with a variety of other open source technologies and, at the same time, to be cost-efficient (no or only minor license costs). However, the level of complexity increases dramatically. Even if CIOs use OpenStack only as a cloud management layer, there is still a high degree of complexity to manage. Most OpenStack beginners are not aware that OpenStack exposes more than 500 configuration options (“buttons”) that have to be set correctly to run an OpenStack cloud the right way.
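
Where these “buttons” live can be made concrete: every OpenStack service declares its options via the oslo.config library and reads them from its *.conf files. The sketch below registers a single illustrative option; real services each register hundreds, which is how the total quickly exceeds 500:

    from oslo_config import cfg

    # One illustrative option; nova.conf, cinder.conf etc. define hundreds each.
    opts = [
        cfg.IntOpt("workers", default=4,
                   help="Number of API worker processes (illustrative)"),
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts)
    conf(args=[])  # a real service would also pass --config-file <service>.conf
    print(conf.workers)  # -> 4 unless overridden in a config file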

The core issue for most companies that want to benefit from OpenStack is: Build or Buy!

When preparing and evaluating the build-or-buy decision, companies should definitely take their in-house experience and technical knowledge of OpenStack into account. IT decision makers should scrutinize their internal skills and clearly define their requirements in order to compare them with the offerings of OpenStack distributors. Analogous to the Linux business, OpenStack distributors offer ready-bundled OpenStack versions including support – mostly together with integration services. This reduces the implementation risk and accelerates the execution of the project.

The CIO is called upon

For quite some time, CIOs and cloud architects have been trying to answer the question of how to build cloud environments that match their companies’ requirements. After the last years were spent on “trial and error” approaches and most cloud infrastructures had an exploratory character, it is about time to implement large-volume projects within production environments.

This raises the question of which cloud design IT architects should use to plan their cloud environments. Crisp Research advises building modern and sustainable cloud environments based on the principles of openness, reliability and efficiency. Especially in the areas of openness and efficiency, OpenStack can make a significant contribution.

The complete German analyst report “Der CIO im OpenStack Dilemma: BUY oder DIY?” can be downloaded at http://www.crisp-research.com/report-der-cio-im-openstack-dilemma-buy-oder-diy.

Categories
Cloud Computing

Analyst Strategy Paper: Open Cloud Alliance – Openness as an Imperative

Future enterprise customers will be accessing a mixture of proprietary on-premises IT, hosted cloud services of local providers and globally active cloud service providers. This is a major opportunity for the market and all participants involved – especially for small hosters with existing infrastructures, as well as for system integrators with the appropriate know-how and existing customer relations.

In light of this, Crisp Research investigates in this strategy paper the challenges that both provider groups face in this situation and deals with the most important aspects and their solutions.

The strategy paper can be downloaded at “Open Cloud Alliance – Openness as an Imperative“.