
AI-defined Infrastructure in a Nutshell

Figure: AI-defined Infrastructure in a Nutshell

Categories
IT-Infrastructure

Analyst Report: IT Infrastructure 2020 – Enterprises in a fully Interconnected World

The IT infrastructures of the future are fundamentally different from those of today. They are not just cloud-based, but also have special requirements regarding scope, performance and stability, and must ensure a maximum density of interconnection. When planning their IT infrastructure agenda for 2020, CIOs should address certain topics in order to support business activities from a technology perspective.

The report “IT Infrastructure 2020 – Enterprises in a fully Interconnected World” can be downloaded free of charge.

Categories
IT-Infrastructure

OpenStack Deployments Q4/2014: On-Premise Private Clouds continue to lead the pack

Around 4,600 attendees at the OpenStack Summit in Paris made a clear statement: OpenStack is the hottest open source project for IT infrastructure and cloud environments in 2014. The momentum, driven by an ever-growing community, is reflected in the technical details. Juno, the current OpenStack release, includes 342 new features, 97 new drivers and plugins, and 3,219 fixed bugs. 1,419 contributors supported Juno with code and innovations, an increase of 16 percent over the previous Icehouse release. According to the OpenStack Foundation, the share of production environments has risen from 33 percent to 46 percent over the last six months. Most users come from the US (47 percent), followed by Russia (27 percent) and Europe (21 percent).

Thus, even though the total number of new projects in Q4 grew by only 13 percent compared to Q3, the appeal remains unabated. On-premise private clouds are still by far the preferred deployment model. In Q3 2014, the OpenStack Foundation registered 114 private cloud installations worldwide; in Q4, the number grew to 130. By comparison, the number of OpenStack public clouds worldwide grew by 13 percent. Hosted private cloud projects show the smallest growth, at 3 percent.


Note: The numbers are based on the official statistics of the OpenStack Foundation.

On an annual basis, the total number of OpenStack projects worldwide grew by 128 percent in 2014, from 105 in Q1 to 239 in Q4. Regarding the total number of deployments, on-premise private clouds are by far the preferred model. In Q1 2014, the OpenStack Foundation counted 55 private cloud installations worldwide; in Q4, the number grew to 130. However, with an increase of 140 percent, hybrid clouds show the biggest growth.
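
As a quick sanity check of the growth figures quoted above, a minimal Python sketch (the deployment counts are taken directly from the statistics cited in this post):

def growth_percent(start, end):
    """Relative growth from start to end, in percent."""
    return (end - start) / start * 100

# Overall OpenStack projects, Q1 2014 (105) to Q4 2014 (239)
print(round(growth_percent(105, 239)))  # ~128 percent

# On-premise private clouds, Q1 2014 (55) to Q4 2014 (130)
print(round(growth_percent(55, 130)))   # ~136 percent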

Private clouds clearly have the biggest appeal, because with them cloud architects have found an answer to the question of how to build cloud environments tailored to specific needs. OpenStack provides the features necessary to build modern and sustainable cloud environments based on the principles of openness, reliability and efficiency. The project makes a significant contribution, especially in the areas of openness and efficiency. After years of trial-and-error approaches and cloud infrastructures of an exploratory character, OpenStack is the answer when it comes to implementing large-volume projects in production environments.

OpenStack on-premise private cloud lighthouse projects are run by Wells Fargo, Time Warner, Overstock, Expedia, Tapjoy and CERN (also hybrid cloud). CERN’s OpenStack project in particular is an impressive example and shows OpenStack’s ability to serve as the foundation of an infrastructure at massive scale. Some facts about CERN’s OpenStack project, presented by CERN Infrastructure Manager Tim Bell at the OpenStack Summit in Paris:

  • 40 million pictures taken per second
  • 1 PB of data stored per second
  • 100 PB data archive (growing by 27 PB per year)
  • an estimated 400 PB per year by 2023
  • 11,000 servers
  • 75,000 disk drives
  • 45,000 tapes

CERN operates a total of four OpenStack-based clouds. The largest cloud (Icehouse release) runs around 75,000 cores on more than 3,000 servers. The three other clouds comprise a total of 45,000 cores. CERN expects to pass 150,000 cores by Q1 2015.

Further interesting OpenStack projects can be found at http://superuser.openstack.org.

Categories
IT-Infrastructure

Build or Buy? – The CIO’s OpenStack Dilemma

OpenStack has become the most important open source project for cloud infrastructure solutions. Since 2010, hundreds of companies have been participating in order to develop an open, standardized and versatile technology framework that can be used to manage compute, storage and networking resources in public, private and hybrid cloud environments. Even though OpenStack is an open source solution, this does not mean that setup, operation and maintenance are easy to handle. OpenStack can behave like a true beast. A number of CIOs who are running self-developed OpenStack infrastructures report significant increases in cost and complexity. They have made numerous fine-grained adjustments to fit OpenStack to their individual needs, ending up with OpenStack implementations that are no longer compatible with current releases. This leads to the question of whether a build or a buy strategy is the right approach to deploying OpenStack in the corporate IT environment.
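
To give a sense of what managing compute resources through OpenStack’s APIs looks like in practice, here is a minimal sketch using the openstacksdk Python client; the cloud name, image, flavor, network and server name are placeholders, credentials are assumed to be configured in a local clouds.yaml, and the SDK shown is an illustration rather than part of the report:

import openstack

# Connect using credentials from a local clouds.yaml; "mycloud" is a placeholder.
conn = openstack.connect(cloud="mycloud")

# Look up placeholder image, flavor and network names.
image = conn.compute.find_image("ubuntu-14.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Boot a server and wait until it is active.
server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)  # expected: "ACTIVE"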

OpenStack gathers pace

OpenStack has quickly become an essential factor in the cloud infrastructure business. Started in 2010 as a small open source project, the solution is now used by hundreds of enterprises and organizations, including several big companies (PayPal, Wells Fargo, Deutsche Telekom) as well as innovative startups and developers. In the early days, its initiators used OpenStack to build partly proprietary cloud environments. More than 850 companies now support the project, among them IBM, Oracle, Red Hat, Cisco, Dell, Canonical, HP and Ericsson.


Note: The numbers are based on the official statistics of the OpenStack Foundation.

Alongside the continuous improvement of the technology, the adoption rate is increasing accordingly. This can be seen in the worldwide growth of OpenStack projects, an increase of 128 percent in 2014, from 105 projects in Q1 to 239 in Q4. Here, on-premise private clouds are by far the preferred deployment model. In Q1 2014, the OpenStack Foundation counted 55 private cloud installations worldwide; in Q4, the number grew to 130. For the next 12 months, Crisp Research expects 25 percent growth for OpenStack-based enterprise private clouds.

OpenStack: Build or Buy?

OpenStack offers the ability to operate environments in synergy with a variety of other open source technologies while remaining cost-efficient (no or only minor license costs). However, the level of complexity increases dramatically in this case. Even if CIOs tend to use OpenStack only as a cloud management layer, there is still a high degree of complexity to manage. Most OpenStack beginners are not aware that OpenStack exposes more than 500 configuration knobs that must be set correctly to run an OpenStack cloud properly.

The core question for most companies that want to benefit from OpenStack is: build or buy?

When preparing and evaluating the build-or-buy decision, companies should definitely take their in-house experience and technical knowledge of OpenStack into account. IT decision makers should question their internal skills and clearly define their requirements in order to compare them with the offerings of OpenStack distributors. Analogous to the Linux business, OpenStack distributors offer ready-bundled OpenStack versions including support, mostly together with integration services. This reduces the implementation risk and accelerates the execution of the project.

The CIO is called upon

For quite some time, CIOs and cloud architects have been trying to answer the question of how they should build their cloud environments in order to match their companies’ requirements. After the last few years were spent on trial-and-error approaches, with most cloud infrastructures having an exploratory character, it is about time to implement large-volume projects in production environments.

This raises the question of which cloud design IT architects should use to plan their cloud environments. Crisp Research advises building modern and sustainable cloud environments based on the principles of openness, reliability and efficiency. OpenStack can make a significant contribution, especially in the areas of openness and efficiency.

The complete German analyst report “Der CIO im OpenStack Dilemma: BUY oder DIY?” can be downloaded at http://www.crisp-research.com/report-der-cio-im-openstack-dilemma-buy-oder-diy.

Categories
IT-Infrastructure

The pitfalls of cloud connectivity

The continuous shift of business-critical data, applications and processes to external cloud infrastructures is changing not only the IT operating concepts (public, private versus hybrid) for CIOs but also the network architectures and integration strategies deployed. As a result, the selection of the location to host your IT infrastructure has become a strategic decision and a potential source of competitive advantage. Frankfurt plays a crucial role as one of the leading European cloud connectivity hubs.

The digital transformation takes hold

The digital transformation plays an important part in our lives today. For example, an estimated 95 percent of all smartphone apps are connected to services hosted on servers in data centers located around the world. Without a direct and uninterrupted connection to these services, metadata or other information, apps do not function properly. In addition, most of the production data needed by the apps is stored on systems in a data center, with only a small percentage cached locally on the smartphone.

Many business applications are already delivered from cloud infrastructures today. From the perspective of a CIO, a reliable, high-performance connection to these systems and services is therefore essential. This trend will only continue to strengthen. Crisp Research estimates that in the next five years around a quarter of all business applications will be consumed as cloud services. At the same time, hybrid IT infrastructure solutions, using a mix of local IT infrastructure and IT infrastructure located in cloud data centers, are also becoming increasingly popular.

Data volumes: bigger pipelines required

Ever-increasing data volumes further increase the need for reliable, high-performance connectivity to access and store data and information at any place and any time, especially when business-critical processes and applications are located on cloud infrastructure. For many companies today, failure to offer their customers reliable, low-latency access to applications and services can lead to significant financial and reputational damage and represents a significant business risk. Considering that the quality of a cloud service depends predominantly on the connectivity to and the performance of the back end, cloud connectivity is becoming the new currency.

Cloud connectivity is the new currency

In technical terms, “cloud connectivity” can be defined by latency, throughput and availability.

In simple terms, cloud connectivity can be defined as the enabler of real-time access to cloud services at any place, at any time. As a result, connectivity has become the most important feature of today’s data centres. Connectivity means the presence of many different network providers (carrier neutrality) as well as a redundant infrastructure of routers, switches, cabling and network topology. CIOs are therefore increasingly looking at carrier neutrality as a prerequisite, as it facilitates a choice among many different connectivity options.
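
As a rough illustration of the latency and availability components, a minimal Python sketch that times HTTPS round trips to a cloud endpoint; the endpoint URL and sample size are placeholder assumptions:

import statistics
import time
import urllib.request

ENDPOINT = "https://cloud-service.example.invalid/health"  # placeholder URL
SAMPLES = 10

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    try:
        urllib.request.urlopen(ENDPOINT, timeout=5).read()
    except OSError:
        continue  # failed round trip: counts against availability
    latencies_ms.append((time.perf_counter() - start) * 1000)

if latencies_ms:
    print(f"median latency: {statistics.median(latencies_ms):.1f} ms")
print(f"availability in this sample: {len(latencies_ms) / SAMPLES:.0%}")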

Frankfurt is the perfect example of a cloud connectivity hub

In the past 20 years, a cluster of infrastructure providers for the digital economy has formed in Frankfurt, which enables companies to distribute their digital products and services to their customers effectively and efficiently. These providers have turned Frankfurt into the German capital of the digital economy, delivering a wide range of integration services for IT and networks as well as IT infrastructure and data centre services. More and more service providers have understood that despite the global nature of a cloud infrastructure, a local presence is crucial. This is an important finding, as no service provider that is seriously looking to do business in Germany can do so without a local data center. Crisp Research predicts that all major international service providers will build or expand their cloud platforms in Frankfurt within the next two to three years.

Against this backdrop, Crisp Research has researched the unique features of Frankfurt as an international hotspot for data centres and cloud connectivity. The whitepaper, titled “The Importance of Frankfurt as a Cloud Connectivity Hub”, is available for download now: http://www.interxion.com/de/branchen/cloud/die-bedeutung-des-standorts-frankfurt-fur-die-cloud-connectivity/download/

Categories
IT-Infrastructure

Analyst Interview: Disaster Recovery (Video)

Disaster recovery plays only a minor part for SMEs. In a fraction of a second, this negligent attitude can cause serious harm to the business. Yet new operating models from the cloud age leave no room for excuses.

Categories
IT-Infrastructure

Analyst Report: CIO’s OpenStack Dilemma – BUY or DIY?

OpenStack has quickly become an important factor in the cloud infrastructure business. Started in 2010 as a small open source project, the solution is used by over a hundred companies and organizations around the world, among them big enterprises (PayPal, Wells Fargo, Deutsche Telekom) as well as innovative cloud startups and developers. In the early years, its initiators used OpenStack to build their own proprietary cloud environments. More than 850 companies support the project, including IBM, Oracle, Red Hat, Cisco, Dell, Canonical, HP and Ericsson.

Although OpenStack is an open source solution, this does not mean that setup, operation and maintenance are easy to handle. OpenStack can act like a real beast. A number of CIOs who operate self-developed OpenStack infrastructures report a significant rise in costs and complexity. To customize OpenStack for their individual requirements, they have made numerous delicate adjustments. As a result, they have developed OpenStack implementations that are no longer compatible with current releases. This leads to the question of whether a “build” or “buy” strategy is the right approach to deploying OpenStack in the corporate IT environment.

My Crisp Research colleague Dr. Carlo Velten and I took a critical look at this topic and answer the key questions for CIOs and IT decision makers in our analyst report „Der CIO im OpenStack Dilemma: BUY oder DIY?“.

The analyst report „Der CIO im OpenStack Dilemma: BUY oder DIY?“ can be downloaded at http://www.crisp-research.com/report-der-cio-im-openstack-dilemma-buy-oder-diy/.

Categories
IT-Infrastructure

White Paper: Disaster-Recovery-as-a-Service

Disaster recovery plays only a minor part for SMEs. In a fraction of a second, this negligent attitude can cause serious harm to the business. Yet new operating models from the cloud age leave no room for excuses.

Categories
Analysis
IT-Infrastructure

Fog Computing: Data, Information, Applications and Services Need to Be Delivered More Efficiently to the End User

You read that correctly: this is not about cloud computing but fog computing. Now that the cloud is well on its way to broad adoption, new concepts are following that enhance the use of scalable and flexible infrastructures, platforms, applications and further services to ensure faster delivery of data and information to the end user. This is exactly the core function of fog computing. The fog ensures that cloud services, compute, storage, workloads, applications and big data are provided at any edge of a network (the Internet) in a truly distributed way.

What is fog computing?

The fog has the task of delivering data and workloads closer to the user, who is located at the edge of a data connection. In this context, the term “edge computing” is also used. The fog is organizationally located below the cloud and serves as an optimized transfer medium for the services and data within the cloud. The term “fog computing” was coined by Cisco as a new paradigm intended to support distributed devices during wireless data transfer within the Internet of Things. Conceptually, fog computing builds upon existing and common technologies like content delivery networks (CDNs), but based on cloud technologies it is intended to ensure the delivery of more complex services.

As more and more data must be delivered to an ever-growing number of users, concepts are necessary that extend the idea of the cloud and empower companies and vendors to deliver their content to the end user over a widely distributed platform. Fog computing should help bring distributed data closer to the end user, thereby decreasing latency and the number of required hops, and thus better support mobile computing and streaming services. Besides the Internet of Things, the rising demand of users to access data at any time, from any place and with any device is another reason why the idea of fog computing will become increasingly important.

What are the use cases of fog computing?

One should not be too confused by this new term. Fog computing is new terminology, but a look behind the curtain quickly reveals that this technology is already used in modern data centers and the cloud. A few use cases illustrate this.

Seamless integration with the cloud and other services

The fog is not meant to replace the cloud. Fog services enhance the cloud by isolating the user data that is located exclusively at the edge of a network. From there, administrators can connect analytical applications, security functions and further services directly to the cloud. The infrastructure is still based entirely on the cloud concept but extends to the edge with fog computing.

Services that build vertically on top of the cloud

Many companies and various services are already using the ideas of fog computing by delivering extensive content to their customers in a targeted way. These include, among others, web shops and providers of media content. A good example is Netflix, which has to reach its numerous globally distributed customers. With data management in only one or two central data centers, the delivery of the video-on-demand service would not be efficient enough. Fog computing thus makes it possible to provide very large amounts of streamed data by delivering the data with high performance directly into the vicinity of the customer.

Enhanced support for mobile devices

With the steady growth of mobile devices and data, administrators gain more control over where users are located at any given time, from where they log in and how they access information. Besides faster access for the end user, this leads to a higher level of security and data privacy, because the data can be controlled at the various edges. Moreover, fog computing allows better integration with several cloud services and thus ensures an optimized distribution across multiple data centers.

Setting up a tight geographical distribution

Fog computing extends existing cloud services by spanning an edge network that consists of many distributed endpoints. This tightly geographically distributed infrastructure offers advantages for a variety of use cases. These include faster collection and analysis of big data, better support for location-based services, since entire WAN links can be bridged more effectively, as well as the ability to evaluate data in real time at massive scale.
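
One way to picture this is an edge node that pre-aggregates raw readings locally and forwards only compact summaries to the central cloud. The following is a minimal Python sketch under an assumed window size and a hypothetical forwarding function, not a description of any specific fog product:

import statistics

def summarize_window(readings):
    """Condense a window of raw readings into a small summary record."""
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "min": min(readings),
        "max": max(readings),
    }

def edge_loop(stream, window_size=1000, forward=print):
    """Collect raw readings at the edge; only summaries leave the site."""
    window = []
    for value in stream:
        window.append(value)
        if len(window) >= window_size:
            forward(summarize_window(window))  # e.g. send to the central cloud
            window.clear()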

Data is closer to the user

The amount of data generated by cloud services requires caching of the data or other services that take care of this task. These services are located close to the end user to improve latency and optimize data access. Instead of storing data and information centrally in a data center far away from the user, the fog ensures that the data is in direct proximity to the customer.
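
A minimal sketch of this caching idea, assuming a hypothetical origin URL and an arbitrary cache size; content is served from a node close to the user and fetched from the central origin only on a miss:

from collections import OrderedDict
import urllib.request

ORIGIN = "https://origin.example.invalid"  # placeholder central data center
CACHE_SIZE = 1000                          # max objects held at the edge node

class EdgeCache:
    def __init__(self, capacity=CACHE_SIZE):
        self.capacity = capacity
        self.store = OrderedDict()  # least recently used entries come first

    def get(self, path):
        if path in self.store:
            self.store.move_to_end(path)    # cache hit: served from the edge
            return self.store[path]
        # Cache miss: fetch from the distant origin, then keep a local copy.
        data = urllib.request.urlopen(ORIGIN + path, timeout=5).read()
        self.store[path] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used
        return data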

Fog computing makes sense

You can think what you want about buzzwords; it only gets interesting once you look behind the curtain. The more services, data and applications are delivered to the end user, the more vendors face the task of finding ways to optimize the delivery processes. This means that information needs to be delivered closer to the user while latency is reduced, in order to be prepared for the Internet of Things. There is no doubt that the consumerization of IT and BYOD will increase usage and therefore the consumption of bandwidth.

More and more users rely on mobile solutions to run their business and to balance it with their personal lives. Increasingly rich content and data are delivered over cloud computing platforms to the edges of the Internet, where at the same time the needs of the users keep growing. With the increasing use of data and cloud services, fog computing will play a central role and help to reduce latency and improve quality for the user. In the future, besides ever larger amounts of data, we will also see more services that rely on that data and must be provided to the user more efficiently. With fog computing, administrators and providers gain the ability to provide their customers with rich content faster, more efficiently and, above all, more economically. This leads to faster access to data, better analysis opportunities for companies and, equally, a better experience for the end user.

Above all, Cisco will want to shape the term fog computing in order to use it for a large-scale marketing campaign. However, at the latest when the fog generates a buzz similar to that of the cloud, we will see more and more CDN and other vendors positioning themselves as fog providers.

Categories
IT-Infrastructure

Business-Bricks-as-a-Service (BBaaS) – Business Building Blocks in the Cloud

Companies and developers are stuck in a dilemma. On the one hand, cloud computing is supposed to provide easy access to IT resources. On the other hand, enormous knowledge of distributed programming is required to create solutions that are both scalable and highly available. In particular, the question of who is responsible for the scalability and high availability of the virtual infrastructure, or of the web application, is largely played down by the cloud providers. This raises the knowledge barrier on the way into the cloud and undermines the seemingly simple use of cloud services. Furthermore, the cloud currently lacks complete services that represent individual building blocks for a specific business scenario and can be adapted easily and independently of each other.
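
To illustrate the kind of distributed-programming responsibility this refers to, here is a minimal Python sketch of client-side failover across redundant service endpoints; the URLs and retry policy are illustrative assumptions, not part of any provider’s offering:

import time
import urllib.request

# Placeholder replicas of the same service in different availability zones.
ENDPOINTS = [
    "https://zone-a.example.invalid/api/orders",
    "https://zone-b.example.invalid/api/orders",
]

def fetch_with_failover(endpoints, retries=3, backoff_s=0.5):
    """The application, not the provider, handles availability here:
    try each replica in turn and back off between full passes."""
    for attempt in range(retries):
        for url in endpoints:
            try:
                return urllib.request.urlopen(url, timeout=5).read()
            except OSError:
                continue  # this replica failed, try the next one
        time.sleep(backoff_s * (2 ** attempt))  # exponential backoff
    raise RuntimeError("all replicas unavailable")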