Categories
Cloud Computing

OpenStack’s appeal continues to rise

For many CIOs and cloud strategists, the open cloud management framework OpenStack still counts as a pure marketing vehicle. But this perception is deceptive. With the “Icehouse” release and the serious backing of big IT vendors, the open source project is evolving into a leading cloud standard.

The OpenStack momentum continues to build

OpenStack can be used in several different scenarios as an infrastructure foundation for public, private and hybrid clouds. Crisp Research sees “Icehouse” as an important step for the OpenStack community to increase its appeal and to help users on their journey to running their own OpenStack-based cloud. To be successful with OpenStack, it is important for CIOs to find the right mix of products, services and resources in the community.

The numbers from the recent “OpenStack Summit” are an indicator of the building OpenStack momentum. More than 4,000 attendees testify to the growing importance of the open source cloud infrastructure software. The same holds for current OpenStack deployments: compared to Q1/2014, the number of worldwide projects increased by 60 percent in Q2/2014.

On-premise private cloud deployments are still in the lead, with 55 private cloud deployments in Q1/2014 and 85 in Q2/2014. The number of worldwide OpenStack public clouds has also jumped, from 17 to 29.

Icehouse is a milestone

After the initial hype, OpenStack is well on its way to becoming one of the important cloud standards for private and hybrid cloud environments alongside Microsoft’s Cloud OS. The commitments and investments of almost all big technology vendors point to a clear future development. The new “Icehouse” release is a milestone regarding stability and functionality, and earlier bugs have been fixed.

– –
* The numbers are based on the official statistics of the OpenStack Foundation.

Categories
Cloud Computing

Amazon AWS builds a data center in Germany: Nice idea!

Amazon Web Services will open a new cloud region targeting the German market by establishing a data center in Frankfurt, Germany. But how exciting is this for German companies?

Amazon AWS to touch down in Germany

Apparently, Amazon AWS has recognized the importance of the German market and the concerns of German companies. Crisp Research knows from reliable sources that the cloud provider will open a cloud region for the German market with a location in Frankfurt in the coming weeks.

After Salesforce’s announcement of a German data center location, Amazon is the next big U.S. cloud provider to follow the trend. This again shows the attractiveness of the German market. After all, most American companies have treated it rather neglectfully. Typically, the majority of American cloud providers serve the European market from data centers in Ireland (Dublin) and the Netherlands (Amsterdam). This reduces the attractiveness of these cloud providers, especially for medium-sized German businesses. Consultations with IT users consistently show that storing data outside of Germany and contracts based at best on European law are a no-go.

[Update]: Technical evidence

On July 4, 2014, German blogger Nils Jünemann published an article with technical evidence of a new AWS region “eu-central-1”. Using a traceroute to “ec2.eu-central-1.amazonaws.com” he showed that something was live. However, when I ran a traceroute against ec2.eu-central-1.amazonaws.com on July 5, 2014, the host was unknown.
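
Such a check is easy to reproduce, assuming AWS keeps its usual ec2.<region>.amazonaws.com endpoint naming scheme:

# Does the suspected region endpoint already resolve in DNS?
dig +short ec2.eu-central-1.amazonaws.com
# If it resolves, the network path should end near Frankfurt:
traceroute ec2.eu-central-1.amazonaws.com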

AWS Portfolio: Slowly reaching enterprise IT

In addition to the data center site, Amazon announced AWS CloudTrail last year, the first service that gives companies more control over compliance. AWS CloudTrail helps to monitor and record the AWS API calls of one or more accounts. Calls from the AWS Management Console, the AWS Command Line Interface (CLI), a customer’s own applications and third-party applications are all taken into account. The collected data is stored on either Amazon S3 or Amazon Glacier for evaluation and can be viewed with tools from AWS or external providers. AWS CloudTrail itself can be used free of charge; however, costs arise for storing the data on Amazon S3 and Amazon Glacier as well as for Amazon SNS notifications.

AWS CloudTrail is one of the most important services for enterprise customers that Amazon has released in recent times. The collected logs support compliance with government regulations by recording all accesses to AWS services. Based on the log data, one can run more effective security audits and identify the precise origin of vulnerabilities and of unauthorized or erroneous access to data.
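
As a rough sketch of how such a trail might be set up with the AWS CLI (the trail and bucket names are placeholders; consult the AWS documentation for the current syntax):

# Create a trail that delivers API call logs to an S3 bucket
aws cloudtrail create-trail --name audit-trail --s3-bucket-name my-cloudtrail-logs
# Start recording API calls
aws cloudtrail start-logging --name audit-trail
# Later, list the delivered log files
aws s3 ls s3://my-cloudtrail-logs/AWSLogs/ --recursive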

Enterprise quo vadis?

After establishing itself as a leading infrastructure provider and enabler for startups and new business models in the cloud, the company from Seattle has been trying for quite some time to get a foot directly into the lucrative enterprise business. However, one question remains open: will that be enough to reach a critical mass of German companies and to evolve from a provider for startups and developers into a serious alternative for enterprise IT workloads?

Yes, under certain conditions:

  • Business-related services must be launched simultaneously in all regions, not only in the U.S.
  • AWS requires a network of partners in order to reach the mass of attractive German corporate customers.
  • The localization of all information, such as white papers, how-tos and training, is critical.
  • Less self-service, more managed services and professional services, e.g. through the partner network.
  • Reducing complexity by simplifying the use of the scale-out principle.
  • Cloud connectivity for reliable access to the services.
  • Avoidance of service lock-in.
  • Strengthening the AWS Marketplace for easier use of scalable standard workloads and applications.
  • Consideration of hybrid cloud scenarios and strengthening of the partner Eucalyptus on the private cloud side.

A note on the Eucalyptus partnership: nearly all Eucalyptus customers are said to also be AWS customers (source: Eucalyptus). This means, conversely, that a number of hybrid cloud infrastructures already exist between on-premise Eucalyptus installations and the Amazon public cloud.

The existing question marks: Microsoft and Google

Medium-sized businesses demand from cloud providers that their data is stored in a German data center. About 75 percent consider the physical data location a necessity for enforcing German law more easily.

After Salesforce, IBM and Amazon are the only remaining major cloud providers who could be expected to make investments in this direction.

As for Google, one can unfortunately assume that nothing will happen in the near or distant future. The company’s DNA and mentality with regard to data location and customer concerns differ too strongly from those of other providers.

Microsoft basically holds good cards. However, the Redmond company doesn’t need to play them now. Microsoft is pursuing a different strategy, using the Cloud OS Partner Network of local providers worldwide (e.g. Pironet NDH in Germany) and empowering them with the so-called “Azure Pack” to offer their own Microsoft Azure based cloud infrastructure in a hosted model from a local data center.

How the trend of building local data centers will develop remains to be seen. The bottom line is that Germany, and especially the Frankfurt location, not least because of the DE-CIX, is well prepared to host additional international cloud providers. A key finding of this development is that international providers have understood the concerns and are willing to make compromises for the good of the user.

Categories
Cloud Computing

Analyst Report: Amazon AWS vs. Microsoft Azure

After Microsoft had to fight vendors like Novell, Oracle, IBM or HP for on-premise market share in the last decades, with Amazon Web Services a new giant has established itself in the public cloud, one that is putting out feelers to enterprise customers. This is a market which Microsoft predominantly dominates and which holds enormous potential for both vendors.

Market forecasts by Crisp Research show strong growth of 40 percent per year for the coming years, with revenues in Germany reaching up to 28 billion euros in 2018. This free analyst report compares the service portfolio and strategy of Amazon Web Services with that of Microsoft Azure.

Categories
Insights @en

Building a hosted private cloud with the open source cloud computing infrastructure solution openQRM

Companies have recognized the benefits of a flexible IT infrastructure. However, the recent past has reinforced concerns about taking the path to the public cloud, for reasons of data protection and information security. Therefore, alternatives need to be evaluated. A private cloud is one, if only it did not require high up-front investments in own hardware and software. The middle way is a hosted private cloud. This type of cloud is already offered by some providers. However, it is also possible to build and run one yourself. This INSIGHTS report shows how this is possible with the open source cloud computing infrastructure solution openQRM.

Why a Hosted Private Cloud?

Companies are encouraged to create a more flexible IT infrastructure in order to scale their resource requirements depending on the situation. Ideally, the use of a public cloud meets these requirements, since no upfront investments in own hardware and software are necessary. Still, many companies dread the way into the public cloud for reasons of data protection and information security and look around for an alternative: the private cloud. The main advantage of a private cloud is flexible self-service provisioning of resources for staff and projects, as in a public cloud, which pure virtualization of the data center infrastructure cannot deliver. However, it should be noted that building a private cloud requires investments in the IT infrastructure to back the virtual resource requirements with a physical foundation.

Therefore, an appropriate balance needs to be found that allows flexible resource procurement via self-service, but at the same time requires no high investment in own infrastructure components and no waiving of a self-determined level of data protection and security. This balance exists in hosting a private cloud at an external (web) hoster. The necessary physical servers are rented from a hoster who is responsible for their maintenance. In order to cover sudden physical resource requirements, appropriate arrangements should be made with the hoster so that the hardware is available in time. Standby servers or similar approaches are options.

On this external server and storage infrastructure, the cloud infrastructure software is then installed and configured as a virtual hosted private cloud. This allows, for example, employees to start their own servers for software development according to their needs, and to freeze and remove them again after the project. The billing of the used resources is handled by the cloud infrastructure software, which provides such functions.

openQRM Cloud

Basically, an openQRM Cloud can be used to build both a public and a private cloud. It is completely based on openQRM’s appliance model and offers fully automated deployments that can be requested by cloud users. For this, openQRM Cloud supports all the virtualization and storage technologies that openQRM itself supports. It is also possible to provision physical systems over the openQRM Cloud.

Based on openQRM Enterprise Cloud Zones, a fully distributed openQRM Cloud infrastructure can also be built. With it, several separate data centers may be divided into logical areas, or the company topology can be mapped hierarchically and logically into safely separated zones. Moreover, openQRM Enterprise Cloud Zones integrates a central, multilingual cloud portal including a Google Maps integration, creating an interactive overview of all sites and systems.

Structure of the reference environment

For the construction of our reference setup, a physical server and multiple public IP addresses are required. There are two options for installing openQRM:

  • Recommended: Configuration of a private class C subnet (192.168.x.x/255.255.255.0) in which openQRM is operated. openQRM requires an additional public IP address for access from the outside.
  • Option: Install openQRM in a virtual machine. In this variant, openQRM controls the physical server and obtains the virtual machines from the physical host for the subsequent operation of the cloud.

For the assignment of public IP addresses, cloud NAT can be used in both scenarios. This openQRM Cloud function translates the IP addresses of the private openQRM class C network into public addresses. This requires pre- and post-routing rules on the gateway/router using iptables, configured as follows:

  • iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o br0 -j MASQUERADE
  • iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
  • More information on pre- and post-routing with iptables can be found at http://www.karlrupp.net/en/computer/nat_tutorial

For the configuration of complex network environments, the IP management plugin is recommended. This enterprise plugin allows setting arbitrary network and IP address configurations for the managed servers. In the openQRM Cloud, it also provides a mapping of networks to cloud users and groups, and it supports automated VLAN management.

In addition, two bridges are needed (a configuration sketch follows after this list):

  • One for the public interface with a public IP address.
  • One for the private interface, on which DHCP is configured.
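
A minimal sketch of what such a bridge setup could look like on a Debian-style host (requires the bridge-utils package); the interface names and the public address are assumptions:

# /etc/network/interfaces (excerpt), illustrative values only
auto br0
iface br0 inet static
    address 203.0.113.10       # assumed public IP address
    netmask 255.255.255.0
    gateway 203.0.113.1
    bridge_ports eth0

auto br1
iface br1 inet static
    address 192.168.0.1        # gateway of the private openQRM network
    netmask 255.255.255.0
    bridge_ports eth1          # DHCP for the VMs is served on this bridge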

The data in the cloud is later stored in the local storage of the physical server. For this purpose, there are two variants:

Recommended:

  • KVM-Storage LVM deployment (LVM logical volume deployment)
  • Requires one or more dedicated LVM volume group(s) for the virtual machines (see the preparation sketch after this list). For more complex setups, a central iSCSI target or a SAN is recommended.

Option:

  • KVM-Storage BF deployment (blockfile deployment)
  • Create a directory on the Linux server, e.g.:
    • /var/lib/kvm-storage/storage1
    • /var/lib/kvm-storage/storage2
    • (The storage directories can be set arbitrarily in the plugin configuration.)

  • For more complex setups, a central NAS for the configured mount points should be used.
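
For the recommended LVM variant, preparing a dedicated volume group could look like this; the device name /dev/sdb and the group name are assumptions:

# Mark the disk as an LVM physical volume
pvcreate /dev/sdb
# Create a dedicated volume group for the KVM-Storage LVM deployment
vgcreate kvm-storage /dev/sdb
# Verify that the volume group is available
vgs kvm-storage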

At the end, iptables must be configured according to the rules above and the desired level of security. After that, the installation of openQRM follows. Packages for popular Linux distributions are available at http://packages.openqrm.com. After openQRM has been installed and initialized, the configuration follows.
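
On a Debian/Ubuntu host, the installation might look roughly like this; the package name and the init command are assumptions based on the paths used later in this report, so consult http://packages.openqrm.com for the authoritative steps:

# Install the openQRM server package (package name assumed)
apt-get update
apt-get install openqrm
# Initialize and start the openQRM server (path assumed)
/usr/share/openqrm/bin/openqrm start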

Basic configuration of openQRM

The first step after initialization is editing “/usr/share/openqrm/plugins/dns/etc/openqrm-plugin-dns.conf” and changing the default value to your own domain.

Configure the domain for the private network:
# please configure your domain name for the openQRM network here!
OPENQRM_SERVER_DOMAIN="oqnet.org"

After that, we activate and start the plugins via the web interface of the openQRM server. The following plugins are absolutely necessary:

DNS Plugin

  • Used for the automated management of the DNS service for the openQRM management network.

DHCPD

  • Automatically manages the IP addresses for the openQRM management network.

KVM Storage

  • Integrates the KVM virtualization technology for the local deployment.

Cloud-Plugin

  • Allows the construction of a private and public cloud computing environment with openQRM.

The following additional plugins are recommended:

Collectd

  • A monitoring system including long-term statistics and graphics.

LCMC

  • Integrates the Linux Cluster Management Console to manage the high availability of services.

High-Availability

  • Enables automatic high availability of appliances.

i-doit (Enterprise Plugin)

  • Provides an automated documentation system (CMDB).

Local server

  • Integrates existing, locally installed servers with openQRM.

Nagios 3

  • Automatically monitors systems and services.

NoVNC

  • Provides a remote web console for accessing virtual machines and physical systems.

Puppet

  • Integrates Puppet for a fully automated configuration management and application deployment in openQRM.

SSHterm

  • Allows secure login via a web shell to the openQRM server and integrated resources.

Plugins which offer more convenience in the automated installation of virtual machines as cloud templates are:

Cobbler

  • Integrates Cobbler for the automated provisioning of Linux systems in openQRM.

FAI

  • Integrates FAI for the automated provisioning of Linux systems in openQRM.

LinuxCOE

  • Integrates LinuxCOE for the automated provisioning of Linux systems in openQRM.

Opsi

  • Integrates Opsi for the automated provisioning of Windows systems in openQRM.

Clonezilla/local-storage

  • Integrates Clonezilla for the automated provisioning of Linux and Windows systems in openQRM.

Basic configuration of the host function for the virtual machines

Case 1: openQRM is installed directly on the physical system

Next, the host must be configured to provide the virtual machines. For that, an appliance of the type KVM Storage Host is created. This works as follows:

  • Create appliance
    • Base > Appliance > Create
  • Name: e.g. openQRM
  • Select the openQRM server itself as the resource
  • Type: KVM Storage Host

This tells openQRM that a KVM storage is to be created on this machine.

Case 2: openQRM is installed in a virtual machine running on the physical system

Using the “local server” plugin, the physical system is integrated into openQRM. To do this, the “openqrm-local-server” integration tool is copied from the openQRM server onto the system to be integrated, e.g.:

scp /usr/share/openqrm/plugins/local-server/bin/openqrm-local-server [ip-address of the physical system]:/tmp/

After that, it is executed on the system to be integrated:

ssh [ip-address of the physical system] /tmp/openqrm-local-server integrate -u openqrm -p openqrm -q [ip-address of the openQRM server] -i br0 [-s http/https]

(In this example “br0” is the bridge to the openQRM management network.)

The integration via “local server” automatically creates the following in openQRM:

  • a new resource
  • a new image
  • a new kernel
  • a new appliance from the sub-components above

Next, the appliance of the newly integrated physical system must be configured to provide the virtual machines. For this, the appliance type is set to KVM Storage Host. That works as follows:

  • Edit the appliance
    • Base > Appliance > Edit
  • Type: Set KVM Storage Host

This tells openQRM that a KVM storage is to be created on this machine.

Basic configuration of the storage function

Now the basic configuration of the storage follows. For this purpose, a storage object of the desired type is created. This works like this:

  • Create storage
    • Base > Components > Storage > Create
  • Case 1: select the resource of the openQRM server
  • Case 2: select the resource of the integrated physical system
  • Name: e.g. KVMStorage001
  • Select the deployment type
    • This depends on the type selected at the beginning: KVM-Storage LVM deployment or directory (KVM-Storage BF deployment)

Preparation of virtual machine images

In order to later provide virtual machines (VMs) over the cloud portal as part of finished products, an image for a VM must first be prepared. This works as follows:

  • Creating a new virtual machine with a new virtual disk and install an ISO image on it.
    • Plugins > Deployment > LinuxCOE > Create Templates
    • The created images are automatically stored in an ISO pool which each virtual machine within openQRM can access.

Subsequently, a basis for the master template is created. It serves as the foundation for providing users a product through the order process.

  • Create a new appliance
    • Base > Appliance > Create
  • Create a new resource
    • KVM-Storage virtual machine
      • Create a new VM
      • Make settings
      • Select an ISO image
      • Create
    • Select created resource
  • Create a new image
    • Add image as KVM-Storage volume
    • Select KVM-Storage
    • Select volume group on KVM-Storage
    • Add a new logical volume
    • Select an image for the appliance
    • Edit the image to set a password (the password previously chosen during the ISO installation is overridden.)
  • Select kernel
    • From the local disk
    • (LAN boot is also possible)
  • Start appliance
    • The automatic installation can now be tracked over VNC.
    • Further adaptations can be made manually.
    • Please consider
      • Misc > Local-Server > Help > Local VMs (“Local-Server for local virtual machines”)

Cleaning up

The created appliance can now be stopped and deleted afterwards. The important point was to create an image that can be used as a master template for the cloud.

The image created using the appliance includes the basic operating system which was installed from the ISO image.

Configuration of the openQRM Cloud

We have now finished all preparations to start configuring the openQRM Cloud. The necessary settings can be found at “Plugin > Cloud > Configuration > Main Config”. All parameters adapted here have a direct impact on the behavior of the whole cloud.

Basically, an openQRM Cloud can be run with the basic settings. Depending on the needs and the specific situation, adaptations can be made. The “description” column on the right side of the table is helpful here.

However, there are parameters which need to be considered regardless of the use case. These are:

Automatic provisioning (auto_provision)

  • Determines whether systems are automatically provisioned by the cloud or whether the approval of a system administrator is needed.

Provisioning of physical systems (request_physical_systems)

  • This parameter defines whether, besides virtual machines, physical hosts can also be provisioned by the cloud.

Cloning of images (default_clone_on_deploy)

  • By default the cloud rolls out copies (clones) of an image.

High-availability (show_ha_checkbox)

  • Makes it possible to operate the openQRM cloud with high availability for the provided resources.

Billing of the used resources (cloud_billing_enabled)

  • openQRM has an extensive billing system for setting own prices for all resources, giving a transparent overview of the running costs.

Cloud product manager (cloud_selector)

  • Enables the product manager to provide users with various resources over the cloud portal.

Currency for the settlement of resources (cloud_currency)

  • Determines the local currency in which the resources are to be billed.

Exchange ratio for resources in real currency (cloud_1000_ccus)

  • Determines how many units of a previously fixed real currency 1,000 CCUs (Cloud Computing Units) correspond to.

Resource allocation for groups (resource_pooling)

  • Determines from which hosts an appointed user group receives its virtual machines.

Creating products for the openQRM Cloud

To provide our users with resources over the cloud portal, we first have to create products which define the configuration of a virtual machine. The settings for this can be found at “Plugin > Cloud > Configuration > Products”.

The “Cloud product management” is used to create the various products from which users can later choose in the cloud portal to build their own virtual machines. Product categories available to us are:

  • Number of CPUs
  • Size of local disks
  • Size of RAM
  • Kernel type
  • Number of network interfaces
  • Pre-installed applications
  • Virtualization type
  • Whether a virtual machine should be highly available

Using +/- in the status line, each product can be activated or deactivated to show or hide it for the user in the cloud portal.

Please note: products which are deactivated but are still in use within a virtual machine continue to be billed.

To create a new CPU product, we select the “CPU” tab and define the desired parameters in the area “Define a new CPU product”.

The first parameter defines how many CPUs (cores), here 64, our product should have. The second parameter determines the value of the product, i.e. how much its use costs per hour. In this example, 10 CCUs per hour are charged for 64 CPUs.

With the arrow keys, the order in which the individual products are displayed in the cloud portal can be determined. The default value is the one at the top.

Please note: in the cloud portal, standard profiles in the sizes “small”, “medium” and “big” exist. The profiles are automatically derived from the order of the respective products. That means “small” always gets the first value, “medium” the second and “big” the third.

openQRM also makes it possible to order virtual machines with pre-configured software stacks. For this, openQRM uses Puppet (Plugins > Deployment > Puppet). Thus, for example, it is possible to order the popular LAMP stack.

Once we have configured our product portfolio, it is the user’s turn to order virtual machines. This is done via the cloud portal.

openQRM Cloud-Portal

To create a new virtual machine (VM), we click on the tab “New”. An input mask follows in which we can create our VM based on the products the administrator has determined and approved in the backend.

We choose the profile “Big” and a LAMP server. Our virtual machine now consists of the following products:

  • Type: KVM-Storage VM
  • RAM: 1 GB
  • CPU: 64 cores
  • Disk: 8 GB
  • NIC: 1

In addition, the virtual machine should be “highly available”. This means that if the VM fails, a substitute machine with exactly the same configuration is automatically started to continue working with.

For this configuration we will have to pay 35 CCUs per hour. This is equivalent to about €0.04 per hour, €0.84 per day or €26.04 per month.
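
The arithmetic behind this, assuming an exchange ratio (cloud_1000_ccus) of 1,000 CCUs = €1: 35 CCUs per hour correspond to €0.035 per hour (rounded up to €0.04), €0.035 × 24 hours = €0.84 per day, and €0.84 × 31 days = €26.04 per month.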

If we want to order the virtual machine, we select “send”.

Under the tab “Orders” we see all current and past orders we have made with our user. The status “active” in the first column shows that the machine has already been started.

In parallel, we receive an e-mail with the IP address, a username and a password, which we can use to log into the virtual machine.

The tab “Systems” confirms both pieces of information and shows further details of the virtual machine. In addition, we have the opportunity to change the system configuration, pause the virtual machine or restart it. Furthermore, login via a web shell is possible.

If the virtual machine is no longer needed, it can be paused. Alternatively, the administrator can arrange this due to inactivity of the system or at a specific time.

Creating a virtual machine with the “Visual Cloud Designer”

Besides the “ordinary” way of building a virtual machine, the openQRM Cloud portal enables the user to do that conveniently via drag and drop. The “Visual Cloud Designer”, which can be found behind the tab “VCD”, helps here.

Using the slider on the left below “Cloud Components”, it is possible to scroll through the products. With the mouse, the “Cloud Appliance” (virtual machine) in the middle can be assembled from the appropriate products.

In this case we assembled our virtual machine “Testosteron” with KVM-Storage, Ubuntu 12.04, 64 CPUs, 1024 MB of RAM, an 8 GB disk, one NIC, software for a web server, and the high-availability feature.

With one click on “Check Costs”, openQRM tells us that we will pay €0.03 per hour for this configuration.

To start the ordering process for the virtual machine, we click “request”. We get the message that openQRM is starting to roll out the resource and that we will receive further information in our mailbox.

The e-mail includes, as described above, all access data to work with the virtual machine.

In the cloud portal under “Systems” we can already see the started virtual machine.

Creating a virtual machine with the “Visual Infrastructure Designer”

Besides the provisioning of single virtual machines, the openQRM cloud portal also offers the opportunity to provide complete infrastructures consisting of multiple virtual machines and further components at one click.

For this, we use the “Visual Infrastructure Designer”, which can be found in the cloud portal behind the tab “VID”.

Using the “VID”, it is possible to build and deploy a complete infrastructure WYSIWYG-style via drag and drop. For this purpose, ready-made profiles with pre-configured virtual machines need to be created first, including for example web servers, routers or gateways. These can be deployed afterwards.

Categories
Comment

The cloud computing world is hybrid! Is Dell showing us the right direction?

With a clear cut, Dell has said goodbye to the public cloud and is aligning its cloud computing strategy with its own OpenStack-based private cloud solutions, including a cloud service broker for other public cloud providers. At first the move comes as a surprise, but it makes sense on a closer look at Dell, its recent past and especially the current market situation.

Dell’s new cloud strategy

Instead of keeping its own public cloud offering on the market, Dell will in future sell OpenStack-based private clouds on Dell hardware and software. With the recent acquisition of the cloud management technology Enstratius, customers will also be able to deploy their resources to more than 20 public cloud providers.

With a new “partner ecosystem”, which currently consists of three providers and will be expanded further in the future, integration opportunities between the partners’ public clouds and the private clouds of Dell customers will be offered. The current partner network includes Joyent, ZeroLag and ScaleMatrix. All three are rather small names in the infrastructure-as-a-service (IaaS) market.

Furthermore, Dell is strengthening its consulting business to show customers which workloads and processes could flow into the public cloud.

Thanks to Enstratius, Dell becomes a cloud broker

A view of the current public cloud IaaS market shows that Dell is right with its strategy change. IaaS providers of all sizes currently chafe in the fight for market share against industry leader Amazon Web Services. Since their innovative strength is limited compared to Amazon, and most of them, with the exception of Google and Microsoft, limit themselves to the provision of pure infrastructure resources (computing power, storage, etc.), the chances of success are rather low. Moreover, very few should get involved in a price war with Amazon; that can quickly backfire.

Rather than dealing with Amazon and other public IaaS providers in the boxing ring, Dell has positioned itself, on the basis of Boomi, Enstratius and other earlier acquisitions, as a supporting force for cloud service providers and IT departments, providing them with hardware, software and further added value.

In particular, the purchase of Enstratius was a good move and makes Dell a cloud service broker. Enstratius is a toolset for managing cloud infrastructures, including the provisioning, management and automation of applications for, currently, 23 public and private cloud solutions. It should be mentioned that Enstratius, in addition to managing a single cloud infrastructure, also allows the operation of multi-cloud environments. For this purpose, Enstratius can be used as software-as-a-service or installed as software in on-premise environments in the own data center.

The cloud computing world is hybrid

Does Dell rise into the ranks of cloud innovators with this change in strategy? Not by a long shot! Dell is anything but a leader in the cloud market. The trumps in its cloud portfolio are entirely due to acquisitions. But at the end of the day, that does not matter. To take part in the highly competitive public cloud market, massive investments would have been necessary, and they would not necessarily have been promising. Dell has been able to focus on its tried and true strengths, and these lie primarily in the sale of hardware and software, keyword “converged infrastructures”, and in the consulting business. With the purchase of the cloud integration service Boomi, the recent acquisition of Enstratius and the participation in the OpenStack community, relevant external knowledge was brought in to position the company in the global cloud market. In particular, the Enstratius technology will help Dell to diversify in the market and take a leading role as a hybrid cloud broker.

The cloud world is not only public or only private. The truth lies somewhere in the middle and strongly depends on the particular use cases of a company, which can be divided into public, private and hybrid cloud use cases. In the future, all three variants will be represented within a company. The private cloud does not necessarily have to be directly connected to a public cloud to span a hybrid cloud. IT departments will play a central role here, move back into focus and act as the company’s own IT service broker. In this role, they have the task of coordinating the use of internal and external cloud services for the respective departments, in order to keep an overall view of all cloud services for the whole enterprise. Cloud broker services such as Dell’s will support them in this.

Categories
Analysis

Google Compute Engine: Google is officially in the game

Google officially enters the battle for market share in the infrastructure-as-a-service (IaaS) area. What was reserved for a selected group of customers for the past year, the company from Mountain View has now made available to the general public at Google I/O 2013: its cloud computing offering, the Google Compute Engine (GCE).

News about the Google Compute Engine

With App Engine, BigQuery and Cloud Storage, Google has steadily expanded its cloud portfolio since 2008. What was missing was an infrastructure-as-a-service solution for starting virtual machines as needed. Google released the Google Compute Engine (GCE) at its I/O 2012 in a closed beta, for running virtual machines (VMs) with the Linux operating system on the Google infrastructure that is also used by Gmail and other services.

With Google I/O 2013, the GCE has now reached general availability. Furthermore, Google has launched the Cloud Datastore, a NoSQL database for non-relational data that is fully managed by Google. Independent of the GCE, the service provides automatic scalability, ACID transactions, and SQL-like queries and indexes. In addition, there is a limited preview of the PHP programming language for App Engine. With that, Google wants to address developers and users of open source applications such as WordPress. Beyond that, the integration with other parts of the cloud platform, such as Cloud SQL and Cloud Storage, has been improved. Further, Google has taken up feedback from its users that it should be possible to develop simply modularized applications on the App Engine. In response, it is now possible to partition applications into individual components, each with its own scaling, deployment, versioning and performance settings.

More news

Other major announcements include more granular billing, new instance types, bigger persistent disks, advanced routing, and an ISO 27001 certification:

  • Granular billing: Each instance type is now billed per minute, with a minimum charge of 10 minutes (see the brief example after this list).
  • New instance types: There are new micro and small instance types that are meant to process smaller workloads inexpensively and require little processing power.
  • More space: The size of the “persistent disks” that can be attached to a virtual instance has been extended by up to 8,000 percent. This means that a persistent disk with a size of up to 10 terabytes can now be attached to a virtual machine within the Compute Engine.
  • Advanced routing: Based on Google’s own SDN (software-defined network) capabilities, the Compute Engine now supports software-defined routing. With that, instances can act as gateways and VPN servers, and applications can be developed so that they run partly in the own local network and partly in the Google cloud.
  • ISO 27001 certification: The Compute Engine, App Engine and Cloud Storage are fully certified with ISO 27001:2005.
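
To illustrate the billing difference with assumed numbers: a job that occupies an instance for 15 minutes is billed for exactly 15 minutes on the GCE, whereas a provider charging per full hour bills 60 minutes for the same job; a 5-minute job is billed at the GCE’s 10-minute minimum.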

Developer: Google vs. Amazon vs. Microsoft

First of all, the biggest announcement for the Google Compute Engine (GCE) is its general availability. In recent months, the GCE was held up in every piece of news as THE Amazon killer, although it was still in a closed beta, so there was no comparison at eye level. The true reckoning begins now.

Many expect the GCE to make Google a real competitor to the Amazon Web Services. The fact is that the Google Compute Engine is an IaaS offering and that Google, due to its core business, has the expertise to build highly scalable infrastructures and to operate them with high availability. The Google App Engine also shows that Google knows how to address developers, even if the market is narrowing here with increasingly attractive alternatives.

A lack of diversification

Having a look at the Compute Engine, we see instances, storage, and services for storing and processing structured and unstructured data (BigQuery, Cloud SQL and Cloud Datastore). Whoever sees Google as THE Amazon killer from this standpoint should scale down their expectations a little. Amazon has a very diversified portfolio of cloud services that makes its cloud infrastructure usable. Google needs to catch up here, but this should not be too difficult, since many Google services are already available. A comparison of the services of Amazon AWS and the Google Cloud Platform is worthwhile for this reason.

Hybrid operation for applications

Google must not be underestimated in any case. On the contrary: in a first performance comparison between the Google and Amazon clouds, Google emerged as the winner. This is due, among other things, to the technologies that Google is constantly improving and to its global high-performance network. What is particularly striking: Google now offers the possibility to develop applications for hybrid operation in the own data center and the Google cloud. This is an unexpected step, since Google’s motto so far has rather been “cloud only”. However, Google has recently been struggling with technical failures similar to Amazon’s, which does not contribute to strengthening trust in Google.

A small potshot is the new pricing model. Instances are now charged per minute (with a minimum of 10 minutes of use). Amazon and Microsoft still charge for their instances per hour. Whether the extension of the “persistent disks” up to 10 terabytes will contribute to diversification, we will see. Among developers, Amazon is also regarded as the pioneer among IaaS providers, which will not make it easier for Google to gain market share in this segment. In addition, Google should assume that developers, just like ordinary users, do not want to play Google’s “service on/off” games.

Amazon and Microsoft are already one step ahead

Where Google has been massively trying to win corporate customers with its SaaS solution Google Apps for quite some time, the Compute Engine is aimed primarily at developers. Amazon and Microsoft also started in this customer segment, but have long since begun to make their infrastructures and platforms attractive for enterprise customers. Here Google still has much work to do if this customer segment is to be developed, which is inevitable. However, in this area it is about much more than just technology: it is about creating trust and treating organizational issues (data protection, contracts, SLAs, etc.) as valuable.

Google’s problem: volatility

No doubt, Google is by far the most innovative company on our planet. But it is equally the most volatile and the most data-hungry one. Developers and especially companies have observed this, and both should ask the question how future-proof the Google cloud portfolio is. If the Compute Engine is a success, don’t worry about it! But what if it is, for Google(!), a non-seller? Remember the Google Reader, whose user numbers were not sufficient for Google. In addition, the Compute Engine has another KPI: revenue! What does Google do when it is no longer economic?

Categories
Analysis

Rackspace differentiates its IaaS cloud offering with higher-value support

Rackspace is currently doing everything it can to fight the Amazon Web Services for market share in the infrastructure-as-a-service (IaaS) area. After the poor results in Q1/2013, no easy task. As the driving wheel behind the OpenStack movement, the former managed hosting provider attempts to anchor the topic of open source in the cloud and markets OpenStack as the Linux of the cloud. But Rackspace’s challenge is not only to position itself well against Amazon. Even from within its own OpenStack ranks, more and more competitors are growing up, all offering the same technology, API and services based on OpenStack; only big names like HP, IBM and Red Hat need be mentioned here. Due to this very similar range of services, a homemade problem, it is difficult for Rackspace to differentiate itself from the competition: on the one hand the seemingly all-powerful Amazon Web Services, as well as Windows Azure and Google; on the other hand its own OpenStack camp. Rackspace now seems to focus on its tried and true strength, the “Fanatical Support”, and wants to help businesses and developers intensively in the use of the Rackspace cloud services.

Help on the way to the cloud

Even as a plain managed hosting provider, Rackspace has helped its customers with infrastructure management. For its OpenStack-based cloud platform, the standard support has now been extended. Customers will now also receive support at the application level, including debugging of the applications that run on the Rackspace cloud. This means that the interaction with the customer is significantly enhanced, with advice not only on the basics but with developer-specific know-how. It even goes so far that Rackspace engineers analyze the source code of an application on request and make suggestions for using it effectively on the Rackspace cloud, in particular with the Rackspace APIs and SDKs, or even help during the complete development. For developers, it should become easier to understand how their own native application works on the Rackspace cloud and OpenStack.

Support as differentiation

Now you may think: support as differentiation? In times of self-service and automation in the cloud? Yes, exactly. That is not so far-fetched, and it is not an unwise move. Necessity is the mother of invention. Rackspace has always placed much emphasis on its support and enjoys an excellent reputation.

Furthermore, one should remember that despite self-service and the associated ease of obtaining resources to build a virtual infrastructure or to develop an own cloud-enabled application, cloud computing is not easy! I recently described this in the article “Cloud computing is not simple!” and named Netflix as a very positive example. There are just a few user companies that have mastered cloud computing like Netflix, which with its Simian Army, including the Chaos Monkey and the Chaos Gorilla, has written test software for scalable and highly available operation in the cloud. However, if one looks at the huge efforts Netflix makes, which are also associated with costs, cloud computing is not something to take lightly if you want to use it seriously.

For this reason, it is a logical and, in my view, right step by Rackspace to expand its support and help where it matters in the cloud: the scalable and available development of applications that take the characteristics of the cloud into account. Whether that is enough to catch up with Amazon in big steps, I dare to doubt. But among the providers that also rely on OpenStack, it is a good way to differentiate from the competition.

Categories
Analysis

Enterprise Cloud Portal: T-Systems consolidates its cloud portfolio

With its Enterprise Cloud Portal, German Telekom subsidiary T-Systems presents its first portfolio-wide cloud offering for corporate customers. On the portal, companies can learn about the cloud solutions from T-Systems, test them and order directly. The currently offered services include solutions for mobile device management, Dynamic Services for Infrastructure and the Enterprise Marketplace. A look at the site shows that great emphasis was placed on compatibility with tablets.

Past the IT department

With its cloud portal, T-Systems also wants to enable non-technical users in large companies to access specific cloud solutions. The cloud provider refers to a study by Gartner which says that by 2015 about 35 percent of IT spending will be selected and managed outside the IT department, for example by marketing, purchasing and accounting.

Mobile Device Management

The mobile device management from the cloud should help businesses administer mobile devices with different operating systems, such as iOS and Android, via a standardized web platform. In addition to security settings, access rights to functions and applications can be controlled. In case of loss of the device, the data can be deleted remotely. A test of the mobile device management is free for the first four weeks for up to three mobile devices.

Dynamic Services for Infrastructure

For infrastructure-as-a-service (IaaS), two offerings are ready: on the one hand, the “Dynamic Services for Infrastructure” (DSI) from a hosted private cloud; on the other, “DSI with vCloud Datacenter Services” as a hybrid variant. The client manages the resources itself via a web-based portal or using its own VMware management software. Clear pricing models are intended to make the cost of the infrastructure transparent. For example, a server from the hosted private cloud costs from 9 cents per hour in the package “Small”. For the hybrid solution, the package price for a virtual data center in the smallest version is exactly €999.84 per month.

Enterprise Marketplace

The Enterprise Marketplace includes, among other things, further IaaS solutions including operating systems for Linux and Windows Server, platform-as-a-service (PaaS) solutions, including Tomcat and Microsoft SQL Server, as well as a growing number of software-as-a-service (SaaS) offerings like Doculife, CA Nimsoft TAXOR, TIS, WeSustain, Metasonic, ARAS, Tibco Tibbr, SugarCRM, Microsoft Enterprise Search and Microsoft Lync. In addition, companies are to be given the opportunity to run a variety of applications highly securely in need-based formats, but also to migrate and host their own applications. The full availability of the Enterprise Marketplace is planned for this summer. Currently, there is already a preview on the cloud portal.

Comment

With the Enterprise Cloud Portal, T-Systems brings its entire cloud portfolio together under one umbrella. I analyzed “The cloud portfolio of T-Systems” in an article for the German Computerwoche in 2011. At that time the offering consisted of single, independent services. However, already then I came to the conclusion that T-Systems has a very sophisticated and well-rounded cloud portfolio. This can be seen now in the consolidated Enterprise Cloud Portal: from SaaS over PaaS to IaaS and further solutions for mobile devices, everything can be found. With it, T-Systems is one of the few providers that have a full cloud stack, now even bundled into a single portal.

Especially in the Enterprise Marketplace there is a lot of potential. At this year’s CeBIT, I could take a first look at it; in my opinion it was at that time still in an alpha state. Some basic and essential functions for an IaaS offering, automatic scalability and high availability to mention only two, were still missing. But that was in March, and I assume that T-Systems has made more progress here since. In addition, I have heard from a reputable source that T-Systems/Telekom will gradually change their cloud infrastructure to OpenStack, which will also give the Enterprise Marketplace another boost in compatibility.

What T-Systems sees as an advantage for non-technical users in enterprises should cause worry lines for IT managers. Indeed, I am also of the opinion that the IT department will become, and even needs to become, a service broker. However, I think it is quite questionable whether each department should simply be able to run off and buy IT services externally as desired. Certainly, part of the blame lies with the IT departments themselves, because they have built up a bad reputation over the years and are considered slow and not innovative. I philosophized about this in detail here two years ago (cloud computing and the shadow IT).

A certain supervisory authority in the form of a service broker is still necessary, because otherwise an uncontrolled proliferation of external services results, of which one will lose track. This can be controlled, of course, if one obtains the services from a single provider. And that is exactly the goal of T-Systems and its extensive Enterprise Cloud Portal. A customer should explicitly, and across departments, obtain the services from the T-Systems cloud in order to avoid sprawl and to keep track. The question is whether customers can establish this internally that way. After all, there are plenty of other services in the sea.

Finally, I would like to address a topic that is currently causing a stir in the consumer market but offers corporate customers a great advantage: the end-to-end offering of services. Due to its status as a subsidiary of Deutsche Telekom, T-Systems is one of the few cloud providers who can offer a service level from the application or virtual machine level in the data center down to and including the data line. This enables customers to get a continuous quality of service (QoS) and a comprehensive service level agreement (SLA), which many other cloud providers cannot offer.

Categories
Comment

Value-added services are the future of infrastructure-as-a-service

The title of this article may be a bit confusing. After all, infrastructure-as-a-service (IaaS) offerings are services already. But I have my reasons. From the beginning, Gartner’s Magic Quadrant has identified the Amazon Web Services as the leading provider in the IaaS market. What I miss in it, however, is Windows Azure. So why is it Amazon that leads the market, and why does Windows Azure, in my opinion, belong in the same quadrant as Amazon?

Early presence and a strong offer pay off

One reason for Amazon’s success is undisputedly its early presence in the market. As the first IaaS provider (2006), Amazon has shaped the market and thus set the direction of cloud computing, an important influence on many providers. Microsoft followed with Windows Azure relatively late (2010), but has expanded its portfolio quickly.

A further but much more decisive reason lies in the services of both providers. Unlike for the other IaaS providers in the market, for Amazon AWS and Microsoft Windows Azure pure infrastructure is not in the foreground, but rather the solutions around it. The service portfolios of both providers, compared to the rest of the market, are very strong and offer much more than just virtual servers and storage. And that is the sticking point.

Make the infrastructure usable

The core of IaaS is offering computing power, storage and network capacity as a service, based on the pay-as-you-go model. Most cloud providers in the market stick to exactly that. But not Amazon AWS and Windows Azure. Both offer many value-added services around their infrastructure and thus make it usable. With them, customers are able to put the “dumb” infrastructure directly to productive use.

No matter which public cloud provider one looks at, the offering usually consists of compute, storage and databases. One or the other additionally offers a CDN (content delivery network) and tools for monitoring. That’s all! However, a look at the services of Amazon AWS and Windows Azure shows how extensive their portfolios have become by now.

Value-added services are the key to and the future of infrastructure-as-a-service; with them, the infrastructure can be used more profitably.

Categories
Analysis

Traditional webhosters and their fight with real infrastructure-as-a-service

Some days ago I had a briefing with a European webhoster who presented his product line-up and strategic alignment. I am reluctant to use the term cloud-washing. But this call showed me again that traditional webhosters have massive problems defining and building a real infrastructure-as-a-service offering.

Difficulties with modernity

It is what I often see in the portfolio of a “cloud” service provider. These are mostly classical webhosters who did not want to miss the cloud hype and jumped on the bandwagon by cloud-enabling their offering on paper. There are exceptions, but they are very rare. These providers purely and simply jumped on the hype and understand the Internet as the cloud.

The portfolio of the provider I talked to had the same shape. It includes dedicated servers and “cloud servers”, which a few months or years ago had been named virtual servers. These “cloud servers” are offered in five fixed configurations with 1 to 4 cores, storage and RAM for a monthly fee of x EUR/USD. Not to forget: the stated deployment period is 1 to 5 workdays. Further questions revealed that there is no on-demand offering and, for example, no API to manage or to start and stop additional servers. Additional services or software images around the offering are missing, just like a pay-as-you-go model.

Internal structure and strategy: check

Upon request, the provider acknowledged that, except for the word itself, the server offerings have nothing to do with cloud computing. But he seems to be on the right track. About a year ago he converted the internal infrastructure to CloudStack to benefit from better provisioning of the customer servers; so far, however, customers cannot take advantage of this. Moreover, with KVM he switched to a modern, open and widespread hypervisor. The issue of network virtualization was also addressed a few weeks ago. By his own admission, the provider already has a cloud strategy which will give customers the opportunity to obtain resources on demand, plus a pay-as-you-go model, since he is now well aware that this is absolutely necessary.

Nevertheless, considering these experiences, every user who seriously wants to go to the cloud should inform himself intensively in advance.