Top 10 articles on CloudUser in 2013

The year 2013 is coming to an end and it’s time for a little summary of the most-read articles on CloudUser. Here are the top 10 English-language articles of 2013 (01.01.2013 – 30.12.2013).

1. Building a hosted private cloud with the open source cloud computing infrastructure solution openQRM

A how-to and background on building your own private cloud with the open source cloud infrastructure software openQRM.

2. Security Comparison: TeamDrive vs. ownCloud

A detailed analysis of the security architectures behind TeamDrive and ownCloud.

3. The cloud computing market in Germany 2013

An inventory and analysis of the German cloud computing market in 2013.

4. My cloud computing predictions for 2014

My cloud computing predictions for the next year – 2014.

5. Caught in a gilded cage. OpenStack providers are trapped.

A critical comment on the issues of OpenStack and its community.

6. Amazon EBS: When will Amazon AWS eliminate its single point of failure?

An analysis of the anomalies during most Amazon AWS outages, in which Amazon EBS played a leading role.

7. Google Compute Engine seems to be no solution for the long haul

A critical comment on Google’s public statements on the future of the Google Compute Engine.

8. The Amazon Web Services grab at enterprise IT. A reality check.

An analysis, following AWS re:Invent, of Amazon’s opportunities to embrace enterprise IT.

9. Disgusting: Protonet and its cloud marketing

A comment on the cloud marketing strategy of Protonet – a social NAS solution.

10. Netflix releases more “Monkeys” as open source – Eucalyptus Cloud will be pleased

An analysis of Netflix’s open source tools and why Eucalyptus benefits from them.

Conclusion

Compared to the German top 10, where private cloud and security topics are the most read, no clear favorite trend is identifiable among the English-language articles.

Diversification? Microsoft’s Cloud OS Partner Network mutates into an "OpenStack Community"

At first sight, the announcement of the new Microsoft Cloud OS Partner Network sounds interesting indeed. Anyone who does not want to use the Microsoft public cloud directly can now select one of several partners to access Microsoft’s technologies indirectly. It is also possible to span a hybrid cloud between the Microsoft cloud, the partner clouds and a company’s own data center. For Microsoft and the technological reach of its cloud, it is indeed a clever move. But for the members of the so-called "Cloud OS Partner Network", this move could backfire.

Microsoft Cloud OS Partner Network

The Cloud OS Partner Network consists of more than 25 cloud service providers worldwide that, according to Microsoft, focus especially on hybrid cloud scenarios based on Microsoft’s cloud platform. For this purpose, they rely on a combination of Windows Server with Hyper-V, System Center and the Windows Azure Pack. With that, Microsoft tries to underpin its vision of establishing its Cloud OS as the basis for customer data centers, service provider clouds and the Microsoft public cloud.

With that, the Cloud OS Partner Network serves more than 90 different markets with a total customer base of three million companies worldwide. Overall, 2.4 million servers and more than 425 data centers form the technological base.

Among others, the partner network includes providers like T-Systems, Fujitsu, Dimension Data, CSC and Capgemini.

Advantages for Microsoft and the customers: Locality and reach

For Microsoft, the Cloud OS Partner Network is a clever move to gain more worldwide market share, measured by the distribution of Microsoft cloud technologies. In addition, it fits perfectly into Microsoft’s proven strategy of serving customers not directly but through a network of partners.

The partner network also accommodates the customers. Companies that have avoided the Microsoft public cloud (Windows Azure) for reasons of data locality or local policies such as data privacy are now able to find a provider in their own country and do not have to forgo the desired technologies. For Microsoft, another advantage is that it is not forced to build a data center in every country but can concentrate on the existing or strategically important ones.

With that, Microsoft can lean back a little and, once again, let a partner network do the uncomfortable sales work. The income comes, as it did before the age of the cloud, from selling licenses to the partners.

Downside for the partners: Diversification

You can find great stories about this partner network. But the fact is that with the Cloud OS Partner Network, Microsoft creates a competitive situation similar to the one in the OpenStack community. Especially in the public cloud, with Rackspace and HP there are just two "top providers", and they only play a minor part in the worldwide cloud circus. Notably, HP fights more with itself and is therefore not able to concentrate on innovation. However, the main problem of both, and of all the other providers, is that they are not able to differentiate themselves from each other. Instead, most of the providers are in direct competition with each other and currently do not diverge significantly. This is due to the fact that they all build on the same technological base. An analysis of the current situation of the OpenStack community can be found under "Caught in a gilded cage. OpenStack providers are trapped."

The situation for the Cloud OS Partner Network is even more uncomfortable. Unlike with OpenStack, Microsoft is the only technology supplier and alone decides where things are going. The partners have to swallow what is set before them and can only adopt further technology stacks on top, which leads to more overhead and thus further costs for development and operations.

Except in their local markets, all Cloud OS service providers are in direct competition with each other and, being based solely on Microsoft technologies, are not able to differentiate themselves otherwise. Good support and professional services are extremely important and an advantage, but they are no USP in the cloud.

If the Cloud OS Partner Network flops, Microsoft will get away with a black eye. It is the partners who will carry the real damage home.

Technology as a competitive advantage

Looking at the really successful cloud providers on the market, it becomes clear that they are the ones who have developed their clouds on their own technology stack and therefore differentiate themselves technologically from the rest. These are Amazon, Google and Microsoft, and precisely not Rackspace or HP, who both build on OpenStack.

Cloud OS partners like Dimension Data, CSC and Capgemini should keep this in mind. In particular, CSC and Dimension Data have big ambitions to have a say in the cloud.

HP Cloud Portfolio: Overview & Analysis

HP’s cloud portfolio consists of a range of cloud-based solutions and services. The HP Public Cloud is a true infrastructure-as-a-service (IaaS) offering and the current core product, which is marketed heavily. The HP Enterprise Services – Virtual Private Cloud provides a private cloud hosted by HP. The public cloud services are delivered exclusively from the US, with data centers on the west and east coasts. Even though HP’s sales force is distributed around the world, English is the only supported language.

Portfolio

The IaaS core offering includes compute power (HP Cloud Compute), storage (HP Cloud Storage) and network capacity. Furthermore, the value-added services HP Cloud Load Balancer, HP Cloud Relational DB, HP Cloud DNS, HP Cloud Messaging, HP Cloud CDN, HP Cloud Object Storage, HP Cloud Block Storage and HP Cloud Monitoring are available, with which a virtual infrastructure for one’s own applications and services can be built.

The HP Cloud infrastructure is based on OpenStack and is multi-tenant. The virtual machines are virtualized with KVM and can be booked in fixed sizes (Extra Small to Double Extra Large) per hour. The local storage of a virtual machine is not persistent; long-term data can be stored on, and attached via, an independent block storage service. Custom virtual machine images cannot be uploaded to the cloud. The load balancer is currently still in private beta. The infrastructure spans multiple fault domains, which is reflected in the service level agreement. Multi-factor authentication is currently not offered.
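Since the HP Public Cloud is based on OpenStack, provisioning typically goes through the standard OpenStack REST APIs. The following minimal sketch assumes the OpenStack Compute (v2) and Block Storage (v1) APIs; the endpoints, token, image and flavor IDs are placeholders, not HP-specific values. It shows the usual workflow of booting a fixed-size instance and attaching a persistent volume, since the local instance storage is ephemeral.

```python
import json
import requests

# Placeholder values: a real client would first obtain the token and
# service endpoints from the OpenStack Keystone identity service.
TOKEN = "<auth-token>"
COMPUTE = "https://<compute-endpoint>/v2/<tenant-id>"
VOLUMES = "https://<volume-endpoint>/v1/<tenant-id>"
HEADERS = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

# 1. Boot a fixed-size instance (flavor) from a provider-supplied image.
server_req = {"server": {
    "name": "app-server-1",
    "imageRef": "<image-id>",   # provider image; custom uploads are not possible
    "flavorRef": "<flavor-id>"  # e.g. the "Extra Small" size
}}
server = requests.post(COMPUTE + "/servers",
                       headers=HEADERS, data=json.dumps(server_req)).json()

# 2. Create a persistent block storage volume for long-term data.
vol_req = {"volume": {"display_name": "app-data", "size": 100}}  # size in GB
volume = requests.post(VOLUMES + "/volumes",
                       headers=HEADERS, data=json.dumps(vol_req)).json()

# 3. Attach the volume to the instance; only the volume survives a rebuild.
attach_req = {"volumeAttachment": {"volumeId": volume["volume"]["id"],
                                   "device": "/dev/vdb"}}
requests.post(COMPUTE + "/servers/%s/os-volume_attachments" % server["server"]["id"],
              headers=HEADERS, data=json.dumps(attach_req))
```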

The HP Enterprise Cloud Services offer a variety of solutions geared to businesses, including recovery-as-a-service, a dedicated private cloud and a hosted private cloud. The application services include solutions for collaboration, messaging, mobility and unified communications. Specific business applications include HP Enterprise Cloud Services for Microsoft Dynamics CRM, HP Enterprise Cloud Services for Oracle and HP Enterprise Cloud Services for SAP. Professional services complement the Enterprise Cloud Services portfolio. The Enterprise Cloud Services – Virtual Private Cloud is a multi-tenant private cloud hosted by HP and is aimed at SAP, Oracle and other enterprise applications.

Analysis

HP has a lot of experience in building and operating IT infrastructures and pursues the same objectives in the areas of public and private cloud infrastructure. For this, HP relies on its own hardware components and an extensive partner network. HP has a global sales force and an equally large marketing budget and is therefore able to reach a wide variety of customers, even though the data centers for the public cloud are located exclusively in the US.

In recent years HP has invested a lot of effort and development work. Nevertheless, the HP Public Cloud compute service has only been available to the general public since December 2012, so HP has no significant track record for its public cloud. There is only limited interoperability between the HP Public Cloud, based on OpenStack, and the private cloud offerings (HP CloudSystem, HP Converged Cloud), based on HP Cloud OS. Since the HP Public Cloud does not offer the ability to upload custom virtual machine images on a self-service basis, customers currently cannot transfer workloads from the private cloud to the public cloud, even if the private cloud is based on OpenStack.

INSIGHTS Report

The INSIGHTS Report „HP Cloud Portfolio – Overview & Analysis“ can be downloaded for free as a PDF.

Multi-Cloud is "The New Normal"

Not least the Nirvanix disaster has shown that one should not rely on a single cloud provider. But regardless of spreading the risk, a multi-cloud strategy is a recommended approach that is already being practiced, consciously or unconsciously.

Hybrid or multi-cloud?

What does multi-cloud actually mean? And what is the difference from a hybrid cloud? By definition, a hybrid cloud connects a private cloud or an on-premise IT infrastructure with a public cloud to supply the local infrastructure with further resources on demand. These resources can be compute power and storage, but also services or software. If a local email system is integrated with a SaaS CRM system, one can already speak of a hybrid cloud. That means a hybrid cloud does not just stand for IaaS or PaaS scenarios.

The multi-cloud approach extends the hybrid cloud idea by the number of connected clouds. Strictly speaking, these can be n clouds integrated in some form. For example, cloud infrastructures are connected so that applications can use different infrastructures or services in parallel, or depending on the workload or the current price. Even the parallel or distributed storage of data across multiple clouds is conceivable to ensure the availability and redundancy of the data. At the moment, multi-cloud is being intensively discussed in the IaaS area, so a look at Ben Kepes‘ and Paul Miller’s Mapping Session on multi-cloud as well as Paul’s Sector RoadMap: Multicloud management in 2013 is recommended.
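To make the idea of parallel, redundant storage across clouds concrete, here is a minimal sketch that writes the same object to two independent object stores. The client class and its endpoints are hypothetical stand-ins for any two providers’ storage SDKs; the point is only the fan-out and the check that at least one write succeeded.

```python
# Minimal sketch of redundant multi-cloud storage (hypothetical clients).

class ObjectStoreClient(object):
    """Placeholder for a provider-specific object storage SDK."""
    def __init__(self, name, endpoint):
        self.name = name
        self.endpoint = endpoint

    def put(self, container, key, data):
        # A real client would issue an authenticated HTTP PUT here.
        print("PUT %s/%s to %s (%s)" % (container, key, self.name, self.endpoint))
        return True

clouds = [
    ObjectStoreClient("cloud-a", "https://storage.cloud-a.example"),
    ObjectStoreClient("cloud-b", "https://storage.cloud-b.example"),
]

def put_redundant(container, key, data):
    """Write the same object to every configured cloud; succeed if any write worked."""
    results = []
    for cloud in clouds:
        try:
            results.append(cloud.put(container, key, data))
        except Exception as err:
            print("write to %s failed: %s" % (cloud.name, err))
            results.append(False)
    if not any(results):
        raise RuntimeError("object could not be stored in any cloud")
    return results

put_redundant("backups", "2013-12-30.tar.gz", b"...")
```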

What is often neglected is that multi-cloud has a special importance in the SaaS area. The number of new SaaS applications grows from day to day, and with it the demand to integrate these varying solutions and let them exchange data. Today, the cloud market is moving uncontrolled in the direction of isolated applications: each solution offers added value on its own but, as a result, leads to many small data silos. Enterprises already fought this kind of development, in vain, in pre-cloud times.

Spread the risk

Even if cloud marketing always promises the availability and security of data, systems and applications, the responsibility for ensuring this lies in one’s own hands (this refers to an IaaS public cloud). Although cloud vendors mostly provide the ways and means within IaaS, the customer has to make use of them on their own. Outages like those known from the Amazon Web Services, or unpredictable shutdowns like that of Nirvanix, should lead to more sensitivity when using cloud services. The risk needs to be spread consciously: not all eggs should be put into one basket but strategically distributed over several.
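A simple way to spread the risk at the application level is a read path that falls back to a second provider when the first one is unavailable. The sketch below is illustrative only: the two tiny client classes simulate an outage and assume that both clouds hold a copy of the object.

```python
# Minimal failover read across two providers (hypothetical clients, illustrative only).

class StoreA(object):
    name = "cloud-a"
    def get(self, container, key):
        raise IOError("cloud-a is currently unreachable")  # simulated outage

class StoreB(object):
    name = "cloud-b"
    def get(self, container, key):
        return b"object data from cloud-b"  # the second copy saves the day

def get_with_failover(clouds, container, key):
    """Try each provider in order and return the first successful read."""
    last_error = None
    for cloud in clouds:
        try:
            return cloud.get(container, key)
        except Exception as err:
            print("read from %s failed, trying next provider: %s" % (cloud.name, err))
            last_error = err
    raise RuntimeError("object unavailable in all clouds: %s" % last_error)

print(get_with_failover([StoreA(), StoreB()], "backups", "2013-12-30.tar.gz"))
```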

Best-of-Breed

The best-of-breed strategy is the most widespread approach in enterprise IT architectures. Here, multiple integrated, specialized solutions are used for the various departments within a company. The idea behind best-of-breed is to use the best solution for each task, something an all-in-one suite usually cannot offer. This means one assembles the best services and applications for one’s purpose. Following this approach in the cloud, one is already in a multi-cloud scenario, which means that approximately 90 percent of all companies using cloud services are multi-cloud users. Whether the cloud services in use are actually integrated remains doubtful.

What is recommended on the one hand is, on the other hand, unavoidable in many cases. There are already a few good all-in-one solutions on the market, but most of the pearls are implemented independently and must be combined with other solutions, for example email and office/collaboration with CRM and ERP. With respect to risk, this has its advantages, especially in the cloud: if one provider fails, only a partial service is unavailable and not the entire productivity environment.

Avoid data silos: APIs and integration

Cloud marketplaces attempt to enable such best-of-breed approaches by grouping individual cloud solutions into different categories, offering companies a broad portfolio of solutions from which a cloud productivity suite can be put together.

Nevertheless, even tightly controlled and supposedly integrated marketplaces reveal a crucial weakness: massive integration problems. There are many individual SaaS applications that do not interact. This means that there is no common data basis, and the email service, for example, cannot access the data in the CRM service and vice versa. As described above, this creates individual data silos and isolated applications within the marketplace.

This example also illustrates the biggest problem with a multi-cloud approach: the integration of the many different, usually independently operated cloud services and their interfaces with each other.
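What such an integration boils down to in practice is moving data between the providers’ APIs oneself. The following sketch pulls contacts from a hypothetical CRM REST API and pushes them into a hypothetical address book API of an email service; the endpoints, paths and field names are invented purely for illustration and do not belong to any real product.

```python
import requests

# Hypothetical endpoints and API keys -- purely illustrative, no real product API.
CRM_API = "https://crm.example.com/api/v1"
MAIL_API = "https://mail.example.com/api/v1"
CRM_KEY = "<crm-api-key>"
MAIL_KEY = "<mail-api-key>"

def sync_crm_contacts_to_mail():
    """Copy CRM contacts into the email service so both work on the same data."""
    contacts = requests.get(CRM_API + "/contacts",
                            headers={"Authorization": "Bearer " + CRM_KEY}).json()
    for contact in contacts:
        payload = {"name": contact["name"], "email": contact["email"]}
        requests.post(MAIL_API + "/addressbook",
                      headers={"Authorization": "Bearer " + MAIL_KEY},
                      json=payload)

if __name__ == "__main__":
    sync_crm_contacts_to_mail()
```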

Multi-cloud is „The New Normal“ in the cloud

The topic of multi-cloud is currently hotly debated, especially in the IaaS environment, as a way to spread the risk and to take advantage of the costs and benefits of different cloud infrastructures. But the subject must also gain greater importance in the SaaS environment in order to avoid data silos and isolated applications in the future, to simplify integration and to support companies in the adoption of their best-of-breed strategy.

Notwithstanding these expectations, multi-cloud use is already a reality: companies use multiple cloud solutions from many different vendors, even if the services are not yet (fully) integrated.

Enterprise Cloud Computing: T-Systems to launch "Enterprise Marketplace"

According to Deutsche Telekom, it has already been able to reach about 10,000 mid-size businesses with its Business Marketplace. Now a similar success is to be achieved in the corporate customer environment. For this, the Enterprise Marketplace is available, allowing companies to purchase software, hardware or packaged solutions on demand and even deploy firm-specific in-house developments for their own employees.

SaaS, IaaS and ready-made packages

The Enterprise Marketplace is aimed at the specific requirements of large businesses and, besides software-as-a-service (SaaS) solutions, also offers pre-configured images (appliances) and pre-configured overall packages that can be integrated into existing system landscapes. All services, including integrated in-house developments, are located in a hosted private cloud and are delivered through Telekom’s backbone.

Standard solutions and in-house development

The Enterprise Marketplace offers mostly standardized services. The use of these services includes new releases, updates and patches. Billing is monthly and based on the usage of the respective service. T-Systems also provides ready-made appliances, like a pre-configured Apache application server, which can be managed and extended with one’s own software. In the future, customers will be able to choose which T-Systems data center they would like to use.

Within the Enterprise Marketplace, different offerings can be combined, e.g. a web server with a database and a content management system. Furthermore, own solutions such as monitoring tools can be integrated. These own solutions can also be made available to the other marketplace participants; who and how many participants get access to a solution is decided by the customer.

In addition, the performance of each appliance in the Enterprise Marketplace can be individually customized: the user can adjust the number of CPUs as well as the RAM size, the storage and the connection bandwidth. In a next expansion stage, managed services will take care of patches, updates and new releases.
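As a purely illustrative sketch (not T-Systems’ actual API or data model), a combined marketplace deployment with per-appliance sizing for CPUs, RAM, storage and bandwidth could be described roughly like this:

```python
# Hypothetical description of a combined Enterprise Marketplace deployment.
# The structure and field names are invented for illustration only.
deployment = {
    "name": "corporate-web-stack",
    "appliances": [
        {"type": "webserver", "product": "Apache",
         "cpus": 4, "ram_gb": 8, "storage_gb": 100, "bandwidth_mbit": 100},
        {"type": "database", "product": "SQL database",
         "cpus": 8, "ram_gb": 32, "storage_gb": 500, "bandwidth_mbit": 100},
        {"type": "cms", "product": "Joomla",
         "cpus": 2, "ram_gb": 4, "storage_gb": 50, "bandwidth_mbit": 50},
    ],
    # Own in-house tools can sit next to the standard offerings.
    "custom_solutions": [{"type": "monitoring", "shared_with": ["subsidiary-a"]}],
}

for appliance in deployment["appliances"]:
    print("%(type)s (%(product)s): %(cpus)d CPUs, %(ram_gb)d GB RAM" % appliance)
```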

Marketplace with solutions from external vendors

A custom dashboard provides an overview of all deployed or approved applications within the Enterprise Marketplace. Current solutions on the marketplace include web servers (e.g. Apache and Microsoft IIS), application servers (e.g. Tomcat and JBoss), SQL databases, Microsoft Enterprise Search as well as open source packages (e.g. a LAMP stack or Joomla). In addition, T-Systems partners with a number of SaaS solution providers whose offerings have been specifically tested for use on the T-Systems infrastructure. These include, among others, the tax software TAXOR, the enterprise social media solution tibbr, the business process management suite T-Systems Process Cloud as well as the sustainability management solution WeSustain.

Cloud computing at enterprise level

With its Enterprise Marketplace, T-Systems has already arrived where other providers like Amazon, Microsoft, Google, HP and Rackspace are just starting their journey – at the lucrative corporate customers. While the Business Marketplace primarily presents itself as a marketplace for SaaS solutions, the Enterprise Marketplace goes further and also provides infrastructure resources and complete solutions for companies, including the integration of in-house developments as well as managed services.

Compared to the current cloud players, T-Systems does not operate a public cloud but instead focuses on providing a (hosted/virtual) private cloud. This can pay off in the medium term. Current market figures from IDC forecast growth for the public cloud from 47.3 billion dollars in 2013 to 107 billion dollars in 2017. According to IDC, the popularity of the private cloud will decline in favor of the virtual private cloud (hosted private cloud) once the public cloud providers start to offer appropriate solutions to address customers’ concerns.

We made similar observations in our GigaOM study "The state of Europe’s homegrown cloud market". Cloud customers are increasingly demanding managed services. They want to benefit from the properties of the public cloud – flexible use of resources, pay per use – but at the same time within a very secure environment, including support from the provider for the integration of applications and systems. At present, the public cloud players cannot offer all of this in this form since they mainly focus on their self-service offerings. That is fine, because they serve certain use cases effectively and have achieved significant market share in a specific target customer segment. However, if they want to catch the big fish in the enterprise segment, they will have to align their strategy with a portfolio like that of T-Systems in order to meet the individual needs of these customers.

A note on the selection process for Enterprise Marketplace partners: discussions with partners that offer their solutions on the Business Marketplace confirm a tough selection process, which can be a real "agony" for the partner. Adjustments specifically to T-Systems’ security infrastructure are not isolated cases and point to strict quality management by T-Systems.

A view of Europe's own cloud computing market

Europe’s cloud market is dominated by Amazon Web Services, Microsoft Windows Azure and Rackspace – the same providers that serve the rest of the world. Each of these global providers has a local presence in Europe, and all have made efforts to satisfy European-specific concerns with respect to privacy and data protection. Nevertheless, a growing number of local providers are developing cloud computing offerings in the European market. For GigaOM Research, Paul Miller and I explore in more detail the importance of these local entrants and ask whether Europe’s growing concern about U.S. dominance in the cloud is a real driver for change. The goal is to discover whether there is a single European cloud market these companies can address or whether there are several different markets.

Key highlights from the report

  • European concern about the dominance of U.S. cloud providers
  • Rationale for developing cloud solutions within Europe
  • The value of transnational cloud infrastructure
  • The value of local or regional cloud infrastructure
  • A representative sample of Europe’s native cloud providers

European cloud providers covered in the report

  • ProfitBricks
  • T-Systems
  • DomainFactory
  • City Cloud
  • Colt
  • CloudSigma
  • GreenQloud

Get the report

To read the full report, go to "The state of Europe’s homegrown cloud market".

GigaOM Analyst Webinar – The Future of Cloud in Europe [Recording]

On July 9 Jo Maitland, Jon Collins, George Anadiotis and I talked about the opportunities and challenges of the cloud in Europe and countries such as Germany or the UK, and gave an insight into the cloud computing market in Europe. The recording of the international GigaOM analyst webinar „The Future of Cloud in Europe“ is online now.

Background of the webinar

The European Commission unveiled its "pro cloud" strategy a year ago, hoping to reignite the stagnant economy through innovation. The Commissioner proclaimed boldly that the cloud must "happen not to Europe, but with Europe". And rightly so. A year later, three GigaOM Research analysts from Europe – Jon Collins (Inter Orbis), George Anadiotis (Linked Data Orchestration) and Rene Buest (New Age Disruption) – moderated by Jo Maitland (GigaOM Research), looked at who the emerging cloud players in the region are and what their edge over U.S. providers is. They dug into the issues for cloud buyers in Europe and the untapped opportunities for providers. Can Europe build a vibrant cloud computing ecosystem? That’s a tough question today, as U.S. cloud providers still dominate the industry.

Questions which were answered

  • What’s driving cloud opportunities and adoption in Europe?
  • What are the inhibitors to adoption of cloud in Europe?
  • Are there trends and opportunities within specific countries (UK, Germany, peripheral EU countries?)
  • Which European providers show promise and why?
  • What are the untapped opportunities for cloud in Europe?
  • Predictions for the future of cloud in Europe.

The recording of the analyst webinar

Windows Azure Infrastructure Services – Microsoft is not yet on par with Amazon AWS

That Microsoft, as one of the world’s leading IT companies, would one day have to fight with an "online store" and a "search engine" for market share is probably something no one in Redmond ever dared to dream. But that is the reality. Amazon and its Amazon Web Services (AWS) are the engine of innovation in the cloud computing market, and even Google is catching up steadily. Google already has well-positioned products in the platform-as-a-service (PaaS) market with App Engine and in the software-as-a-service (SaaS) market with Google Apps. Amazon, however, is the absolute market leader in the area of infrastructure-as-a-service (IaaS). Here, too, Microsoft is now attacking. After Windows Azure was initially positioned as a pure PaaS on the market, more and more IaaS components were added successively. With the new release, Microsoft has now officially rolled out the Windows Azure infrastructure services. For many, this step comes too late, as a large market share in this area has already gone to AWS. However, where this initially looks disadvantageous, some benefits that are overlooked by most are hidden as well.

Windows Azure Infrastructure Services at a glance

Basically, the Azure infrastructure services are nothing new. They were already presented in a public release preview in June 2012. According to Microsoft, "... more than 1.4 million virtual machines have been created and used by hundreds of millions of processor hours." In addition, more than 50 percent of Fortune 500 companies already use Windows Azure today and manage a total of more than four trillion data objects and pieces of information on it. The compute and storage capacity doubles roughly every six to nine months. According to Microsoft, nearly 1,000 new customers register on Windows Azure every day.

With the release of the Windows Azure infrastructure services, Microsoft’s cloud computing stack is now officially complete. In addition to the operation of virtual machines, the update includes the associated network components. Furthermore, Microsoft now offers support for virtual machines running the most common Microsoft server workloads such as Microsoft BizTalk or SQL Server 2012. In addition to Windows, the Linux operating system is fully supported on the virtual machines. Windows Azure Virtual Networks is also intended to allow hybrid operations.

New instances and updated SLAs

In addition to new virtual instance types, for example with larger memory capacities of 28 GB and 56 GB, prepared virtual images are also available, such as for BizTalk Server and SQL Server. Prepared Linux images, among them CentOS, Ubuntu and SUSE Linux Enterprise Server (SLES), are provided by commercial vendors. Furthermore, numerous open source applications are available as prepared images in the VM Depot on a self-service basis. Microsoft server products including Microsoft Dynamics NAV 2013, SharePoint Server 2013 and BizTalk Server 2013 have already been tested by Microsoft to run on the virtual machines.

Furthermore, the service level agreements (SLAs) have been revised. Microsoft guarantees 99.95 percent availability, including financial compensation if there is a failure on Microsoft’s side. In addition to an SLA for cloud services, Microsoft offers seven SLAs specifically for storage, SQL Database, SQL Reporting, Service Bus, Caching, CDN and Media Services.
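As a rough illustration of what 99.95 percent availability means in practice (my own back-of-the-envelope calculation, not a figure from the SLA), the guaranteed uptime translates into the following maximum downtime per month:

```python
# Rough back-of-the-envelope calculation for a 99.95% monthly availability SLA.
sla = 99.95                         # guaranteed availability in percent
minutes_per_month = 30 * 24 * 60    # assuming a 30-day month

allowed_downtime = minutes_per_month * (100 - sla) / 100
print("Allowed downtime: %.1f minutes per month" % allowed_downtime)  # ~21.6 minutes
```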

24/7/365 Support

A Microsoft support team is available around the clock, every day. The support plans are divided into four levels, ranging from Developer Support to Premier Support.

Price reduction for virtual machines and cloud services

Like Amazon AWS, Microsoft passes its savings from economies of scale on to its customers. Effective immediately, the following new prices and extensions are available:

  • Virtual machines (Windows, standard instances) are available at reduced prices until 31 May. The new general availability rates apply from 1 June 2013. For a small instance the new price is €0.0671 per hour.
  • The prices for virtual machines (Linux, standard instances) have been reduced by 25 percent. From 16 April 2013, prices for small, medium, large and extra large instances are reduced by 25 percent. The price for a small Linux instance drops from €0.0596 per hour to €0.0447 per hour in all regions.
  • The prices for virtual networks start at €0.0373 per hour, effective from 1 June 2013. Until 1 June, customers can use virtual networks for free.
  • The prices of cloud services for web and worker roles have been reduced by 33 percent for standard instances. From 16 April 2013, the price drops by 33 percent for small, medium, large and extra large instances. The price for a small worker role falls from €0.0894 per hour to €0.0596 per hour in all regions.
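To put the hourly rates above into perspective, here is a quick conversion to monthly costs (my own arithmetic, assuming continuous operation over a 30-day month):

```python
# Approximate monthly cost of continuously running instances at the new rates.
hours_per_month = 24 * 30  # 30-day month assumed

rates = {
    "Small Windows instance": 0.0671,   # EUR per hour
    "Small Linux instance":   0.0447,   # EUR per hour
    "Small worker role":      0.0596,   # EUR per hour
}

for name, rate in rates.items():
    print("%s: ~%.2f EUR per month" % (name, rate * hours_per_month))
# A small Linux instance, for example, comes to roughly 32.18 EUR per month.
```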

Not too late for the big piece of the pie

Even though Microsoft enters the highly competitive infrastructure-as-a-service market very late, this does not mean that it has missed the boat. In many countries, the adoption of cloud computing has only just started. In addition, the big money is made with established corporate clients and only then with startups. Even Amazon has understood that and has taken the appropriate measures.

Furthermore, the importance of the private cloud, and thus the hybrid cloud, is increasing worldwide. And here the cards are already dealt quite differently. With Windows Server 2012, Microsoft has a well-placed product for the private cloud that can be seamlessly integrated with Windows Azure. Amazon AWS could only react quickly here with a possible acquisition of Eucalyptus; a first intensive cooperation between the two companies already exists.

However, the Windows Azure infrastructure services are primarily public cloud services. And here it must be said that the diversity of the Amazon Web Services portfolio is still significantly greater than that of Windows Azure. For example, services such as Elastic IP or CloudFormation are missing. Nevertheless, with its portfolio Microsoft is currently the only public cloud provider on the market that can become seriously dangerous for Amazon AWS. Because "infrastructure means more than just infrastructure", it is ultimately about "making the infrastructure usable".

See also: Amazon Web Services vs. Microsoft Windows Azure – A direct comparison (to be updated)

And what about Google?

Google should not be underestimated in any case. On the contrary: in a first performance comparison between the Google Cloud Platform and Amazon AWS, Google emerged as the winner. However, the current service portfolio of the Google Cloud Platform is confined at its core to computing power, storage and databases. Other value-added services that build on the platform are still missing. In addition, Google can currently only be seen as a pure public cloud provider; in the private/hybrid cloud environment, no products are to be found yet. This needs to be remedied with collaborations and acquisitions in order to meet the needs of conservative corporate customers in the future, especially since Google still has a not-to-be-underestimated reputation problem regarding data protection and its hunger for data. Here, more transparency must be shown.

Microsoft is not yet on par with Amazon AWS

With the official release of the Windows Azure infrastructure services, Microsoft has begun to catch up with the Amazon Web Services in the infrastructure-as-a-service market. But there can be no talk of playing at eye level yet, because nothing new, let alone innovative, can be found in the new Azure release. Instead, Microsoft merely tries to make up for Amazon AWS’s technology lead by extending its infrastructure resources ... but that’s it. Amazon’s degree of innovation should not be underestimated; it expands its cloud platform with further disruptive services and functions at regular intervals.

Nevertheless, in the attractive enterprise customer environment Microsoft is in a good position and, with the Azure infrastructure services, has expanded its portfolio with another important component against Amazon. In addition, Microsoft already has a very large on-premise customer base that now needs to be moved to the cloud, among them renowned and financially strong companies. And this is precisely the area in which Amazon still has to build trust. Moreover, one should not neglect the ever-growing private cloud market; here, too, the cards on both sides are dealt quite differently.

That Microsoft is not yet on par with Amazon in the IaaS area does not mean that it will not be successful. What is decisive is not necessarily being first on the market or having the best product, but convincing existing and potential customers of the added value provided. And it would not be the first time that Microsoft pulled this off.

ProfitBricks under the hood: IaaS from Germany – stay tuned

Last week I had a briefing with ProfitBricks to get a better understanding of the infrastructure-as-a-service (IaaS) offering from Germany and to ask specific questions. I have examined ProfitBricks critically in two articles in the past, number one here and number two here – first, because I don’t like marketing phrases that promise much more than is really behind them, and secondly because even the technical promises must be kept.

ProfitBricks details

ProfitBricks presents itself as a typical infrastructure-as-a-service provider. A graphical web interface is used to assemble customized servers into an entire data center. A complex network structure is meant to ensure real isolation of the customer’s network within the virtual data center. A meshed network with redundant InfiniBand connections and highly redundant storage (including automatic backup) provides the performance and availability of the data.

In addition, ProfitBricks has its own security team and highly experienced system administrators who provide round-the-clock support.

Locations

ProfitBricks has data centers or co-locations in Germany (Karlsruhe) and the USA (Las Vegas). However, the two data centers are not connected with each other – neither physically nor virtually. So no data exchange between Germany and the United States can take place this way.

The infrastructure components

At ProfitBricks, servers can be equipped with 1 to 48 cores and between 1 GB and 196 GB of RAM. The current maximum is 60 cores and 250 GB of RAM per server, which can be enabled via a support request. Storage is available from 1 GB to 5,000 GB; however, it can only be assigned directly to one server, so there is no central storage. To realize this, one has to build one’s own central storage appliance and distribute the storage through it.

Within a self-designed data center (see below), two zones (similar to Amazon Availability Zones) are available. This makes it possible to configure two servers such that one of them is not affected by problems in the zone of the second server.

There is no centralized firewall. Instead, every NIC of a server can be configured with its own rules. A central firewall can be achieved by creating a dedicated firewall appliance (Linux plus iptables, or a prepared commercial firewall as an ISO image).

Although a load balancer is available, ProfitBricks recommends building one’s own based on an appliance because, among other things, the ProfitBricks load balancer has no monitoring included.

ProfitBricks does not offer additional value-added services of its own and, by its own admission, never will. Instead, the provider relies on a network of partners to provide appropriate services for the infrastructure platform.

Currently unique: The Data Center Designer

What really convinced me at ProfitBricks is the "Data Center Designer (DCD)". No IaaS provider worldwide has anything like it in this form.

Using this graphical web interface, one can individually design a complete virtual data center and activate or modify the configuration with a mouse click – whether it concerns servers, storage, load balancers, firewalls or the corresponding network.

Once a data center design is ready, it can be saved and deployed. Before that, the system runs a check and informs the user whether everything is configured correctly – e.g. that all servers have a boot drive with a corresponding image. Then the total cost per month for this virtual data center is itemized.

However, the DCD still has a weak point. Once a data center is deployed, no single server can be removed from the design or stopped via the web interface. Instead, the entire data center must be un-deployed, the server removed, and the data center re-deployed. A single server can supposedly be removed using the proprietary SOAP API, which supports, among other things, Java and C#. This web feature, as well as a REST API, is to follow in the future.
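For illustration only, talking to a SOAP API like this from Python could look roughly like the sketch below, using the suds library. The WSDL URL, the credentials and the method name deleteServer are hypothetical placeholders, not ProfitBricks’ documented interface.

```python
# Hypothetical sketch of removing a single server via a SOAP API.
# The WSDL URL, credentials and method name are invented placeholders.
from suds.client import Client

client = Client("https://api.example-iaas.de/soap?wsdl",  # placeholder WSDL
                username="<user>", password="<password>")

# List the operations the WSDL actually exposes before calling anything.
print(client)

# Hypothetical call: remove one server from a deployed virtual data center.
client.service.deleteServer(serverId="<server-id>")
```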

The customer is largely on their own

ProfitBricks offers German-speaking support staff who have either worked for years as administrators or were involved in the development of the system. Support is included for free, even if one only evaluates the platform with a test account.

Otherwise, ProfitBricks is a typical self-service offering like all the other IaaS providers. This means that the customer is responsible for the design of their virtual infrastructure and for how an application on that infrastructure scales and is made highly available.

For additional questions and tasks, for example configuring a separate firewall appliance or a dedicated load balancer, external partners are supposed to help.

Prices

Billing is to the minute, based on hourly rates. The costs break down as follows:

  • 1 core = 0.04 EUR per hour
  • (Windows Server plus 0.01 EUR per hour)
  • 1 GB RAM = 0.09 EUR per hour
  • 1 GB storage = 0.09 EUR per 30 days
  • 1 GB traffic = 0.06 EUR per GB traffic

For the US market:

  • 1 core = 0.05 USD per hour
  • (Windows Server plus 0.02 USD per hour)
  • 1 GB RAM = 0.015 USD per hour
  • 1 GB storage = 0.09 USD per 30 days
  • 1 GB traffic = 0.08 USD per GB traffic
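To get a feeling for the model, here is a quick estimate of the monthly cost of one continuously running server, using the EUR list prices quoted above (my own arithmetic; adjust the constants if the price list changes):

```python
# Monthly cost estimate for one server, using the EUR list prices quoted above.
HOURS = 24 * 30            # one 30-day month, running continuously

cores, ram_gb, storage_gb, traffic_gb = 2, 4, 100, 50

cost = (cores * 0.04 * HOURS        # cores, per hour
        + ram_gb * 0.09 * HOURS     # RAM, per GB and hour
        + storage_gb * 0.09         # storage, per GB and 30 days
        + traffic_gb * 0.06)        # traffic, per GB

print("Estimated monthly cost: %.2f EUR" % cost)
```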

Live Vertical Scaling

ProfitBricks supports so-called live vertical scaling. This means that further resources such as CPU and RAM can be added to a virtual server during operation. This feature must be enabled separately for each server, and the server must then be restarted once.

However, as I have noted here and as ProfitBricks confirmed during the briefing, the operating system, the database, the software and one’s own application must support it. The systems need to recognize that more cores and RAM are suddenly available and actually use them – and, in the opposite case, also cope with the resources scaling down again.
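What "the application must support it" can mean in practice is sketched below: a minimal example of my own (not ProfitBricks code) that periodically re-reads the visible core count and resizes a worker pool accordingly.

```python
# Minimal sketch: an application that adapts to cores appearing or disappearing.
import multiprocessing
import time

def resize_pool(current_pool, current_size):
    """Recreate the worker pool if the number of visible cores has changed."""
    cores = multiprocessing.cpu_count()
    if cores != current_size:
        print("core count changed: %d -> %d, resizing pool" % (current_size, cores))
        if current_pool is not None:
            current_pool.close()
            current_pool.join()
        return multiprocessing.Pool(processes=cores), cores
    return current_pool, current_size

if __name__ == "__main__":
    pool, size = None, 0
    for _ in range(3):               # a real service would loop forever
        pool, size = resize_pool(pool, size)
        time.sleep(5)                # poll interval; work would be dispatched here
```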

ProfitBricks is interesting

ProfitBricks is an interesting infrastructure-as-a-service offering, especially in Germany, which is very sparse in cloud (IaaS) offerings, and with a data center located in Germany. Particularly noteworthy is the Data Center Designer (the only USP), which is currently unique in the world and provides convenience features that the other IaaS providers neglect. Admittedly, the designer still creaks here and there (example: removing a server), but that will certainly change in a next version.

At the end of the day, ProfitBricks is a pure infrastructure-as-a-service provider that has its strengths in infrastructure operations; the briefing showed this as well. That is why an interview with CEO Achim Weiss that I read a few weeks ago confuses me. Besides enterprises, he named Internet startups as ProfitBricks’ target customers. Today, I consider this utopian: without a services portfolio like that of the Amazon Web Services, this target group cannot be reached. The service gap can and should be closed by service partners – a different but quite legitimate approach if one’s strengths lie in a different area.

Amazon Web Services vs. Microsoft Windows Azure – A direct comparison

Many companies are currently evaluating public cloud services such as IaaS. The first thoughts turn to the two large and best-known providers in the scene – Amazon Web Services and Microsoft Windows Azure. Both have an extensive and growing range of cloud services today. But if you want to compare the two portfolios, the challenge grows with the number of services.

Amazon Cloud vs. Windows Azure

The following table compares the two cloud service portfolios 1:1 and provides clarity: who offers what in which area, what the respective service is called, and where to find more information about it.

| Category | Feature | Amazon Web Services | Microsoft Windows Azure |
|---|---|---|---|
| Computing power | Virtual machines | Elastic Compute Cloud | Role Instances |
| Computing power | High performance computing | Cluster Compute Instances | HPC Scheduler |
| Computing power | MapReduce | Elastic Map Reduce | Hadoop on Azure |
| Computing power | Dynamic scaling | Auto Scaling | Auto Scaling Application Block |
| Storage | Unstructured storage | Simple Storage Service | Azure Blob |
| Storage | Flexible entities | SimpleDB | Azure Tables |
| Storage | Block level storage | Elastic Block Store | Azure Drive |
| Storage | Archiving | Amazon Glacier | – |
| Storage | Storage gateway | AWS Storage Gateway | – |
| Databases | RDBMS | Relational Database Service | SQL Azure |
| Databases | NoSQL | DynamoDB | Azure Tables |
| Caching | CDN | CloudFront | CDN |
| Caching | In-memory | ElastiCache | Cache |
| Network | Load balancer | Elastic Load Balancer | Fabric Controller / Traffic Manager |
| Network | Hybrid cloud | Virtual Private Cloud | Azure Connect |
| Network | Peering | Direct Connect | – |
| Network | DNS | Route 53 | – |
| Messaging & Applications | Async messaging | Simple Queue Service | Azure Queues |
| Messaging & Applications | Push notifications | Simple Notification Service | Service Bus |
| Messaging & Applications | Bulk email | Simple Email Service | – |
| Messaging & Applications | Workflows | Amazon Simple Workflow Service | – |
| Messaging & Applications | Search | Amazon CloudSearch | – |
| Monitoring | Resource monitoring | CloudWatch | System Center |
| Security | Identity management | Identity Access Management | Azure Active Directory |
| Deployment | Resource creation | CloudFormation | – |
| Deployment | Web application container | Elastic Beanstalk | Web Role |