Cloud Computing

Amazon AWS builds a data center in Germany: Nice idea!

Amazon Web Services will open a new cloud region targeting the German market by establishing a data center in Frankfurt, Germany. But is this really so exciting for German companies?

Amazon AWS to touch down in Germany

Apparently, Amazon AWS has recognized the importance of the German market and the concerns of German companies. Crisp Research knows from reliable sources that the cloud provider will open a cloud region for the German market, located in Frankfurt, in the coming weeks.

After Salesforce's announcement of a German data center location, Amazon is the next big U.S. cloud provider to follow the trend. This again shows the attractiveness of Germany. After all, most American companies have rather neglected the German market. Typically, the majority of American cloud providers supply the European market through data centers in Ireland (Dublin) and the Netherlands (Amsterdam). This reduces the attractiveness of those cloud providers, especially for medium-sized German businesses. Consultations with IT users consistently show that storing data outside of Germany, under an agreement based at best on European rather than German law, is a no-go.

[Update]: Technical evidence

On the 4th of July 2014, the German blogger Nils Jünemann published an article with technical evidence of a new AWS region "eu-central-1". Using a traceroute to "" he showed that something was reachable. However, when I ran a traceroute on the 5th of July 2014, the host was unknown.
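Such a check is easy to reproduce with a DNS lookup against the endpoint name the new region would be expected to use. A minimal sketch in Python; note that the hostname below is an assumption based on AWS's usual naming scheme, not a confirmed address:

```python
import socket

def region_endpoint_exists(hostname):
    """Return True if the hostname resolves in DNS, False otherwise."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# Hypothetical endpoint name for the rumored region (assumption, not confirmed):
print(region_endpoint_exists("ec2.eu-central-1.amazonaws.com"))
```

A resolving endpoint is only a hint that infrastructure exists, of course, not proof that the region is open for customers.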

AWS Portfolio: Slowly reaching enterprise IT

In addition to the data center site, Amazon announced AWS CloudTrail last year, the first service to give companies more control over compliance. AWS CloudTrail monitors and records the AWS API calls of one or more accounts, covering calls made from the AWS Management Console, the AWS Command Line Interface (CLI), your own applications, or third-party applications. The collected data are stored on either Amazon S3 or Amazon Glacier for evaluation and can be viewed with tools from AWS or external providers. AWS CloudTrail itself is free of charge; however, costs arise for storing the data on Amazon S3 and Amazon Glacier as well as for Amazon SNS notifications.
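To illustrate, CloudTrail delivers its logs as JSON files containing a `Records` array; a short sketch of evaluating such a file (the record below is a trimmed, made-up example, not real log data):

```python
import json

# A trimmed CloudTrail log file as it would appear on Amazon S3
# (field values here are invented for illustration).
log_file = json.dumps({
    "Records": [
        {"eventTime": "2014-07-05T12:00:00Z",
         "eventSource": "ec2.amazonaws.com",
         "eventName": "TerminateInstances",
         "sourceIPAddress": "203.0.113.10",
         "userIdentity": {"userName": "alice"}},
    ]
})

def audit_events(raw, event_name):
    """Return (user, source IP) pairs for all API calls with the given name."""
    records = json.loads(raw)["Records"]
    return [(r["userIdentity"].get("userName"), r["sourceIPAddress"])
            for r in records if r["eventName"] == event_name]

print(audit_events(log_file, "TerminateInstances"))
# → [('alice', '203.0.113.10')]
```

Filtering the records by event name and origin like this is the basic building block of the security audits described below.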

AWS CloudTrail is one of the most important services for enterprise customers that Amazon has released in recent times. The collected logs support compliance with government regulations by recording all accesses to AWS services. The log data enables more effective security audits, making it possible to identify the precise origin of vulnerabilities and of unauthorized or erroneous data access.

Enterprise quo vadis?

After establishing itself as a leading infrastructure provider and enabler for startups and new business models in the cloud, the company from Seattle has been trying for quite some time to get a foothold in the lucrative enterprise environment. However, one question remains open: will that be enough to reach a critical mass of German companies and evolve from a provider for startups and developers into a serious alternative for enterprise IT workloads?

Yes, under certain conditions:

  • Business related services must be launched simultaneously in all regions and not only in the U.S.
  • AWS requires a network of partners in order to reach the mass of attractive German corporate customers.
  • The localization of all information, such as white papers, how-to’s and training is critical.
  • Less self-service, more managed services and professional services, e.g. through the partner network.
  • Reducing complexity by simplifying the use of the scale-out principle.
  • Cloud Connectivity for reliable access to the services.
  • Avoidance of the service lock-in.
  • Strengthening the AWS Marketplace for easier use of scalable standard workloads and applications.
  • Consideration of hybrid cloud scenarios and strengthening of the partner Eucalyptus on the private cloud side.

Note on the Eucalyptus partnership: nearly all Eucalyptus customers are said to also be AWS customers (source: Eucalyptus). Conversely, this means that hybrid cloud infrastructures already exist between on-premise Eucalyptus installations and the Amazon public cloud.

The remaining question marks: Microsoft and Google

Medium-sized businesses demand from cloud providers that their data be stored in a German data center. About 75 percent consider a physical data location in Germany a necessity in order to enforce German law more easily.

After Salesforce, IBM and Amazon are the only remaining major cloud providers who could be expected to make investments in this direction.

About Google, one can unfortunately say that nothing will happen in the near or far future. The DNA and mentality of the company in terms of data location and customer concerns differ too strongly from those of other providers.

Microsoft basically holds good cards. However, the company from Redmond doesn't need to play them yet. Microsoft is pursuing a different strategy, using its Cloud OS Partner Network of local providers worldwide (e.g. Pironet NDH in Germany) and empowering them with the so-called "Azure Pack" to offer their own Microsoft Azure-based cloud infrastructure as a hosted model from a local data center.

How the trend of building local data centers will develop remains to be seen. The bottom line is that Germany, and especially the location of Frankfurt, not least because of the DE-CIX, is well positioned to host additional international cloud providers. A key finding of this development is that international providers have understood the concerns and are willing to make compromises for the benefit of the user.


Analyst Report: Amazon AWS vs. Microsoft Azure

Whereas Microsoft spent the last decades fighting vendors like Novell, Oracle, IBM, or HP for on-premise market share, a new giant has established itself in the public cloud with Amazon Web Services, and it is putting out feelers toward enterprise customers: a market that is predominantly dominated by Microsoft and that holds enormous potential for both vendors.

Market forecasts by Crisp Research show strong growth of 40 percent per year for the coming years, with revenues in Germany reaching up to 28 billion euros by 2018. This free analyst report compares the service portfolio as well as the strategy of Amazon Web Services with those of Microsoft Azure.


Eucalyptus Cloud 3.3 moves ever closer to Amazon AWS and integrates open source tools from Netflix

I had already written about it when Netflix announced that they would release some of their tools as open source. Now it has happened. With its new 3.3 release, Eucalyptus has integrated exactly these Netflix tools and, for the first time, offers functionality for the availability and management of applications within a private cloud infrastructure. Furthermore, new Amazon Web Services (AWS)-related functions are provided.

New features in Eucalyptus 3.3

In addition to the Netflix tools, Eucalyptus 3.3 was extended with AWS-compatible features such as auto scaling, load balancing, and CloudWatch. With Auto Scaling, rules can be created that automatically support workloads with additional virtual machines when a certain load limit is reached. The mechanism is supposed to work exactly like the one on the public cloud infrastructure of Amazon Web Services. Furthermore, it is now possible to scale workloads out to AWS automatically.
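A threshold-based scaling rule of this kind can be sketched in a few lines of Python; the thresholds and limits here are illustrative, not Eucalyptus defaults:

```python
def desired_capacity(current, load_per_instance, scale_out_at, scale_in_at,
                     minimum=1, maximum=10):
    """Very simplified auto-scaling rule: add an instance when the average
    load exceeds the upper threshold, remove one when it drops below the
    lower threshold, and always stay within the group's size limits."""
    if load_per_instance > scale_out_at and current < maximum:
        return current + 1
    if load_per_instance < scale_in_at and current > minimum:
        return current - 1
    return current

# At 85% average load and an 80% threshold, the group grows from 2 to 3:
print(desired_capacity(current=2, load_per_instance=85,
                       scale_out_at=80, scale_in_at=20))
```

Real auto-scaling policies additionally use cooldown periods and CloudWatch alarms, which are omitted here for brevity.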

Chaos Monkey, Asgard and Edda

The Chaos Monkey is a service running on Amazon Web Services that searches for Auto Scaling Groups (ASGs) and randomly kills instances (virtual machines) in each group. The software has been built flexibly enough to also work on the platforms of other cloud providers. The service is fully configurable but by default runs on ordinary weekdays from 09:00 until 15:00. In most cases, Netflix has written its applications so that they continue to function when an instance suddenly has problems. In special cases this is deliberately not done, so that their own people have to fix the problem in order to learn from it. The Chaos Monkey runs only a few hours a day so that the developers cannot rely on it 100 percent.
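The core idea, picking one random victim per Auto Scaling Group, can be sketched like this (a simplified illustration of the principle, not Netflix's actual code):

```python
import random

def chaos_monkey(auto_scaling_groups, rng=random):
    """Pick one random instance per Auto Scaling Group to terminate,
    mirroring Chaos Monkey's one-kill-per-group behaviour."""
    victims = {}
    for group, instances in auto_scaling_groups.items():
        if instances:  # skip empty groups
            victims[group] = rng.choice(instances)
    return victims

groups = {"web": ["i-01", "i-02", "i-03"], "api": ["i-10", "i-11"]}
print(chaos_monkey(groups, rng=random.Random(42)))
```

In the real service, the selected instances would then be terminated through the cloud provider's API; constraining the kills to business hours ensures engineers are around to respond.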

Asgard is a web interface for controlling the deployment of applications and managing a cloud. Netflix itself uses Asgard to control its virtual infrastructure on Amazon Web Services.

Edda is a service that Netflix uses to continuously retrieve the state of its AWS resources via the AWS APIs. Edda can search through the active resources and determine their status. The background: virtual instances in the cloud are constantly in motion, meaning they can fail and new ones have to be started. The same applies to IP addresses, which can be reused by different applications. Keeping track of all these changes is where Edda helps.
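The principle behind Edda, polling resource snapshots and tracking how they change over time, can be sketched as follows (a toy model, not Edda's real data format):

```python
def diff_resources(previous, current):
    """Compare two polled snapshots of cloud resources (id -> state) and
    report what appeared, disappeared, or changed between polls."""
    added = {i: s for i, s in current.items() if i not in previous}
    removed = {i: s for i, s in previous.items() if i not in current}
    changed = {i: (previous[i], s) for i, s in current.items()
               if i in previous and previous[i] != s}
    return added, removed, changed

snap1 = {"i-01": "running", "i-02": "running"}
snap2 = {"i-02": "terminated", "i-03": "running"}
print(diff_resources(snap1, snap2))
# → ({'i-03': 'running'}, {'i-01': 'running'}, {'i-02': ('running', 'terminated')})
```

Edda itself stores the full history of such snapshots, so past states remain queryable even after the resources are gone.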

Netflix originally wrote these tools for the AWS cloud infrastructure. Through the open source release, and Eucalyptus adaptation, they can now also be used in a private cloud.

Cooperation: Eucalyptus and Amazon Web Services

In March 2012, Amazon Web Services and Eucalyptus announced a collaboration to better support the migration of data between the Amazon cloud and private clouds. The collaboration has several parts. First, developers from both companies will focus on creating solutions that help enterprise customers migrate existing data between their data centers and the AWS cloud. Furthermore, and more importantly, customers should be able to use the same management tools and the same knowledge for both platforms. In addition, Amazon Web Services will provide Eucalyptus with further information to improve compatibility with the AWS APIs.

With the Eucalyptus 3.3 release, this cooperation now bears its first fruits. Eucalyptus moves ever closer to the functionality of Amazon Web Services. My theory that Amazon Web Services may use Eucalyptus to build the CIA's private cloud is therefore not entirely unjustified.

Acquisition not unlikely

CEO Marten Mickos seems to be coming a little closer to his goal. During a conversation in June 2012, he told me that his first act as the new CEO of Eucalyptus was to pick up the phone, call Amazon, and express an interest in working together.

As I already wrote in the article "Netflix releases more Monkeys as open source – Eucalyptus Cloud will be pleased", Netflix has played right into the arms of Eucalyptus by releasing its Monkeys. This will not only strengthen the cooperation between Amazon Web Services and Eucalyptus, but also make Eucalyptus more attractive to Amazon as a takeover target.

Why I hold this opinion is described in detail in "Amazon acquires Eucalyptus cloud – It's merely a matter of time".


Amazon Web Services (AWS) may use Eucalyptus to build the CIA's private cloud

More and more rumors are appearing that Amazon is building a private cloud for the CIA's intelligence activities. The first question that arises is: why a private cloud for an intelligence agency? Second, why Amazon AWS, a public cloud provider, given that they are a service provider and not a software company? But if we think a little further, Eucalyptus immediately comes into play.

The Amazon deal with the CIA

According to Federal Computer Week, the CIA and Amazon Web Services (AWS) have signed a $600 million contract with a term of 10 years. The background: Amazon AWS is to build a private cloud for the CIA on the government's infrastructure. More information is not yet available.

However, the question is why the CIA has asked Amazon. Is there more behind it than the money? Amazon is one of the few companies with the knowledge and staff to successfully operate a cloud infrastructure. Despite some avoidable outages, the infrastructure is designed to be extremely rugged and smart. However, Amazon has a downside: they are a service provider, not a software vendor. That means they lack experience in shipping software, providing customers with updates, and so on. Moreover, they will likely, and hopefully, not use the same source code for the CIA's cloud.

This is where the cooperation that Amazon entered into with Eucalyptus some time ago might come into play. It could solve the problem of the missing software for on-premise clouds and the lack of experience with maintenance, software fixes, and so on for customers.

The cooperation between Amazon AWS and Eucalyptus

In short, Eucalyptus is a stand-alone fork of the Amazon cloud. With it, you can build your own cloud with the basic features and functions of the Amazon cloud.

Amazon Web Services and the private cloud infrastructure software provider Eucalyptus announced in March 2012 that they would work more closely together in the future to better support the migration of data between the Amazon cloud and private clouds. Initially, developers from both companies will focus on creating solutions that help enterprise customers migrate data between existing data centers and the AWS cloud. Furthermore, and even more importantly, customers should be able to use the same management tools and the same knowledge for both platforms. In addition, Amazon Web Services will provide Eucalyptus with further information to improve compatibility with the AWS APIs.

Amazon Eucalyptus private cloud for the CIA

Such a cooperation could now bear fruit for both Amazon and Eucalyptus. Because Eucalyptus is so closely modeled on the Amazon cloud, a finished and directly usable cloud software already exists that Amazon knows and can bring its own expertise to. Services that Eucalyptus does not yet support but that the CIA needs can be reproduced step by step. This in turn could help Eucalyptus in its own development, by feeding knowledge from that work, or even parts of services, back into the general product.


I have my problems with the seemingly official cooperation between Amazon and the CIA. A keyword here is trust, which Amazon does not exactly promote by cooperating with an intelligence agency.

Management

Amazon Web Services suffered a 20-hour outage over Christmas

After a rather bumpy 2012 with some heavy outages, the cloud infrastructure of Amazon Web Services again experienced problems over Christmas. During a 20-hour outage, several big customers were affected, including Netflix and Heroku. This time the main problem was Amazon's Elastic Load Balancer (ELB).

Region US-East-1 is a very big problem

This outage is the latest in a series of catastrophic failures in Amazon's US-East-1 region. It is the oldest and most popular region in Amazon's cloud computing infrastructure. This new outage, precisely in US-East-1, raises new questions about the stability of this region and what Amazon has actually learned and improved from past outages. Amazon customers had recently expressed criticism of the Amazon cloud, especially of Amazon EBS (Amazon Elastic Block Store) and the services that depend on it.

Amazon Elastic Load Balancer (ELB)

Besides the Amazon Elastic Beanstalk API, the Amazon Elastic Load Balancer (ELB) was mainly affected by the outage. Amazon ELB is one of the important services when you try to build a scalable and highly available infrastructure in the Amazon cloud. With ELB, users can move load and capacity between different availability zones (Amazon's independent data centers) to ensure availability when problems arise in one data center.
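The basic idea of shifting traffic away from an unhealthy availability zone can be sketched like this (a simplified model of the principle, not the ELB API):

```python
def route_targets(instances_by_zone, healthy_zones):
    """Return the instances a load balancer would send traffic to,
    skipping availability zones that currently fail their health checks."""
    targets = []
    for zone, instances in instances_by_zone.items():
        if zone in healthy_zones:
            targets.extend(instances)
    return targets

fleet = {"eu-1a": ["i-01", "i-02"], "eu-1b": ["i-03", "i-04"]}
# If eu-1a has problems, all traffic shifts to the instances in eu-1b:
print(route_targets(fleet, healthy_zones={"eu-1b"}))
# → ['i-03', 'i-04']
```

The outage showed the weak spot of this design: when the load balancer itself fails, the failover mechanism is gone along with it.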

Nevertheless: both Amazon Elastic Beanstalk and Amazon ELB rely on Amazon EBS, which is known as the "error-prone backbone of the Amazon Web Services".


Lessons learned: Amazon EBS is the error-prone backbone of AWS

In a blog article, a startup writes about their own experiences using Amazon Web Services. Besides the benefits the cloud infrastructure brings for them and other startups, one conclusion can be derived from the article: Amazon EBS is the single point of failure in Amazon's infrastructure.

Problems with Amazon EC2

The article criticizes Amazon EC2's constraints regarding performance and reliability, which customers absolutely have to pay attention to and should incorporate into their own planning. The biggest problem is seen in AWS's zone concept. Amazon Web Services consist of several "regions" distributed worldwide. Within these regions, Amazon divides the infrastructure into so-called "availability zones", which are independent data centers. The startup mentions three things they have learned from this concept so far.

Virtual hardware does not last as long as real hardware

The startup has used AWS for about 3 years. Within this period, the maximum lifetime of a virtual machine was about 200 days, and the probability that a machine enters the "retired" state after this period is very high. Furthermore, Amazon's "retirement process" is unpredictable. Sometimes you are notified ten days in advance that a virtual machine is going to be shut down; sometimes the retirement notification email arrives 2 hours after the machine has already failed. While it is relatively simple to start a new virtual machine, you must be aware that it is also necessary to adopt an automated deployment solution early on.

Use more than one availability zone and plan redundancy across zones

The startup's experience is that an entire availability zone is more likely to fail than a single virtual machine. For the planning of failure scenarios this means: having a master and a slave in the same zone is as useless as having no slave at all, because if the master fails, the reason may well be that the whole availability zone is unavailable.
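The placement rule can be sketched as a simple round-robin over zones (an illustration of the principle, not an AWS API):

```python
def place_replicas(zones, count):
    """Spread replicas round-robin across availability zones, so that a
    master and its slave never share a zone (as long as count <= zones)."""
    return [zones[i % len(zones)] for i in range(count)]

# Master and slave land in different zones:
placement = place_replicas(["us-east-1a", "us-east-1b", "us-east-1c"], 2)
print(placement)
# → ['us-east-1a', 'us-east-1b']
```

With this placement, the failure of one zone takes out at most one of the two replicas, which is exactly the property a same-zone master/slave pair lacks.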

Use multiple regions

The US-EAST region is the most famous as well as the oldest and cheapest of all AWS regions worldwide. However, this region is also very prone to error, as the outages in April 2011, March 2012, and June 2012 (twice) have shown. The startup therefore believes that the frequent region-wide instability comes down to the same reason: Amazon EBS.

Confidence in Amazon EBS is gone

Amazon Elastic Block Store (EBS) is recommended by AWS for storing all of your data, and this makes sense: if a virtual machine goes down, the EBS volume can be attached to a new virtual machine without losing data. EBS volumes should also be used to store snapshots and backups of databases or operating systems. The startup, however, sees some challenges in using EBS.

I/O rates of EBS volumes are bad

The startup's experience is that the I/O rates of EBS volumes are significantly worse than those of the local storage on the virtual host (ephemeral storage). Since EBS volumes are essentially network drives, their performance is poor. AWS meanwhile provides Provisioned IOPS to give EBS volumes higher performance, but because of the price the startup finds them too unattractive.

EBS fails at regional level and not per volume

The startup found two types of behavior for EBS: either all EBS volumes work, or none do. Two of the three big AWS outages were due to problems with Amazon EBS. If your disaster recovery builds on moving EBS volumes around, but the downtime is caused by an EBS failure, you have a problem. The startup has struggled with exactly this many times.

The failure behavior of EBS on Ubuntu is very serious

Since EBS volumes are disguised as block devices, failures lead to problems in the Linux operating system, with which the startup has had very bad experiences: a failing EBS volume causes an entire virtual machine to lock up, leaving it inaccessible and affecting even operations that have no direct need for disk activity.

Many services of the Amazon Cloud rely on Amazon EBS

Because many other AWS services are built on EBS, they fail when EBS fails. These include, for example, Elastic Load Balancer (ELB), Relational Database Service (RDS), and Elastic Beanstalk. As the startup noticed, EBS is nearly always at the core of major outages at Amazon. If EBS fails and traffic should subsequently be shifted to another region, this is not possible, because the load balancer also runs on EBS. In addition, no new virtual machine can be started manually, because the AWS Management Console also runs on EBS.


Reading these experiences, I get the impression that Amazon does not live up to its so often propagated "building blocks" principle as much as it should. Even if the primary goal is to offer various cloud services that can be used independently of each other, why does the majority of these services depend on a single service (EBS), creating a single point of failure?


Amazon improves EC2 with automatic failover and more detailed billing reports

Amazon is fixing one of its biggest "weak points" and thereby accommodating its customers: failover for individual EC2 instances. Typically, customers must ensure themselves that a new EC2 instance boots up when a running one fails. Amazon has now optimized its infrastructure and introduces automatic failover for EC2 instances. Furthermore, more detailed information is now provided on the bills.

Automatic failover for Amazon EC2

As an Amazon customer, it is not easy to build your own infrastructure in the AWS cloud. To achieve the high availability promised by public cloud computing, customers have to take care of it themselves, which many users have failed to implement.

Amazon now accommodates its customers and extends its Auto Scaling function with Amazon EC2 status checks. That means that when an instance in an Auto Scaling group becomes unreachable and fails a status check, it is replaced automatically.

As Amazon writes, it is not necessary to take any action to begin using EC2 status checks in Auto Scaling groups; Auto Scaling already incorporates these checks as part of the periodic health checks it performs.
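Conceptually, one pass of such a periodic health check works like this (a simplified sketch of the idea, not Amazon's implementation):

```python
def reconcile(group, status_checks):
    """One pass of the periodic health check: replace every instance in the
    Auto Scaling group whose EC2 status check has failed."""
    replaced = []
    for instance in list(group):
        if not status_checks.get(instance, False):
            group.remove(instance)
            new_instance = "new-" + instance  # placeholder for a fresh launch
            group.append(new_instance)
            replaced.append((instance, new_instance))
    return replaced

group = ["i-01", "i-02"]
# i-02 fails its status check and is swapped for a fresh instance:
print(reconcile(group, {"i-01": True, "i-02": False}))
# → [('i-02', 'new-i-02')]
```

The point of the announcement is precisely that this reconciliation loop now runs on Amazon's side, so customers no longer have to build it themselves.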

More details within the invoices

Furthermore, the new detailed billing reports give access to reports that include hourly line items for the Amazon infrastructure used.