
Amazon Struggles with Its Cloud in North Virginia

Since 1:41 AM PDT, Amazon Web Services has been struggling with problems in its Amazon Elastic Compute Cloud in North Virginia. According to its status page, the trouble began with latency and error rates within EBS volumes and with connectivity problems reaching EC2 instances in the US-EAST-1 region.

Many websites and providers that rely on EC2 to deliver their services, such as Hootsuite or Mobypicture, are currently unreachable.

Here are the facts currently known about the Amazon Elastic Compute Cloud (N. Virginia), based on http://status.aws.amazon.com.
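For anyone who wants to track the incident programmatically rather than refreshing the dashboard, a minimal sketch like the following could poll a per-service RSS feed of the status page. The feed URL is an assumption based on the dashboard's service/region naming scheme and should be verified against http://status.aws.amazon.com.

```python
# Minimal sketch: poll an AWS Service Health Dashboard RSS feed and print new items.
# The feed URL below is an assumption based on the dashboard's per-service naming
# scheme; verify it against http://status.aws.amazon.com.
import time
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://status.aws.amazon.com/rss/ec2-us-east-1.rss"  # assumed URL

def fetch_items(url):
    """Yield (title, pubDate, description) tuples from an RSS 2.0 feed."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    for item in root.iter("item"):
        yield (
            item.findtext("title", default=""),
            item.findtext("pubDate", default=""),
            item.findtext("description", default=""),
        )

seen = set()
while True:
    for title, pub_date, description in fetch_items(FEED_URL):
        key = (title, pub_date)
        if key not in seen:
            seen.add(key)
            print(f"[{pub_date}] {title}\n  {description}\n")
    time.sleep(300)  # re-check every five minutes
```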

1:41 AM PDT We are currently investigating latency and error rates with EBS volumes and connectivity issues reaching EC2 instances in the US-EAST-1 region.

2:18 AM PDT We can confirm connectivity errors impacting EC2 instances and increased latencies impacting EBS volumes in multiple availability zones in the US-EAST-1 region. Increased error rates are affecting EBS CreateVolume API calls. We continue to work towards resolution.

2:49 AM PDT We are continuing to see connectivity errors impacting EC2 instances, increased latencies impacting EBS volumes in multiple availability zones in the US-EAST-1 region, and increased error rates affecting EBS CreateVolume API calls. We are also experiencing delayed launches for EBS backed EC2 instances in affected availability zones in the US-EAST-1 region. We continue to work towards resolution.

3:20 AM PDT Delayed EC2 instance launches and EBS API error rates are recovering. We’re continuing to work towards full resolution.

4:09 AM PDT EBS volume latency and API errors have recovered in one of the two impacted Availability Zones in US-EAST-1. We are continuing to work to resolve the issues in the second impacted Availability Zone. The errors, which started at 12:55AM PDT, began recovering at 2:55am PDT.

5:02 AM PDT Latency has recovered for a portion of the impacted EBS volumes. We are continuing to work to resolve the remaining issues with EBS volume latency and error rates in a single Availability Zone.
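The updates above point to increased error rates on EBS CreateVolume API calls. Automation that provisions volumes during such an incident would typically need to retry with backoff; a minimal sketch using the boto3 SDK might look like the following (the volume size and Availability Zone are placeholder values, not taken from the incident report).

```python
# Minimal sketch: retry an EBS CreateVolume call with exponential backoff,
# as one might do while the API is returning elevated error rates.
# Size and Availability Zone below are placeholder values.
import time
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

def create_volume_with_retry(az, size_gib, max_attempts=5):
    delay = 2  # seconds; doubled after each failed attempt
    for attempt in range(1, max_attempts + 1):
        try:
            volume = ec2.create_volume(AvailabilityZone=az, Size=size_gib)
            return volume["VolumeId"]
        except ClientError as err:
            print(f"Attempt {attempt} failed: {err}")
            if attempt == max_attempts:
                raise
            time.sleep(delay)
            delay *= 2

volume_id = create_volume_with_retry("us-east-1a", 20)
print("Created volume:", volume_id)
```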

Amazon CloudWatch (N. Virginia)

2:26 AM PDT We are working on restoring connectivity to a small number of EC2, EBS, and RDS resources in multiple availability zones in the US-EAST-1 region. While we restore connectivity, CloudWatch metrics for those resources will be delayed.

3:04 AM PDT We are continuing to see connectivity issues impacting EC2, EBS, and RDS resources in multiple availability zones in the US-EAST-1 region. While we restore connectivity, CloudWatch metrics for those resources will be delayed. We continue to work towards resolution.

4:47 AM PDT CloudWatch metrics are delayed for some EBS and RDS resources in the US-EAST-1 region. The delays began at 12:55AM PDT. We have isolated the impact to a single availability zone, and are working towards a full resolution.
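Since the dashboard reports that CloudWatch metrics for affected EBS and RDS resources are delayed, one way to check whether data points are still arriving for a given volume is to query the metric directly. A minimal sketch with boto3 (the volume ID is a placeholder):

```python
# Minimal sketch: check whether CloudWatch has recent datapoints for an EBS volume.
# An empty result for the last hour would be consistent with the reported delays.
# The volume ID below is a placeholder.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeReadOps",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Sum"],
)

datapoints = stats["Datapoints"]
if datapoints:
    latest = max(dp["Timestamp"] for dp in datapoints)
    print(f"{len(datapoints)} datapoints, latest at {latest}")
else:
    print("No datapoints in the last hour - metrics may be delayed.")
```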

Amazon Relational Database Service (N. Virginia)

1:48 AM PDT We are currently investigating connectivity and latency issues with RDS database instances in the US-EAST-1 region.

2:16 AM PDT We can confirm connectivity issues impacting RDS database instances across multiple availability zones in the US-EAST-1 region.

3:05 AM PDT We are continuing to see connectivity issues impacting some RDS database instances in multiple availability zones in the US-EAST-1 region. Some Multi AZ failovers are taking longer than expected. We continue to work towards resolution.

4:03 AM PDT We are making progress on failovers for Multi AZ instances and restoring access to them. This event is also impacting RDS instance creation times in a single Availability Zone. We continue to work towards resolution.

5:06 AM PDT IO latency issues have recovered in one of the two impacted Availability Zones in US-EAST-1. We continue to make progress on restoring access and resolving IO latency issues for remaining affected RDS database instances.
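The RDS updates mention Multi-AZ failovers taking longer than expected. For reference, the state of an instance and its Multi-AZ flag can be inspected via the API, and a failover to the standby can be triggered deliberately by rebooting with the force-failover option. A minimal sketch with boto3 (the instance identifier is a placeholder, and forcing a failover during an ongoing incident is generally not advisable):

```python
# Minimal sketch: inspect the status of an RDS instance and, if it is a Multi-AZ
# deployment, optionally trigger a failover by rebooting with ForceFailover.
# The instance identifier below is a placeholder.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

DB_INSTANCE_ID = "my-database"  # placeholder identifier

response = rds.describe_db_instances(DBInstanceIdentifier=DB_INSTANCE_ID)
instance = response["DBInstances"][0]
print("Status:", instance["DBInstanceStatus"])
print("Multi-AZ:", instance["MultiAZ"])
print("Availability Zone:", instance["AvailabilityZone"])

# A deliberate failover to the standby (normally only for testing, not during
# an ongoing incident) would look like this:
# rds.reboot_db_instance(DBInstanceIdentifier=DB_INSTANCE_ID, ForceFailover=True)
```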

AWS CloudFormation (N. Virginia)

3:29 AM PDT We are experiencing delays in creating and deleting stacks that include EBS, EC2 and RDS resources in multiple availability zones in the US-EAST-1 region. Existing stacks are not impacted.

AWS Elastic Beanstalk

3:16 AM PDT We can confirm increased error rates impacting Elastic Beanstalk APIs and console, and we continue to work towards resolution.

4:18 AM PDT We continue to see increased error rates impacting Elastic Beanstalk APIs and console, and we are working towards resolution.

By Rene Buest

Rene Buest is a Gartner Analyst covering Infrastructure Services & Digital Operations. Prior to that he was Director of Technology Research at Arago, Senior Analyst and Cloud Practice Lead at Crisp Research, Principal Analyst at New Age Disruption and a member of the worldwide Gigaom Research Analyst Network. Rene is considered a top cloud computing analyst in Germany and one of the top analysts worldwide in this area. In addition, he is one of the world's top cloud computing influencers and belongs to the top 100 cloud computing experts on Twitter and Google+. Since the mid-90s he has focused on the strategic use of information technology in businesses, the impact of IT on our society and disruptive technologies.

Rene Buest is the author of numerous professional technology articles. He regularly writes for well-known IT publications like Computerwoche, CIO Magazin, LANline as well as Silicon.de and is cited in German and international media – including the New York Times, Forbes Magazin, Handelsblatt, Frankfurter Allgemeine Zeitung, Wirtschaftswoche, Computerwoche, CIO, Manager Magazin and Harvard Business Manager. Furthermore, Rene Buest is a speaker at and participant in expert panels. He is the founder of CloudUser.de and writes about cloud computing, IT infrastructure, technologies, management and strategies. He holds a diploma in computer engineering from the Hochschule Bremen (Dipl.-Informatiker (FH)) as well as an M.Sc. in IT-Management and Information Systems from the FHDW Paderborn.
