Categories
Management @en

Design for failure in the AWS Cloud!

It's definitely very important to understand how your Cloud provider's infrastructure works. So take your time to study the whitepapers and manuals on how to protect your applications against infrastructure failures.

To this end, Amazon has released a website where you can find whitepapers on designing fault-tolerant applications and learn about cloud architectures and web application hosting.

Interestingly, these whitepapers date from 2010 and January 2011, respectively.

AWS Cloud Architecture Best Practices Whitepaper

The Cloud reinforces some old concepts of building highly scalable Internet architectures and introduces some new concepts that entirely change the way applications are built and deployed. To leverage the full benefit of the Cloud, including its elasticity and scalability, it is important to understand AWS services, features, and best practices. This whitepaper provides a technical overview of all AWS services and highlights various architectural best practices to help you design efficient, scalable architectures.

Building Fault-Tolerant Applications on AWS Whitepaper

AWS provides you with the necessary tools, features and geographic regions that enable you to build reliable, affordable fault-tolerant systems that operate with a minimal amount of human interaction. This whitepaper discusses all the fault-tolerant features that you can use to build highly reliable and highly available applications in the AWS Cloud.

Web Hosting Best Practices Whitepaper

Hosting highly-available and scalable web applications can be a complex and expensive proposition. Traditional scalable web architectures have not only needed to implement complex solutions to ensure high levels of reliability, but have also required an accurate forecast of traffic to provide a high level of customer service. AWS provides the reliable, scalable, secure, and highly performing infrastructure required for the most demanding web applications – while enabling an elastic, scale-out and scale-down infrastructure model to match IT costs with real-time customer traffic patterns. This whitepaper reviews a web application hosting solution in detail, including how each of the services can be used to create a highly available, scalable web application.

Leveraging Different Storage Options in the AWS Cloud Whitepaper

The AWS Cloud platform includes a variety of Cloud-based data storage options. While these alternatives allow architects and developers to make design choices that best meet their application’s needs, the number of choices can sometimes cause confusion. In this whitepaper, we provide an overview of each storage option, describe ideal usage scenarios, and examine other important storage-specific characteristics (such as elasticity and cost) so that you can decide which storage option to use when.

AWS Security Best Practices Whitepaper

Security should be implemented in every layer of your Cloud application architecture. In this whitepaper, you will learn about specific tools, features and guidelines on how to secure your Cloud application in the AWS environment. We will suggest strategies for how security can be built into the application from the ground up.

Source: http://aws.amazon.com/architecture

Categories
Comment

Searching for the Killer PaaS!

There are a variety of Platform as a Service (PaaS) offerings on the market by now. Each supports more or less every modern programming language. But one thing they all have in common: each offering uses at most one Infrastructure as a Service (IaaS) provider, whether proprietary or from a third party. As expected, the infrastructure of AWS is still the preferred choice.

Let's have a look at a small selection of PaaS offerings and their underlying IaaS offerings.

So what is missing is an independently hosted PaaS offering. A real PaaS innovation! A kind of PaaS broker. SteamCannon goes in this direction. However, it is (1) not an independently hosted service (there is just one AWS AMI available) and (2) currently supports only AWS EC2.

Of course, the service must itself run on a platform. For this reason, the desire for an independently hosted service may seem contradictory at first. But even here, the power of the cloud must be used. The Killer PaaS must not merely run as an AMI on one infrastructure, but must be distributed across multiple providers. This alone already ensures its reliability.

The Killer

Why I describe this service as a Killer PaaS is easy to explain. It must support all IaaS providers on the market, not just one, plus all popular programming languages. The functionality is quite simple in theory. As a user of the Killer PaaS, I have previously registered for an account at different IaaS providers such as AWS, Rackspace, GoGrid, etc. Alternatively, the Killer PaaS offers to create one for me. The credentials for each IaaS provider are deposited and made accessible to the Killer PaaS. Here I can already set my primary, secondary, etc. provider. As a user, I have nothing more to do.

If I would like to run my applications in the cloud, I upload the code to the Killer PaaS. It then takes care of the rest and deploys the application on my deposited infrastructures. It can either do this at its own discretion, since it knows the status of each infrastructure with respect to performance etc., or it takes the settings I previously defined and distributes the application to the primary provider; if that one is too busy, to the secondary, and so on.

The Killer PaaS is so smart that it distributes the entire application across multiple vendors and thus ensures the best possible performance and availability. If I decided to let the application run on a primary provider and it now has performance or other problems, the Killer PaaS makes sure that more (or all) resources are automatically drawn from another provider. I know of cases in which AWS users couldn't launch new instances in an AWS region because not enough resources were available. My application, however, should not notice anything like this. If my application is suddenly exposed to an enormous load, e.g. due to a rush of visitors, and the IaaS provider also has a resource bottleneck, the Killer PaaS makes sure that available resources from another IaaS provider are brought in, or other measures are taken.
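The priority-based failover described above can be sketched in a few lines. Everything here is hypothetical: the provider names, the capacity numbers and the `choose_provider` function are invented to illustrate the idea, not part of any real PaaS API.

```python
# A minimal sketch of the failover logic described above, assuming a
# hypothetical Killer PaaS that knows each provider's current free capacity.

def choose_provider(providers, demand):
    """Walk the user-defined priority list (primary, secondary, ...) and
    return the first provider with enough free capacity for the demand."""
    for provider in providers:
        if provider["free_capacity"] >= demand:
            return provider["name"]
    raise RuntimeError("No provider has enough free capacity")

# Priority order as deposited by the user: AWS first, then Rackspace, GoGrid.
providers = [
    {"name": "aws",       "free_capacity": 2},   # resource bottleneck
    {"name": "rackspace", "free_capacity": 50},
    {"name": "gogrid",    "free_capacity": 30},
]

# AWS cannot serve 10 units, so the workload falls over to Rackspace.
print(choose_provider(providers, demand=10))  # rackspace
```

A real implementation would of course also have to migrate state and DNS, but the priority walk is the core of the "primary, then secondary" behaviour the text describes.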

With such a Killer PaaS, many questions regarding SLAs can be answered as well. The reliability and availability of the application is ensured by the Killer PaaS itself. Since it also runs across several IaaS providers in the cloud, its own availability is ensured too.

So what is needed is an independent Cloud Management Platform such as enstratus, just for PaaS.

Categories
Management @en

Cloud Computing does not solve all your problems automatically…

…, on the contrary, it initially creates problems! Especially if you want to act as a provider of Cloud Computing services yourself.

It is a misconception to think that building a private cloud implies that you no longer have to take care of the high availability of your infrastructure (physical machines, virtual machines, master-slave replication, hot standby, etc.).

After all: the Cloud will take care of it on its own!

Be careful! The Cloud initially creates more problems for a (private) cloud operator than you might think.

A cloud does not work by itself. It must be developed and equipped with intelligence. This applies to building a private cloud as well as to using a public cloud offering (in the case of IaaS). Scripts must be written, and new software may need to be developed that is distributed across the cloud. It is also important to read the whitepapers of the respective providers. Build up know-how(!) and understand how the cloud works in order to use it for your own needs. Another possibility is, of course, to (additionally) obtain advice from professionals.

It is no different when using a public cloud offering such as the Amazon Web Services. If instance A has a problem, it can suddenly become unreachable, just like any normal physical server. Now you might think: "I'll just take an instance B as a backup for instance A!" "And then?" One might assume that instance B automatically takes over the tasks of instance A. It's not that simple! Scripts etc. must ensure that instance B assumes the duties of instance A if A suddenly is no longer available. Of course, instance B must also be prepared!
For example, the important content of instance A, including all configurations etc., can be stored on an EBS (Elastic Block Store) volume rather than in the local instance store. A script must then ensure that instance B is started automatically with the configurations and all the data from the EBS volume if instance A is no longer available.
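The watchdog logic just described can be sketched as follows. The AWS calls are deliberately stubbed out (`launch_instance` and `attach_ebs_volume` are hypothetical placeholders, as are the AMI and volume names); in practice they would be real API calls made through a client library.

```python
# A sketch of the failover watchdog described above: if instance A is gone,
# start instance B and attach the EBS volume that holds A's state.
# launch_instance / attach_ebs_volume are injected stubs, not real AWS calls.

def failover(instance_a_alive, launch_instance, attach_ebs_volume):
    """Return the instance that should be serving traffic."""
    if instance_a_alive:
        return "instance-a"                       # nothing to do
    instance_b = launch_instance("backup-ami")    # boot the prepared backup
    attach_ebs_volume(instance_b, "vol-config")   # hand over A's config/data
    return instance_b

# Simulated run: A has failed, so B takes over with A's EBS volume.
result = failover(
    instance_a_alive=False,
    launch_instance=lambda ami: "instance-b",
    attach_ebs_volume=lambda inst, vol: None,
)
print(result)  # instance-b
```

The point of the sketch is the division of labour: the cloud provides the primitives (instances, EBS volumes), but the failover intelligence is something you must script yourself.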

The Cloud just gives us, in the area of Infrastructure as a Service, the capability to obtain a (seemingly infinite) number of resources from a quasi-infinite pool at the moment we need them. We thus receive a highly scalable virtual private data center from the provider. Conversely, this means for the operator of a cloud (private or public) that he must hold enough physical resources so that the requested virtual resources can be provided at any time and sufficient resources are always available to the users.

Categories
Management @en

How can you identify true Cloud Computing?

People often ask me how they can identify a true Cloud Computing offering. Often they say: "Hey, our data is processed by a service provider in his data center. So we are using the Cloud, right?"

Hm, careful! Many marketing departments of webhosting providers have abused the term Cloud Computing in recent months, diluting it in the process. What was previously known as a "Managed Server" is now a "Cloud Server". Essentially, they just changed the label.

To identify a true Cloud Computing offering, you should look for the following characteristics:

  • On Demand:
    I obtain the resources at the moment when I actually need them. Afterwards I simply "give them back".
  • Pay as you Go:
    I only pay for the resources I am actually using, when I am using them. Billing is per user, per gigabyte, or per minute/hour.
  • No basic fee:
    Using a Cloud Computing offering, I do not have to pay a monthly/annual basic fee!
  • High Availability:
    When I need the resources, I can use them exactly at that time.
  • High Scalability:
    The resources adapt to my needs. They either grow with my needs if I need more performance, or they shrink if my requirements decrease.
  • High Reliability:
    Resources I am using during a specific period of time are actually available when I need them.
  • Blackbox:
    I don't have to care about the internals of the Cloud offering. I just use the service over an open, well-documented API.
  • Automation:
    After the initial setup according to my needs, I do not have to intervene manually while using the offering. That means I do not have to change the performance of the server or the size of the storage space by hand. The Cloud provider supplies capabilities (tools etc.) for automation.
  • Access via the Internet:
    This is debatable! However, the cost advantage obtained by Cloud Computing becomes obsolete if an expensive exclusive leased line is required, for example, to use the resources of a provider.
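The "Pay as you Go" and "No basic fee" characteristics can be illustrated with a toy billing calculation. The prices and the function below are purely invented for illustration; real providers publish their own price lists.

```python
# A toy illustration of usage-based billing: no basic fee, billing strictly
# per unit actually consumed. Prices are made up for the example.

def monthly_bill(instance_hours, gb_stored, price_per_hour=0.10, price_per_gb=0.15):
    """Pure pay-as-you-go: zero usage means a zero bill."""
    return round(instance_hours * price_per_hour + gb_stored * price_per_gb, 2)

print(monthly_bill(0, 0))      # 0.0  -> no basic fee at all
print(monthly_bill(100, 20))   # 13.0 -> 100 h * 0.10 + 20 GB * 0.15
```

The defining property is the first line of output: a month with zero usage costs exactly nothing, which is precisely what distinguishes Cloud Computing from a "Managed Server" with a monthly fee.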
Categories
Comment

Are the Amazon Web Services the standard of the Cloud?

After the article The Amazon Web Services are the measure of all things in the Cloud Computing market! we should also answer the question whether the Amazon Web Services (AWS) are the standard of the Cloud.

The Amazon Web Services are currently and undisputedly the market leader among Infrastructure as a Service (IaaS) providers and so far cover all segments and capabilities of Cloud Computing.

Due to its long-standing experience, Amazon has a significant advantage over all other providers in this segment. The expertise arose from setting up a private cloud to meet the demands of Amazon's own infrastructure (scalability of the Amazon webshop etc.), from which the public cloud services (the Amazon Web Services) finally originated.

First of all, we can choose from a variety of "standards", because every provider attempts to establish his proprietary solution as the standard in the market. Therefore we cannot simply assume that the Amazon Web Services are the standard of the Cloud. Moreover, a standard needs a certain period of time to become one.

So, what is the evidence that the Amazon Web Services are already the standard, or the coming standard, of Cloud Computing?

A look at the current offerings of third-party suppliers in the Cloud Computing market shows that AWS has a high significance. Above all, Amazon's storage service Amazon S3 is very popular. With JungleDisk, CloudBerry S3 Explorer, CloudBerry Backup and Elephant Drive, several clients are available to transfer data from your local PC or server into the Amazon Cloud. Moreover, with S3 Curl, S3cmd and S3 Sync, further open source tools are available that implement the Amazon S3 API and can be used to store data in the Amazon Cloud.

A further obvious piece of evidence that the Amazon Web Services are becoming the Cloud Computing standard is the offering of the German Cloud Computing provider ScaleUp Technologies. They offer a Cloud Storage which fully implements and adopts the Amazon S3 API.

In addition, with Eucalyptus and the Ubuntu Enterprise Cloud, "clones" of the Amazon Elastic Compute Cloud (EC2) are available, with which it is possible to build your own private cloud along the lines of EC2. With the Euca2ools you already find an adaptation of the EC2 API.

If you take a look at the list of AWS Solution Providers, you also see the current importance of AWS.

Not every AWS product currently has the potential to be referred to as a standard. In my mind, only the Amazon Simple Storage Service (Amazon S3) and the Amazon Elastic Compute Cloud (Amazon EC2) can be regarded as standards.

Based on the current offerings and adoptions, you can see that S3 is widely used and recognized, and it must be assumed that further third-party suppliers will jump on the bandwagon. Moreover, most providers will refrain from reinventing the wheel. AWS was the first provider in the IaaS Cloud Computing market and consequently had enough time to popularize its proprietary standards. Furthermore, the remaining big providers have failed to follow quickly and present their own offerings. If they wait any longer, time will decide in AWS's favor.

Amazon S3 is the current de facto standard in the field of cloud storage. Amazon EC2 will probably follow shortly and establish itself. Whether and when the remaining AWS offerings also become de facto standards remains to be seen.

Categories
Management @de

Cloud Computing and the financial benefits!

Besides the technical advantages of Cloud Computing compared to a traditional data center, the financial benefits stand in the foreground as well.

Upfront costs and expenditure type
In a traditional data center, an enterprise has to invest upfront in hardware and software and must assume both the capital expenditure (capex) and the operating expense (opex). A Cloud Computing provider, however, invests in his infrastructure in a targeted way and offers it as a service. The enterprise therefore shifts the capital expenditure to the Cloud provider and only pays the operating expense when needed.

Cash Flow and operational costs
As described above, an enterprise has to purchase its servers and software in advance. Using services offered by a Cloud provider, costs only occur when the service is actually used. Regarding operational costs, an enterprise constantly tries to reduce costs for development, deployment, maintenance etc. This outlay can be dispensed with through service delivery from a Cloud provider. In this case the Cloud provider is responsible for the life cycle of the hardware and software components.

Financial risk
Investments in an own data center are always made in advance, whereby an enterprise has to assume a huge financial risk without the certainty of achieving an ROI. Obtaining services from a Cloud provider reduces the financial risk to a time frame of one month, and the ROI can be measured promptly as well.
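The capex/opex argument can be made concrete with a simplified comparison. All numbers below are invented for the example; the point is the shape of the two cost curves, not the figures.

```python
# A simplified illustration of the capex/opex argument: an in-house data
# center pays a large upfront investment plus running costs, while the cloud
# user pays only usage-based fees. All amounts are invented.

def inhouse_cost(months, upfront=120_000, opex_per_month=2_000):
    """Traditional data center: capex paid in advance, opex every month."""
    return upfront + months * opex_per_month

def cloud_cost(months, usage_fee_per_month=5_000):
    """Cloud: no upfront investment, pay only while the service is used."""
    return months * usage_fee_per_month

# After one month the financial exposure differs drastically:
print(inhouse_cost(1))  # 122000 -> risk committed before any ROI is visible
print(cloud_cost(1))    # 5000   -> risk limited to one month's usage
```

Note that the per-month cloud fee may well be higher than the in-house opex; the financial-risk argument in the text is about when the money is committed, not about which option is cheaper in the long run.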

In spite of the recurring discussion – "Cloud value is only about cost" – this myth has been invalidated by the statement "Cloud value is about business agility, opportunities and investment". Cloud Computing stands for more benefits and possibilities than just converting fixed costs into variable ones. It allows a company to become more flexible and agile.

Categories
Management @en Uncategorized

SOA: Important facts to create a stable architecture

A SOA is often pronounced dead. But in most cases, the failures are already made in the planning phase, because of nonexistent process optimization. Often it is said: "Hey, let's make a SOA." A further problem: the responsibility for a SOA is often fully assigned to the IT department, due to the assumption that they will know what is needed. This is fundamentally wrong, because organisation and process optimization belong to general management, or to a delegated department/team. The IT department should actively advise the business and show how information technology can help to reach the goal at this point.

Afterwards, the IT functionality must be established as a service to support business-critical processes and design a stable SOA. Therefore, the following aspects must be considered during the implementation.

  • loose coupling: a low degree of dependency among the hardware and software components
  • distributable: the system should not be confined to one location
  • a clear definition: an explicit requirements definition is indispensable
  • divisibility: the entire system can be divided into subcomponents
  • interchangeability: the individual subcomponents can be replaced

And last but not least: standards, standards, standards!
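The "loose coupling" and "interchangeability" aspects from the list above can be sketched in a few lines: services depend only on an interface, so a subcomponent can be replaced without touching its consumers. The class and method names are purely illustrative.

```python
# Loose coupling via an explicit interface: the consumer (checkout) never
# depends on a concrete component, only on the PaymentService contract,
# so individual subcomponents remain interchangeable.

class PaymentService:
    """The interface the rest of the system is coupled to."""
    def pay(self, amount):
        raise NotImplementedError

class LegacyPayment(PaymentService):
    def pay(self, amount):
        return f"legacy:{amount}"

class NewPayment(PaymentService):
    def pay(self, amount):
        return f"new:{amount}"

def checkout(service: PaymentService, amount):
    # The consumer only knows the interface, not the concrete component.
    return service.pay(amount)

print(checkout(LegacyPayment(), 10))  # legacy:10
print(checkout(NewPayment(), 10))     # new:10 -> component swapped, caller unchanged
```

This is also where the closing point about standards comes in: the interface only decouples the components if everyone agrees on it.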

Categories
Management @en Uncategorized

Is there any possibility to leave or switch the Cloud? The need for a transparent Cloud!

Think about the following scenario. You have successfully migrated parts of your IT infrastructure into the Cloud of a provider. You think: "Well, everything is fine. We are saving costs, our infrastructure is now scalable and elastic, and our software is always state of the art." But… what if certain things happen? Maybe you want to leave the Cloud and go back into your own datacenter, or you would like to change the provider?

Or how could you map your business processes into the Cloud distributed over several providers? Maybe one provider works on process A and another provider works on process B, while a third provider works on process C using processes A and B. Or you are using several independent services from different providers and integrate them into a connected whole. A simpler example: the data is stored at provider A, and provider B processes the data.

Is this possible? How does it work?

One critical point of Cloud Computing is the lack of standards. Each provider uses different technologies and cooks his own soup inside his infrastructure. For this reason, each relationship between a provider and a client is different.

The need for a transparent Cloud is indispensable!

One answer could be libcloud (http://libcloud.org). Libcloud is a standard library for Cloud providers like Amazon, Rackspace, Slicehost and many more, with a uniform API. Developed by Cloudkick (https://www.cloudkick.com), libcloud has become an independent project. It is written in Python, is free of charge (Apache License 2.0), and can be used to interact with different Cloud providers. Libcloud was developed to lower the barriers between Cloud providers and "… to make it easy for developers to build products that work between any of the services that it supports."[1]

[1] http://libcloud.org
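The idea behind such a uniform API can be sketched without libcloud itself. To be clear, this is not libcloud's real API; the classes and the `get_driver` lookup below are dummies that merely illustrate the driver pattern: one interface, many provider backends.

```python
# A sketch of the abstraction behind a uniform cloud library: application
# code talks to one NodeDriver interface, and a lookup function returns the
# provider-specific implementation. All names here are illustrative.

class NodeDriver:
    """Uniform interface every provider driver implements."""
    def create_node(self, name):
        raise NotImplementedError

class AmazonDriver(NodeDriver):
    def create_node(self, name):
        return f"ec2-node:{name}"

class RackspaceDriver(NodeDriver):
    def create_node(self, name):
        return f"rackspace-node:{name}"

DRIVERS = {"amazon": AmazonDriver, "rackspace": RackspaceDriver}

def get_driver(provider):
    """Look up a provider driver by name."""
    return DRIVERS[provider]()

# Application code stays the same regardless of the provider:
for provider in ("amazon", "rackspace"):
    print(get_driver(provider).create_node("web-1"))
```

This is exactly the transparency the post asks for: switching or leaving a provider becomes a one-line change in the lookup, not a rewrite of the application.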

Categories
Management @en Uncategorized

IT-Strategy: 10 facts how agility supports your strategy

Agility can be a competitive advantage for every business. Therefore, here are some facts about how agility can support your IT strategy.

1. Infrastructure
– Your infrastructure should be flexible and scalable.
– Standardizing your technology is the goal.

2. Data management
– Centralize your data management (single source).
– Use standardized interfaces to access the data.

3. Information logistics
– Use company-wide uniformly defined key figures (KPIs) and computational procedures.
– Separate your data from your applications

4. Management information systems
– Use information systems on each management level
– Be flexible for business requirements

5. Company-wide integration
– Use technology kits (cf. LEGO) to connect your partners and distributors.
– Use standardized interchange formats.

6. E-Business ability
– Have scalable web servers and a CMS.
– Establish trust through a high security standard.

7. Communication systems
– Use integrated e-mail and groupware solutions
– Integrate your mobile systems and devices
– Use VoIP

8. IT-Governance
– Have a fast decision-making process oriented toward your business strategy.
– Reserve resources for short-term projects.

9. Enterprise Resource Planning
– Use a coherent business logic.
– Optimize your processes.
– Avoid redundancies.

10. Loose coupling
– Use autonomous functional components
– Standardize your interfaces
– Separate functionality from process logic

However!

IT-Agility is not free. Flexibility stands in contrast to cost optimization and performance optimization, and it is not necessary for every business or every business area. If cost and performance optimization is the competitive advantage, agility doesn't matter much. This means that IT-Agility should be adopted in business areas where flexibility and reactivity are the key factors. Have a look at your business strategy to decide where to adopt IT-Agility.

Categories
Management @en Uncategorized

Cloud Computing: a question of trust, availability and security

In spite of many advantages like scalable processing power or storage space, the acceptance of Cloud Computing will stand or fall with the faith in the Cloud! I am pointing out three basic facts Cloud providers are faced with when offering their Cloud Computing services.

Availability

Using Amazon services or Google Apps does not give you the same guarantee of functioning as running your own applications. Amazon guarantees 99.9% availability for S3 and 99.95% for the Elastic Compute Cloud (EC2). Google also promises an availability of 99.9% for their "Google Apps Premier Edition", including Mail, Calendar, Docs, Sites and Talk. In February 2008 Amazon S3 was down after a failure, and Google was also affected by a downtime in May 2009. Just looking back at the undersea Internet cable which broke last year and cut off the Middle East from the information highway: Google, Amazon etc. are not able to keep such SLAs, because they have no influence over problems of this kind.

Companies must first carefully separate their critical processes from the non-critical ones. After this classification, they should host the critical ones within their own datacenter and perhaps outsource the non-critical ones to a cloud provider. This might be a lot of work, but it could pay off.

Cloud providers must ensure that their services are available at any time. 99.9% availability is a standard by now and advertised by every service provider – but for a Cloud Computing service it is insufficient. 100% should be the goal! The electric utility model might be a good pattern in this case. It's not as simple as that, though! And yet a company won't use Cloud services/applications for critical business processes if the availability is not assured.

Security

Keeping crucial data secure has always been a high priority in information technology. Using Cloud Computing, companies have to take their information outside their own sphere and basically transfer it through a public data network.

SLAs (Service Level Agreements) that closely describe how Cloud Computing providers plan and organize the protection of the data are essential. This may cause a lot of litigation someday if a company did not take care of the information.

A hybrid Cloud might be a good solution to avoid those kinds of problems. The company operates a Private Cloud for crucial information stored within its own datacenter and uses the Public Cloud of a provider to add more features to the Private Cloud. Secure network connections are indispensable in this case and meet today's standards. This approach, however, does not solve the problem of knowing what else happens to the information I am sending into the "blackbox".

Trust

Carrying on from the last sentence above, there are doubts about what might happen to the information in the Cloud as well. Regarding data management and local data privacy, many companies such as insurance or financial institutions see a lot of problems in using Cloud Computing. Using a Private Cloud is no issue, but a Public Cloud doesn't even enter the equation. This is due to the fact that insurance companies handle social data, and no letter may be written or stored on an external system. Insurance companies are subject to supervision under many national laws. For example, the data of a German insurance company may not be hosted on an American host.

Faith and local laws are big hurdles for Cloud Computing. If word of data abuse in the Cloud gets out to the public, irreparable damage will be the direct consequence – maybe for the whole Cloud!