Categories: Cloud Computing

Analyst Strategy Paper: The significance of Frankfurt as a location for Cloud Connectivity

As business-critical data, applications and processes are continuously relocated to external cloud infrastructures, IT operating concepts (public, private, hybrid) as well as network architectures and connectivity strategies are changing significantly for CIOs. On the one hand, modern technology is required to deliver applications in a performant, stable and secure manner; on the other hand, the location is a decisive factor for optimal cloud connectivity.

Against this background, this strategy paper by Crisp Research assesses the role of Frankfurt as a data center location and connectivity hub.

The strategy paper can be downloaded free of charge at "The significance of Frankfurt as a location for Cloud Connectivity".

Categories: Cloud Computing

Analyst Report: Amazon AWS vs. Microsoft Azure

While Microsoft spent the last decades fighting vendors like Novell, Oracle, IBM or HP for on-premise market share, a new giant has established itself in the public cloud with Amazon Web Services, and it is putting out feelers toward enterprise customers: a market that Microsoft predominantly dominates and that holds enormous potential for both vendors.

Market forecasts by Crisp Research show strong growth of 40 percent per year over the next years, with revenues in Germany reaching up to 28 billion euros by 2018. This free analyst report compares the service portfolio and strategy of Amazon Web Services with those of Microsoft Azure.

Categories: Analysis, IT-Infrastructure

Fog Computing: Data, information, applications and services need to be delivered more efficiently to the end user

You read that correctly: this is not about cloud computing but fog computing. Now that the cloud is well on its way to broad adoption, new concepts follow that extend the use of scalable and flexible infrastructures, platforms, applications and further services to ensure faster delivery of data and information to the end user. This is exactly the core function of fog computing. The fog ensures that cloud services, compute, storage, workloads, applications and big data are provided at any edge of a network (the Internet) in a truly distributed way.

What is fog computing?

The fog has the task of delivering data and workloads closer to the user, who is located at the edge of a data connection. In this context, the term "edge computing" is also used. The fog sits organizationally below the cloud and serves as an optimized transfer medium for the services and data within the cloud. The term "fog computing" was coined by Cisco as a new paradigm intended to support distributed devices during wireless data transfer within the Internet of Things. Conceptually, fog computing builds on existing and common technologies such as Content Delivery Networks (CDN), but based on cloud technologies it is meant to enable the delivery of more complex services.

As more and more data must be delivered to an ever-growing number of users, concepts are needed that extend the idea of the cloud and enable companies and vendors to provide their content to the end user over a widely distributed platform. Fog computing should help to bring the distributed data closer to the end user, thereby decreasing latency and the number of required hops, and thus better support mobile computing and streaming services. Besides the Internet of Things, the rising demand of users to access data at any time, from any place and with any device is another reason why the idea of fog computing will become increasingly important.

What are use cases of fog computing?

One should not be too confused by this new term. Fog computing may be new terminology, but looking behind the curtain it quickly becomes apparent that this technology is already used in modern data centers and in the cloud. A look at a few use cases illustrates this.

Seamless integration with the cloud and other services

The fog is not meant to replace the cloud. Fog services are meant to enhance the cloud by isolating the user data that is located exclusively at the edge of a network. From there, administrators should be able to connect analytical applications, security functions and further services directly to the cloud. The infrastructure is still based entirely on the cloud concept but extends to the edge with fog computing.

Services that build vertically on top of the cloud

Many companies and services already use the ideas of fog computing by delivering extensive content to their customers in a targeted way. This includes, among others, web shops and providers of media content. A good example is Netflix, which has to reach its numerous globally distributed customers. With data management in only one or two central data centers, the delivery of its video-on-demand service would not be efficient enough. Fog computing thus makes it possible to provide very large amounts of streamed data by delivering the data performantly into the direct vicinity of the customer.

Enhanced support for mobile devices

With the steady growth of mobile devices and data, administrators gain more control over where users are located at any time, from where they log in and how they access the information. Besides higher speed for the end user, this leads to a higher level of security and data privacy, because the data can be controlled at the various edges. Moreover, fog computing allows better integration with several cloud services and thus ensures an optimized distribution across multiple data centers.

Setting up a tight geographical distribution

Fog computing extends existing cloud services by spanning an edge network that consists of many distributed endpoints. This tightly geographically distributed infrastructure offers advantages for a variety of use cases, including faster collection and analysis of big data, better support for location-based services, since the entire WAN links can be bridged more effectively, and the capability to evaluate data in a massively scalable way in real time.

Data is closer to the user

The amount of data generated by cloud services requires caching of the data or other services that take care of this task. These services are located close to the end user to improve latency and optimize data access. Instead of storing data and information centrally in a data center far away from the user, the fog ensures the direct proximity of the data to the customer.
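To make the caching idea concrete, the following minimal Python sketch shows an edge node that answers repeated requests locally and only falls back to the distant central data center on a cache miss. The class name, the TTL and the origin_fetch callback are illustrative assumptions, not part of any specific fog product.

import time

class EdgeCache:
    """Toy edge cache: serve content close to the user, fetch from the origin only on a miss."""
    def __init__(self, origin_fetch, ttl_seconds=300):
        self.origin_fetch = origin_fetch   # callback that loads data from the central cloud
        self.ttl = ttl_seconds             # how long a cached copy stays valid
        self.store = {}                    # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[1] > time.time():
            return entry[0]                               # hit: answered at the edge, low latency
        value = self.origin_fetch(key)                    # miss: one slow round trip to the origin
        self.store[key] = (value, time.time() + self.ttl)
        return value

# The first access pays the origin latency, repeated accesses are served locally.
cache = EdgeCache(origin_fetch=lambda key: f"content for {key}")
print(cache.get("video-segment-42"))
print(cache.get("video-segment-42"))  # second call is a local cache hit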

Fog computing makes sense

You can think about buzzwords whatever you want; it only becomes interesting once you take a look behind the curtain. The more services, data and applications are delivered to the end user, the more vendors have to find ways to optimize the delivery processes. This means that information needs to be brought closer to the user while latency must be reduced in order to be prepared for the Internet of Things. There is no doubt that the consumerization of IT and BYOD will increase usage and therefore the consumption of bandwidth.

More and more users rely on mobile solutions to run their business and to balance it with their personal life. Increasingly rich content and data are delivered over cloud computing platforms to the edges of the Internet, where at the same time the needs of the users keep growing. With the increasing use of data and cloud services, fog computing will play a central role and help to reduce latency and improve quality for the user. In the future, besides ever larger amounts of data, we will also see more services that rely on data and that must be provided to the user more efficiently. With fog computing, administrators and providers gain the capability to deliver rich content to their customers faster, more efficiently and above all more economically. This leads to faster access to data, better analysis opportunities for companies and equally to a better experience for the end user.

Primarily, Cisco will want to coin the term fog computing in order to use it for a large-scale marketing campaign. However, at the latest when the fog generates a buzz similar to the cloud, we will find more and more CDN and other vendors positioning what they offer in this direction as fog providers.

Categories: Analysis

Survey: Your trust in the Cloud. Europe is the safe haven. End-to-end encryption creates trust.

After the revelations about PRISM I started a small anonymous survey on the current confidence in the cloud to see how the scandal has changed people's personal relationship to the cloud. The significance of the result is a mixed success, since participation was anything but representative. With at least 1,499 visits, interest in the survey was relatively large; a turnout of only 53 respondents is then rather sobering. Thus the survey is not representative, but it at least shows a trend. In this context I would like to thank Open-Xchange and Marlon Wurmitzer of GigaOM for their support.

The survey

The survey consisted of nine questions and was publicly hosted on twtpoll. It asked exclusively about trust in the cloud and how this trust could possibly be strengthened. In addition, the intermediate results were publicly available at all times. The survey was distributed in German- and English-speaking countries on the social networks (Twitter, Facebook, Google Plus) and the business networks XING and LinkedIn, because this issue does not affect a specific target audience but has an impact on all of us. On twtpoll this led to 1,442 views on the web and 57 views on mobile devices, and it ended with 53 respondents.

For this reason the survey should not be considered representative, but it shows a tendency.

The survey results

Despite the PRISM scandal, confidence in the cloud is still present. 42 percent continue to have high confidence, 8 percent even a very high level of confidence. For 15 percent, confidence in the cloud is very low; 21 percent rate their confidence as low. Another 15 percent are neutral towards the cloud.

Confidence in the respondents' current cloud provider is balanced. 30 percent of respondents still have a high level of confidence in their provider, 19 percent even a very high level of trust. This compares with 15 percent each who have low or very low confidence. 21 percent are undecided.

The impact of PRISM on confidence in the cloud holds no surprise. Only 9 percent see no effect for themselves, 8 percent only a slight one; 32 percent are neutral. However, 38 percent of the participants are strongly influenced by the PRISM revelations and 13 percent very strongly.

62 percent of the participants use services of cloud providers that are accused of supporting PRISM; 38 percent are with other providers.

As was to be expected, PRISM has also affected the reputation of the cloud providers. For 36 percent, the revelations have strongly influenced their confidence, for 13 percent even very strongly. However, 32 percent are neutral. For 11 percent the revelations have only a slight influence, and for 8 percent no influence at all.

Despite PRISM, 58 percent want to continue using cloud services. 42 percent have already toyed with the idea of leaving the cloud because of the incidents.

Providers receive a clear signal when it comes to openness. 43 percent (very high) and 26 percent (high) expect unconditional openness from their cloud provider. 25 percent are undecided. For only 2 percent (low) and 4 percent (very low) does it not matter.

74 percent see in 100 percent end-to-end encryption the potential to increase confidence in the cloud; 26 percent see no such potential.

The question of the most secure/trusted region revealed no surprises. With 92 percent, Europe counts as the top region in the world after the PRISM revelations. Africa received 4 percent, North America and Asia-Pacific 2 percent each. Nobody voted for South America.

Comment

Even if the revelations about PRISM caused indignation at first and still continue to create uncertainty, economic life must go on. The tendency of the survey shows that confidence in the cloud has not suffered too much. But at this point it must be said: caught together, hanged together! We have not all plunged into cloud ruin overnight. The crux is that the world is increasingly interconnected through cloud technologies, and the cloud thus serves as a focal point of the modern communications and collaboration infrastructure.

For that reason we cannot take many steps back. A hardliner might of course terminate all digital and analog communication with immediate effect. Whether that is promising is doubtful, because the dependency has become too large and modern corporate existence is shaped by digital communication.

The sometimes high number of neutral responses on trust may have to do with the fact that, in the back of our minds, we have all always entertained the thought that our communication is being observed. With the current revelations we now have it in black and white. The extent of the surveillance, meanwhile also through the disclosure of TEMPORA by the British secret service, has been surprising. In light of TEMPORA, the survey result for Europe as a trusted region is therefore disputable. But against surveillance at strategic intersections of the Internet, even the cloud providers themselves are powerless.

Bottom line, economic life has to go on. But the revelations show that we cannot rely on governments, from which regulations and safeguards are repeatedly demanded. On the contrary, they themselves have shown an interest in reading data along. And one thing we must always bear in mind: how are laws and rules supposed to help when they are broken again and again by the highest authority?

Companies and users must therefore now assume more responsibility, take the reins into their own hands, and in the broadest sense provide their desired level of security (end-to-end encryption) themselves. Numerous solutions from the open source as well as the commercial sector help to achieve these objectives. Providers of cloud and IT solutions are now challenged to show more openness than they may want to.

Graphics on the survey results

1. How is your current trust in the cloud in general?

2. How is your current trust in the cloud provider of your choice?

3. How do the PRISM revelations influence your trust in the cloud?

4. Is your current cloud provider one of the accused?

5. How do the PRISM revelations influence your trust in the cloud provider of your choice?

6. Have you already thought about leaving the cloud or your cloud provider due to the PRISM revelations?

7. How important is the unconditional openness of your provider in times of PRISM and surveillance?

8. Do you think 100% end-to-end encryption without any access or other opportunities for third parties can strengthen the trust?

9. In your mind, which world region is the safest/most trustworthy to store data in?

Categories: Insights

Building a hosted private cloud with the open source cloud computing infrastructure solution openQRM

Companies have recognized the benefits of a flexible IT infrastructure. However, the recent past has reinforced concerns that lead them to avoid the path to the public cloud for reasons of data protection and information security. Therefore, alternatives need to be evaluated. A private cloud would be one, if it did not end in high up-front investments in own hardware and software. The middle way is a hosted private cloud. This type of cloud is already offered by some providers, but there is also the possibility to build and run it yourself. This INSIGHTS report shows how this is possible with the open source cloud computing infrastructure solution openQRM.

Why a Hosted Private Cloud?

Companies are urged to create a more flexible IT infrastructure in order to scale their resource requirements depending on the situation. Ideally, the use of a public cloud meets these requirements, since no upfront investments in own hardware and software are necessary. But many companies dread the way into the public cloud for reasons of data protection and information security and look around for an alternative, which is called the private cloud. The main advantage of a private cloud is flexible self-service provisioning of resources for staff and projects, just as in a public cloud, which is not possible with pure virtualization of the data center infrastructure. However, it should be noted that when building a private cloud, investments in the IT infrastructure must be made to back the virtual resource requirements with a physical foundation.

Therefore, an appropriate balance needs to be found that allows flexible self-service acquisition of resources, but at the same time requires no high investments in own infrastructure components and does not sacrifice a self-determined level of data protection and security. This balance can be found in hosting a private cloud at an external (web) hoster. The necessary physical servers are rented from a hoster who is responsible for their maintenance. In order to cover any physical resource requirements, appropriate arrangements should be made with the hoster so that the hardware is available in time; alternatives include standby servers or similar approaches.

On this external server and storage infrastructure, the cloud infrastructure software is then installed and configured as a virtual hosted private cloud. This allows employees, for example, to start their own servers for software development according to their needs and to freeze or remove them again after the project. Billing of the used resources is handled by the cloud infrastructure software, which provides such functions.

openQRM Cloud

Basically, an openQRM Cloud can be used to build both a public and a private cloud. It is based entirely on openQRM's appliance model and offers fully automated deployments that can be requested by cloud users. For this, openQRM Cloud supports all the virtualization and storage technologies that are also supported by openQRM itself. It is also possible to provision physical systems via the openQRM Cloud.

Based on openQRM Enterprise Cloud Zones, a fully distributed openQRM Cloud infrastructure can also be built. Thus, several separate data centers can be divided into logical areas, or the company topology can be constructed hierarchically, logically and safely separated. Moreover, openQRM Enterprise Cloud Zones integrates a central, multilingual cloud portal including Google Maps integration, creating an interactive overview of all sites and systems.

Structure of the reference environment

For building our reference setup, a physical server and multiple public IP addresses are required. There are two options for installing openQRM:

  • Recommended: configuration of a private class C subnet (192.168.x.x/255.255.255.0) in which openQRM is operated. openQRM requires an additional public IP address for access from the outside.
  • Option: install openQRM in a virtual machine. In this variant, openQRM controls the physical server and obtains the virtual machines from the physical host for the subsequent operation of the cloud.

For the assignment of public IP addresses, Cloud NAT can be used in both scenarios. This openQRM Cloud function translates the IP addresses of the private openQRM class C network into public addresses. This requires pre- and post-routing rules on the gateway/router using iptables, configured as follows:

  • iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o br0 -j MASQUERADE
  • iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
  • More information on pre- and post-routing with iptables can be found at http://www.karlrupp.net/en/computer/nat_tutorial

For the configuration of complex network environments, the IP management plugin is recommended. This enterprise plugin allows arbitrary network and IP address configurations to be set for the managed servers. In the openQRM Cloud, it also provides a mapping of networks to cloud users and groups and supports automated VLAN management.

In addition, two bridges are needed:

  • One for the public interface with a public IP address.
  • One for the private interface (dpe), for which DHCP is configured.

The data in the cloud is later stored on the local storage of the physical server. There are two variants for this:

Recommended:

  • KVM-Storage LVM Deployment (LVM Logical Volume Deployment)
  • Requires one or more dedicated LVM volume groups for the virtual machines. For more complex setups, a central iSCSI target or a SAN is recommended.

Option:

  • KVM-Storage BF Deployment (blockfile deployment)
  • Create one or more directories on the Linux server, e.g.
    • /var/lib/kvm-storage/storage1
    • /var/lib/kvm-storage/storage2
    • (The storage directories can be set arbitrarily in the plugin configuration.)

  • For more complex setups, a central NAS for the configured mount points should be used.

Finally, iptables must be configured according to the rules above and the desired level of security. After that, the installation of openQRM follows. Packages for popular Linux distributions are available at http://packages.openqrm.com. Once openQRM has been installed and initialized, the configuration follows.

Basic configuration of openQRM

The first step after initialization is to edit "/usr/share/openqrm/plugins/dns/etc/openqrm-plugin-dns.conf" and change the default value to your own domain:

# Configure the domain for the private network
# please configure your domain name for the openQRM network here!
OPENQRM_SERVER_DOMAIN="oqnet.org"

After that we activate and start the plug-ins via the web interface of the openQRM server. The following plugins are absolutely necessary for this:

DNS Plugin

  • Used for the automated management of the DNS service for the openQRM management network.

DHCPD

  • Automatically manages the IP addresses for the openQRM management network.

KVM Storage

  • Integrates the KVM virtualization technology for the local deployment.

Cloud-Plugin

  • Allows the construction of a private and public cloud computing environment with openQRM.

Further additional plugins are recommended:

Collectd

  • A monitoring system including long-term statistics and graphics.

LCMC

  • Integrates the Linux Cluster Management Console to manage the high availability of services.

High-Availability

  • Enables automatic high availability of appliances.

I-do-it (Enterprise Plugin)

  • Provides an automated documentation system (CMDB).

Local server

  • Integrates existing and locally installed servers with openQRM.

Nagios 3

  • Automatically monitors systems and services.

NoVNC

  • Provides a remote web console for accessing virtual machines and physical systems.

Puppet

  • Integrates Puppet for a fully automated configuration management and application deployment in openQRM.

SSHterm

  • Allows secure login via a web shell to the openQRM server and to integrated resources.

Plugins that offer more convenience for the automatic installation of virtual machines as cloud templates are:

Cobbler

  • Integrates Cobbler for the automated provisioning of Linux systems in openQRM.

FAI

  • Integrates FAI for the automated provisioning of Linux systems in openQRM.

LinuxCOE

  • Integrates LinuxCOE for the automated provisioning of Linux systems in openQRM.

Opsi

  • Integrates Opsi for the automated provisioning of Windows systems in openQRM.

Clonezilla/local-storage

  • Integrates Clonezilla for the automated provisioning of Linux and Windows systems in openQRM.

Basic configuration of the host function for the virtual machines

Case 1: openQRM is installed directly on the physical system

Next, the host must be configured to provide the virtual machines. For that, an appliance of type KVM Storage Host is created. This works as follows:

  • Create appliance
    • Base > Appliance > Create
  • Name: e.g. openQRM
  • Select the openQRM server itself as resource
  • Type: KVM Storage Host

This gives openQRM the information that a KVM storage is to be created on this machine.

Case 2: openQRM is installed in a virtual machine running on the physical system

Using the "local-server" plugin, the physical system is integrated into openQRM. For this, the "openqrm-local-server" integration tool is copied from the openQRM server to the system to be integrated, e.g.:

scp /usr/share/openqrm/plugins/local-server/bin/openqrm-local-server [ip-address of the physical system]:/tmp/

After that, it is executed on the system to be integrated:

ssh [ip-address of the physical system] /tmp/openqrm-local-server integrate -u openqrm -p openqrm -q [ip-address of the openQRM server] -i br0 [-s http/https]

(In this example “br0” is the bridge to the openQRM management network.)

The integration via "local-server" automatically creates the following in openQRM:

  • a new resource
  • a new image
  • a new kernel
  • a new appliance from the sub-components above

Next, the appliance of the just-integrated physical system must be configured to provide the virtual machines. For this, the appliance type is set to KVM Storage Host. That works as follows:

  • Edit the appliance
    • Base > Appliance > Edit
  • Type: Set KVM Storage Host

This gives openQRM the information that a KVM storage is to be created on this machine.

Basic configuration of the storage function

Now the basic configuration of the storage follows. For this purpose, a storage object of the desired type is created. This works as follows:

  • Create storage
    • Base > Components > Storage > Create
  • Case 1: select the resource of the openQRM server
  • Case 2: select the resource of the integrated physical system
  • Name: e.g. KVMStorage001
  • Select the deployment type
    • This depends on the type selected at the beginning: KVM-Storage LVM deployment or directory (KVM-Storage BF deployment)

Preparation of virtual machine images

In order to later provide virtual machines (VMs) over the cloud portal as part of finished products, an image for a VM must first be prepared. This works as follows:

  • Create a new virtual machine with a new virtual disk and install an ISO image on it.
    • Plugins > Deployment > LinuxCOE > Create Templates
    • The created images are automatically stored in an ISO pool that every virtual machine within openQRM can access.

Subsequently, a basis for the master template is created. This serves as the foundation for providing users a product via the ordering process.

  • Create a new appliance
    • Base > Appliance > Create
  • Create a new resource
    • KVM-Storage virtual machine
      • Create a new VM
      • Make settings
      • Select an ISO image
      • Create
    • Select created resource
  • Create a new image
    • Add image as KVM-Storage volume
    • Select KVM-Storage
    • Select volume group on KVM-Storage
    • Add a new logical volume
    • Select an image for the appliance
    • Edit it to set a password (the password previously chosen in the ISO is overridden.)
  • Select kernel
    • From the local disk
    • (LAN boot is also possible)
  • Start appliance
    • The automatic installation can now be tracked via VNC.
    • Further adaptations can be made manually.
    • Please note:
      • Misc > Local-Server > Help > Local VMs ("Local-Server for local virtual machines")

Cleaning up

The created appliance can now be stopped and deleted afterwards. The important point was to create an image that can be used as a master template for the cloud.

The image created using the appliance contains the basic operating system that was installed from the ISO image.

Configuration of the openQRM Cloud

We have now finished all preparations and can start configuring the openQRM Cloud. The necessary settings are found at "Plugin > Cloud > Configuration > Main Config". All parameters adapted here have a direct impact on the behavior of the whole cloud.

Basically, an openQRM Cloud can be run with the default settings. Depending on the needs and one's own specific situation, adaptations can be made; the "description" entries in the right column of the table are helpful here.

However, there are parameters that need to be considered regardless of the specific use case (a summary sketch follows after the list). These are:

Automatic provisioning (auto_provision)

  • Determines whether systems are provisioned automatically by the cloud or whether approval by a system administrator is needed.

Provisioning of physical systems (request_physical_systems)

  • Defines whether, besides virtual machines, physical hosts can also be provisioned by the cloud.

Cloning of images (default_clone_on_deploy)

  • By default the cloud rolls out copies (clones) of an image.

High-availability (show_ha_checkbox)

  • Enables operating the openQRM Cloud with high availability for the provided resources.

Billing of the used resources (cloud_billing_enabled)

  • openQRM has an extensive billing system for defining own prices for all resources, providing a transparent overview of the running costs.

Cloud product manager (cloud_selector)

  • Enables the cloud product manager, with which users can be offered various resources via the cloud portal.

Currency for the settlement of resources (cloud_currency)

  • Determines the local currency with which the resources are to be settled.

Exchange ratio for resources in real currency (cloud_1000_ccus)

  • Determines how many units of the previously fixed real currency 1,000 CCUs (Cloud Computing Units) correspond to.

Resource allocation for groups (resource_pooling)

  • Determines from which hosts a defined user group receives its virtual machines.
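The main configuration itself is edited in the openQRM web interface, not in a file. Purely as an illustration of how these switches fit together, the following Python sketch collects the parameters listed above with plausible example values; the values are assumptions for a small hosted private cloud, not shipped defaults.

# Illustrative summary only: these values are set via
# Plugin > Cloud > Configuration > Main Config, not in Python.
cloud_main_config = {
    "auto_provision":           True,            # provision without waiting for admin approval
    "request_physical_systems": False,           # offer only virtual machines, no bare metal
    "default_clone_on_deploy":  True,            # roll out clones of the master image
    "show_ha_checkbox":         True,            # let users order high availability
    "cloud_billing_enabled":    True,            # meter resource usage in CCUs
    "cloud_selector":           True,            # enable the cloud product manager
    "cloud_currency":           "EUR",           # currency used for settlement
    "cloud_1000_ccus":          1.0,             # assumed ratio: 1,000 CCUs = 1.00 EUR
    "resource_pooling":         "KVMStorage001", # host that serves a given user group
}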

Creating products for the openQRM Cloud

To provide our users with resources over the cloud portal, we first have to create products that define the configuration of a virtual machine. The settings for that are found at "Plugin > Cloud > Configuration > Products".

The "Cloud product management" is used to create various products from which users can later choose to assemble their own virtual machines via the cloud portal. The products available to us are:

  • Number of CPUs
  • Size of local disks
  • Size of RAM
  • Kernel type
  • Number of network interfaces
  • Pre-installed applications
  • Virtualization type
  • Whether a virtual machine should be highly available

Via the status line, each product can be activated or deactivated using +/- to show or hide it for the user in the cloud portal.

Please note: Products which are deactivated but are still active within a virtual machine continue to be billed.

To create a new CPU product, we select the "CPU" tab and define our desired parameters in the area "Define a new CPU product".

The first parameter defines how many CPUs (cores) our product should have, here 64. The second parameter determines the value of the product, i.e. what costs are incurred per hour of use. In this example, 64 CPUs cost 10 CCUs per hour.

The arrow keys determine the order in which the individual products are displayed in the cloud portal; the topmost value is the default.

Please note: in the cloud portal, standard profiles in the sizes "small", "medium" and "big" exist. The profiles are determined automatically from the ordering of the respective products. That means "small" is always the first value, "medium" the second and "big" the third.

openQRM also allows ordering virtual machines with pre-configured software stacks. For this, openQRM uses Puppet (Plugins > Deployment > Puppet). Thus, for example, it is possible to order the popular LAMP stack.

Once we have configured our product portfolio, it is the user's turn to order virtual machines. This is done via the cloud portal.

openQRM Cloud-Portal

To create a new virtual machine (VM), we click on the tab "New". An input mask follows on which we can create our VM based on the products the administrator has defined and approved in the backend.

We choose the profile “Big” and a LAMP server. Our virtual machine now consists of the following products:

  • Type: KVM-Storage VM
  • RAM: 1 GB
  • CPU: 64 cores
  • Disk: 8 GB
  • NIC: 1

In addition, the virtual machine should be highly available. This means that if the VM fails, a substitute machine with exactly the same configuration is started automatically so that work can continue.

For this configuration we have to pay 35 CCUs per hour. This is equivalent to roughly 0.04 euros per hour, or €0.84 per day and €26.04 per month.
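As a quick cross-check of these numbers, the following small calculation reproduces them under the assumption that the administrator has set an exchange ratio of 1,000 CCUs per euro (via cloud_1000_ccus) and that a month is counted as 31 days; both are assumptions, not values taken from the configuration shown here.

# Cross-check of the example price, assuming 1,000 CCUs = 1.00 EUR and a 31-day month.
ccu_per_hour = 35
eur_per_ccu = 1.00 / 1000

eur_per_hour = ccu_per_hour * eur_per_ccu   # 0.035 EUR, shown rounded as 0.04
eur_per_day = eur_per_hour * 24             # 0.84 EUR
eur_per_month = eur_per_day * 31            # 26.04 EUR

print(f"{eur_per_hour:.3f} EUR/h, {eur_per_day:.2f} EUR/day, {eur_per_month:.2f} EUR/month")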

To order the virtual machine, we select "send".

Under the tab "Orders" we see all current and past orders placed with our user. The status "active" in the first column shows that the machine has already started.

In parallel, we receive an e-mail with the IP address, a username and a password that we can use to log into the virtual machine.

The tab "Systems" confirms this information and shows further details of the virtual machine. In addition, we have the option to change the system's configuration, pause the virtual machine or restart it. Furthermore, login via a web shell is possible.

If the virtual machine is no longer needed, it can be paused. Alternatively, the administrator can arrange this due to inactivity of the system or at a specific time.

Creating a virtual machine with the "Visual Cloud Designer"

Besides the "ordinary" way of building a virtual machine, the openQRM Cloud portal enables the user to do this conveniently via drag and drop. The "Visual Cloud Designer" helps here, which can be found behind the tab "VCD".

Using the slider on the left below "Cloud Components", it is possible to scroll through the products. With the mouse, the "Cloud Appliance" (virtual machine) in the middle can be assembled from the appropriate products.

In this case we assembled our virtual machine "Testosteron" with KVM-Storage, Ubuntu 12.04, 64 CPUs, 1,024 MB RAM, an 8 GB disk, one NIC, web server software and the high-availability feature.

With one click on “Check Costs”, openQRM tells us that we will pay 0.03 EUR per hour for this configuration.

To start the ordering process for the virtual machine, we click "request". We get the message that openQRM is starting to roll out the resource and that we will receive further information in our mailbox.

The e-mail includes, as described above, all access data to work with the virtual machine.

In the cloud portal under “systems” we already see the started virtual machine.

Creating a virtual machine with the "Visual Infrastructure Designer"

Besides the provisioning of single virtual machines, the openQRM Cloud portal also offers the possibility to provide complete infrastructures consisting of multiple virtual machines and further components with one click.

For this we use the "Visual Infrastructure Designer", which can be found in the cloud portal behind the tab "VID".

Using the "VID" it is possible to build and deploy a complete infrastructure WYSIWYG-style via drag and drop. For this purpose, it is first necessary to create ready-made profiles with pre-configured virtual machines, which include for example web servers, routers or gateways. These can then be deployed.

Categories: Analysis

How to protect a company's data from surveillance in the cloud?

With PRISM, the U.S. government has further increased the uncertainty among Internet users and companies, and has thereby enormously deepened the loss of confidence in U.S. vendors. After the Patriot Act, which was often cited as the main argument against the use of cloud solutions from US-based providers, the surveillance by the NSA is the final straw. From a business perspective, under the present circumstances, the decision can only be to opt out of a cloud provider in the United States, even if it has a subsidiary with a location and a data center in Europe or Germany. I have already pointed that out in this article. Nevertheless, economic life must go on, and it can also go on with the cloud. However, attention must be paid to technical security, which is discussed in this article.

Affected parties

This whole issue concerns not just companies but every user who actively communicates in the cloud and shares and synchronizes data there, even though the issue of data protection cannot be neglected in this context either. For companies there is usually even more at stake when internal company information is intercepted or voice and video communication is observed. At this point it must be mentioned that this does not primarily have anything to do with the cloud; data communication was operated long before cloud infrastructures and services. However, the cloud leads to ever-increasing interconnection and will act as the focal point of the modern communications and collaboration infrastructure in the future.

The current security situation

The PRISM scandal shows the full extent of the possibilities that allow U.S. security agencies to access global data communication unimpeded and without regard for anything. For this, the U.S. government officially uses the "National Security Letter (NSL)" of the U.S. Patriot Act and the "Foreign Intelligence Surveillance Act (FISA)". Due to these anti-terror laws, U.S. vendors and their subsidiaries abroad are obliged to provide details about requested information.

As part of the PRISM revelations, there has also been speculation about supposed interfaces, "copy rooms" or backdoors at the providers with which third parties can directly and freely tap the data. However, the providers oppose this vehemently.

U.S. vendors. I’m good, thanks?

When choosing a cloud provider*, different aspects are considered that can be roughly divided into technical and organizational areas. Here the technical area reflects the technical security and the organizational area the legal security.

The organizational security is to be treated with caution. The Patriot Act legally opens the doors for U.S. security agencies if there is a suspected case. Whether this always remains within the legal framework is something many now doubt. At this point, trust is essential.

Technologically, the data centers of cloud providers can be classified as secure. The effort and investment made by the vendors cannot be matched by a normal company. But again, 100% security can never be guaranteed. If possible, the user should also apply their own security mechanisms. Furthermore, the rumors about government intrusions by the NSA should not be ignored.

About two U.S. phone companies, confirmed reports are circulating that speak of direct access to communication by the NSA and of heavily secured rooms equipped with modern surveillance technologies. In this context, providers of on-premise IT solutions should also be scrutinized as to how far they are undermined.

From both points of view and given the current security situation, U.S. vendors should be treated with caution. This also applies to their subsidiaries in the EU. After all, they are not even able to guarantee at least the necessary legal security.

But even the German secret service should not be ignored. Recent reports indicate that the Federal Intelligence Service (BND) will also massively expand its surveillance of the Internet. A budget of 100 million euros is earmarked for this, of which the federal government has already released five million. Unlike the NSA, the BND will not store the complete data traffic on the Internet but only check it for certain suspicious content. For this purpose it may read along up to 20 percent of the communication data between Germany and other countries, according to the G 10 Act.

Hardliners would have to cease all digital and analog communication immediately. But this will not work, because the dependency has become too large and modern business life is determined by communication. Therefore, despite surveillance, other legal ways must be found to ensure secure communication and data transmission.

* In this context a cloud provider can be a service provider or a provider of private cloud or IT hardware and software solutions.

Requirements for secure cloud services and IT solutions

First, it must be clearly stated that there is no universal remedy. The risk always originates with the user, who either is not aware of the dangerous situation or deliberately steals corporate data. Regardless of this, the PRISM findings lead to a new security assessment in the IT sector, and it is to be hoped that this also increases the security awareness of users.

Companies can obtain support from cloud services and IT solutions that have made the issue of unconditional security part of their leitmotif from the beginning. Under the present circumstances, these providers should preferably come from Europe or Germany.

Even if there are already first reports of interventions and influence by the U.S. government and U.S. providers on the European Commission, which have prevented an "anti-FISA" clause in the EU data protection reform, no laws similar to the U.S. Patriot Act or FISA exist in Europe.

Therefore, European and German IT vendors that are not subject to the Patriot Act and are not infiltrated by the state can also help U.S. users to operate their data communication securely.

Criteria for vendor selection

When it comes to security, it is always about trust. A provider can only achieve this trust through openness, by letting its customers look at its technological cards. IT vendors are often criticized for being closed and for not providing information about their proprietary security protocols. This is only partly justified, because there are also providers who are willing to talk about it and make no secret of it. Thus, it is important to find this kind of provider.

In addition to the subjective issue of trust, it is in particular the implemented security that plays a very important role. Here it should be ensured that the provider uses current encryption mechanisms. This includes:

  • Advanced Encryption Standard – AES 256 to encrypt the data.
  • Diffie-Hellman and RSA 3072 for key exchange.
  • Message Digest 5/6 – MD5/MD6 for the hash function.

Furthermore, the importance of end-to-end encryption of all communication keeps growing. This means that the whole process a user passes through in the solution is encrypted continuously from beginning to end. This includes, among other things:

  • The user registration
  • The Login
  • The data transfer (send/receive)
  • Transfer of key pairs (public/private key)
  • The storage location on the server
  • The storage location on the local device
  • The session while a document is edited

In this context it is very important to understand that the private key used to access the data and the system may be owned exclusively by the user and may only be stored, encrypted, on the user's local system. The vendor must have no way to restore this private key and must never get access to the stored data. Caution: there are cloud storage providers that can both restore the private key and obtain access to the user's data.
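To make this key handling concrete, here is a minimal Python sketch of hybrid client-side encryption using the third-party cryptography package: the data is encrypted with AES-256-GCM, the AES key is wrapped with a 3072-bit RSA public key, and the RSA private key is generated and kept on the user's device only. It illustrates the principle discussed here and is not the protocol of any particular provider; all function and variable names are my own.

# Minimal sketch of client-side hybrid encryption (illustrative, not a vendor protocol).
# Requires the third-party "cryptography" package.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The key pair is generated on the client; only the public key ever leaves the device.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

def encrypt_for_upload(plaintext: bytes):
    aes_key = AESGCM.generate_key(bit_length=256)        # fresh AES-256 key per object
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = public_key.encrypt(                     # only the key owner can unwrap this
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, nonce, ciphertext                 # safe to hand to the storage provider

def decrypt_after_download(wrapped_key, nonce, ciphertext):
    aes_key = private_key.decrypt(                        # happens on the client only
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)

wrapped, nonce, ct = encrypt_for_upload(b"internal quarterly report")
assert decrypt_after_download(wrapped, nonce, ct) == b"internal quarterly report"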

Furthermore, there are vendors that emphasize keeping control over one's own data. That is indeed true. However, sooner or later it becomes inevitable to communicate externally, and then strong end-to-end encryption is essential.

Management advisory

In this context I would like to mention TeamDrive, which I analyzed recently. The German file sharing and synchronization solution for businesses has been awarded the Data Protection Seal of the Independent Centre for Privacy Protection Schleswig-Holstein (ULD) and is a Gartner "Cool Vendor in Privacy" 2013. From time to time, TeamDrive is described in the media as proprietary and closed. I cannot confirm this. For my analysis, TeamDrive willingly gave me extensive information (partly under NDA). Even the self-developed protocol will be disclosed on request for an audit.

More information on selecting a secure share, sync and collaboration solution

I would like to point to my security comparison between TeamDrive and ownCloud, in which I compared both security architectures. The comparison also provides further criteria to consider when choosing a secure share, sync and collaboration solution.

Categories: Analysis

Survey: How is your current trust in the cloud?

After the revelations about PRISM, I have started a small anonymous survey to see what the current level of confidence in the cloud is and how the scandal has changed people's personal relationship to the cloud.

The questions

  • How is your current trust in the cloud in general?
  • How is your current trust in the cloud provider of your choice?
  • How do the PRISM revelations influence your trust in the cloud?
  • Is your current cloud provider one of the accused?
  • How do the PRISM revelations influence your trust in the cloud provider of your choice?
  • Have you already thought about leaving the cloud or your cloud provider due to the PRISM revelations?
  • How important is the unconditional openness of your provider in times of PRISM and surveillance?
  • Do you think 100% end-to-end encryption without any access or other opportunities for third parties can strengthen the trust?
  • In your mind, which world region is the safest/most trustworthy to store data in?

To participate in the survey, please follow this link:

Your trust in the Cloud! – After the PRISM revelations, how is your trust in the cloud?

Categories: Comment

PRISM plays into the hands of German and European cloud computing providers

The U.S. government, and above all PRISM, has done U.S. cloud computing providers a bad turn. The first discussions are now flaring up as to whether the public cloud market is moribund. Not by a long shot. On the contrary, this scandal plays into the hands of European and German cloud computing providers and will ensure that the European cloud computing market grows more strongly in the future than predicted. The U.S. government has itself massively destroyed the trust in the United States and its vendors and thus has them on its conscience, which is why companies today have to look for alternatives.

We’ve all known it

Companies have always had suspicions and concerns about storing their data in the public cloud of a U.S. provider. Here the Patriot Act was the focus of discussion in the Q&A sessions and panels after presentations and moderations that I have given. With PRISM, the discussion has now reached its peak and, unfortunately, confirms those who have always used eavesdropping by the United States and other countries as an argument.

David Linthicum has already thanked the NSA for the murder of the cloud. I take a step back and argue that the NSA "would be" responsible for the death of U.S. cloud providers. Whether it comes to that remains to be seen; human decisions are not always rational in nature.

Notwithstanding the above, the public cloud is not completely dead. Even before the PRISM scandal became known, companies had the task of classifying their data as business-critical or public. This now needs to be strengthened further, because completely abandoning the public cloud would be wrong.

Bye bye USA! Welcome Europe and Germany

As I wrote above, I see less the death of the cloud itself and much more the coming death of U.S. providers. I include here those that have their locations and data centers in Europe or Germany, because the trust is so heavily destroyed that all declarations and appeasements go up in smoke in no time.

The fact is that U.S. providers and their subsidiaries are subject to the Patriot Act and therefore also to the "Foreign Intelligence Surveillance Act (FISA)", which requires them to hand over requested information. The providers are currently trying to take an active stance by demanding more accountability from the U.S. government in order to keep at least what little trust is left. This is commendable but also necessary. Nevertheless, the discussion about the supposed interfaces, "copy rooms" or backdoors at the vendors, with which third parties can freely tap the data, leaves a very bad aftertaste.

This should now encourage European and German cloud providers even more. After all, not being subject to U.S. influence should be played out as a greater competitive advantage than ever. Relevant factors include, among others, the location of the data center, the legal framework, the contract, but also the technical security (end-to-end encryption).

Depending on how the U.S. government reacts in the near future, it will be exciting to see how U.S. providers behave on the European market. So far, there are only 100-percent subsidiaries of the large U.S. companies that exist here locally as mere offshoots and are fully subordinate to the parent company in the United States.

Even though I advocate neither a pure "Euro cloud" nor a "German cloud", under the current circumstances there can only be a European solution. Viviane Reding, EU Commissioner for Justice, is now needed to enforce an unconditional privacy regulation for Europe, which would strengthen European companies against U.S. companies on this point in the competition.

The courage of the providers is required

It appears that there will be no second Amazon, Google, Microsoft or Salesforce from Europe, let alone Germany. The large players, especially T-Systems and SAP, are strengthening their current cloud business and giving companies a real alternative to U.S. providers. Even a few bright spots from startups can be seen sporadically on the horizon. What is missing, among other things, are real and good infrastructure-as-a-service (IaaS) offerings from young companies that do not only have infrastructure resources in their portfolio but also rely on services similar to Amazon's. The problem with IaaS is the high capital requirement that is necessary to ensure massive scalability and more.

Other startups that offer, for example, platform-as-a-service (PaaS) in many cases build in the background on the infrastructure of Amazon, a U.S. provider. Here, providers such as T-Systems have the duty not to focus exclusively on enterprises and to also allow developers to build their ideas and solutions on a cloud infrastructure in Germany and Europe in the "Amazon way". There is still a lack of real(!) German-European alternatives to Amazon Web Services, Google, Microsoft and Salesforce!

How should companies behave now?

Considering all these aspects, companies must be advised to look for a provider located in a country that guarantees the legal conditions the company itself requires regarding data protection and information security. And that can currently only be a provider from Europe or Germany; incidentally, that was the case even before PRISM. Furthermore, companies themselves have the duty to classify their data and to protect mission-critical information at a much higher level than less important and publicly available information.

How things actually look at U.S. companies is hard to say. After all, 56 percent of the U.S. population find the eavesdropping of telephone calls acceptable. Europeans, and especially Germans, will see that from a different angle. In particular, we Germans will not accept a Stasi 2.0 that, instead of relying on spies from within the ranks (neighbors, friends, parents, children, etc.), relies on machines and services.

Categories: Comment

If you sell your ASP as SaaS, you are doing something fundamentally wrong!

Despite the continuing spread of the cloud and the software-as-a-service (SaaS) model, you keep meeting people who are anchored in the firm belief that they have been offering cloud computing for 20 years, because ASP (Application Service Providing) was, after all, nothing else. The situation is similar with traditional outsourcers, whose pre-sales teams will gladly agree to an appointment after a call to model the customer's 'tailored' cloud computing server infrastructure on site, which the customer may pay for in advance. This post is solely about why ASP has nothing to do with SaaS and the cloud.

ASP: 50 customers and 50 applications

Let's be honest: a company that wants to distribute an application will sooner or later come to the idea that it somehow wants to maximize its profits. In this context, economies of scale play an important role in placing an efficient solution on the market that is designed to remain profitable in spite of its own growth. Unfortunately, an ASP model cannot deliver exactly that, because ASP has one problem: it does not scale dynamically. It is only as fast as the administrator who must ensure that another server is purchased, installed in the server room, and equipped with the operating system and other basic software plus the actual client software.

Furthermore, in the typical ASP model, a separate instance of the respective software is needed for each customer. In the worst case (depending on performance), even a separate (physical) server is required for each customer. In numbers, this means that for 50 customers who want to use exactly the same application but separated from each other, 50 installations of the application and 50 servers are required. Topics such as databases should not be forgotten either; in many cases, up to three times as many databases must be operated as applications are provided to customers.

Just consider the effort (cost) an ASP provider incurs to integrate and manage the hosted systems, to onboard new customers and, beyond that, to provide them with patches and upgrades. This is unprofitable!

SaaS: 50 customers and 1 application

Compared to ASP, SaaS builds on a much more efficient and more profitable model. Instead of running one application per customer, only one instance of the application is used for all customers. This means that for 50 customers only one instance of the application is required, which all of them use together but isolated from each other. This reduces, in particular, the expenses for operating and managing the application. Where an administrator in the ASP model had to update each of the 50 software installations, with SaaS it is sufficient to update a single instance. If new customers want to access the application, it is set up for them automatically, without an administrator having to install a new server and set up the application for them. This saves both time and capital, and it means that the application grows profitably with the requirements of new customers.
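A toy calculation makes this difference in economies of scale tangible. The figures (one installation and, in the worst case, one server per ASP customer; one shared installation and roughly 50 customers per server for SaaS) are illustrative assumptions taken from the scenario above, not measurements from a real provider.

# Toy comparison of the operational footprint described above (illustrative assumptions).
def asp_footprint(customers):
    # one dedicated installation and, in the worst case, one server per customer
    return {"installations": customers, "servers": customers}

def saas_footprint(customers, customers_per_server=50):
    # one shared, multi-tenant installation; servers grow only with load
    servers = max(1, -(-customers // customers_per_server))  # ceiling division
    return {"installations": 1, "servers": servers}

for n in (50, 500):
    print(n, "customers:", "ASP", asp_footprint(n), "| SaaS", saas_footprint(n))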

Multi-tenancy is the key to success

The concept behind SaaS, which accounts for a significant difference between ASP and SaaS, is called multi-tenancy. Here, several tenants, i.e. customers, are hosted on the same server or software system without being able to look into each other's data, settings, etc. This means that each customer can see and edit only its own data. A single tenant within the system forms a unit that is closed in terms of data and organization.

As noted above, the benefit of a multi-tenant system is that the application is installed and maintained centrally and that storage requirements are optimized. This is because shared data and objects are held across tenants and only need to be stored once per installed system and not per tenant. In this context it must be stressed again that a software system does not become multi-tenant by giving each client its own software instance. With multi-tenancy, all tenants use one centrally managed instance of the application.
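A minimal sketch shows what this tenant isolation within a single shared instance can look like at the data layer. The table, column and tenant names are hypothetical and only illustrate the principle of scoping every query to the calling tenant.

# Minimal sketch of row-level tenant isolation in one shared application instance.
# Table, column and tenant names are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE documents (tenant_id TEXT, title TEXT)")
db.executemany("INSERT INTO documents VALUES (?, ?)", [
    ("customer_a", "Offer 2013-07"),
    ("customer_b", "Internal roadmap"),
])

def documents_for(tenant_id):
    # Every query is scoped to the caller's tenant: one instance, isolated data.
    rows = db.execute("SELECT title FROM documents WHERE tenant_id = ?", (tenant_id,))
    return [title for (title,) in rows]

print(documents_for("customer_a"))  # ['Offer 2013-07'] -- customer_b's data stays invisible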

If you still wonder why ASP should not be sold as SaaS, read "Software-as-a-Service: Why your application belongs to the cloud as well".

Categories: Comment

PRISM: Even the University of Furtwangen is planning interfaces for cloud monitoring in Germany

PRISM is the latest buzzword. That something similar could happen in Germany too is cause for worry. Although this interview was published in the German Manager Magazin, it unfortunately seems to have gone somewhat unnoticed in the rest of the German media landscape. Because in Germany, too, we are on the way to integrating interfaces for monitoring cloud solutions, developed and promoted by the University of Furtwangen!

Third-party organizations should be able to check data streams

In an interview with Manager Magazin under the seemingly innocuous title "SAFETY IN CLOUD COMPUTING – The customer comes off second best", Professor Christoph Reich of the computer science department at the University of Furtwangen, and director of its cloud research center, says: "We want to define interfaces so that third-party organizations get the opportunity to review the data streams."

This statement was in response to the question of how it can be technically realized that some kind of accountability chain is set up that works across vendors. The background is that ownership of data can be transferred, and personal data should be particularly well protected. For this reason, this accountability chain must not break if another provider comes into play.

So far, so good. But it gets interesting with the subsequent question by Manager Magazin:

Would the federal government also be a potential customer? German law enforcement agencies are even asking for a single interface to monitor cloud communication data in real time.

Professor Reich answers:

In principle, this is going in that direction. But judicially admissible verifiability looks very different. If you want to record evidential data, you need special storage for that, and that is very expensive. We want to give customers the opportunity to see, visualized, where their data is.

Never mind that the University of Furtwangen does not have this "special storage"; a government organization certainly does. And as long as the interfaces are available, the data can be captured and stored at a third location.

Cloud computing lives on trust

I already wrote about this three years ago in "Cloud computing is a question of trust, availability and security". The University of Furtwangen would also like to create more trust in the cloud with its "surveillance project".

I just wonder how confidence is supposed to keep growing if interfaces for the potential (government) monitoring of data streams are to be integrated into cloud solutions.