Categories
Cloud Computing

Analyst Strategy Paper: The significance of Frankfurt as a location for Cloud Connectivity

As business-critical data, applications and processes are continuously relocated to external cloud infrastructures, IT operating concepts (public, private, hybrid) as well as network architectures and connectivity strategies are changing significantly for CIOs. On the one hand, modern technology is required to deliver applications in a performant, stable and secure manner; on the other hand, the location is a decisive factor for optimal "Cloud Connectivity".

Against this background, Crisp Research uses this strategy paper to assess the role of Frankfurt as a data center location and connectivity hub.

The strategy paper can be downloaded free of charge at "The significance of Frankfurt as a location for Cloud Connectivity".

Categories
Cloud Computing

The OpenStack appeal continues to rise

For many CIOs and cloud strategists, the open cloud management framework OpenStack still counts as little more than a marketing engine. But this perception is deceptive. With the "Icehouse" release and the serious support of big IT vendors, the open source project is evolving into a leading cloud standard.

The OpenStack momentum continues to build

OpenStack can be used in several different scenarios as an infrastructure foundation for public, private and hybrid clouds. Crisp Research sees “Icehouse” as an important step for the OpenStack community to increase its appeal and to help users on their journey of running their own OpenStack based cloud. To be successful with OpenStack, it is important for CIOs to find the right mix of products, services and resources in the community.

The numbers from the recent "OpenStack Summit" are an indicator of the building OpenStack momentum. More than 4,000 attendees testify to the growing importance of the open source cloud infrastructure software. The same applies to current OpenStack deployments: compared to Q1/2014, the number of worldwide projects increased by 60 percent in Q2/2014.

On-premise private cloud deployments are still in the lead. There were 55 private cloud deployments in Q1/2014 and 85 deployments in Q2/2014. Even the number of worldwide OpenStack public clouds has jumped, from 17 to 29.
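A quick cross-check of the 60 percent figure against the deployment counts above (assuming it refers to private and public deployments combined): 55 + 17 = 72 tracked deployments in Q1/2014 versus 85 + 29 = 114 in Q2/2014, an increase of roughly 58 percent, i.e. about 60 percent.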

Icehouse is a milestone

After the perceived hype, OpenStack is well on its way to becoming one of the important cloud standards for private and hybrid cloud environments, alongside Microsoft's Cloud OS. The commitments and investments of almost all big technology vendors point in a clear direction. The new "Icehouse" release is a milestone in terms of stability and functionality, and earlier bugs have been fixed.

– –
* The numbers are based on the official statistics of the OpenStack Foundation.

Categories
Cloud Computing

Amazon AWS builds a data center in Germany: Nice idea!

Amazon Web Services will open a new cloud region targeting the German market by establishing a data center in Germany (Frankfurt). But how exciting is this really for German companies?

Amazon AWS to touch down in Germany

Apparently, Amazon AWS has recognized the importance of the German market and the concerns of German companies. Crisp Research knows from reliable sources that the cloud provider will open a cloud region for the German market with a location in Frankfurt in the coming weeks.

After Salesforce's announcement of a German data center location, Amazon is the next big U.S. cloud provider to follow this trend, which again shows the attractiveness of Germany. After all, most American companies treat the German market rather neglectfully: typically, the majority of American cloud providers serve the European market from data centers in Ireland (Dublin) and the Netherlands (Amsterdam). This reduces their attractiveness, especially for medium-sized German businesses. Consultations with IT users consistently show that storing data outside of Germany under an agreement that is at best based on European law is a no-go.

[Update]: Technical evidence

On July 4, 2014, German blogger Nils Jünemann published an article providing technical evidence of a new AWS region "eu-central-1". Using a traceroute to "ec2.eu-central-1.amazonaws.com" he showed that something is reachable. However, when I ran a traceroute to ec2.eu-central-1.amazonaws.com on July 5, 2014, the host was unknown.
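For reference, such a check can be reproduced from any shell; the hostname is the endpoint cited above, and the result naturally depends on when and from where it is run:

# Resolve the regional endpoint (no answer / NXDOMAIN means the region is not yet visible)
dig +short ec2.eu-central-1.amazonaws.com

# Trace the network path once the name resolves
traceroute ec2.eu-central-1.amazonaws.com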

AWS Portfolio: Slowly reaching the enterprise IT

In addition to the data center site, Amazon announced AWS CloudTrail last year, the first service to give companies more control over compliance. AWS CloudTrail helps to monitor and record the AWS API calls of one or more accounts. It covers calls made from the AWS Management Console, the AWS Command Line Interface (CLI), the company's own applications and third-party applications. The collected data is stored in Amazon S3 or Amazon Glacier for evaluation and can be viewed with tools from AWS or external providers. AWS CloudTrail itself can be used free of charge; however, costs arise for storing the data in Amazon S3 and Amazon Glacier as well as for Amazon SNS notifications.
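As a rough illustration of how such a trail can be switched on via the AWS CLI (the bucket and trail names below are placeholders, and the S3 bucket additionally needs a bucket policy that allows CloudTrail to write to it):

# Create an S3 bucket for the log files
aws s3 mb s3://example-cloudtrail-logs

# Create a trail that records the API calls into the bucket, then start logging
aws cloudtrail create-trail --name example-trail --s3-bucket-name example-cloudtrail-logs
aws cloudtrail start-logging --name example-trail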

AWS CloudTrail is one of the most important services for enterprise customers that Amazon has released in recent times. The collected logs support compliance with government regulations by recording all accesses to AWS services. Based on the log data, security audits become more effective, as the precise origin of vulnerabilities and unauthorized or erroneous access to data can be identified.

Enterprise quo vadis?

After establishing itself as a leading infrastructure provider and enabler for startups and new business models in the cloud, the company from Seattle has been trying for quite some time to get a foothold directly in the lucrative enterprise environment. However, one question remains open: will this be enough to reach a critical mass of German companies and to evolve from a provider for startups and developers into a serious alternative for business IT workloads?

Yes, under certain conditions:

  • Business-related services must be launched simultaneously in all regions, not only in the U.S.
  • AWS requires a network of partners in order to reach the mass of attractive German corporate customers.
  • The localization of all information, such as white papers, how-to’s and training is critical.
  • Less self-service, more managed services and professional services, e.g. through the partner network.
  • Reducing complexity by simplifying the use of the scale-out principle.
  • Cloud Connectivity for reliable access to the services.
  • Avoidance of service lock-in.
  • Strengthening the AWS Marketplace for easier use of scalable standard workloads and applications.
  • Consideration of hybrid cloud scenarios and strengthening of the partner Eucalyptus on the private cloud side.

Note on the Eucalyptus partnership: nearly all Eucalyptus customers are said to also be AWS customers (source: Eucalyptus). Conversely, this means that hybrid cloud infrastructures already exist between on-premise Eucalyptus installations and the Amazon public cloud.

The existing question marks: Microsoft and Google

Medium-sized businesses demand from cloud providers that their data be stored in a German data center. About 75 percent consider a physical data location in Germany a necessity in order to enforce German law more easily.

After Salesforce, IBM and Amazon, Microsoft and Google are the only remaining major cloud providers from whom investments in this direction could be expected.

As for Google, it is unfortunately safe to say that nothing will happen in the near or distant future. The company's DNA and mentality with regard to data location and customer concerns differ too strongly from those of other providers.

Microsoft basically holds good cards; however, the Redmond company does not need to play them right now. Microsoft is pursuing a different strategy with its Cloud OS Partner Network of local providers worldwide (e.g. Pironet NDH in Germany), empowering them with the so-called "Azure Pack" to offer their own Microsoft Azure-based cloud infrastructure as a hosted model from a local data center.

How the trend of building local data centers will develop remains to be seen. The bottom line is that Germany, and especially the Frankfurt location, not least because of the DE-CIX, is well prepared to host additional international cloud providers. A key finding of this development is that international providers have understood the concerns and are willing to make compromises for the benefit of the user.

Categories
Cloud Computing

Analyst Report: Amazon AWS vs. Microsoft Azure

Having had to fight vendors like Novell, Oracle, IBM or HP for on-premise market share in past decades, Microsoft now faces a new giant that has established itself in the public cloud: Amazon Web Services, which is putting out feelers to enterprise customers, a market that is predominantly dominated by Microsoft and that holds enormous potential for both vendors.

Market forecasts by Crisp Research show strong growth of 40 percent per year for the coming years, with revenues in Germany amounting to up to 28 billion euros in 2018. This free analyst report compares the service portfolio as well as the strategy of Amazon Web Services with those of Microsoft Azure.

Categories
Cloud Computing

Analyst Report: OpenStack and the Enterprise adoption

The OpenStack momentum is growing in Germany. The number of requests from CIOs who are evaluating the open-source-based cloud management framework as an alternative to commercial solutions has risen distinctly in the last six months. This is confirmed by internal statistics from Crisp Research.

For more and more enterprises the question arises whether OpenStack will become an integral part of their cloud integration strategy and thus needs to be considered during the planning and implementation of hybrid cloud infrastructures.

As part of several current research projects, Crisp Research considers the question of whether OpenStack is already enterprise-ready and which obstacles need to be overcome for it to become an enterprise-grade cloud management platform. This free analyst report summarizes the project's development, its components and the current deployment scenarios.

Categories
Analysis IT-Infrastructure

Fog Computing: Data, information, applications and services need to be delivered more efficiently to the end user

You read it correctly: this is not about CLOUD computing but FOG computing. Now that the cloud is well on its way to broad adoption, new concepts are following that enhance the use of scalable and flexible infrastructures, platforms, applications and other services in order to deliver data and information to the end user faster. This is exactly the core function of fog computing. The fog ensures that cloud services, compute, storage, workloads, applications and big data can be provided at any edge of a network (the Internet) in a truly distributed way.

What is fog computing?

The fog has the task of delivering data and workloads closer to the user, who is located at the edge of a data connection. In this context one also speaks of "edge computing". The fog is organizationally located below the cloud and serves as an optimized transfer medium for services and data within the cloud. The term "fog computing" was coined by Cisco as a new paradigm that is meant to support distributed devices during wireless data transfer within the Internet of Things. Conceptually, fog computing builds on existing and common technologies like content delivery networks (CDN) but, based on cloud technologies, is meant to enable the delivery of more complex services.

As more and more data must be delivered to an ever-growing number of users, concepts are needed that extend the idea of the cloud and enable companies and vendors to provide their content to the end user over a widely distributed platform. Fog computing is meant to bring the distributed data closer to the end user, thus decreasing latency and the number of required hops, and thereby better supporting mobile computing and streaming services. Besides the Internet of Things, users' rising demand to access data at any time, from any place and with any device is another reason why the idea of fog computing will become increasingly important.

What are use cases of fog computing?

One should not be too confused by this new term. Although fog computing is new terminology, a look behind the curtain quickly reveals that the technology is already used in modern data centers and in the cloud. A few use cases illustrate this.

Seamless integration with the cloud and other services

The fog is not meant to replace the cloud. Rather, fog services enhance the cloud by isolating the user data that resides exclusively at the edge of a network. From there, administrators can connect analytical applications, security functions and other services directly to the cloud. The infrastructure is still based entirely on the cloud concept but extends to the edge with fog computing.

Services that sit vertically on top of the cloud

Many companies and services already use the ideas of fog computing by delivering extensive content to their customers in a targeted way. These include, among others, web shops and providers of media content. A good example is Netflix, which has to reach its numerous globally distributed customers. Managing the data in only one or two central data centers would not be efficient enough to deliver a video-on-demand service. Fog computing thus makes it possible to provide very large amounts of streamed data by delivering the data performantly into the direct vicinity of the customer.

Enhanced support for mobile devices

With the steady growth of mobile devices and data, administrators gain more control over where users are located at any given time, from where they log in and how they access information. Besides faster access for the end user, this leads to a higher level of security and data privacy, since data can be controlled at the various edges. Moreover, fog computing allows better integration with several cloud services and thus ensures an optimized distribution across multiple data centers.

Setting up a tight geographical distribution

Fog computing extends existing cloud services by spanning an edge network that consists of many distributed endpoints. This tightly geographically distributed infrastructure offers advantages for a variety of use cases. These include faster collection and analysis of big data, better support for location-based services, since entire WAN links can be bridged more effectively, and the capability to evaluate data at massive scale in real time.

Data is closer to the user

The amount of data generated by cloud services requires caching or other services that take care of this task. These services are located close to the end user to improve latency and optimize data access. Instead of storing data and information centrally in a data center far away from the user, the fog ensures the direct proximity of the data to the customer.

Fog computing makes sense

You can think what you want about buzzwords; it only becomes interesting once you look behind the curtain. The more services, data and applications are delivered to the end user, the more vendors have to find ways to optimize the delivery processes. This means that information needs to be delivered closer to the user while latency is reduced, in order to be prepared for the Internet of Things. There is no doubt that the consumerization of IT and BYOD will increase the use, and therefore the consumption, of bandwidth.

More and more users rely on mobile solutions to run their business and to balance it with their personal lives. Increasingly rich content and data are delivered over cloud computing platforms to the edges of the Internet, where at the same time the needs of the users are growing larger and larger. With the increasing use of data and cloud services, fog computing will play a central role, helping to reduce latency and improve quality for the user. In the future, besides ever larger amounts of data, we will also see more services that rely on this data and that must be delivered to the user more efficiently. With fog computing, administrators and providers gain the capability to provide their customers rich content faster, more efficiently and, above all, more economically. This leads to faster access to data, better analysis opportunities for companies and equally to a better experience for the end user.

Above all, Cisco will want to shape the term fog computing in order to use it for a large-scale marketing campaign. However, at the latest when the fog generates a buzz similar to that of the cloud, we will see more and more CDN and other vendors positioning themselves as fog providers.

Categories
Analysis

Survey: Your trust in the Cloud. Europe is the safe haven. End-to-end encryption creates trust.

After the revelations about PRISM I started a small anonymous survey on the current confidence in the cloud, to see how the scandal has changed people's personal relationship to it. The significance of the result is mixed. Participation was anything but representative: with 1,499 visits, interest in the survey was relatively large, but a turnout of 53 respondents is rather sobering. Thus, the survey is not representative, but it at least shows a trend. In this context I would like to thank Open-Xchange and Marlon Wurmitzer of GigaOM for their support.

The survey

The survey consisted of nine questions and was publicly hosted on twtpoll. It exclusively asked questions about trust in the cloud and how this trust could possibly be strengthened. In addition, the intermediate results were publicly available at all times. The survey was distributed in German- and English-speaking countries on the social networks (Twitter, Facebook, Google Plus) and the business networks XING and LinkedIn, because this issue does not affect one specific target audience but has an impact on all of us. On twtpoll this led to 1,442 views on the web and 57 views from mobile devices, and ended with 53 respondents.

For this reason the survey should not be considered representative, but it does show a tendency.

The survey results

Despite the PRISM scandal, confidence in the cloud is still present. 42 percent continue to have high confidence, 8 percent even a very high level of confidence. For 15 percent confidence in the cloud is very low; 21 percent rate their confidence as low. Another 15 percent are neutral towards the cloud.

Confidence in the respondents' own cloud provider is balanced. 30 percent of respondents still have a high level of confidence, 19 percent even a very high level of trust in their providers. This compares to 15 percent each with low or very low confidence. 21 percent are undecided.

The impact of PRISM on confidence in the cloud holds no surprise. Only 9 percent see no effect for themselves, 8 percent a slight one. 32 percent are neutral. However, 38 percent of the participants are strongly influenced by the PRISM revelations and 13 percent very strongly.

62 percent of the participants use services of cloud providers that are accused of supporting PRISM. 38 percent are with other providers.

As was to be expected, PRISM has also affected the reputation of the cloud providers. For 36 percent the revelations have strongly influenced their confidence, for 13 percent even very strongly. However, 32 percent are neutral. For 11 percent the revelations have only a slight influence, and for 8 percent no influence at all.

Despite PRISM, 58 percent want to continue using cloud services. 42 percent have already toyed with the idea of leaving the cloud because of the incidents.

A clear signal goes to the providers when it comes to openness. 43 percent (very high) and 26 percent (high) expect unconditional openness from their cloud provider. 25 percent are undecided. Only for 2 percent (low) and 4 percent (very low) does it not matter.

74 percent see in 100 percent end-to-end encryption a way to increase confidence in the cloud. 26 percent see no such potential.

The question of the most secure/trusted region revealed no surprises. With 92 percent, Europe counts as the top region in the world after the PRISM revelations. Africa received 4 percent, North America and Asia-Pacific 2 percent each. Nobody voted for South America.

Comment

Even if the revelations about PRISM caused indignation at first and continue to create uncertainty, economic life must go on. The tendency of the survey shows that confidence in the cloud has not suffered too much. But at this point it must be said: cling together, swing together! We have not all plunged into cloud ruin overnight. The crux is that the world is increasingly interconnected through cloud technologies, and the cloud thus serves as the focal point of a modern communications and collaboration infrastructure.

For that reason we cannot take many steps back. A hardliner might of course terminate all digital and analog communication with immediate effect. Whether that is promising is doubtful, because the dependency has become too great and modern corporate existence is determined by digital communication.

The sometimes high number of neutral responses regarding trust may have to do with the fact that we have all always entertained, somewhere in the subconscious, the thought that our communication is being observed. Due to the current revelations we now have it in black and white. The extent of the surveillance, meanwhile also including the disclosure of TEMPORA by the British secret service, has been surprising. With regard to TEMPORA, the survey result for Europe as a trusted region is therefore disputable. But against surveillance at strategic intersections of the Internet, even the cloud providers themselves are powerless.

The bottom line is that economic life has to go on. But all these revelations show that we cannot rely on governments, from which regulations and safeguards are repeatedly demanded. On the contrary, even they have shown an interest in reading the data. And one thing we must always keep in mind: how are laws and rules supposed to help when they are broken again and again by the highest authority?

Companies and users must therefore assume more responsibility, take the reins into their own hands and, in the broadest sense, provide for their desired level of security (end-to-end encryption) themselves. Numerous solutions from the open source as well as the commercial sector help to achieve these objectives. Providers of cloud and IT solutions are now challenged to show more openness than they might like.

Graphics on the survey results

1. How is your current trust in the cloud in general?

2. How is your current trust in the cloud provider of your choice?

3. How do the PRISM revelations influence your trust in the cloud?

4. Is your current cloud provider one of the accused?

5. How do the PRISM revelations influence your trust in the cloud provider of your choice?

6. Have you already thought about leaving the cloud or your cloud provider due to the PRISM revelations?

7. How important is the unconditional openness of your provider in times of PRISM and surveillance?

8. Do you think 100% end-to-end encryption, without any access or other opportunities for third parties, can strengthen trust?

9. In your mind, which world region is the safest/most trustworthy to store data in?

Categories
Insights @en

Building a hosted private cloud with the open source cloud computing infrastructure solution openQRM

Companies have recognized the benefits of making their IT infrastructure more flexible. However, the recent past has reinforced the desire to avoid the path to the public cloud for reasons of data protection and information security. Therefore alternatives need to be evaluated. A private cloud would be one, if it did not end in high up-front investments in the company's own hardware and software. The middle way is to use a hosted private cloud. This type of cloud is already offered by some providers; however, it is also possible to build and run one yourself. This INSIGHTS report shows how this can be done with the open source cloud computing infrastructure solution openQRM.

Why a Hosted Private Cloud?

Companies are encouraged to make their IT infrastructure more flexible in order to scale their resource requirements depending on the situation. Ideally, the use of a public cloud meets these requirements, since no upfront investments in the company's own hardware and software are necessary. However, many companies dread the move into the public cloud for reasons of data protection and information security and look for an alternative: the private cloud. The main advantage of a private cloud is flexible self-service provisioning of resources for staff and projects, as in a public cloud, which is not possible with a pure virtualization of the data center infrastructure. However, it should be noted that building a private cloud requires investments in the IT infrastructure, so that the virtual resource requirements are backed by a physical foundation.

Therefore, an appropriate balance needs to be found that allows flexible self-service access to resources but at the same time does not demand large investments in the company's own infrastructure components, and that does not give up a self-determined level of data protection and security. This balance can be found by hosting a private cloud with an external (web) hoster. The necessary physical servers are rented from a hoster who is responsible for their maintenance. In order to cover future physical resource requirements, appropriate arrangements should be made with the hoster so that the hardware is available in time. Standby servers or similar approaches are possible options.

On this external server and storage infrastructure, the cloud infrastructure software is then installed and configured as a virtual hosted private cloud. This allows employees, for example, to start their own servers for software development according to their needs and to freeze or remove them again after the project. Billing of the resources used is handled by the cloud infrastructure software, which provides such functions.

openQRM Cloud

Basically, an openQRM Cloud can be used to build both a public and a private cloud. It is completely based on openQRM's appliance model and offers fully automated deployments that can be requested by cloud users. For this, openQRM Cloud supports all the virtualization and storage technologies that are also supported by openQRM itself. It is also possible to provision physical systems through the openQRM Cloud.

Based on openQRM Enterprise Cloud Zones, a fully distributed openQRM Cloud infrastructure can also be built. Thus, several separate data centers can be divided into logical areas, or the company topology can be mapped hierarchically and logically with safe separation. Moreover, openQRM Enterprise Cloud Zones integrates a central, multilingual cloud portal including Google Maps integration, creating an interactive overview of all sites and systems.

Structure of the reference environment

To build our reference setup, a physical server and multiple public IP addresses are required. There are two options for installing openQRM:

  • Recommended: configuration of a private class C subnet (192.168.x.x/255.255.255.0) in which openQRM is operated. openQRM requires an additional public IP address for access from the outside.
  • Option: install openQRM in a virtual machine. In this variant openQRM controls the physical server and obtains the virtual machines from the physical host for the subsequent operation of the cloud.

For the assignment of public IP addresses, cloud NAT can be used in both scenarios. This openQRM Cloud function translates the IP addresses of the private openQRM class C network into public addresses. This requires pre- and post-routing rules on the gateway/router using iptables, configured as follows (a note on the required IP forwarding setting follows the rules):

  • iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o br0 -j MASQUERADE
  • iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE
  • More information on pre- and post-routing with iptables can be found at http://www.karlrupp.net/en/computer/nat_tutorial
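One detail that the rules above assume but do not show: IP forwarding must be enabled on the gateway, otherwise the masqueraded packets are not passed on. A minimal sketch (persist the setting in /etc/sysctl.conf if it should survive a reboot):

# Enable IPv4 forwarding for the current session
sysctl -w net.ipv4.ip_forward=1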

For the configuration of complex network environments, the IP management plugin is recommended. This enterprise plugin allows arbitrary network and IP address configurations to be set for the managed servers. In the openQRM Cloud, it also provides a mapping of networks to cloud users and groups and supports automated VLAN management.

In addition, two bridges are needed (a configuration sketch follows this list):

  • One for the public interface with a public IP address.
  • One for the private interface dpe, for which DHCP is configured.
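As an illustration, a bridge definition on a Debian-based system could look like the following /etc/network/interfaces entry; the bridge and interface names (br1, eth1) and the address are placeholders for this sketch, and the bridge-utils package is assumed to be installed:

# Bridge for the private openQRM management network (placeholder names/addresses)
auto br1
iface br1 inet static
    address 192.168.0.1
    netmask 255.255.255.0
    bridge_ports eth1
    bridge_stp off
    bridge_fd 0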

The data in the cloud are later stored in the local storage of the physical server. For this purpose, there are two variants:

Recommended:

  • KVM-Storage LVM Deployment (LVM Logical Volume Deployment)
  • Requires one or more dedicated LVM volume group(s) for the virtual machines. For more complex setups a central iSCSI target or a SAN is recommended.

Option:

  • KVM-Storage BF Deployment (blockfile deployment)
  • Create one or more directories on the Linux server, e.g.
    • /var/lib/kvm-storage/storage1
    • /var/lib/kvm-storage/storage2
    • (The storage directories can be set arbitrarily on the plugin configuration.)

  • For more complex setups, a central NAS for the configured mount points should be used.

Finally, iptables must be configured according to the rules above and to the desired level of security. After that, openQRM is installed. Packages for popular Linux distributions are available at http://packages.openqrm.com. Once openQRM has been installed and initialized, the configuration follows.

Basic configuration of openQRM

The first step after initialization is to edit "/usr/share/openqrm/plugins/dns/etc/openqrm-plugin-dns.conf" and change the default value to your own domain:

Configure the domain for the private network:
# please configure your domain name for the openQRM network here!
OPENQRM_SERVER_DOMAIN="oqnet.org"

After that we activate and start the plug-ins via the web interface of the openQRM server. The following plugins are absolutely necessary for this:

DNS Plugin

  • Used for the automated management of the DNS service for the openQRM management network.

DHCPD

  • Automatically manages the IP addresses for the openQRM management network.

KVM Storage

  • Integrates the KVM virtualization technology for the local deployment.

Cloud-Plugin

  • Allows the construction of a private and public cloud computing environment with openQRM.

Further additional plugins are recommended:

Collectd

  • A monitoring system including long-term statistics and graphics.

LCMC

  • Integrates the Linux Cluster Management Console to manage the high availability of services.

High-Availability

  • Enables automatic high availability of appliances.

I-do-it (Enterprise Plugin)

  • Provides an automated documentation system (CMDB).

Local server

  • Integrates existing, locally installed servers with openQRM.

Nagios 3

  • Automatically monitors systems and services.

NoVNC

  • Provides a remote web console for accessing virtual machines and physical systems.

Puppet

  • Integrates Puppet for a fully automated configuration management and application deployment in openQRM.

SSHterm

  • Allows secure login to the openQRM server and integrated resources via a web shell.

Plugins which offer more convenience for the automatic installation of virtual machines as cloud templates are:

Cobbler

  • Integrates Cobbler for the automated provisioning of Linux systems in openQRM.

FAI

  • Integrates FAI for the automated provisioning of Linux systems in openQRM.

LinuxCOE

  • Integrates LinuxCOE for the automated provisioning of Linux systems in openQRM.

Opsi

  • Integrates Opsi for the automated provisioning of Windows systems in openQRM.

Clonezilla/local-storage

  • Integrates Clonezilla for the automated provisioning of Linux and Windows systems in openQRM.

Basic configuration of the host function for the virtual machines

Case 1: openQRM is installed directly on the physical system

Next, the host must be configured to provide the virtual machines. For that, an appliance of type KVM Storage Host is created. This works as follows:

  • Create appliance
    • Base > Appliance > Create
  • Name: e.g. openQRM
  • Select the openQRM server itself as resource
  • Type: KVM Storage Host

This gives openQRM the information that a KVM storage is to be created on this machine.

Case 2: openQRM is installed in a virtual machine running on the physical system

Using the "local server" plugin, the physical system is integrated into openQRM. To do this, the "openqrm-local-server" integration tool is copied from the openQRM server to the system to be integrated, e.g.

scp /usr/share/openqrm/plugins/local-server/bin/openqrm-local-server [ip-address of the physical system]:/tmp/

After that, it is executed on the system to be integrated:

ssh [ip-address of the physical system] /tmp/openqrm-local-server integrate -u openqrm -p openqrm -q [ip-address of the openQRM server] -i br0 [-s http/https]

(In this example “br0” is the bridge to the openQRM management network.)

The integration via "local server" automatically creates the following in openQRM:

  • a new resource
  • a new image
  • a new kernel
  • a new appliance from the sub-components above

Next, the appliance of the newly integrated physical system must be configured to provide the virtual machines. For this, the appliance type is set to KVM Storage Host. This works as follows:

  • Edit the appliance
    • Base > Appliance > Edit
  • Type: Set KVM Storage Host

This gives openQRM the information that a KVM storage is to be created on this machine.

Basic configuration of the storage function

Now the basic configuration of the storage follows. For this purpose, a storage object of the desired type is created. This works as follows:

  • Create storage
    • Base > Components > Storage > Create
  • Case 1: select the resource of the openQRM server
  • Case 2: select the resource of the integrated physical system
  • Name: e.g. KVMStorage001
  • Select deployment type
    • This depends on the selected type at the beginning: KVM-Storage LVM deployment or directory (KVM-Storage BF deployment)

Preparation of virtual machine images

In order to later provide virtual machines (VMs) over the cloud portal as part of finished products, an image for a VM must first be prepared. This works as follows:

  • Create a new virtual machine with a new virtual disk and install an ISO image on it.
    • Plugins > Deployment > LinuxCOE > Create Templates
    • The created images are automatically stored in an ISO pool which each virtual machine within openQRM can access.

Subsequently, a basis for the master template is created. This serves as the foundation for providing users with a product through the order process.

  • Create a new appliance
    • Base > Appliance > Create
  • Create a new resource
    • KVM-Storage virtual machine
      • Create a new VM
      • Make settings
      • Select an ISO image
      • Create
    • Select created resource
  • Create a new image
    • Add image as KVM-Storage volume
    • Select KVM-Storage
    • Select volume group on KVM-Storage
    • Add a new logical volume
    • Select an image for the appliance
    • Edit to set a password (The previously chosen password of the ISO is overridden.)
  • Select kernel
    • From the local disk
    • (LAN boot is also possible)
  • Start appliance
    • The automatic installation can now be tracked over VNC.
    • Further adaptations can be made manually.
    • Please consider
      • Misc > Local-Server > Help > Local VMs ("Local-Server for local virtual machines")

Cleaning up

The created appliance can now be stopped and deleted afterwards. The important point was to create an image that can be used as a master template for the cloud.

The image created via the appliance contains the basic operating system that was installed from the ISO image.

Configuration of the openQRM Cloud

We have now finished all preparations and can start configuring the openQRM Cloud. The necessary settings can be found at "Plugin > Cloud > Configuration > Main Config". All parameters adjusted here have a direct impact on the behavior of the whole cloud.

Basically, an openQRM Cloud can be run with its default settings. Depending on the needs and the specific situation, adaptations can be made. The "description" entries in the right column of the table are helpful here.

However, there are parameters which need to be considered regardless of the specific use case. These are:

Automatic provisioning (auto_provision)

  • Determines whether systems are provisioned automatically by the cloud or whether the approval of a system administrator is needed.

Provisioning of physical systems (request_physical_systems)

  • This parameter defines whether, besides virtual machines, physical hosts can also be provisioned by the cloud.

Cloning of images (default_clone_on_deploy)

  • By default the cloud rolls out copies (clones) of an image.

High-availability (show_ha_checkbox)

  • Enables the openQRM Cloud to be operated with high availability for the provided resources.

Billing of the used resources (cloud_billing_enabled)

  • openQRM has an extensive billing system for setting custom prices for all resources, which gives a transparent overview of the running costs.

Cloud product manager (cloud_selector)

  • Enables the product manager, which is used to offer users various resources over the cloud portal.

Currency for the settlement of resources (cloud_currency)

  • Determines the local currency in which the resources are settled.

Exchange ratio for resources in real currency (cloud_1000_ccus)

  • Determines how much 1,000 CCUs (Cloud Computing Units) are worth in the previously defined real currency.

Resource allocation for groups (resource_pooling)

  • Determines from which host a particular user group receives its virtual machines.

Creating products for the openQRM Cloud

To provide our users with resources over the cloud portal, we first have to create products which define the configuration of a virtual machine. The settings for this can be found at "Plugin > Cloud > Configuration > Products".

The "Cloud product management" is used to create various products from which users can later choose to assemble their own virtual machines over the cloud portal. The product categories available to us are:

  • Number of CPUs
  • Size of local disks
  • Size of RAM
  • Kernel type
  • Number of network interfaces
  • Pre-installed applications
  • Virtualization type
  • Whether a virtual machine should be highly available

Via the status line, each product can be activated or deactivated using +/- in order to show or hide it for the user in the cloud portal.

Please note: Products which are deactivated but are still active within a virtual machine continue to be billed.

To create a new CPU product, we select the "CPU" tab and define the desired parameters in the area "Define a new CPU product".

The first parameter defines how many CPUs (cores) our product should have, here 64. The second parameter determines the value of the product, i.e. what costs are incurred per hour of use. In this example, 64 CPUs cost 10 CCUs per hour.

With the arrow keys, the order in which the individual products are displayed in the cloud portal can be determined. The topmost value is the default.

Please note: the cloud portal contains standard profiles in the sizes "small", "medium" and "big". The profiles are derived automatically from the order of the respective products. That means "small" always takes the first value, "medium" the second and "big" the third.

openQRM also allows virtual machines with pre-configured software stacks to be ordered. For this, openQRM uses Puppet (Plugins > Deployment > Puppet). Thus, for example, it is possible to order the popular LAMP stack.

Once we have configured our product portfolio, it is the users' turn to order virtual machines. This is done via the cloud portal.

openQRM Cloud-Portal

To create a new virtual machine (VM), we click on the tab "New". An input mask follows on which we can configure our VM based on the products the administrator has defined and approved in the backend.

We choose the profile “Big” and a LAMP server. Our virtual machine now consists of the following products:

  • Type: KVM-Storage VM
  • RAM: 1 GB
  • CPU: 64 cores
  • Disk: 8 GB
  • NIC: 1

In addition, the virtual machine should be "highly available". This means that if the VM fails, a substitute machine with exactly the same configuration is started automatically so that work can continue.

For this configuration we will pay 35 CCUs per hour. This is equivalent to roughly 0.04 euros per hour, or €0.84 per day and €26.04 per month.
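A quick plausibility check of these figures, assuming an exchange ratio of 1,000 CCUs per euro (the cloud_1000_ccus setting described above), which is what the portal figures imply:

35 CCUs/hour ÷ 1,000 CCUs/EUR = 0.035 EUR per hour (rounded to 0.04 in the portal)
0.035 EUR/hour × 24 hours = 0.84 EUR per day
0.84 EUR/day × 31 days = 26.04 EUR per month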

If we want to order the virtual machine we select “send”.

Under the tab "Orders" we see all current and past orders made with our user. The status "active" in the first column shows that the machine has already started.

In parallel we receive an e-mail containing the IP address, a username and a password that we can use to log into the virtual machine.

The tab "Systems" confirms this information and shows further details of the virtual machine. In addition, we have the option to change the system configuration, pause the virtual machine or restart it. Furthermore, login via a web shell is possible.

If the virtual machine is no longer needed, it can be paused. Alternatively, the administrator can arrange this based on inactivity of the system or at a specific time.

Creating a virtual machine with the „Visual Cloud Designer“

Besides the "ordinary" way of building a virtual machine, the openQRM Cloud portal enables the user to do so conveniently via drag and drop. The "Visual Cloud Designer" helps here, which can be found behind the tab "VCD".

Using the slider on the left below "Cloud Components" it is possible to scroll through the products. With the mouse, the "Cloud Appliance" (virtual machine) in the middle can be assembled from the appropriate products.

In this case we assembled our virtual machine "Testosteron" with KVM-Storage, Ubuntu 12.04, 64 CPUs, 1,024 MB RAM, an 8 GB disk, one NIC, web server software and the high-availability feature.

With one click on “Check Costs”, openQRM tells us that we will pay 0.03 EUR per hour for this configuration.

To start the ordering process for the virtual machine, we click "request". We get the message that openQRM is starting to roll out the resource and that we will receive further information by e-mail.

The e-mail includes, as described above, all access data to work with the virtual machine.

In the cloud portal under “systems” we already see the started virtual machine.

Creating a virtual machine with the „Visual Infrastructure Designer“

Besides provisioning single virtual machines, the openQRM Cloud portal also offers the option of providing complete infrastructures, consisting of multiple virtual machines and further components, with one click.

For this we use the "Visual Infrastructure Designer", which can be found in the cloud portal behind the tab "VID".

Using the "VID" it is possible to build and deploy a complete infrastructure WYSIWYG-style via drag and drop. For this purpose, it is first necessary to create ready-made profiles with pre-configured virtual machines, for example web servers, routers or gateways, which can then be deployed.

Categories
Analysis

How to protect a company's data from surveillance in the cloud?

With PRISM, the U.S. government has further increased the uncertainty among Internet users and companies and has thereby enormously strengthened the loss of confidence in U.S. vendors. After the Patriot Act, which was often cited as the main argument against the use of cloud solutions from US-based providers, the surveillance by the NSA is the final straw. From a business perspective, under the present circumstances, the decision can only be to opt out of a cloud provider in the United States, even if it has a subsidiary with a location and a data center in Europe or Germany; I already pointed this out in an earlier article. Nevertheless, economic life must go on, and it can also work with the cloud. However, attention must be paid to technical security, which is discussed in this article.

Affected parties

This whole issue does not only concern companies but every user who actively communicates in the cloud and shares and synchronizes data, even though the issue of data protection cannot be neglected in this context. For companies, however, there is usually more at stake when internal company information is intercepted or voice and video communication is observed. At this point it must be mentioned that this primarily has nothing to do with the cloud. Data communication existed long before cloud infrastructures and services. However, the cloud leads to ever closer interconnection and will act as the focal point of modern communications and collaboration infrastructure in the future.

The current security situation

The PRISM scandal shows the full extent of the possibilities that allow U.S. security agencies to access global data communication unimpeded and without restraint. For this, the U.S. government officially relies on the "National Security Letter (NSL)" of the U.S. Patriot Act and the "Foreign Intelligence Surveillance Act (FISA)". Due to these anti-terror laws, U.S. vendors and their subsidiaries abroad are obliged to provide details about requested information.

As part of the PRISM revelations there is also speculation about supposed interfaces, "copy rooms" or backdoors at the providers, through which third parties can tap the data directly and freely. However, the providers deny this vehemently.

U.S. vendors: no, thank you?

When choosing a cloud provider*, different segments are considered which can be roughly divided into technical and organizational areas. Here the technical area reflects technical security and the organizational area legal security.

Organizational security is to be treated with caution. The Patriot Act legally opens the doors to the U.S. security agencies if there is a suspected case. How far this remains within the legal framework is something many now doubt. At this point, trust is essential.

Technologically, the data centers of cloud providers can be classified as secure. The effort and investment made by the vendors cannot be matched by a normal company. But again, 100 percent security can never be guaranteed. If possible, users should also employ their own security mechanisms. Furthermore, the rumors about direct government taps by the NSA should not be ignored.

For two U.S. phone companies, confirmed reports are circulating about direct NSA access to their communications and about heavily secured rooms equipped with modern surveillance technology. In this context it should also be considered to what extent providers of on-premise IT solutions are undermined.

In both respects, and given the current security situation, U.S. vendors should be treated with caution. This also applies to their subsidiaries in the EU. After all, they are not even able to provide the minimum necessary legal security.

But even the German intelligence service should not be ignored. Recent reports indicate that the Federal Intelligence Service (BND) will also massively expand its surveillance of the Internet. A budget of 100 million euros is earmarked for this, of which the federal government has already released five million. Unlike the NSA, the BND will not store the complete data traffic on the Internet but only check it for certain suspicious content. For this purpose it may read along up to 20 percent of the communication data between Germany and other countries, according to the G 10 Act.

Hardliners would have to cease all digital and analog communication immediately. But this will not work, because the dependency has become too great and modern business life is determined by communication. Therefore, despite surveillance, other legal ways must be found to ensure secure communication and data transmission.

* In this context a cloud provider can be a service provider or a provider of private cloud or IT hardware and software solutions.

Requirements for secure cloud services and IT solutions

First, it must be clearly stated that there is no universal remedy. The risk always originates with the user, who is either not aware of the dangerous situation or who steals corporate data on purpose. Regardless of this, the PRISM findings lead to a new security assessment in the IT sector, and it is to be hoped that this also increases the security awareness of users.

Companies can obtain support from cloud services and IT solutions that have made unconditional security part of their leitmotif from the beginning. Under the present circumstances, these providers should preferably come from Europe or Germany.

Even if there are already first reports of interventions and influence by the U.S. government and U.S. providers on the European Commission, which prevented an "anti-FISA" clause in the EU data protection reform, no laws comparable to the U.S. Patriot Act or FISA exist in Europe.

Therefore, European and German IT vendors, which are not subject to the Patriot Act and are not infiltrated by the state, can help even U.S. users to operate their data communication securely.

Criteria for vendor selection

When it comes to security, it is always about trust. A provider can only achieve this trust through openness, by letting its customers look at its technological cards. IT vendors are often criticized for being closed and for not providing information about their proprietary security protocols. This is only partly justified, because there are also providers who are willing to talk about it and make no secret of it. It is therefore important to find this kind of provider.

In addition to the subjective issue of trust, it is in particular the implemented security that plays a very important role. Here it should be ensured that the provider uses current encryption mechanisms (a short command-line illustration follows the list). These include:

  • Advanced Encryption Standard – AES 256 to encrypt the data.
  • Diffie-Hellman and RSA 3072 for key exchange.
  • Message Digest 5/6 – MD5/MD6 for the hash function.
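As a rough, minimal illustration of what the first two items look like in practice, here using nothing but standard OpenSSL command-line calls (file names are placeholders; a real product wraps this in a proper protocol rather than ad-hoc commands):

# AES-256: encrypt a file with a freshly generated 256-bit key
openssl rand -hex 32 > aes.key
openssl enc -aes-256-cbc -pbkdf2 -salt -in document.txt -out document.txt.enc -pass file:aes.key

# RSA 3072: generate a key pair for key exchange; the private key stays with the user,
# protected by an AES-256 passphrase
openssl genrsa -aes256 -out private.pem 3072
openssl rsa -in private.pem -pubout -out public.pem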

Furthermore, end-to-end encryption of all communication is becoming increasingly important. This means that the entire process a user goes through in the solution is continuously encrypted from beginning to end. This includes, among others:

  • The user registration
  • The Login
  • The data transfer (send/receive)
  • Transfer of key pairs (public/private key)
  • The storage location on the server
  • The storage location on the local device
  • The session while a document is edited

In this context it is very important to understand that the private key used to access the data and the system may be owned exclusively by the user, and may only be stored, encrypted, on the user's local system. The vendor must have no way to restore this private key and must never get access to the stored data. Caution: there are cloud storage providers that can both restore the private key and obtain access to the user's data.

Furthermore, there are vendors who emphasize keeping control over one's own data. This is indeed a valid point. However, sooner or later it becomes inevitable to communicate with the outside world, and then strict end-to-end encryption is essential.

Management advisory

In this context, I would like to mention TeamDrive, which I analyzed recently. The German file sharing and synchronization solution for businesses has been awarded the Data Protection Seal of the "Independent Centre for Privacy Protection Schleswig-Holstein (ULD)" and is a Gartner "Cool Vendor in Privacy" 2013. From time to time TeamDrive is described in the media as proprietary and closed. I cannot confirm this. For my analysis, TeamDrive willingly gave me extensive information (partly under NDA). Even the self-developed protocol is disclosed on request for an audit.

More information on selecting a secure share, sync and collaboration solution

I would like to point out my security comparison of TeamDrive and ownCloud, in which I compared both security architectures. The comparison also provides further criteria to consider when choosing a secure share, sync and collaboration solution.

Categories
Analysis

Survey: How is your current trust in the cloud?

After the revelations on PRISM I have started a small anonymous survey to see how current confidence in the cloud stands and how the scandal has changed people's personal relationship to it.

The questions

  • How is your current trust in the cloud in general?
  • How is your current trust in the cloud provider of your choice?
  • How do the PRISM revelations influence your trust in the cloud?
  • Is your current cloud provider one of the accused?
  • How do the PRISM revelations influence your trust in the cloud provider of your choice?
  • Have you already thought about leaving the cloud or your cloud provider due to the PRISM revelations?
  • How important is the unconditional openness of your provider in times of PRISM and surveillance?
  • Do you think 100% end-to-end encryption, without any access or other opportunities for third parties, can strengthen trust?
  • In your mind, which world region is the safest/most trustworthy to store data in?

To participate in the survey, please follow this link:

Your trust in the Cloud! – After the PRISM uncoverings how is your trust in the cloud?