Services @en @de

TeamDrive SecureOffice: Seamless and secure document processing for smartphones and tablets

With “TeamDrive SecureOffice” the German vendor TeamDrive, known for its secure enterprise cloud storage solutions, and Picsel, the Smart Office manufacturer, have joined forces to launch the first secure office solution for iOS and Android smartphones and tablets. TeamDrive SecureOffice presents itself as the first Dropbox-like synchronization solution with built-in office document handling and end-to-end encryption.

End-to-end encryption for enterprises and government agencies

With the integration of TeamDrive's security-tested and certified sync & share technology with Picsel's Smart Office solution for mobile devices, the two companies strive to set a milestone for secure productivity solutions on mobile devices. According to both companies, TeamDrive SecureOffice is the first technology of its kind that covers the needs of companies and government agencies with complete end-to-end encryption while allowing data to be shared by different users on mobile devices.

A sandbox provides seamless security on the device

Based on sandbox technology, shared documents never leave the secure environment provided by the application. Complete end-to-end encryption is initiated when employees send and receive files via mobile devices. Data stored in the application remains within the secure environment of the application even when, for example, a Word document is opened. The document is forwarded directly to the internally implemented Smart Office application and opened without ever leaving TeamDrive SecureOffice’s secure environment. With the integration of Smart Office in TeamDrive, an encrypted synchronization solution for iPhones, iPads and Android smartphones will be realized.
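To make the idea of end-to-end encryption concrete, here is a minimal sketch in Python. It is a toy construction for illustration only: the XOR-with-SHA-256-keystream cipher is NOT TeamDrive's actual cryptography, and a real product would use vetted primitives such as AES. The point it demonstrates is that the document is encrypted on the device before synchronization, so the sync server only ever sees ciphertext.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random keystream by chaining SHA-256 blocks (toy construction).
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with the keystream; decryption is the same operation.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

# The document is encrypted on the device before it is synchronized;
# the sync server only ever holds the ciphertext.
key = b"shared-space-key"
document = b"Quarterly report (confidential)"
ciphertext = encrypt(key, document)
assert ciphertext != document                 # server-side copy is unreadable
assert encrypt(key, ciphertext) == document   # only key holders can decrypt
```

Only devices that hold the shared key can turn the synchronized ciphertext back into a readable document, which is exactly the property the sandbox preserves when a file is handed to the built-in office viewer.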

Please note: On Android, the encryption for the device itself and the SD card needs to be specifically activated in order to ensure the highest level of security. On the iPhone and iPad, this is automatically active.

Features of TeamDrive SecureOffice

TeamDrive SecureOffice offers numerous functions. These functions include:

  • Support for multiple operating systems.
  • Powerful viewing and editing functions.
  • Comprehensive support for file formats such as Microsoft Word, PowerPoint, Excel and Adobe PDF.
  • Automatic document adjustment by zooming and panning.

Furthermore, the user can expect a reliable printing solution for mobile devices, the ability to export files from an office application to Adobe PDF, and support for annotating PDF documents with comments.

Secure mobile collaboration is becoming increasingly vital

The safe use of corporate data is a top priority. As the ongoing transition to a mobile society continues, the way we work and collaborate is bound to progress and change. The introduction and tolerance of Bring Your Own Device (BYOD) in enterprises has led to a new era of protection needs, needs which present a particular challenge for any organization wanting to protect critical data and intellectual property to the fullest. New technological advances are therefore more than necessary to help protect against the loss of sensitive data on mobile devices without losing sight of simplicity and ease of use.

TeamDrive has proven itself to be the safest German vendor of sync, share and cloud storage solutions. The company leverages its expertise and proven security technology combined with Picsel’s Smart Office to increase the productivity of employees through the use of mobile devices. At the same time, both companies break fresh ground with their sandbox technology and give the critical issues of data security and data loss special attention and value.

For large companies, government agencies and any other organization working with particularly sensitive data, it is crucial to adapt to the mobile habits of employees and to respond to these habits with appropriate solutions. A first look at the Android version of the app displays how the seamless workflow of Picsel’s Smart Office for mobile devices in conjunction with TeamDrive’s cloud-based sync and encryption technology for business can help to achieve this goal.


openQRM 5.1 is available: OpenStack and Eucalyptus have a serious competitor

openQRM project manager Matt Rechenburg already told me about some of the new features of openQRM 5.1 at the end of July. Now the final release is out, with additional functions, and the overall picture looks really promising. The heading might suggest that openQRM is something completely new that is trying to outstrip OpenStack or Eucalyptus, but that is not the case by a long shot. openQRM has existed since 2004 and has built up a wide range of functions over the years. Unfortunately, the open source project from Cologne, Germany got a little lost in the crowd, because the marketing machines behind OpenStack and Eucalyptus run at full speed and their loudspeakers are many times louder. Yet openQRM certainly need not hide from them. On the contrary: with version 5.1, Rechenburg's team has again vigorously added functions and in particular enhanced the user experience.

The new features in openQRM 5.1

openQRM-Enterprise also sees the hybrid cloud as important for the future of the cloud. For this reason, openQRM 5.1 was extended with a plugin to span a hybrid cloud across the cloud infrastructures of Amazon Web Services (AWS), Eucalyptus Cloud and Ubuntu Enterprise Cloud (UEC). openQRM thus uses AWS as well as Eucalyptus and UEC as resource providers for more computing power and storage, and is able to manage public cloud infrastructures as well as local virtual machines. This allows administrators, based on openQRM, to transparently provide their end-users with Amazon AMIs via a cloud portal and to monitor the state of the virtual machines via Nagios.

Another new feature is the tight integration of Puppet, with which end-users can order virtual machines as well as public cloud infrastructures with personalized, managed software stacks. The best technical innovation is the support for the Amazon Web Services high-availability concept: if an Amazon EC2 instance fails, a new one is started automatically. openQRM 5.1 is even able to compensate for the outage of an entire Amazon Availability Zone (AZ). If an AZ becomes unavailable, the corresponding EC2 instance is started in another Availability Zone of the same region.
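The failover rule described here can be sketched as a small decision function. This is an illustrative simulation in plain Python, not openQRM's actual code or API; the region and zone names are examples.

```python
# Sketch of the failover rule: if an instance fails, restart it in its own AZ;
# if its whole Availability Zone fails, restart it in another AZ of the same region.
REGION_AZS = {"eu-west-1": ["eu-west-1a", "eu-west-1b", "eu-west-1c"]}

def restart_instance(instance, failed_azs, region="eu-west-1"):
    """Return the Availability Zone in which the replacement instance should start."""
    if instance["az"] not in failed_azs:
        return instance["az"]              # plain instance failure: restart in the same AZ
    for az in REGION_AZS[region]:          # AZ outage: pick another healthy AZ
        if az not in failed_azs:
            return az
    raise RuntimeError("no healthy Availability Zone left in region")

vm = {"id": "i-0001", "az": "eu-west-1a"}
assert restart_instance(vm, failed_azs=set()) == "eu-west-1a"
assert restart_instance(vm, failed_azs={"eu-west-1a"}) == "eu-west-1b"
```

The essential design point is that the replacement always stays within the same region, so data locality and latency assumptions continue to hold after a failover.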

According to CEO Matt Rechenburg, openQRM-Enterprise is consistently expanding the open source solution openQRM with cloud-spanning management capabilities in order to empower its customers to flexibly grow beyond the borders of their own IT capacities.

In addition to the technical enhancements, much emphasis was also placed on ease of use. In openQRM 5.1, the administrator user interface was completely revised, ensuring a better overview and user guidance. Even the dashboard, as the central entry point, has benefited greatly: all information, such as the status of the data center, is displayed at a glance.

New features have been introduced for the openQRM-Enterprise Edition, too. These include a new plugin for role-based access rights management, which allows fine-grained permissions to be set for administrators within the entire openQRM system. Complete enterprise topologies can thus be mapped to openQRM roles, for example to restrict administrators who are responsible only for virtualization to the management and provisioning of virtual machines.

Further improvements in openQRM 5.1 include better support for the KVM virtualization technology, which can now also use GlusterFS volumes. In addition, VMware technologies are better supported: existing VMware ESX systems can now be managed, and local or network-bootable VMware ESX machines can be installed and managed.

openQRM takes a big step forward

openQRM has been on the market significantly longer than OpenStack or Eucalyptus, yet the level of awareness of those two projects is larger. This is mainly due to the substantial marketing efforts of both the OpenStack community and Eucalyptus. Technologically and functionally, however, openQRM need not hide. On the contrary: openQRM's feature set is many times more powerful than that of OpenStack or Eucalyptus. Where those two focus exclusively on the topic of cloud, openQRM also includes a complete data center management solution.

Due to its long history, openQRM has received many new and important features in recent years. As a result, however, it has become easy to lose track of what the solution can actually do. But anyone who has understood that openQRM is composed of integrated building blocks such as “Data Center Management”, “Cloud Backend” and “Cloud Portal” will recognize that the open source solution, especially for building private clouds, provides an advantage over OpenStack and Eucalyptus. The area of data center management in particular must not be neglected when building a cloud, in order to keep track of and control one's infrastructure.

Version 5.0 already began to sharpen the structures and consolidate the individual functions into workflows. The 5.1 release has built on this once again. The look and layout of the openQRM backend have been completely overhauled: it looks tidier, is easier to use and will positively surprise customers.

The extension with hybrid cloud functionality is an important step for the future of openQRM. The Rackspace 2013 Hybrid Cloud survey showed that 60 percent of IT decision-makers see the hybrid cloud as their main goal. Of these, 60 percent have withdrawn, or will withdraw, their applications and workloads from the public cloud; 41 percent have left the public cloud partially, and 19 percent even want to leave the public cloud completely. The reasons for using a hybrid cloud rather than a public cloud are higher security (52 percent), more control (42 percent) and better performance and higher reliability (37 percent). The top benefits hybrid cloud users report include more control (59 percent), higher security (54 percent), higher reliability (48 percent), cost savings (46 percent) and better performance (44 percent).

Not surprisingly, openQRM orients itself towards the current public cloud leader, Amazon Web Services. Thus, in combination with Eucalyptus or other Amazon-compatible cloud infrastructures, openQRM can also be used to build massively scalable hybrid cloud infrastructures. For this purpose, openQRM relies on its proven plugin concept and integrates Amazon EC2, S3 and Eucalyptus in exactly this way. Besides its own resources from a private openQRM Cloud, Amazon and Eucalyptus are used as additional resource providers to obtain more computing power quickly and easily.

The absolute killer features include the automatic application deployment using Puppet, with which end-users can conveniently and automatically provision EC2 instances with a complete software stack, as well as the support for Amazon's Availability-Zone-wide high-availability concept, which many cloud users neglect again and again out of ignorance. The improved integration of technologies such as VMware ESX should not be ignored either. After all, VMware is still the leading virtualization technology on the market. openQRM thus also increases its attractiveness as an open source solution for managing and controlling VMware environments.

Technologically and functionally, openQRM is on a very good path into the future. However, bigger investments in public relations and marketing are imperative.


Exclusive: openQRM 5.1 to be extended with hybrid cloud functionality and integrates Amazon AWS and Eucalyptus as a plugin

It's happening soon: this summer, openQRM 5.1 will be released. Project manager and openQRM-Enterprise CEO Matt Rechenburg has already told me about some very interesting new features. In addition to a completely redesigned backend, the open source cloud infrastructure software from Cologne, Germany will be expanded with hybrid cloud functionality by integrating Amazon EC2, Amazon S3, and their clone Eucalyptus as a plugin.

New hybrid cloud features in openQRM 5.1

A short overview of the new hybrid cloud features in the upcoming version, openQRM 5.1:

  • openQRM Cloud works transparently with Amazon EC2 and Eucalyptus.
  • Via self-service, end-users within a private openQRM Cloud can order instance types or AMIs selected by an administrator, which openQRM Cloud then uses to automatically provision to Amazon EC2 (or Amazon-compatible clouds).
  • User-friendly password login for the end-user of the cloud via WebSSHTerm directly in the openQRM Cloud portal.
  • Automatic application deployment using Puppet.
  • Automatic cost accounting via the openQRM cloud billing system.
  • Automatic service monitoring via Nagios for Amazon EC2 instances.
  • openQRM high availability at infrastructure level for Amazon EC2 (or compatible private clouds). This means: if an EC2 instance fails or an error occurs in an Amazon Availability Zone (AZ), an exact copy of the instance is restarted. In case of a failure of an entire AZ, the instance starts up again in another AZ of the same Amazon region.
  • Integration of Amazon S3. Data can be stored directly on Amazon S3 via openQRM. When creating an EC2 instance, a script stored on S3 can be specified which, for example, executes further commands when the instance starts.
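The last bullet, combining an admin-selected AMI with a start script stored on S3, can be illustrated with a plain Python sketch. The field names follow the general shape of an EC2 launch request, but the builder function, bucket and AMI ID are invented for illustration; this is not openQRM's actual plugin format.

```python
# Illustrative only: the shape of a provisioning request as a plain dict,
# combining an admin-selected AMI with a start script stored on S3.
def build_launch_request(ami_id, instance_type, s3_script_url):
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        # The start script is fetched from S3 and run when the instance boots.
        "UserData": f"#!/bin/sh\naws s3 cp {s3_script_url} /tmp/boot.sh && sh /tmp/boot.sh\n",
    }

req = build_launch_request("ami-12345678", "m1.small", "s3://my-bucket/boot.sh")
assert req["ImageId"] == "ami-12345678"
assert "s3://my-bucket/boot.sh" in req["UserData"]
```

The user-data mechanism is what lets a generic AMI be specialized at boot time, which is how a cloud portal can offer one image to many end-users with different software stacks.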

Comment: openQRM recognizes the trend at the right time

With this extension, openQRM-Enterprise also shows that the hybrid cloud is becoming a serious factor in the development of cloud infrastructures, and the new features come at just the right time. Not surprisingly, the Cologne-based company orients itself towards the current public cloud leader, Amazon Web Services. Thus, in combination with Eucalyptus or other Amazon-compatible cloud infrastructures, openQRM can also be used to build massively scalable hybrid cloud infrastructures. For this purpose, openQRM relies on its proven plugin concept and integrates Amazon EC2, S3 and Eucalyptus in exactly this way. Besides its own resources from a private openQRM Cloud, Amazon and Eucalyptus are used as additional resource providers to obtain more computing power quickly and easily.

In my opinion, the absolute killer features include the automatic application deployment using Puppet, with which end-users can conveniently and automatically provision EC2 instances with a complete software stack, as well as the support for Amazon's AZ-wide high-availability functionality, which many cloud users neglect again and again out of ignorance.

The team has also paid much attention to the look and the interface of the openQRM backend. The completely redesigned user interface looks tidier, is easier to handle and will positively surprise existing customers.


Application Lifecycle Engine: cloudControl launched Private Platform-as-a-Service

Platform-as-a-Service (PaaS) provider cloudControl has published its Application Lifecycle Engine, a private PaaS based on the technologies of its public PaaS. With it, the Berlin-based company wants to give companies the ability to operate a self-managed PaaS in their own data center, giving them more control over the location of their data and over who can access it.

cloudControl Application Lifecycle Engine

The cloudControl Application Lifecycle Engine is derived from the technology of cloudControl's own public PaaS offering, which has served as the foundation of the cloud service since 2009.

The Application Lifecycle Engine is to be understood as a middleware between the basic infrastructure and the actual applications (the PaaS concept) and provides a complete runtime environment for applications. It offers developers the opportunity to focus on developing their application rather than wrestling with processes and deployment.

The published Application Lifecycle Engine is designed to enable large companies to operate their own private PaaS on self-managed infrastructure, in order to get the benefits of the cloud within their own four walls. The platform builds on open standards and promises to avoid vendor lock-in. cloudControl also spans a hybrid bridge: applications already developed on cloudControl's public PaaS can run on the private PaaS, and vice versa.

Furthermore, cloudControl addresses service providers who want to build a PaaS offering of their own by combining it with their existing services and infrastructure.

Trend towards private and hybrid solutions intensifies

cloudControl is not the first company to recognize the trend towards the private PaaS. Nevertheless, the current security situation shows that companies are thinking much more carefully about how they deal with the cloud and about where they can store and process their data. For this reason, cloudControl comes at exactly the right time to enable companies to regain more control over their data and to raise awareness of where their data is actually located. At the same time, cloudControl allows companies to enhance their own self-managed IT infrastructure with true cloud DNA: IT departments can give developers easy access to resources to deliver results faster and more productively. In addition, IT departments can use the Application Lifecycle Engine as an opportunity to position themselves as a service broker and partner for the departments within the company.

The possibility of using cloudControl in hybrid operation should not be disregarded either. It may well happen, relatively quickly, that not enough local resources (computing power, storage space) are available for the private PaaS. In this case, the homogeneous architecture of the Application Lifecycle Engine can be used to fall back on cloudControl's public PaaS, so as not to impede the progress of projects and development activities.


Version 4.1: arago makes its AutoPilot fit for the future

The automation experts at arago have released the latest version of their AutoPilot. With it, the company from Frankfurt, Germany promises an even more flexible and secure IT operation through their knowledge-based automation solution. The AutoPilot can automatically carry out tasks within IT operations and thus relieve IT departments of routine work. The most important innovations in the latest update are the introduction of a developer portal and a new API, which open the AutoPilot up to software developers. The AutoPilot can thus be integrated more efficiently into existing IT environments and adapted to individual needs. AutoPilot users can download the update to version 4.1 for free and get access to all the new services and features.

New API improves the integration

With the introduction of a Java/C++ API library and a REST interface, connecting external systems is now easier. Users get the option to integrate the AutoPilot more quickly and efficiently into IT environments, simplifying access to relevant databases, hardware and user interfaces. In addition, a new, more compact XML format for the respective MARS model reduces overhead when model data is transferred to the API.
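To give a feel for what pushing a compact XML model to a REST interface might look like, here is a hypothetical sketch in Python. The element names (`mars`, `node`, `attr`) and the endpoint URL are invented for illustration and do not reflect arago's actual schema or API; consult arago's developer portal for the real format.

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: serialize a small model node into a compact XML payload
# that could be sent to a REST interface. Element names are illustrative only.
def build_mars_payload(node_name, attributes):
    root = ET.Element("mars")
    node = ET.SubElement(root, "node", name=node_name)
    for key, value in attributes.items():
        ET.SubElement(node, "attr", key=key).text = value
    return ET.tostring(root, encoding="unicode")

payload = build_mars_payload("web-server-01", {"os": "linux", "role": "webserver"})
assert payload.startswith("<mars>")
assert 'name="web-server-01"' in payload
# The payload would then be sent with an ordinary HTTP client, e.g.:
# requests.post("https://autopilot.example.com/api/mars", data=payload)
```

The design point of a compact XML format is simply that less markup per model element means less overhead on every transfer, which matters when large MARS models are synchronized frequently.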

Developer portal extends AutoPilot to a platform

With the introduction of the developer portal, AutoPilot can be used immediately by developers as a platform to generate their own applications on the basis of knowledge-based automation and data storage or to use existing connections. For this purpose, arago has made extensive documentation, code examples and test data available in the new portal. Should you have any questions or comments, arago has also provided the users with a support community. The developer portal is currently in the beta phase for a selected circle of developers – this is constantly being expanded, however.

Knowledge-based replaces scripted automation

AutoPilot distinguishes itself from most automation solutions considerably by being knowledge-based. Many other solutions on the market require a standardisation of the IT environment and work in scripts, run books or workflows. They create IT processes that work in a similar way to an assembly line and therefore work well on a level that already benefits from a large amount of standardisation – for example, the operating system or standard applications. A knowledge-based solution, in contrast, administers the whole stack, from the operating system to individual applications and the business process, and integrates into the existing IT environment – even into complex and non-standardised environments.

arago AutoPilot uses the knowledge already existing in the company and applies it automatically. The solution is filled with the knowledge of administrators and other IT professionals in the form of knowledge items and receives all the information it needs for the automated administration of IT operations. Subsequently, AutoPilot flexibly combines these depending on the situation and requirements, and thus works like an autonomous expert. As a result, the software tool can also administer individual applications and, in doing so, even react appropriately to unplanned events.

Comment: AutoPilot is prepared for its future

With version 4.1, arago sets a new milestone for its AutoPilot. Particularly with the introduction of the developer portal and the REST API, arago takes a step into the future and opens the AutoPilot up to third-party developers. This is an important decision insofar as it increases the reach of the AutoPilot and strengthens the acceptance of knowledge-based automation in the market. This progress may eventually lead to a marketplace that allows developers to offer and monetize their own applications for the AutoPilot.

Automation is still considered a dangerous development by many people, because machines could replace their jobs completely. This is a well-established way of thinking that needs to be questioned. The industrial revolution did not destroy human labor either, but led to greater efficiency in production and to new, higher-value tasks. Today, IT departments are caught up in the routine tasks of IT operations and can thus contribute only to a limited extent to a company's added value. And this at a time when everybody talks about IT as a business enabler.

A knowledge-based automation solution such as the AutoPilot has the potential to relieve the IT department and give it more time and freedom to concentrate on the strategic orientation of the company's IT, and thereby to increase innovation through IT in the business to the same extent.


Big Data in the Cloud: AWS Data Pipeline and Amazon Redshift

Amazon has powerfully upgraded its cloud infrastructure for big data. With the AWS Data Pipeline, a service (currently in beta) is now available to automatically move and process data across different systems. Amazon Redshift is a data warehouse in the cloud that is said to be ten times faster than previously available solutions.

AWS Data Pipeline

With the AWS Data Pipeline, Amazon wants to improve access to the steadily growing amounts of data spread across distributed systems and different formats. For example, the service loads text files from Amazon EC2, processes them and saves them to Amazon S3. The main hub is the AWS Management Console, where the pipelines, including their sources, conditions, targets and commands, are defined. Task schedules determine when which job is processed. The AWS Data Pipeline determines from which system, and based on which condition, the data is loaded and processed, and where it is stored afterwards.

The data processing can be conducted directly in the Amazon cloud on EC2 instances or in one's own data center. For this, the open source tool Task Runner is used, which communicates with the AWS Data Pipeline and must run on each system that processes data.
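The pipeline idea, a source, an activity and a target tied together by a schedule, can be sketched as a plain data structure. This mimics the concept described above, not the exact AWS Data Pipeline definition schema; the IDs, paths and the `grep` command are invented for illustration.

```python
# Illustrative pipeline shape: a source, an activity and a target.
pipeline = {
    "schedule": {"period": "1 day"},
    "objects": [
        {"id": "InputFiles",  "type": "source",   "location": "ec2:/var/log/app/"},
        {"id": "ProcessLogs", "type": "activity", "command": "grep ERROR",
         "input": "InputFiles", "output": "Archive"},
        {"id": "Archive",     "type": "target",   "location": "s3://my-bucket/logs/"},
    ],
}

def resolve(pipeline, activity_id):
    """Find where a given activity reads from and writes to."""
    objects = {o["id"]: o for o in pipeline["objects"]}
    act = objects[activity_id]
    return objects[act["input"]]["location"], objects[act["output"]]["location"]

src, dst = resolve(pipeline, "ProcessLogs")
assert src == "ec2:/var/log/app/"
assert dst == "s3://my-bucket/logs/"
```

In the real service, the scheduler would hand each due activity to a Task Runner on whichever machine, EC2 or on-premise, is registered to process it.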

Amazon Redshift

Amazon's cloud data warehouse Amazon Redshift helps to analyze huge amounts of data in a short time frame. Up to 1.6 petabytes of data can be stored in it and requested using SQL queries. The service is basically charged on a pay-as-you-use basis, but customers who sign a three-year contract and put full load on their virtual infrastructure pay from 1,000 USD per terabyte per year. Amazon compares this with numbers from IBM, which charges from 19,000 USD to 25,000 USD per terabyte per year for a data warehouse.
The first Amazon Redshift beta users are Netflix, JPL and Flipboard, who were able to run their queries 10 to 150 times faster compared to their current systems.

Amazon Redshift can be used as a single-node cluster with one server and a maximum of 2 terabytes of storage, or as a multi-node cluster consisting of at least two compute nodes and one leader node. The leader node is responsible for connection management, parsing the requests, creating task plans and managing the requests for each compute node. The main processing is done on the compute nodes. Compute nodes are provided as hs1.xlarge with 2 terabytes of storage and as hs1.8xlarge with 16 terabytes of storage. One cluster can contain at most 32 hs1.xlarge or 100 hs1.8xlarge compute nodes. This results in a maximum storage capacity of 64 terabytes and 1.6 petabytes respectively. All compute nodes are connected over a separate 10 gigabit/s backbone.
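The cluster limits above can be spelled out with simple arithmetic: per-node storage times the maximum node count gives the maximum cluster capacity.

```python
# Maximum Redshift cluster capacity per compute node type (figures from the text).
nodes = {
    "hs1.xlarge":  {"storage_tb": 2,  "max_nodes": 32},
    "hs1.8xlarge": {"storage_tb": 16, "max_nodes": 100},
}

for name, spec in nodes.items():
    capacity_tb = spec["storage_tb"] * spec["max_nodes"]
    print(f"{name}: up to {capacity_tb} TB per cluster")

# 32 x 2 TB = 64 TB; 100 x 16 TB = 1600 TB = 1.6 PB
assert nodes["hs1.xlarge"]["storage_tb"] * nodes["hs1.xlarge"]["max_nodes"] == 64
assert nodes["hs1.8xlarge"]["storage_tb"] * nodes["hs1.8xlarge"]["max_nodes"] == 1600
```

The 1,600 TB figure for hs1.8xlarge clusters is where the 1.6 petabyte maximum quoted earlier comes from.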


Undeterred by the competition, Amazon keeps expanding its cloud services portfolio. As a result, one can sometimes get the impression that all the other IaaS providers are marking time, considering the innovative power of Amazon Web Services. I can only stress here once again that "value-added services are the future of infrastructure-as-a-service" and that you shouldn't "compete against the Amazon Web Services just with infrastructure".

If we take a look at the latest developments, we see a steadily increasing demand for solutions for processing large amounts of structured and unstructured data. Barack Obama's campaign is just one use case that shows how important the possession of quality information is for gaining competitive advantages in the future. And even though many see Amazon Web Services "just" as a pure infrastructure-as-a-service provider (I do not), Amazon is, more than any other (IaaS) provider, well positioned in the battle for big data solutions, not least because of the knowledge gained from operating


Microsoft solutions for the private cloud

Given recent political developments, not only in Germany, which enable authorities to tap data in the cloud, the trend towards building one's own private cloud will increase. Reason enough to evaluate current solutions. After presenting some open source candidates with openQRM, Eucalyptus, OpenStack, CloudStack and OpenNebula (in German), today it is the turn of Microsoft's solutions. In particular, the new Microsoft Windows Server 2012 and Hyper-V bring interesting Microsoft cloud approaches into one's own data center.

Microsoft in the private cloud

A private cloud means transferring the concepts of a public cloud, including flexibility, scalability and self-service, to your own data center. It should be noted again that simple virtualization is NOT a private cloud. A cloud includes, among other things, the three properties above, which ordinary virtualization does not provide.

Microsoft's solutions make it possible to build infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS) both in the private and in the public cloud. This has the advantage, for example, of storing data and applications primarily locally in your own data center and scaling into the public cloud when necessary.

IaaS provides infrastructure resources such as computing power, storage and network capacity as a service, while PaaS provides application platforms as a service.

What do you need for a private cloud à la Microsoft?

Building a Microsoft private IaaS environment requires Windows Server Hyper-V, Microsoft System Center and the Microsoft System Center Virtual Machine Manager Self-Service Portal. Together they make it possible to provide a dedicated cloud environment, including self-service options.

Beyond that, the private cloud services enable seamless access to Microsoft's public cloud infrastructure, Windows Azure. Besides resource scaling, this also provides application frameworks and identity management tools to integrate the private with the public cloud into a hybrid cloud.


Hyper-V

Hyper-V is Microsoft's technology for server virtualization. It is part of Windows Server 2012 and is the basis of the Microsoft private cloud. With it, multiple servers can be operated as virtual machines (VMs) on one physical host. For this purpose, Hyper-V supports different operating systems in parallel, including Windows and Linux on x64 hardware.

System Center

The System Center is the focal point of the private cloud and assists in the management of physical, virtual and cloud infrastructures. In addition to managing a scalable data center infrastructure and mission-critical workloads, it also covers the control of standardized processes for data center management and administrative workflows. Furthermore, the System Center's self-service functions allow users to consume the IT resources they require.

Virtual Machine Manager SSP 2.0

The Virtual Machine Manager Self-Service Portal is based on the Windows Server Hyper-V and System Center. It is a free and complete solution with which resources within a data center can be dynamically pooled into groups in order to provide the private cloud with the necessary resources. Furthermore, individual resources or groups of resources can be assigned to different departments, e.g. to deploy virtual machines via a self-service portal.

Combination with Microsoft’s public cloud

Even if the article pointed out the political influences on the public cloud at the beginning, there are still a lot of scenarios in which a public cloud can be considered. It depends on the sensitivity of the data and how a company wants to handle it.

The Microsoft private cloud services can also connect with Microsoft’s public cloud to offset any peak loads or improve the cooperation in different regions. Here are a few possibilities.

SharePoint Online & Office 365

If companies want to grow worldwide, their IT must grow as well. For on-premise solutions this poses financial and time challenges. It is easier to connect new locations with cloud solutions, giving employees the same access to documents and applications as employees in the central office have. Here, SharePoint Online and Office 365 can help. Both allow cross-site collaboration and data sharing between employees in geographically diverse locations.

Exchange Online

What applies to exchanging data and global cooperation is equally true for e-mail. If a company grows nationwide or even globally, IT decision-makers face the same challenges. A Microsoft-based private cloud can be extended with Exchange Online to meet the growing storage demand. In addition, employees have a simple URL to access the Exchange services in the private or public cloud.


SQL Azure

When you think of Microsoft and databases, you first think of Microsoft Access. People who need more performance use Microsoft SQL Server. Nota bene, a database server in particular swallows a lot of hardware and requires a lot of performance to meet today's needs.

An alternative from Microsoft in the cloud is SQL Azure. It is a fully managed SQL Server in Microsoft’s public cloud which is available worldwide.

Application scaling

In times of global networking and the Internet, it is difficult to estimate the resources required for an application. A single marketing campaign can be crucial for the success or failure of a new service (the application collapses under the requests). You can only counteract this with significant investments in your own infrastructure, without knowing whether the resources are actually needed or whether they will even be sufficient.

Public cloud infrastructures are more dynamic because, fundamentally, more resources are available there than a company can usually provide in its own data center. This allows applications in the cloud to run fail-safe and to scale during peak loads. If traffic increases, more resources are added automatically and removed again when the rush is over. The application can either run directly in the public cloud, or run in a private cloud in the first instance and, if necessary, request additional resources from the public cloud (hybrid cloud).
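The add-and-remove logic described above can be sketched as a simple threshold policy. This is a minimal, hypothetical example; the watermark values and bounds are assumptions, and a real cloud platform would apply such a policy automatically:

```python
def scale_decision(current_instances, requests_per_instance,
                   high_water=800, low_water=200,
                   min_instances=2, max_instances=20):
    """Return the new instance count for a simple threshold policy.

    Scale out when the per-instance load exceeds high_water, scale in
    when it drops below low_water, and always stay within the bounds.
    """
    if requests_per_instance > high_water:
        return min(current_instances * 2, max_instances)
    if requests_per_instance < low_water:
        return max(current_instances // 2, min_instances)
    return current_instances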

It should be noted, however, that applications must be designed for the cloud to take advantage of this, whether they are migrated from a local environment or conceived for the cloud from the start.


HP Discover: Stackato-based private PaaS extends HP's cloud portfolio

During HP Discover there was not much coverage about it, but as Ben Kepes found out, HP is extending its cloud portfolio with a private platform-as-a-service (PaaS). The platform is based on Stackato, which is a fork of Cloud Foundry.

HP Platform-as-a-Service

HP itself describes the new enterprise-focused platform as an application platform for developing, deploying and managing cloud-based applications, regardless of language and stack.

Partnership with ActiveState

To offer the new PaaS, HP has partnered with ActiveState, the driving force behind Stackato. HP takes an OEM license of Stackato to integrate the PaaS into its own cloud infrastructure.

For ActiveState this looks like a small accolade. One would usually assume that a big company like HP has the expertise to develop a PaaS on its own. But it shows that Stackato apparently fits well into HP's infrastructure.

What is Stackato?

Stackato allows developers to use a wide range of programming languages, including Java, Ruby, Python, Perl, Node.js and PHP. It is a typical polyglot PaaS.
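To give an idea of what "polyglot" means in practice: the platform runs ordinary applications written against each language's standard web interface. A minimal sketch, using Python's WSGI convention as one example; this is illustrative code of the kind such a PaaS could host, not an actual Stackato sample:

```python
def application(environ, start_response):
    """A minimal WSGI application, the Python entry point a polyglot
    PaaS would typically expect to find and route requests to."""
    body = b"Hello from the PaaS!"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

A Ruby app would expose a Rack interface, a Java app a servlet, and so on; the platform detects the stack and wires up the routing, which is exactly what makes it polyglot.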

The Platform-as-a-Service market

PaaS is the fastest growing segment of the cloud computing market. According to Gartner, it will grow steadily over the next five years, from 1.2 billion USD in 2012 to 2.9 billion USD in 2016 (by comparison: 900 million USD in 2011). Gartner also expects every big software vendor to have its own PaaS in the market by the end of 2013.
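Gartner's figures imply a substantial compound annual growth rate, which is easy to check:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate implied by a start and end value."""
    return (end_value / start_value) ** (1 / years) - 1

# Gartner's forecast: 1.2 billion USD in 2012 to 2.9 billion USD in 2016,
# i.e. four years of growth
growth = cagr(1.2, 2.9, 4)  # roughly 0.25, i.e. about 25% per year
```

Roughly 25% growth per year underlines why every big vendor is expected to want a PaaS of its own.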


The Future of Cloud Computing? Private Cloud Management-as-a-Service

When companies think about cloud computing, they are always faced with topics like security and data privacy. Their fear is that the risk of data loss, or of intruders gaining access to business-critical data, is too high. The way into the public cloud is therefore weighed ever more carefully. The Swedish company Witsbits comes up with a new concept I would call "Private Cloud Management-as-a-Service": the Swedish answer to cloud computing, so to speak.

The idea is relatively easy to understand. Using a bootable USB stick or CD-ROM with a Witsbits cloud key stored on it, the administrator prepares the physical servers on which the virtual machines (KVM) will be hosted afterwards. Based on the cloud key, the servers automatically connect to the Witsbits management platform. From there the administrator can manage the private cloud in his own data center or server room via a centralized, web-based interface.

The target group of this solution is small and medium-sized enterprises. I also see huge potential for system houses and consultants who want to offer their customers a private cloud (making existing local resources more flexible) or simply want to manage the servers remotely.

No installation is required for setup, so getting started is relatively quick. At the moment Witsbits offers two billing options: Free and Premium. The "Free" option is, as the name suggests, free of charge. However, use is limited to 8 GB of VRAM (the maximum total amount of virtual machine primary memory (RAM) under management at any given time). The "Premium" option costs 3 USD per month per GB of VRAM, with no limit on the amount of VRAM.
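The pricing model described above is simple enough to express directly. A minimal sketch, using only the figures stated by Witsbits (8 GB free limit, 3 USD per GB of VRAM per month on Premium):

```python
def monthly_cost_usd(vram_gb, plan="premium",
                     free_limit_gb=8, premium_rate_per_gb=3):
    """Witsbits-style billing: Free up to 8 GB of VRAM under
    management, Premium at 3 USD per GB of VRAM per month."""
    if plan == "free":
        if vram_gb > free_limit_gb:
            raise ValueError("Free plan is limited to 8 GB of VRAM")
        return 0
    return vram_gb * premium_rate_per_gb

# E.g. a small business running 16 GB of VRAM pays 48 USD per month
cost = monthly_cost_usd(16)
```

For a small environment this stays cheap; the per-GB model mainly matters once the number of managed virtual machines grows.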


"MARS-o-Matic" or design your (cloud) infrastructure the mobile way.

As you might have seen, the world is becoming more and more mobile. Even IT infrastructure designers, or should I use the new paradigm "cloud designers" 😉, need new ways to work independently of a "fat" notebook and to use new technologies like the iPad or Android tablets.

My friends from the automation experts arago, namely Chris Boos and Roland Judas, introduced me to their new idea for designing infrastructures on the go. A really convenient solution.

They call it "MARS-o-Matic", an app for the Apple iPad that allows you to design your IT infrastructure and benchmark it against industry standards or other automated best-practice environments. This way you can benchmark and optimize your own or other infrastructure environments based on a lightweight, reusable model.

With it you can design everything visually and on the go, while an XML-based data set is created in the background, which you can export to other applications like a CMDB, a BSM dashboard or a service directory.
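To illustrate what such an XML-based export might look like: the sketch below generates a tiny infrastructure description. The element and attribute names are entirely hypothetical; MARS-o-Matic's actual schema is not public, so this only shows the general shape of machine-readable model data that a CMDB could ingest:

```python
import xml.etree.ElementTree as ET

# Hypothetical schema, purely for illustration
infra = ET.Element("infrastructure", name="web-shop")
tier = ET.SubElement(infra, "tier", role="frontend")
ET.SubElement(tier, "server", count="4", type="vm")

xml_data = ET.tostring(infra, encoding="unicode")
```

The point of exporting a structured model rather than a drawing is exactly this: downstream tools can parse and reuse it instead of a human re-entering the design.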

MARS-o-Matic is useful for project managers, IT architects and consultants who plan IT infrastructures or want to know how much their IT operations will cost. Furthermore, CIOs and IT operations managers can compare their current operating costs with industry best practices, managed services offerings and other automated IT operations. Even CFOs and procurement managers benefit if they want to benchmark a managed service against an outsourcing offer.

MARS-o-Matic is currently available as a private beta version. If you would like to test it, just contact Chris and Roland at “”.