Google's PR agency gives feedback on the long-term availability of Google Compute Engine

After I called the long-term availability of Google Compute Engine (GCE) into question, Google's PR agency contacted me to understand the motivation behind the article. In that article I reacted to a GigaOM interview with Google Cloud Platform manager Greg DeMichillie, who would not guarantee the long-term availability of GCE.

Background

In a GigaOM interview, Google Cloud Platform manager Greg DeMichillie responded to a question about the long-term availability of Google's cloud services, and his answer was unexpected and not in the customer's interest.

“DeMichillie wouldn’t guarantee services like Compute Engine will be around for the long haul, but he did try to reassure developers by explaining that Google’s cloud services are really just externalized versions of what it uses internally. ‘There’s no scenario in which Google suddenly decides, “Gee, I don’t think we need to think about storage anymore or computing anymore.”’”

DeMichillie did qualify his statement in the end, saying that Google would not shut down its cloud services in a kamikaze operation. Still, it is an odd statement about a service that has been on the market for a relatively short time.

Feedback from Google's PR agency

To be clear up front: this was not a call intended to influence me, but to understand how the article came about. Google's PR agency said that DeMichillie had apparently been misunderstood and that my article highlighted only this negative statement while neglecting the positive themes. Google is investing heavily in infrastructure resources today, and there are no signs or reasons to believe that Google Compute Engine will be shut down.

My statement

It has never been my intention to cast a shadow over Google (or any other vendor), and that is what I told the PR agency. But at the end of the day, users need to be advised and equally protected. Moreover, I am an analyst and advisor and I counsel companies that rely on my judgment. For this reason I have to react to such statements and include them in my decision matrix, especially when they come directly from an employee of a vendor. What should I do if I recommend the use of GCE because the technical requirements fit, and Google subsequently announces that it will close the service? This is why I react very sensitively to such topics.

In addition, Google has not covered itself in glory in recent months and years when it comes to maintaining its service portfolio for the long term. The shutdown of Google Reader caused more negative reactions among users than the current NSA scandal. Mind you, that was a free consumer service. With Google Compute Engine we are talking about a service that mainly addresses companies. For companies, moving workloads to GCE means spending a lot of money. If the service were suddenly shut down, this would cause, depending on the company, considerable economic damage from migrating data and applications elsewhere. Google should keep this in mind when making decisions, even if this was just one statement in an interview. It does not create trust in the cloud portfolio, and it fits all too well with the experience of the recent past, when Google cleaned out its room.

"Amazon is just a webshop!" – "Europe needs bigger balls!"

This year I had the honor of hosting the CEO Couch at the Open-Xchange Summit 2013, a format in which top CEOs are confronted with provocative questions on a specific topic and have to answer to the point. Among the guests on the couch were Herrmann-Josef Lamberti (EADS), Dr. Marten Schoenherr (Deutsche Telekom Laboratories), Herbert Bockers (Dimension Data) and Rafael Laguna (Open-Xchange). This year's main topic was cloud computing and how German and European providers can assert themselves against the allegedly overwhelming competition from the US. I have picked out two statements made during the CEO Couch that I would like to discuss critically.

Amazon is just a webshop!

One statement gave me worry lines. On the one hand, VMware has already shown that it seemingly underestimates its supposedly biggest competitor; on the other hand, the statement is simply wrong. To still call Amazon a webshop today, one has to close one's eyes very tightly and ignore about 90 percent of the company. Amazon is far more than a webshop; it is a technology company and provider. Rafael illustrated this very well during his keynote. There are currently three vendors who have managed to build their own closed ecosystems, from web services to content to devices: Google, Apple and Amazon.

Besides the webshop Amazon.com, further technology and services belong to the company, including Amazon Web Services, content distribution for digital books, music and movies (LoveFilm, Amazon Instant Video), the e-book readers Kindle and Kindle Fire (with its own Android version), the search engines A9.com and Alexa Internet, and the movie database IMDb.

Beyond that, if you look at how Jeff Bezos leads Amazon (e.g. the Kindle strategy: sell the device at cost and make the revenue on content), he focuses on long-term growth and market share rather than on quick profits.

If you want to get a good impression of Jeff Bezos' mindset, I recommend the fireside chat with Werner Vogels at AWS re:Invent 2012. The 40 minutes are worth it.

Europe needs bigger balls!

The couch completely agreed. Europe has the potential and the companies to play its role in the cloud, both technically and in terms of innovation. Yet, privacy issues aside, compared to the United States we eat humble pie, or, as it was put more bluntly: "Europe needs bigger balls!" This is due on the one hand to the capital that investors in the U.S. are willing to invest, and on the other hand to the mentality of taking risks, being allowed to fail, and thinking big and long term. Here, European entrepreneurs and investors in particular can learn from Jeff Bezos: it is about long-term success, not short-term money.

In my opinion, this is one of the reasons why we will never see a European company that can, for example, hold a candle to Amazon Web Services (AWS). The potential candidates like T-Systems, Orange and other major ICT providers, which have the data centers, infrastructure, personnel and necessary knowledge, would rather focus on the target customers they have always served: corporate customers. The public cloud and AWS-like services for startups and developers are completely neglected. On the one hand this is fine, since enterprise-grade cloud offerings and virtual private or hosted private cloud solutions are required to meet the needs of enterprise customers. On the other hand, nobody should be surprised that AWS currently holds the largest market share and is seen as an innovation machine. The established ICT providers are not willing to change their current business or expand it with new models to address other attractive audiences.

However, as my friend Ben Kepes has also described well, Amazon is currently very popular and by far the market leader in the cloud. But there is still enough room in the market for other providers who can serve use cases and workloads that Amazon cannot, or whose customers simply decide against AWS because it is too complicated, too involved or too expensive for them, or conflicts with legal requirements.

So Europe, grow bigger balls! The potential is there. After all, providers such as T-Systems, Greenqloud, UpCloud, Dimension Data, CloudSigma or ProfitBricks have competitive offerings. Marten Schoenherr told me that his startup is in the process of developing a Chromebook without Chrome. And I have a feeling that Rafael and Open-Xchange (OX App Suite) have a finger in that pie.

The importance of mobile and cloud-based ways of working continues to grow

For 86 percent of executives and managing directors of small and medium-sized enterprises, the advantages of flexible working enabled by mobile and cloud technologies outweigh the concerns and risks. These are the results of a global study of 1,250 enterprises in Europe, North America and Canada conducted by YouGov on behalf of Citrix.

Better productivity and image enhancement

48 percent of respondents, about half, expect flexible working to increase productivity. 32 percent hope for an enhancement of their image as an employer. Approximately a quarter (23 percent) see mobile working as supporting a better work-life balance, especially for working parents (29 percent). In addition, 39 percent see improved integration of external teams, and 28 percent see an advantage for business continuity.

Concerns about private life

However, mobile working also causes concerns. The largest relate to the separation of private and work life (41 percent) and the fear that employees will feel obliged to work after hours. Half of the responding enterprises have implemented mobile working, but they counter these concerns with fixed verbal or written rules that control the employees. Such restrictions clearly contradict the wish to organize one's own work independently. Accordingly, a majority of 73 percent reject rules that confine flexible and mobile working to fixed time frames, since they restrict the basic idea of flexible and mobile working: employees would no longer be able to choose location and time on their own in order to be more productive and find their work-life balance.

Mobile and cloud-based ways of working are important

For most of us, working life has long ceased to take place in one fixed location. Instead, we live in a global, connected and, above all, mobile world, from which each of us should draw the main benefits: becoming more productive, and thus more valuable to the company, while at the same time keeping a balanced private life.

Modern enterprises need to offer their employees the freedom to work from wherever they want. That does not have to be the Starbucks around the corner, but it does offer the opportunity to withdraw to more creative places such as co-working spaces and thus pick up other impressions and opinions, or get feedback from potential customers or partners. The biggest advantage is the possibility to escape the daily routine of the office and to develop further. Cloud and mobile technologies enable this like no other technologies before.

The risk of working overtime or even ending up with burnout is ever-present in today's working environment. But this is not a problem of flexible or mobile working. On the contrary, an employee gains the freedom to "take time out" in a personal environment in order to find balance. However, employees need to assume more responsibility for reaching the agreed goals, and at the same time must take care to organize themselves so that they continue to have a private life. Employers should also help ensure that this gets the attention it needs.

Google Compute Engine does not seem to be a solution for the long haul

In an interview with GigaOM, Google Cloud Platform manager Greg DeMichillie made an odd statement on the future of Google Compute Engine, which once again has to lead to a discussion about how future-proof Google's cloud service portfolio is, and whether it makes sense to depend on areas outside the search engine provider's core business.

Google is too agile for its customers

When Google announced the shutdown of Google Reader, I already asked how future-proof the Google cloud portfolio is, particularly against the background that Google is starting to monetize more and more services, which then get revenue as a new KPI and are thus threatened with closure. Google Cloud Platform manager Greg DeMichillie addressed exactly this question in a GigaOM interview, and his answer was unexpected and not in the customer's interest.

“DeMichillie wouldn’t guarantee services like Compute Engine will be around for the long haul, but he did try to reassure developers by explaining that Google’s cloud services are really just externalized versions of what it uses internally. ‘There’s no scenario in which Google suddenly decides, “Gee, I don’t think we need to think about storage anymore or computing anymore.”’”

DeMichillie did qualify his statement in the end, saying that Google would not shut down its cloud services in a kamikaze operation. Still, it is an odd statement about a service that has been on the market for a relatively short time.

These are things customers should not have to hear

The crucial question is why a potential customer should commit to Google Compute Engine for the long haul. Given this statement, one has to advise against the use of Google Compute Engine and instead recommend a cloud computing provider whose actual core business is infrastructure-as-a-service, one that runs a serious cloud computing business rather than merely selling off its overcapacity.

I do not want to speak of the devil only to have him show up. But news like the sudden death of Nirvanix, an enterprise cloud storage service, makes massive waves and unsettles users. Google, too, should take this to heart if it wants to become a serious provider of cloud computing resources.

Dangerous: VMware underestimates OpenStack

Slowly we should seriously start to worry about who dictates to VMware executives what they have to say. In early March this year, COO Carl Eschenbach said that he finds it really hard to believe that VMware and its partners cannot collectively beat a company that sells books (note: Amazon). Well, the current reality looks different. Then in late March, a VMware employee from Germany came to the conclusion that VMware is the technology enabler of the cloud and that he currently sees no other. We all know that this is far from the truth, too. Lastly, CEO Pat Gelsinger tops it off and claims that OpenStack is not for the enterprise. Time for some enlightenment.

Grit your teeth and get to it!

In an interview with Network World, VMware CEO Pat Gelsinger spoke about OpenStack and said he does not expect the open source project to gain significant reach in enterprise environments. Instead, he considers it a framework that service providers can use to build public clouds. Enterprises, in contrast, already run extremely large VMware environments, so switching costs and other factors work against a move to OpenStack there. For cloud and service providers, however, in areas where VMware has not done business successfully in the past, Gelsinger sees a lot of potential for OpenStack.

Furthermore, Gelsinger considers OpenStack a strategic initiative for VMware, one the company is happy to support. VMware will ensure that its products and services work within cloud environments based on the open source solution. In this context OpenStack also opens up new opportunities for VMware to enter the service provider market, an area it has neglected in the past. VMware and Gelsinger therefore see OpenStack as a way to position themselves more broadly.

Pride comes before a fall

Pat Gelsinger is right when he says that OpenStack is designed for service providers, and VMware remains one of the leading virtualization vendors in the enterprise. However, this kind of stereotypical thinking is dangerous for VMware, because the tide can turn very quickly. Gelsinger likes to cite high switching costs as the reason why companies will continue to rely on VMware, but there is one thing to consider: VMware's strength lies only(!) in virtualization, in the hypervisor. When it comes to cloud computing, where OpenStack has its center of gravity, things do not look nearly as rosy. To be honest, VMware missed the trend of offering early solutions that open up virtualized infrastructure for the cloud and add higher-value services and self-service capabilities that allow the IT department to become a service broker. Solutions are available in the meantime, but the competition, and not only from the open source camp, grows steadily.

IT buyers and decision makers see this too. I have spoken with more than one IT manager who plans to replace their VMware infrastructure with something open (in most cases KVM was named) and more cost-effective. That alone is the virtualization layer breaking away. Furthermore, there are already use cases of large companies (see on-premise private clouds) that use OpenStack to build their private cloud infrastructure. It also must not be forgotten that more and more companies are moving towards hosted private cloud or public cloud providers, and that the company's own data center will become less important over time. In this context, the hybrid cloud plays an increasingly important role in making the transfer and migration of data and systems more convenient. Here OpenStack, due to its wide distribution in hosted private cloud and public cloud environments, has a great advantage over VMware.

With such statements VMware is of course trying to keep OpenStack out of its own territory, the on-premise enterprise customers, in order to place its own solutions there. Nevertheless, VMware should not make the mistake of underestimating OpenStack.

Caught in a gilded cage: OpenStack providers are trapped

OpenStack is on the rise. There are more and more announcements of companies and vendors relying on the more than three-year-old open source project to build scalable solutions and their own infrastructure-as-a-service (IaaS) offerings. However, in my view the OpenStack community is in a dilemma: diversification. In addition, unnecessary disturbances are carried in from outside that do not address exactly this issue. The discussions on Amazon API compatibility started by Randy Bias, for example, are as little conducive as Simon Wardley's demand that OpenStack should be like the Amazon Web Services (virtually a clone). OpenStack has to find its own way. However, OpenStack itself is not the problem; it is the providers who use OpenStack. They are 100 percent responsible for presenting meaningful offerings and for deploying OpenStack profitably for themselves.

Amazon API compatibility is a means to an end

I think it is important that OpenStack implements the Amazon APIs in order to offer the possibility, if necessary, of spanning a hybrid cloud to the Amazon Web Services. At the very least, OpenStack service providers should give their customers this option, so that freedom from vendor lock-in is not only promised in theory but actually possible in practice.
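
As an aside, this is roughly what that option looks like in practice. The following is a minimal sketch, assuming a Python client using boto and an OpenStack deployment that exposes the EC2 compatibility layer; the endpoint, port, path and credentials are placeholders of my own, not values from any specific provider.

```python
# One EC2-style client, two clouds: this is what the Amazon API
# compatibility in OpenStack buys for hybrid scenarios.
import boto.ec2
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

# Native AWS endpoint (region chosen arbitrarily).
aws = boto.ec2.connect_to_region(
    "eu-west-1",
    aws_access_key_id="AWS_KEY",
    aws_secret_access_key="AWS_SECRET",
)

# An OpenStack cloud exposing the EC2 compatibility API
# (host, port and path depend on the deployment).
openstack = EC2Connection(
    aws_access_key_id="OS_EC2_KEY",
    aws_secret_access_key="OS_EC2_SECRET",
    region=RegionInfo(name="openstack", endpoint="openstack.example.com"),
    port=8773,
    path="/services/Cloud",
    is_secure=False,
)

# Identical calls against both clouds.
for conn in (aws, openstack):
    reservations = conn.get_all_instances()
    print(conn.host, [i.id for r in reservations for i in r.instances])
```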

But that is where it should end. Amazon should not have more influence than that on the OpenStack community. To take the bend towards the "Linux of the cloud": did Linux orient itself on Microsoft Windows? No. And it became successful anyway. In my view, the difference also lies in the fact that Linux grew out of an ideology, was created by a single person and was then driven forward by a large community. OpenStack, however, was launched to serve a 100 percent commercial purpose for the OpenStack community. For this reason, OpenStack is nothing more than a big marketing machine of all participating providers. The OpenStack community must find its own way, create innovation itself, and should not be guided by what Amazon is doing.

Amazon Web Services are NOT the biggest competitor

What I still do not understand about the Amazon Web Services vs. OpenStack discussion is that apples and oranges are constantly being compared. How can one compare a public cloud provider with software for building public and private clouds? If you really want to compare Amazon Web Services and OpenStack, you have to match each individual OpenStack provider against AWS. Everything else is meaningless; only then can you make a real statement.

You will then realize very quickly that Amazon Web Services, the competitor the OpenStack community has proclaimed for itself, is not actually the competitor. This sounds harsh, but it is the truth. There is currently not a single OpenStack service provider who can even rudimentarily hold a candle to the Amazon Web Services. The Amazon Web Services are the imaginary competitor, the desired competitor, in the minds of the providers.

Just compare the services of the two top OpenStack public cloud providers, Rackspace and HP, to the Amazon Web Services.

Amazon Web Services | Rackspace | HP
Amazon EC2 | Cloud Servers | Compute
Auto Scaling | – | –
Elastic Load Balancing | Cloud Load Balancers | Load Balancer
Amazon EMR | – | –
Amazon VPC | – | –
Amazon Route 53 | Cloud DNS | DNS
AWS Direct Connect | – | –
Amazon S3 | Cloud Files | Object Storage
Amazon Glacier | Cloud Backup | –
Amazon EBS | Cloud Block Storage | Block Storage
AWS Import/Export | – | –
AWS Storage Gateway | – | –
Amazon CloudFront | – | CDN
Amazon RDS | Cloud Databases | Relational Database
Amazon DynamoDB | – | –
Amazon ElastiCache | – | –
Amazon Redshift | – | –
Amazon CloudSearch | – | –
Amazon SWF | – | –
Amazon SQS | – | Messaging
Amazon SES | – | –
Amazon SNS | – | –
Amazon FPS | – | –
Amazon Elastic Transcoder | – | –
AWS Management Console | – | Management Console
AWS Identity and Access Management (IAM) | – | –
Amazon CloudWatch | Cloud Monitoring | Monitoring
AWS Elastic Beanstalk | – | Application Platform as a Service
AWS CloudFormation | – | –
AWS Data Pipeline | – | –
AWS OpsWorks | – | –
AWS CloudHSM | – | –
AWS Marketplace | – | –
– | Cloud Sites | –
– | Managed Cloud | –
– | Mobile Stacks | –

This comparison shows that the Amazon Web Services are not the biggest competition; the danger comes from their own camp. Where is the diversification if the two big OpenStack public cloud providers offer up to 90 percent of exactly the same services? The service portfolios of Rackspace and HP are not remotely able to compete with the Amazon Web Services. On the contrary, both take market share away from each other with almost identical offerings.

Caught in a gilded cage

The OpenStack providers are in a dilemma that I regard as the gilded cage. All providers basically cannibalize each other, as the comparison of the Rackspace and HP services shows: the portfolios hardly differ.

But why are all OpenStack providers sitting in a gilded cage? Well, they benefit from one another: everyone contributes new ideas and solutions to the project, and everyone benefits equally from a common code base that they assemble into their own offerings. Conversely, however, this also means that nobody can draw a real competitive advantage from it, because everyone in the cage is using the same means. The cage basically contains something valuable and offers possibilities through the available services, but each provider's freedom is limited because all have the same basic supplies.

Rackspace is trying to differentiate itself through extended support. Piston Cloud keeps out of the public cloud competition entirely and offers only private or hosted private clouds.

I have already followed discussions on Twitter about taking on Amazon with a hybrid OpenStack cloud. However, many do not reckon with Eucalyptus, which has formed an exclusive partnership with Amazon and has lately been developing more and more services to close the service gap with Amazon.

Furthermore, one thing must not be ignored. The comparison to Linux seems correct in its approach, but most Linux distributions are free of charge, whereas OpenStack providers have to sell their services to be profitable. This also means that OpenStack providers are forced to squeeze as much profit as possible out of their service offerings in order to cover the running costs of the infrastructure and so on.

Differentiation only through an attractive service portfolio

OpenStack providers have to accept being in the gilded cage. But that does not mean they must remain just one of many vendors. First, one should stop viewing infrastructure-as-a-service as a sustainable business model in itself, at least as a small vendor: infrastructure is a commodity. Second, one should stop, or never even start, imitating the Amazon Web Services; that train has left the station. Instead, the attempt should be made to develop the wheel further and, through innovation, become the next Amazon Web Services.

Rather, the point is that every OpenStack provider must make the most of the OpenStack project without forgetting to be disruptive and without neglecting innovation. In the future, only those cloud providers will be successful that offer services which give customers added value. OpenStack can and will play a very important role here, but it will not occupy center stage; it will serve as a means to an end.

The OpenStack providers must begin to transform OpenStack from a marketing machine into an innovation machine. Then the admiration will come naturally.

ProfitBricks opens price war with Amazon Web Services for infrastructure-as-a-service

ProfitBricks is taking the gloves off. The Berlin-based infrastructure-as-a-service (IaaS) startup is taking a hard line against the Amazon Web Services and has reduced its prices in both the U.S. and Europe by 50 percent. Furthermore, the IaaS provider has presented a comparison which claims that its own virtual servers deliver at least twice the performance of those of Amazon Web Services and Rackspace. ProfitBricks is thus trying to differentiate itself from the top U.S. providers on price and performance.

The prices for infrastructure-as-a-service are still too high

Along with the announcement, CMO Andreas Gauger strikes a correspondingly aggressive tone. "We have the impression that the dominant cloud companies from the U.S. are abusing their market power to charge high prices. They expect companies to accept deliberately opaque pricing models and regularly announce selective price cuts to create the impression of falling prices," says Gauger (translated from German).

ProfitBricks therefore aims to attack the IaaS market from behind on price and to let its customers participate directly and noticeably in the cost savings resulting from technical innovation.

Up to 45 percent cheaper than Amazon AWS

ProfitBricks positions itself very clearly against Amazon AWS and shows a price comparison. For example, an Amazon M1 Medium instance with 1 core, 3.75 GB of RAM and 250 GB of block storage costs $0.155 per hour or $111.40 per month. A comparable instance on ProfitBricks costs $0.0856 per hour or $61.65 per month, a saving of roughly 45 percent.
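
For illustration, the arithmetic behind these figures can be reproduced in a few lines of Python. This is only a sketch; the 720-hour month is my own assumption to reconcile the hourly and monthly prices quoted above.

```python
# Reproduce the ProfitBricks vs. AWS price comparison from the article.
HOURS_PER_MONTH = 720  # assumption: 30-day month

aws_hourly = 0.155            # USD/h, Amazon M1 Medium + 250 GB block storage (per article)
profitbricks_hourly = 0.0856  # USD/h, comparable ProfitBricks configuration (per article)

aws_monthly = aws_hourly * HOURS_PER_MONTH                     # ~111.60 USD
profitbricks_monthly = profitbricks_hourly * HOURS_PER_MONTH   # ~61.63 USD
savings = 1 - profitbricks_monthly / aws_monthly               # ~0.45

print(f"AWS:          ${aws_monthly:.2f}/month")
print(f"ProfitBricks: ${profitbricks_monthly:.2f}/month")
print(f"Savings:      {savings:.0%}")
```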

Differentiation on price alone is difficult

Differentiating as an IaaS provider on price alone is difficult. Remember: infrastructure is a commodity. Vertical services, which give the customer added value, are the future of the cloud.

Defying the IaaS top dog in this way is brave, even foolhardy. However, one should not forget one thing: as hosting experts of the first hour, Andreas Gauger and Achim Weiss will have validated the numbers behind their infrastructure, and they are certainly not seeking brief glory with this move. It remains to be seen how Amazon AWS and other IaaS providers will react to this strike. With this price reduction ProfitBricks shows that customers can actually get infrastructure resources much more cheaply than is currently the case.

As an IaaS user, there is something you should certainly not lose sight of in this price discussion. Beyond the price of computing power and storage, which is held up again and again, there are further factors that determine the bill and that often only come to mind at the end of the month. These include the cost of transferring data into and out of the cloud, as well as costs for other services offered around the infrastructure that are charged per API call. In some respects there is a lack of transparency here. Furthermore, comparing the various IaaS providers is difficult, as many operate with different units, configurations and/or packages.
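
To make these hidden factors tangible, here is a hypothetical monthly cost model. All unit prices below are invented placeholders for illustration only, not the rates of any actual provider.

```python
# Hypothetical IaaS monthly cost model: compute and storage are only part of the bill.
def monthly_cost(instance_hours, storage_gb, transfer_out_gb, api_calls):
    price_per_instance_hour = 0.10    # USD, placeholder
    price_per_gb_storage = 0.08       # USD per GB-month, placeholder
    price_per_gb_transfer_out = 0.12  # USD per GB of outbound traffic, placeholder
    price_per_10k_api_calls = 0.01    # USD per 10,000 API requests, placeholder

    return (instance_hours * price_per_instance_hour
            + storage_gb * price_per_gb_storage
            + transfer_out_gb * price_per_gb_transfer_out
            + api_calls / 10_000 * price_per_10k_api_calls)

# Example: 3 servers running all month, 500 GB storage,
# 200 GB outbound traffic and 5 million API requests.
print(f"${monthly_cost(3 * 720, 500, 200, 5_000_000):.2f}")
```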

Top cloud computing washers: these companies don't tell the truth about their products

Since the beginning of cloud computing, the old hardware manufacturers have been trying to protect their business from declining sales by positioning their storage solutions, such as NAS (Network Attached Storage) boxes, or other products as "private cloud" offerings against truly flexible, scalable and highly available solutions from the cloud. The Americans call this type of marketing "cloud-washing". Of course these providers also use the current political situation (PRISM, Tempora, etc.) to promote their products even more strongly. Tragically, young companies jump on this train as well; after all, whatever the old guard can do, they can surely do too. Wrong! These providers do not care at all that they recklessly ride roughshod over the real providers of cloud computing solutions. This is wrong, and it distorts competition, because they advertise with empty marketing phrases that demonstrably cannot be fulfilled. At the end of the day, not only are the competitors and the cloud itself cast in a bad light, but customers also buy a product based on misconceptions. One or the other decision-maker will surely soon experience a rude awakening.

Background: Cloud Computing vs. Cloud-Washing

In recent years many articles on the topic of cloud-washing have appeared here on CloudUser. A small selection, mostly in German:

What does Wikipedia say?

Private Cloud according to Wikipedia

“Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third-party and hosted internally or externally. Undertaking a private cloud project requires a significant level and degree of engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. When done right, it can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities.

They have attracted criticism because users “still have to buy, build, and manage them” and thus do not benefit from less hands-on management, essentially “[lacking] the economic model that makes cloud computing such an intriguing concept”.”

Cloud characteristics according to Wikipedia

Cloud computing exhibits the following key characteristics:

  • Agility improves with users’ ability to re-provision technological infrastructure resources.
  • Application programming interface (API) accessibility to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers. Cloud computing systems typically use Representational State Transfer (REST)-based APIs.
  • Cost is claimed to be reduced, and in a public cloud delivery model capital expenditure is converted to operational expenditure. This is purported to lower barriers to entry, as infrastructure is typically provided by a third-party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility computing basis is fine-grained with usage-based options and fewer IT skills are required for implementation (in-house). The e-FISCAL project’s state of the art repository contains several articles looking into cost aspects in more detail, most of them concluding that costs savings depend on the type of activities supported and the type of infrastructure available in-house.
  • Device and location independence enable users to access systems using a web browser regardless of their location or what device they are using (e.g., PC, mobile phone). As infrastructure is off-site (typically provided by a third-party) and accessed via the Internet, users can connect from anywhere.
  • Virtualization technology allows servers and storage devices to be shared and utilization be increased. Applications can be easily migrated from one physical server to another.
  • Multitenancy enables sharing of resources and costs across a large pool of users thus allowing for:
    • Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
    • Peak-load capacity increases (users need not engineer for highest possible load-levels)
    • Utilisation and efficiency improvements for systems that are often only 10–20% utilised.
  • Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery.
  • Scalability and elasticity via dynamic (“on-demand”) provisioning of resources on a fine-grained, self-service basis near real-time, without users having to engineer for peak loads.
  • Performance is monitored, and consistent and loosely coupled architectures are constructed using web services as the system interface.
  • Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data, and the lack of security for stored kernels. Security is often as good as or better than other traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford. However, the complexity of security is greatly increased when data is distributed over a wider area or greater number of devices and in multi-tenant systems that are being shared by unrelated users. In addition, user access to security audit logs may be difficult or impossible. Private cloud installations are in part motivated by users’ desire to retain control over the infrastructure and avoid losing control of information security.
  • Maintenance of cloud computing applications is easier, because they do not need to be installed on each user’s computer and can be accessed from different places.

The National Institute of Standards and Technology’s definition of cloud computing identifies “five essential characteristics”:

  • On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  • Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  • Resource pooling. The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. …
  • Rapid elasticity. Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time.
  • Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Source: http://en.wikipedia.org/wiki/Cloud_computing

Top cloud computing washers

Protonet

The latest stroke of genius comes from Hamburg's startup scene in Germany. A NAS with a graphical user interface for social collaboration, housed in a really(!) pretty orange box, is marketed as a private cloud. And of course the PRISM horse is ridden as well.

ownCloud

Apart from the name, there is basically not much cloud in ownCloud. ownCloud is a piece of software with which, not exactly easily, a (real) cloud storage can be built. For that, of course, an operating system, hardware and much more are needed.

Synology

Synology itself writes that it is "… a dedicated Network Attached Storage (NAS) provider." Fine, but why jump on the private cloud train then? Sure, it currently sells well. If the cloud soon becomes the qloud, Synology will definitely sell private qlouds.

D-Link

D-Link is not bad either. A press release from last year generously stated:

D-Link, the networking expert for the digital home, is expanding its cloud family with a new router: with the portable DIR-506L, the personal data cloud simply fits in your pocket.

and

D-Link continuously invests in the development of cloud products and services. … Already available are the Cloud router … multiple network cameras and network video recorders …

D-Link now even cloudifies network cameras and network video recorders to boost sales by riding the cloud train. Mind you, on the backs and at the cost of its customers.

Oracle

Oracle loves hardware and licenses; initially this was plain to see. Preconfigured application servers, rented for a monthly fee and then installed in the customer's data center, were marketed as infrastructure-as-a-service. Slowly, however, the vendor is catching up. It remains to be seen what impact the cooperation with Salesforce will have on Larry Ellison. He is certainly still annoyed that he simply stopped paying attention to Sun's cloud technology after the acquisition.

Further tips are welcome

These are only a few of the providers using the cloud computing train to secure their existing or even a new business model. Anyone who has more tips like these can send them with the subject "cloud washer" to clouduser[at]newagedisruption[.]com.

The IT department will not die out, but it will have to change

Recently an article in the English-language Computerworld made the rounds in which the assumption was made that the IT department could disappear by the end of this decade. T-Systems' Ingo Notthoff subsequently put the question up for debate on Facebook: "Will the IT department die out?" My clear answer was that it will not die out, but will transform into a service broker. Ingo has since written up this discussion in a (German) blog post. At this point I would simply like to continue the topic and clarify my point of view.

Despite the consumerization of IT there is a lack of important knowledge

Even though I appreciate everything that promises to be disruptive in any form, some things are still required despite the massive use of technologies and self-services. I am talking about humans.

I know, and it is true, that cloud services can be used via self-service by virtually anyone in the enterprise to reach their own goals according to their personal requirements, without always waiting for the IT department. But is that reasonable? Can just anyone decide which services are valuable and important for the company merely because they can use an iPhone or a SaaS application? In case of doubt, the knowledge is then obtained entirely from external consultants, which is not necessarily beneficial. Saving on staff costs is all fine and dandy, but it has to stop at some point, because wherever costs are saved, someone else has to work longer. The line-of-business managers will be thankful.

Moreover, just listen to what people in companies say. Of course most would like IT to work faster. But do they also want to take responsibility for it on top of their main tasks? No! This certainly works for a few areas of the company, but most employees have neither the knowledge nor the desire nor the time for it.

IT departments need to reinvent themselves

Having taken up the cudgels for IT departments, I also have to express criticism. Hasn't everyone been annoyed at some point by a slow IT department lagging behind the times? How can it be that you have to wait up to three months for the hardware for a test system(!), and in the end it turns out to be just a virtual machine? Of course such experiences feed those who would prefer to eliminate IT departments from one day to the next, and in this case for good reason.

Nevertheless, every good IT department is very valuable to its business, and the extreme examples fortunately do not prove the rule. However, no IT department should continue down this path; it needs to think about structural change and ultimately implement it. Thanks to the cloud, it has lost its central position in the purchase and operation of IT solutions. Shadow IT has so far proven an effective means for employees to obtain IT services quickly and on demand while bypassing the IT department.

This circumstance needs to be eliminated. Shadow IT is not necessarily a bad thing; at the very least it helps employees get things done quickly in their jobs and in their own way. For every decision maker and IT manager, however, it is like walking over hot coals. There is nothing worse in a company than the left hand not knowing what the right is doing, or IT solutions degenerating into uncontrolled sprawl. This can only be handled by a central organization. But the IT department should not retreat back into its ivory tower; it should proactively communicate with the employees of the business departments to understand their needs and requirements. The IT department is the internal IT service provider for employees and departments and should be positioned within the company as such. In times of internal and external (cloud) services, broker platforms are the tools with which it coordinates and steers these services for the employees.

Coordination is hugely important

Which finally brings us back to the issue of IT responsibility within the company. Depending on which study you want to believe, the private cloud is currently the preferred form of cloud in companies; 69 percent of respondents say so. In addition, in 80 percent of all cases the decisions on purchasing IT solutions are made in the IT departments. At first this sounds like preserving the status quo, but given the current political developments it will probably remain the reality for now. Nevertheless, real private cloud solutions allow enterprises to allocate resources flexibly to their employees through self-service.

But who should build these private cloud infrastructures and who should coordinate them? Only the IT departments can do this; all other employees lack the necessary knowledge and time. IT departments need to learn from the public cloud providers and give the business departments similarly fast and, above all, easy access to IT resources. This only works if they establish themselves as a service broker for internal and external IT services and see themselves as a cooperative partner (service provider).

Disgusting: Protonet and its cloud marketing

First of all, congratulations, Protonet. Raising 200,000 euros on Seedmatch is a big thing. I guess you have read my article about you; it was even linked on Seedmatch. But you are nevertheless still selling your product as a cloud computing product? "Protonet revolutionized the cloud computing market with the simplest server in the world that combines the best of the cloud with the benefits of local hardware." Sorry, I actually find your box interesting, but instead of positioning it on the market as a NAS, you are jumping on the cloud computing train and doing cloud-washing. That is disgusting.

Protonet doesn’t tell the truth!

I had a chat with Ali Jelveh at CeBIT 2012 shortly after Protonet's launch. In that conversation he admitted that Protonet has nothing to do with cloud, but that it sounds good simply because everyone is talking about the cloud.

Protonet deliberately deceives investors

Protonet writes on Seedmatch:

“Credibility
We tell an honest, authentic story. We do not artificially sugarcoat our product. Independence and freedom in the digital age affect everyone. And we have the necessary tools to obtain these basic values.”

But then I have to ask why you are selling your solution as something it is not. Credibility looks different.

And for the investors, the jump onto the cloud train:

“In terms of the individual benefit aspects of the Protonet box, we each compete with a number of competitors. In the home server field, we compete with vendors such as Synology, Iomega, Western Digital and Buffalo. In the social collaboration market, we compete with services like Yammer, Dropbox, Campfire or Teambox for future market share. No other company currently offers our innovative combination of hardware and collaboration platform in a design product.”

“In short: All the benefits of the cloud – without the disadvantages. Our customers regain control of their data and information superiority and enjoy the highest possible data security.”

Furthermore, Protonet has used the growth figures of the cloud computing market to make itself look more attractive, although Protonet has nothing to do with cloud computing!

Cloud computing properties

To identify a true cloud computing offering, you should look for the following characteristics:

  • On Demand:
    I obtain the resources at the moment I actually need them. Afterwards I simply "give them back".
  • Pay as you Go:
    I only pay for the resources I actually use, while I am using them. Billing is per user, per gigabyte, or per minute/hour.
  • No basic fee:
    With a cloud computing offering I do not pay a monthly or annual basic fee!
  • High Availability:
    When I need the resources, they are available at exactly that time.
  • High Scalability:
    The resources adapt to my needs. They grow when I need more performance and shrink when my requirements decrease.
  • High Reliability:
    Resources I use during a specific period of time are actually available when I need them.
  • Blackbox:
    I do not have to care about what is inside the cloud offering. I simply use the service over an open, well-documented API.
  • Automation:
    Once set up according to my needs, I do not have to intervene manually while using the offering. That means I do not have to change the performance of the server or the size of the storage by hand; the cloud provider supplies capabilities (tools etc.) for automation.
  • Access via the Internet:
    This is debatable! However, the cost advantage gained through cloud computing evaporates if, for example, an expensive dedicated leased line is required to use a provider's resources.
  • No additional installations:
    With a SaaS offering, everything is used through the web browser without installing new software components, e.g. a Java (runtime) environment.
The question: can a NAS meet these requirements? No!
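
To make the contrast concrete, here is a minimal sketch of what "On Demand", "Pay as you Go" and "Automation" look like against a real cloud API. boto/EC2 is used purely as an example and the AMI ID is a placeholder; any provider with a comparable API would do.

```python
# Minimal "on demand" / "pay as you go" flow against a cloud API.
# A NAS in a rack cannot offer this: no API call makes the hardware appear,
# get billed by the hour, and vanish again.
import boto.ec2

conn = boto.ec2.connect_to_region("eu-west-1")  # credentials taken from the environment

# On demand: the resource only exists after this call ...
reservation = conn.run_instances("ami-12345678", instance_type="m1.medium")
instance = reservation.instances[0]

# ... is billed per started hour while it runs (do the actual work here) ...

# ... and is gone, and no longer billed, after this call.
conn.terminate_instances(instance_ids=[instance.id])
```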

Protonet is criticized internationally, too

Although I have never spoken with him about Protonet, my friend and analyst colleague Ben Kepes from New Zealand took a critical look at the solution and also put it in the cloud-washing category.

Two passages that Ben, too, highlights clearly:

Protonet makes total sense, it’s a great solution. But it isn’t in any way cloud.

Let’s use the age old acronyms to run a check on this, firstly Cloudcamp founder Dave Nielsen’s OSSM that states that a cloud service should be:

  • On demand
  • Scalable
  • Self service
  • Metered

Well Protonet isn’t scalable (beyond the obvious ability to swap out drives for bigger ones, its service isn’t metered and while some might call it self-service, driving down to your local computer supplies retailer for a new drive doesn’t really cut it when compared to true programmatical access.

So let’s take another try, this time using the father of Cloudonomics, Joe Weinman’s, CLOUD mnemonic. According to Weinman, a cloud service should be;

  • Common infrastructure
  • Location independence
  • Online accessibility
  • Utility pricing
  • On-demand resources

So Protonet scores even lower using this test. Sadly.

Congratulations, Protonet, you have made it onto one of the most influential cloud blogs in the world with Ben. However, he has seen through you as well!

Journalists are not critical enough

The best part is that even German journalists have been taken in by this marketing. A laudatory article was published on CIO.de, and right in the middle of it sits an info box with the title

“The Advantages of Cloud Computing”.

And the description:

“Especially for small and medium-sized businesses, initial investments in IT are an enormous hurdle. Cloud models as an alternative not only offer the opportunity to convert capital costs into operating costs, but also save money at the bottom line.”

A big contrast on a single page.

Update: After the publication of this post, CIO.de justifiably deleted the article named above from its website. The link now redirects to the magazine's homepage.

Honesty, honesty, honesty

The question in a nutshell: what does Protonet have to do with cloud computing or a private cloud? The short and simple answer: nothing.

Honestly, I find it outrageous to jump on the cloud computing train, confuse investors, journalists and the general public with such a marketing bubble, and then cash in. Protonet does not actually need that. The enhanced NAS as a Chatter copy is generally a good idea, and in all seriousness I do not begrudge Protonet its success so far. But building a business on untruths and deception has already backfired in numerous cases.

This post is written with a lot of emotion and sounds harsh. But I mean every single word seriously, because telling untruths for one's own advantage is not a petty offense!