The latest IT trend in infrastructure is “the cloud”. But what is it all about? Well, if you ask ten different people what cloud computing is, you’ll probably get seven different definitions – from “a service over the internet” to “a user-friendly version of grid computing”, “a resource pool with on-demand resource allocation”, “self-service internet infrastructure”, and so on. The problem is that all these definitions are correct. Depending on how you look at it, “the cloud” can be all of that and much more.
This is one of the reasons this article is called “clouds computing”. There isn’t just one type of cloud: there is the private cloud, there is the public cloud, and there is the hybrid cloud, which combines the two in various proportions. The difference between a private cloud and a public cloud comes down to the firewall (which restricts outside access) and the premises where the cloud is physically located.
The mainstream concept involves a service delivered over the Internet, via web applications/web services, from an off-site (third-party) provider who bills the user based on usage of computational resources; depending on the contract, the billing criteria can include processing power, storage and uptime.
An option for “taking cloud computing home” instead of turning to an internet-based service, private clouds aren’t really clouds. The users of private clouds still have to buy, build, and manage the infrastructure, which means they don’t benefit from the lower TCO (total cost of ownership) that characterizes the cloud concept – in any respect: neither in up-front costs nor in subsequent operational costs and effort.
A combination of the two above – public and private – with part of the infrastructure self-hosted and the rest relying on an external service provider. This approach suits situations where sensitive data can’t be moved off-premises but, at the same time, the less sensitive part of the business can run on third-party equipment to gain performance or reduce running costs.
Like many things billed as new, cloud computing isn’t new at all from a technical point of view. It’s been called “virtualization” all along – until now.
So why the rebranding? Because it’s centralized, it’s bigger and, of course, better – and by better we mean “smarter”. The novelty of the cloud concept lies less in the technical aspects than in the business model: what is really new here is the fully externalized service approach that lets you exchange all up-front acquisition costs and subsequent operational costs (and hassle) for a pay-per-usage fee.
From a technical point of view, compared with the usual virtualization set-up, the cloud is fully centralized. In a classic virtualized environment you would have one console for virtual machine management, another console for storage (SAN) management, and various other consoles to administer the remaining parts of the environment; at this new level of virtualization – cloud computing – everything can be done much more easily from a single point, usually a web interface. There are two major advantages: security (fewer points of access to manage) and wide availability.
Is it bigger? It is the environment that covers all the other environments you used to connect to in order to provision a new machine or a new testing environment. And it does something more: it can cover not only one site (one datacenter) but all the sites your company has, and it can suggest the best place to start new machines based on resources provisioned versus resources available (once the machines are up, the cloud will migrate the environment by itself so as to exploit the hardware platform at its maximum capacity).
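That placement decision can be sketched very roughly as follows. This is a minimal illustration, not any particular product’s algorithm: the site names, capacity figures and the “most free headroom” policy are all hypothetical.

```python
# Illustrative sketch: pick the site with the most free capacity
# for a new VM request. Site data and the ranking policy are hypothetical.

def pick_site(sites, vm_cpu, vm_ram):
    """Return the name of the site best able to host the new VM,
    or None if no site has room."""
    best = None
    best_free = -1
    for name, (cpu_total, cpu_used, ram_total, ram_used) in sites.items():
        free_cpu = cpu_total - cpu_used
        free_ram = ram_total - ram_used
        if free_cpu >= vm_cpu and free_ram >= vm_ram:
            # rank eligible sites by combined free headroom (simplistic)
            free = free_cpu + free_ram
            if free > best_free:
                best, best_free = name, free
    return best

# sites: name -> (cpu_cores_total, cpu_used, ram_gb_total, ram_used)
sites = {
    "datacenter-a": (64, 60, 256, 200),
    "datacenter-b": (64, 20, 256, 64),
}
print(pick_site(sites, vm_cpu=8, vm_ram=16))  # datacenter-b: more headroom
```

A real cloud layer would refine this with affinity rules, licensing and network constraints, but the core idea – compare provisioned against available resources across all sites – is the same.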
It can also grow or shrink the provisioned resources to match consumption – scaling up for the peaks of the day and down for the lows of the night – so power and cooling costs decrease, freeing some budget for other projects.
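The grow-and-shrink behaviour boils down to a feedback loop like the one below. This is a bare-bones sketch under assumed thresholds (30% and 75% CPU utilization), not how any specific cloud implements its elasticity.

```python
# Illustrative sketch: threshold-based scaling of a provisioned
# instance pool. Thresholds and load figures are hypothetical.

def scale(instances, cpu_utilization, low=0.30, high=0.75,
          min_instances=1, max_instances=10):
    """Grow the pool when average CPU is high, shrink it when low."""
    if cpu_utilization > high and instances < max_instances:
        return instances + 1
    if cpu_utilization < low and instances > min_instances:
        return instances - 1
    return instances

print(scale(4, 0.85))  # daytime peak: grows to 5
print(scale(4, 0.10))  # overnight low: shrinks to 3
```

Run periodically against measured utilization, a loop like this is what lets the idle machines be powered down overnight.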
And how can a cloud be smarter? Well, the main novelty the cloud brings here is the idea of an interface that allows quick execution of business workflows on top of existing virtualization structures. Sure, that sounds good, but I bet you’re wondering what it really means, so let me give you a specific example. Say a company wants to move its test and development environment to one of these smart clouds.
The difference from a standard virtualized environment is that the cloud interface allows integrated processes to be defined. Whenever the test team needs a new test environment instance, it can be created through a simple wizard in a few clicks and be fully ready for use in a matter of minutes. Or, upon clicking the “build” button in the virtualized development environment, the code is built, a new virtual machine instance is created, the build is installed automatically, and the newly installed environment becomes available to the distributed team.
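The “build button” workflow above chains steps an admin used to perform by hand. The sketch below shows the shape of such an orchestration; every function is a stand-in stub (no real cloud or CI API is being called), and all names are invented for illustration.

```python
# Illustrative sketch of the automated build-and-deploy workflow.
# All steps are stubs standing in for real cloud/CI operations.

def build_code(branch):
    return f"artifact-from-{branch}"           # stand-in for a CI build

def create_vm(name):
    return {"name": name, "state": "running"}  # stand-in for provisioning

def install(vm, artifact):
    vm["installed"] = artifact                 # stand-in for deployment
    return vm

def publish(vm):
    # URL shared with the distributed team (hypothetical domain)
    return f"http://{vm['name']}.test.example/"

def on_build_button(branch):
    """Everything the 'build' button triggers, end to end."""
    artifact = build_code(branch)
    vm = create_vm(f"test-{branch}")
    install(vm, artifact)
    return publish(vm)

print(on_build_button("main"))  # http://test-main.test.example/
```

The point is not the stubs themselves but that the cloud layer exposes this whole chain as one operation instead of four separate consoles.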
For other companies, the cloud can mean different business workflows, but the principle is the same: fully centralized, smart, usable virtualization.
So, to summarize, from a technical point of view “the cloud” is the same bunch of equipment as in the usual datacenter, virtualized at all levels (computing power, storage, network), hosted on internal premises or at external partners (and delivered as a service), with a new layer of management applications on top, implementing business processes and workflows.
End of Part 1