This post kicks off a series on the various types of virtualization, addressing some questions I’ve been asked recently on the topic. I’ll start here with a high-level view of virtualization in general and do a deeper dive into each area in future posts.
At its root, virtualization means creating a software environment that runs a computing service (a server, desktop, application, etc.) isolated from the physical hardware it would traditionally run on. As a simple example: instead of spending money on a new computer to run a new web server, you might run the server in a virtual environment on an existing physical machine.
While some types of virtualization date back to the 1960s, newer types continue to be developed. I see the current state of virtualization breaking down into four main categories: platform virtualization, desktop virtualization, application virtualization, and cloud computing.
Here’s a quick breakdown of each and what it is typically used for:
Platform (or "server") virtualization takes a computer operating system (Windows, Linux, etc.) and runs it in a software host instead of directly on a physical machine. The virtualization software, often called a hypervisor, runs on the physical machine and handles the hardware requests of the "guest" operating system in a way that makes the guest think it is running on a physical box. This lets an IT department or an individual do some interesting things, like:
- Run many servers on a single hardware box, making better use of its computing power, easily creating new servers from pre-configured images, and dynamically suspending or removing servers to conserve resources when needed.
- Run guest desktop operating systems on a running desktop or laptop PC for special cases: testing (test or development operating systems), running an alternate operating system (e.g. Windows running in Parallels on a Mac), or running an old operating system to keep a legacy application working.
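To make the first scenario concrete, here is a sketch of what spinning up a new guest server might look like with VirtualBox’s VBoxManage command line. The VM name, OS type, and sizes are placeholders for illustration, not a recipe for any particular environment:

```shell
# Sketch: create and boot a new guest server VM on an existing physical host.
# "test-server" and the disk/memory sizes are hypothetical values.
VBoxManage createvm --name "test-server" --ostype Ubuntu_64 --register
VBoxManage modifyvm "test-server" --memory 2048 --cpus 2

# Give the guest a virtual disk; the hypervisor maps its hardware
# requests onto the real machine's resources.
VBoxManage storagectl "test-server" --name "SATA" --add sata
VBoxManage createmedium disk --filename test-server.vdi --size 20480
VBoxManage storageattach "test-server" --storagectl "SATA" --port 0 \
    --device 0 --type hdd --medium test-server.vdi

# Start the guest with no visible console window (typical for servers).
VBoxManage startvm "test-server" --type headless
```

The same pattern, scripted against a library of pre-configured disk images, is what makes dynamic creation of new servers practical.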
Desktop virtualization gives a user a desktop operating system running elsewhere (typically in a data center) with remote access to the desktop environment. This can be used to do things like:
- Host a personal desktop for a set of users who don’t use a computer that often during the day, letting them share physical computers while still getting a personalized experience.
- Let users reach a personal workspace from many different locations, such as a home PC, a work PC, or a public library PC.
- Let users reach their personal desktop from a variety of devices, like laptops, thin-client terminals, or a terminal app on an iPad.
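From the user’s side, reaching a hosted desktop often comes down to a single remote-display connection. As one illustration, this is what an RDP connection with the open-source FreeRDP client might look like; the server name and username are placeholders:

```shell
# Sketch: connect to a hosted virtual desktop over RDP.
# "vdi.example.com" and "alice" are hypothetical values.
xfreerdp /v:vdi.example.com /u:alice /size:1280x800
```

The desktop itself runs in the data center; only screen updates and input travel over the connection, which is why modest devices like thin clients work well.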
Application virtualization allows users to get to one or more applications that aren’t installed on the main operating system of the computer they are using. Applications can be specially packaged and hosted either locally or remotely and made to appear as if they are just installed and running normally. This allows scenarios like:
- Rarely used business applications can be hosted in a datacenter and streamed in real time to a user’s computer, without ever being installed locally.
- Incompatible applications can be run in a special "sandbox" and made to appear as if they are executing normally. No local installation needs to occur, limiting the impact on the local operating system or on other locally installed, potentially incompatible, applications.
Applications and even server services in the "cloud" are in some ways an extension of virtualization, though they pull in other computing solution domains as well. I include cloud computing in a discussion of virtualization because the problems being solved are often similar. The definition of "cloud computing" continues to evolve, but it usually means consuming a set of virtual resources over the Internet as a utility-style service. An application might be hosted, or servers partitioned, in a cloud service as if the host environment were an infinite pool of virtual computing resources. This enables scenarios like:
- Applications, like a web shopping site or a service like Twitter, can scale up dynamically as traffic load increases, relying on the virtual infrastructure of the cloud service to provide the needed computing power.
- Businesses can set up virtual server services, like file sharing and data storage, in a cloud infrastructure and avoid hardware and platform management costs, instead paying by usage, e.g. the amount of data transferred in and out or the total amount of data stored in the service.
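The elastic-scaling idea above can be sketched in a few lines of Python. This is just an illustration of the core decision an autoscaler makes, not any particular cloud vendor’s API; the function name and the numbers are made up:

```python
import math

def desired_instances(requests_per_sec: float,
                      capacity_per_instance: float,
                      min_instances: int = 1,
                      max_instances: int = 20) -> int:
    """Return how many virtual servers to run for the current traffic load.

    The cloud service treats servers as an elastic pool: need more capacity,
    ask for more instances; traffic drops, release them and stop paying.
    """
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

# As traffic grows, the pool of virtual servers grows with it.
for load in (50, 500, 5000):
    print(load, "req/s ->", desired_instances(load, capacity_per_instance=250))
```

The cap on `max_instances` is also where the pay-by-usage model shows up: the pool is "infinite" in principle, but in practice you bound it to bound your bill.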
With that as background, I’ll break for now and leave the deeper dive into each of these four types of virtualization for future posts. I’ll likely present a view of the current state of each area, a look at the key vendors and what they offer, and a little "the good, the bad, and the ugly" of what each means to businesses trying to benefit from virtualization.
Here are some additional resources of interest that will play into this analysis:
Microsoft Virtualization and Cloud – http://www.microsoft.com/virtualization
Citrix Virtualization – http://www.citrix.com/English/ps2/products/product.asp?contentID=683148
VMware Virtualization – http://www.vmware.com
Oracle Virtualization – http://www.oracle.com/us/technologies/virtualization/index.htm
IBM Virtualization – http://virtualizationconversation.com
IBM Cloud – http://www.ibm.com/ibm/cloud
Amazon Cloud – http://aws.amazon.com/ec2
Google Apps – http://www.google.com/apps/intl/en/business/index.html
Salesforce.com – http://www.salesforce.com