A User's Guide to Implementing Virtualization
One of the most compelling technologies for consolidating servers has a surprising aspect: there are many available options, some of which are free. That's good news, but how do you put this technology to work?
In my last column, I explained the basics of virtualization: how it enables sites to run multiple virtual machines (VMs), each containing its own operating system and applications, on a single hardware platform. I also showed some examples of virtualization on the desktop.
Desktop virtualization enables a user to run VMs on a PC or notebook. This arrangement is surprisingly useful. Software developers, for example, can test their programs on other operating systems right on their development workstation. Likewise, users of less-common operating systems can create an instance of Windows in a virtual machine to run Windows-specific applications, such as Microsoft Office. A defining example of this usage is VMware Fusion, which enables owners of Intel-based Apple Macs to run Windows, Linux, and Solaris x86 applications on their Macintosh, while running Mac OS X (and without rebooting).
Before moving on to server virtualization, I should point out one other important use for desktop virtualization: avoiding corruption of your current system. Suppose, for example, that you want to evaluate several desktop packages (let's say various project-management tools). You should not install them on your principal PC, for several reasons: complex packages generally make lots of changes to your configuration, and they also install background programs, such as databases that start up automatically when you boot Windows, utilities that check the vendor's site for software upgrades, and so on.
The problem is that when you remove the software, the uninstall procedure invariably leaves traces of the package: it leaves some setting turned on or performs some other irritating action that you can never quite undo completely. To solve this problem, install evaluation copies in VMs, one package per VM. This way, when you're done with the evaluation, you simply discard the VM and your base system remains intact. This approach has an additional benefit: you can run the packages simultaneously in separate VMs, so you can compare features side by side if necessary.
There are lots of good reasons to examine desktop virtualization, and it's a theme I am likely to explore in future columns because the technology is advancing so quickly and its benefits are increasingly compelling. The key products for desktop virtualization are VMware Workstation, which runs on Windows; VMware Fusion for Mac OS X; Virtual PC 2007 from Microsoft, which runs on Windows and is completely free; and Xen Express, which runs on Linux and is also free.
Server Virtualization
Most IT sites, however, are more interested in server-based virtualization. In this scheme, an entire machine is dedicated to hosting multiple VMs. By migrating applications from other servers into VMs on such a host, sites can consolidate servers, which results in lower power consumption and, often, better performance. This last point might be surprising: how could software running in a VM hosted on another platform be faster than the same software running on a native machine?
The curious thing is that virtualization rarely exacts more than a 10 percent performance penalty. So if the host system is even modestly faster than the original machine (a good bet, if it's newer), the application is likely to run as fast or faster in the VM than on its current system. Two exceptional cases are applications that handle heavy network traffic or that perform substantial amounts of disk I/O. These applications compete with other virtualized applications for those resources and so will see performance degradation; they are not good candidates for consolidation via virtualization.
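To see how the arithmetic works out, here is a minimal back-of-the-envelope sketch in Python. The figures are assumptions chosen for illustration, not measured benchmarks: effective throughput in the VM is roughly the host's native speed multiplied by whatever fraction survives the virtualization overhead.

```python
# Back-of-the-envelope estimate of consolidation performance.
# All figures below are assumptions for illustration, not benchmark results.

old_server = 1.00   # throughput of the application on its current machine (baseline)
new_host   = 1.20   # a newer host, assumed to be 20 percent faster natively
vm_penalty = 0.10   # assumed virtualization overhead of roughly 10 percent

vm_throughput = new_host * (1 - vm_penalty)

print(f"Estimated throughput in the VM: {vm_throughput:.2f}x the old server")
# With these assumptions the application still comes out about 8 percent ahead
# of the old server, even after paying the virtualization penalty.
```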
Advances in today’s hardware, particularly the advent of multicore processors, are a boon to virtualization. On quad-core chips, which have four quasi-independent processing units, each VM can run on its own core or pair of cores, delivering tremendous performance at very favorable price points. As the number of cores increases, so does the number of cores that can be dedicated to a single VM, boosting performance even further.
So the first step in server virtualization is to run it on a system with as many cores as possible. Intel-based servers and desktop systems today use one or more quad-core chips. AMD is expected to ship its quad-core chips in August or September, which will likely produce a jump in performance and a drop in prices.
Another processor feature to look for is hardware support for virtualization. On Intel chips this technology is called Intel VT (its x86 incarnation is VT-x); on AMD chips it is known as AMD-V. Most new processors from both vendors include it, but it's important to verify that your host system's processors do. Without going into too much detail, this extra silicon logic helps virtualization run much, much faster.
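If the host will run Linux, one quick way to confirm that a processor exposes these extensions is to look for the vmx flag (Intel VT) or the svm flag (AMD-V) in /proc/cpuinfo. The Python sketch below does just that; it is purely illustrative and assumes a Linux host (whether the feature has been disabled in the BIOS is a separate check).

```python
# Check /proc/cpuinfo on a Linux host for hardware virtualization support.
# "vmx" marks Intel VT-capable processors; "svm" marks AMD-V-capable ones.

def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"Intel VT": "vmx" in flags, "AMD-V": "svm" in flags}
    return {}

if __name__ == "__main__":
    support = virtualization_flags()
    if any(support.values()):
        print("Hardware virtualization support detected:", support)
    else:
        print("No vmx/svm flag found; this processor does not report "
              "hardware virtualization support.")
```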
The Software
Once you have chosen a hardware system, you’ll need server virtualization software. You have several choices that depend in part on the operating system you decide to run on the hardware, if any. If you opt to run Linux, you can choose from two different packages from XenSource: Xen Server and Xen Enterprise. These packages differ primarily in how many concurrent VMs they support.
On Windows-hosted systems, you have one principal choice: Microsoft Virtual Server 2005 R2, which is available at no cost from Microsoft. If you're considering a pilot project for small-scale server virtualization, this product is a good place to start. It offers good performance and is fairly easy to configure, although it lacks some of the enterprise features of its principal competitor, VMware's ESX Server.
ESX Server runs on bare metal; in other words, it has no underlying operating system. This gives the virtualization layer complete control of the hardware and thereby better performance. ESX Server is not free, and it forms the basis of a larger package called VMware Infrastructure 3, which bundles extensive support for enterprise virtualization needs, including high-availability support, virtualized SMP, consolidated backup, and other tools.
An interesting discussion thread comparing VMware ESX Server and Microsoft Virtual Server can be found on the Server Virtualization Blog. More important than the article itself are the many comments posted by users of both server products describing the comparative performance they obtained. As you'll see, some data centers have successfully consolidated impressive numbers of applications onto a single hardware platform.
Virtualization is making fast inroads into the data center, and its uses are expanding daily. Many pundits expect that most data centers will eventually run a majority of their servers as VMs, in part because of virtualization's tremendous flexibility and, of course, the savings it offers by reducing the number of stand-alone machines and thereby lowering energy and space consumption. For these reasons, enterprises today are starting to roll out pilot projects to assess the benefits virtualization will bring them.
Andrew Binstock is the technology editor at GreenerComputing.com. His blog on software and technical matters can be found at http://binstock.blogspot.com.
