
The Challenges in Making the Move to Virtualization

Virtualization is one of the most important eco-friendly moves a data center can make and, undeniably, one of the smartest from the pure IT-technology perspective. In this feature, Andrew Binstock explains simple solutions to some of the stumbling blocks in going virtual.

(Updated on July 24, 2024)

During the last 18 months, I have discussed virtualization from multiple angles: what it is, how it saves energy and dollars, the various software packages that deliver virtualization, and the typical hardware platforms for running them. This coverage arises from the conviction that virtualization is one of the most important eco-friendly moves a data center can make and, undeniably, one of the smartest from the pure IT-technology perspective. Virtualization solves lots of problems, not just the core green concerns.

But as veteran IT managers know, every solution introduces challenges of its own. These challenges (let's call them what they really are: problems) are rarely covered in the popular IT press, but they are quite real. Taking them into account as you begin a pilot to test virtualization at your site helps you avoid surprises down the road.

Level-Setting Expectations
1) Migrating applications to a virtual host does not change the management profile of the application. You are not lowering the administrative overhead of an app by consolidating it. Only the hardware support is reduced. And, as we’ll see in my next column, managing virtual machines introduces new management issues that you need to anticipate.

2) When you consolidate various servers onto a single server via virtualization, you exchange many points of failure, inefficient as they are, for a more efficient single point of failure. While most servers intended for virtualization are carefully built with redundancy in mind so failure is not likely, there are times when you might want to take a server down intentionally, such as for maintenance or to upgrade the system. You need to be careful that you don’t combine mission-critical applications from different systems on the same server such that you can never bring it down. Choose carefully which apps you put on which servers so that their uptime needs don’t conflict.
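One way to make that placement choice concrete is to compare the allowed-downtime windows of the apps you plan to co-locate. The sketch below, with hypothetical app names, hosts, and maintenance hours, intersects each host's windows and flags hosts that could never be brought down:

```python
# Sketch: flag hosts where consolidated apps leave no shared maintenance window.
# The app names, hosts, and downtime hours below are hypothetical examples.

def shared_window(apps):
    """Intersect the allowed-downtime hours (0-23) of all apps on one host."""
    window = set(range(24))
    for hours in apps.values():
        window &= hours
    return window

placement = {
    "vhost1": {
        "billing": set(range(1, 5)),     # may only go down 01:00-04:59
        "payroll": set(range(2, 6)),     # may only go down 02:00-05:59
    },
    "vhost2": {
        "intranet": set(range(20, 24)),  # evenings only
        "crm": set(range(8, 12)),        # mornings only -> conflict
    },
}

for host, apps in placement.items():
    window = shared_window(apps)
    if window:
        print(f"{host}: maintenance possible at hours {sorted(window)}")
    else:
        print(f"{host}: WARNING - no common downtime window; reconsider placement")
```

Here vhost1 retains a small shared window, while vhost2 combines apps whose uptime needs conflict and so could never be taken down for maintenance.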

3) The newly hosted applications might not run any faster. Performance will depend on the hardware layer and on the workload of the other virtual machines on the system. Make sure that you balance the workloads so that it’s unlikely that two apps would be competing fiercely for a single resource, thereby killing their performance and possibly starving other applications. The ideal way to pilot virtualization is to first consolidate applications that don’t have sudden bursts of intense resource usage. If intense resource usage is inevitable, provide plenty of the resource. The three scarcest resources tend to be (in no particular order): CPU, RAM, and network I/O. Disk I/O can be a terrible bottleneck, of course, but most virtualization hosts solve this by keeping data remote (on spindle farms, NASs, etc.) and accessing it via network or fibre channel. So, disk I/O is less frequently a problem. CPU and network are fairly obvious constraints, but RAM requires some explanation.
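A rough capacity check along these lines can be done on paper or in a few lines of code. This sketch, using entirely hypothetical host capacities and VM peak profiles, sums the peak demand for each scarce resource and reports which ones would exceed a planned headroom margin:

```python
# Sketch: check whether the combined peak demands of candidate VMs fit a host.
# The capacity figures, VM profiles, and headroom margin are hypothetical.

HOST = {"cpu_cores": 16, "ram_gb": 64, "net_mbps": 1000}

VMS = {
    "web":  {"cpu_cores": 4, "ram_gb": 8,  "net_mbps": 300},
    "mail": {"cpu_cores": 2, "ram_gb": 4,  "net_mbps": 200},
    "db":   {"cpu_cores": 8, "ram_gb": 32, "net_mbps": 400},
}

HEADROOM = 0.8  # plan to use at most 80% of each resource at peak

def overcommitted(host, vms, headroom=HEADROOM):
    """Return the resources whose summed peak demand exceeds planned headroom."""
    return [
        res for res, cap in host.items()
        if sum(vm[res] for vm in vms.values()) > cap * headroom
    ]

print(overcommitted(HOST, VMS) or "fits within headroom")
```

With these made-up numbers, CPU and network I/O would be over budget at peak while RAM fits comfortably, which is exactly the kind of imbalance you want to discover before consolidating, not after.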

Most apps you’re consolidating can run on less RAM than they had on their original systems. Try to figure out the maximum RAM each one could need and allocate only that much to the VM. Lots of legacy programs don’t require gigabytes of RAM to run well, and a frequent error is to give migrated apps too much. A caveat lurks here, however: when you start up a virtual machine, the hypervisor allocates its full complement of RAM, even if only a small portion of it is ever actually used. Consequently, even conservative allocations can quickly consume all the RAM on a host. Figure out ahead of time how much RAM your consolidated applications will require and make sure your host supports that number.
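Because the hypervisor reserves each VM's full RAM grant at start-up, the number to budget is the sum of allocations, not the sum of typical usage. A minimal sketch, with hypothetical allocations and a rough overhead reserve:

```python
# Sketch: budget the sum of RAM *allocations*, since the hypervisor commits
# each VM's full grant at boot. All figures below are hypothetical.

HOST_RAM_GB = 64
HYPERVISOR_OVERHEAD_GB = 4  # rough reserve for the hypervisor itself

vm_allocations_gb = {"legacy-erp": 2, "file-server": 4, "web": 8, "db": 32}

allocated = sum(vm_allocations_gb.values()) + HYPERVISOR_OVERHEAD_GB
print(f"{allocated} GB of {HOST_RAM_GB} GB committed at boot")
assert allocated <= HOST_RAM_GB, "host cannot start all VMs simultaneously"
```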

One fortunate answer to resource constraints is that virtualization lets you easily move virtual machines between hosts, so if one system becomes overloaded, you can move a resource hog to another system. This could not be done easily without virtualization.

Making the Move
If you factor the previous constraints into your project plan, you next come to the problem of migrating an application to a virtual machine. There are good tools today, especially from VMware, that help with this. The process is generally referred to as P2V (physical to virtual). VMware used to offer a tool called P2V, but it has since been replaced by VMware Converter, which is available at no charge.

I suggest that you initially convert simple apps — ones using recent versions of Windows, for example — so that you can migrate them quickly and then easily tell if they are working as expected. All apps should be tested and verified after a P2V conversion. Truly legacy apps running on very old versions of Windows or even MS-DOS need to be checked even more carefully. And even if you’re satisfied that they are running well, I would not discard the old servers until you are certain the migration is complete and correct. However, do turn those servers off!
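A quick way to start that post-conversion verification is a smoke test that confirms each migrated app's service port answers on its new virtual host, before deeper functional testing. This is only a sketch; the host names and ports are hypothetical placeholders for your own environment:

```python
# Sketch: minimal post-P2V smoke test checking that migrated services answer
# on their expected TCP ports. Hosts and ports below are hypothetical.
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

checks = [("vhost1.example.com", 80), ("vhost1.example.com", 443)]
for host, port in checks:
    status = "OK" if port_open(host, port) else "FAILED"
    print(f"{host}:{port} {status}")
```

A reachable port is not proof the app works, only that the VM booted and the service started, so follow it with real application-level tests before powering down the old hardware for good.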

Next month, I will examine some problems you’re likely to face as you start running more than just a pilot virtualization project. That column will conclude this excursion into virtualization, which is, once again, probably the most powerful technique currently available to data centers to lower their energy consumption. After that, I’ll move back to green topics that are closer to the hardware and ferret out other opportunities for going green. Until then, happy holidays!
