Virtualization How-To Part 1

So your company has gone through some growth. When you first started, you had that one server with every single component installed on it. The years have been kind to you, and you're now managing 80+ servers with separate functionality and role requirements. Failover and recovery are highly desired, but your business is moving fast and you need something in place to accomplish this.


Step 1 – Run a Virtualization Assessment

Assessments can get complex, but in a nutshell they take a look at your existing environment and tell you which servers are good candidates for virtualization. No one likes to re-buy all their hardware when there's plenty available to use, right? There are plenty of tools to do this and plenty of companies that will assist (including our awesome engineering team at Accelera Solutions), but if you're looking for the quick answer, check out the FREE Microsoft Assessment and Planning (MAP) Toolkit. You can run this tool inside your environment and it will give you the CliffsNotes version of which servers are candidates for virtualization.
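If you just want a rough first pass before running MAP, a few lines of Python can sample a server's utilization for you; lightly loaded boxes are the usual first candidates. This is a minimal sketch, assuming the third-party psutil package is installed, and the thresholds are made-up numbers you should tune:

```python
# Quick-and-dirty utilization sampler: low average CPU plus plenty of free
# RAM usually marks a server as a good virtualization candidate.
# Assumes the third-party psutil package (pip install psutil).
import psutil

SAMPLES = 12        # number of CPU samples
INTERVAL_SECS = 5   # seconds per sample (60s total here)

cpu_readings = [psutil.cpu_percent(interval=INTERVAL_SECS) for _ in range(SAMPLES)]
avg_cpu = sum(cpu_readings) / len(cpu_readings)
mem = psutil.virtual_memory()

print(f"Average CPU: {avg_cpu:.1f}%")
print(f"RAM in use: {mem.percent:.1f}% of {mem.total // 2**30} GiB")

# Illustrative thresholds only -- a real assessment samples over days, not minutes.
if avg_cpu < 30 and mem.percent < 50:
    print("Looks like a virtualization candidate.")
else:
    print("Heavily utilized -- assess more carefully before virtualizing.")
```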

Step 2 – Choose the Right Product

Choosing a product really depends on many factors. What is your budget like? Do you have the in-house expertise to configure something a little more complex? Either way, when you start looking you're likely to come across the key players:

1. VMware vSphere

2. Citrix XenServer

3. Microsoft Hyper-V

Each product has its own key benefits, which you can quickly find on the vendors' websites. Here's the quick and dirty on each…

VMware vSphere – This one has been the giant in the industry for a while. It's well established and contains a host of features. It does require a vCenter Server AND its license to unlock many of the advanced features (vMotion live migration, management of multiple hosts, etc.). The price can be a little steep for users just getting into virtualization, and management can be strenuous if you don't know what you are doing.

Citrix XenServer – This one joined the ranks of virtualization over the past couple of years and has quickly risen as a player. Key features like live migration and iSCSI back-end storage are included at the price of FREE. If you're looking for memory sharing, performance metric monitoring, or high availability, there are paid packages for you. This product will let you get an environment up and running with the key virtualization features without the need to pony up the cash.

Microsoft Hyper-V – It's now included in the Windows Server 2008 operating system, and the server license even covers virtual instances: four with the Enterprise edition, unlimited with the Datacenter edition. If you already own Datacenter, this one seems like a no-brainer. You can enable Microsoft failover clustering for reliability at no extra cost, but some of the other features require Microsoft SCVMM (System Center Virtual Machine Manager), so read the fine print carefully.
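If you're torn between them, it can help to put rough numbers on your priorities. Below is a minimal weighted-scoring sketch in Python; the criteria, weights, and scores are illustrative placeholders (not vendor ratings), so substitute your own after doing the research:

```python
# Hypothetical weighted decision matrix for picking a hypervisor.
# Weights and scores are illustrative placeholders, NOT vendor ratings.
weights = {"budget_fit": 0.4, "feature_set": 0.35, "ease_of_mgmt": 0.25}

# Score each product 1-10 per criterion based on YOUR research and needs.
scores = {
    "VMware vSphere":    {"budget_fit": 4, "feature_set": 9, "ease_of_mgmt": 6},
    "Citrix XenServer":  {"budget_fit": 9, "feature_set": 7, "ease_of_mgmt": 7},
    "Microsoft Hyper-V": {"budget_fit": 8, "feature_set": 7, "ease_of_mgmt": 8},
}

for product, s in scores.items():
    total = sum(weights[c] * s[c] for c in weights)
    print(f"{product}: {total:.2f}")
```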

Step 3 – Choose Your Storage

Your choice of storage (where the actual VM files live) is important, believe it or not. If you choose slow storage, no matter how much RAM and CPU you throw at the VM, it's going to run slow. When weighing your storage options, keep an open mind about various vendors and make sure to do your research. Of course, if you would like help with that research, Accelera has many vendors we can recommend for your specific environment. If you are looking for a quick fix for a test environment, consider local storage on solid state or SAS drives. Keep in mind, though, that the products mentioned above require shared storage for the more advanced features to work properly (failover, live migration, etc.).
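Before committing, it's worth measuring what "slow" actually means on the hardware in front of you. Here's a crude sequential-write benchmark in Python; it's a sketch, not a replacement for a proper tool like Iometer, and the file path and size are placeholders:

```python
# Crude sequential-write benchmark for comparing candidate storage.
# TEST_PATH and SIZE_MB are placeholders -- point TEST_PATH at the volume under test.
import os
import time

TEST_PATH = "testfile.bin"
SIZE_MB = 256
CHUNK = b"\x00" * (1024 * 1024)  # 1 MiB per write

start = time.time()
with open(TEST_PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())  # force data to disk so the timing is honest
elapsed = time.time() - start

print(f"Wrote {SIZE_MB} MiB in {elapsed:.2f}s ({SIZE_MB / elapsed:.1f} MiB/s)")
os.remove(TEST_PATH)
```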

Step 4 – Begin Virtualization

It's time to install the hypervisor, configure it, and of course start up some of your VMs. It sounds complicated, but depending on the product choice you made in step 2, it can be done in as little as 20 minutes. Most of the products offer a step-by-step wizard and quick-help options that will answer the majority of your questions up front. One pre-flight check is worth doing first: confirm the host CPU supports hardware virtualization (Intel VT-x or AMD-V), which all three products rely on for at least some guest types; see the sketch below.
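Here is a minimal check for the CPU flags on a Linux box; on Windows hardware you'd look in the BIOS/UEFI settings or use a vendor CPU utility instead:

```python
# Check /proc/cpuinfo for hardware virtualization support (Linux only).
# vmx = Intel VT-x, svm = AMD-V.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x supported")
elif "svm" in flags:
    print("AMD-V supported")
else:
    print("No hardware virtualization flags found -- check BIOS/UEFI settings")
```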

So now that the hypervisor is installed, let's get some servers on it. But who wants to rebuild all of their servers just to virtualize them? This is where the conversion tools come in. Each product has its own physical-to-virtual (P2V) conversion tool for turning existing workloads into VMs. A quick tip: shut the physical server down when possible and boot it from the conversion disk; that way no services are running and no files are locked during the copy. Of course, if your environment needs to remain "live" at all times, you can give the live conversion tools a try, but sometimes they miss files that are locked. As always, make sure to test after the conversion.
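Testing can be as simple as confirming the converted VM still answers on the ports its services use. Here's a minimal smoke-test sketch; the host name and port list are placeholders for your own environment:

```python
# Post-conversion smoke test: confirm the new VM answers on key service ports.
# HOST and PORTS are placeholders -- substitute the converted VM's address and
# whatever its workload actually listens on.
import socket

HOST = "converted-vm.example.local"
PORTS = {3389: "RDP", 445: "SMB", 80: "HTTP"}

for port, name in PORTS.items():
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{name} ({port}): open")
    except OSError:
        print(f"{name} ({port}): NOT reachable")
```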

Of course you are never limited to just the products offered by the vendor you selected. A quick Google search on “Convert Physical to Virtual” will turn up many third party products that can do the same thing as the vendor’s tools but “better”… at least according to their websites.

There you have it: some of your servers are now virtualized. Next week we will look at solving those pesky desktop imaging problems.

Virtualization How-To Intro

I remember my first IT job. My company had a unique business model: we'd take over all aspects of a client's IT infrastructure and deliver it remotely. User access came through the client's internet connection and Citrix MetaFrame published desktops. Everything was "virtualized" (although this term wasn't used at the time) and users worked 100% from the published desktop, which today we'd call cloud computing.


Great model when we were only supporting 100 users! But before we knew it we had 80 servers supporting 1,500 users, and we constantly experienced some sort of hardware failure, a Windows patch that broke half the environment, or user requests for application updates. Additionally, every new client required some sort of unique desktop or image, followed by applications (some standard...some proprietary) and, of course, additional hardware.

If two clients had the same application - say QuickBooks - and it needed to be updated, we’d need to update all instances of the application. To make matters worse, if the update broke a critical function of the application, users were unable to work until it was fixed across the board. A restore process could take 4 hours…with users unable to work throughout.

I'm now working with a consulting company, and I'm simply amazed that businesses, healthcare institutions, and even schools have the same problems I was working through seven years ago! Here's a perfect example from one of my recent trips…

Joe down in IT pushes out a patch (which had gone through only limited user testing) to all the desktops late one evening. The next morning none of the users can access the company's CRM. The only way to return to a functioning product: remove the patch from each desktop and repair the CRM application. Downtime for the organization was more than a week, as multiple technicians had to visit individual PCs to properly repair the product. One change - by one person - cost the company hundreds of man-hours and thousands in lost revenue.

The truth is that this is 2010, when words like Virtualization, Desktop Cloud Computing, and Image Provisioning are standard on almost every IT to-do list. The problems arise as businesses try to tackle that list without causing these and many other headaches, all while watching the budget. Although many think it has to be difficult, with the right tools in place a few straightforward changes can mean the difference between total disaster and perfect harmony.

This four-part series will focus on utilizing today's technology to solve today's challenges:

1. The first problem was 80+ servers managed by only a couple of engineers.

2. The second was maintaining the desktop images, and the failback procedures if something broke an image.

3. The third was the sheer volume of application updates and maintenance that had to be completed every week.

4. The fourth was how to properly test application updates in a production-like environment before rolling them out to users.