Wednesday, July 30, 2008

Think you're ready for a Microsoft Cluster Assessment? - Part 1

If you've never had a Microsoft Cluster Assessment, I would encourage you to give it a try. They are standard practice in our organization, as in the past we have had more issues with clusters than with any other servers that we deploy. The reports that we receive at the end are great data for improving the next set of clusters that are deployed. While I'm not going to get into details about our particular assessment, I think that some of the things I learned are safe to share. This is not meant to be a comprehensive checklist - Microsoft has already done a great job of that. These are more the common things that sometimes get missed.

  1. Make use of the Microsoft Cluster Resource Center. There is a tremendous amount of well-organized material on that site that will help you avoid both operational and technical issues.
  2. The servers need to be identical prior to clustering. I'm a big fan of unattended build processes, but not every IT organization uses them. Keep in mind that the more manual interaction a build requires, the less likely the servers are to be identical. Servers shouldn't be like snowflakes.
  3. In addition to being current on all current MS hotfixes, apply any applicable updates from this knowledge base article. That article applies to SP2; fixes for SP1 and the base install are linked at the bottom of that document.
  4. When installing hotfixes, pay close attention to failures before continuing with hotfix deployment. If you apply a hotfix that supersedes another, you will be unable to go back and install the original, and the servers will appear to have inconsistent patch lists despite having the same end result.
  5. Check your network configuration.
  6. Make use of the ClusPrep tool. If you need to check an x64 server, you will need to host the file on a 32-bit server. My understanding is that it is backported from the 2008 ClusPrep tool, and therefore there will not be any further updates to it.
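On items 2 and 4 above, a quick way to spot patch drift between nodes is to diff each node's installed-hotfix list (exported however you like, e.g. from a QFE query on each server). A minimal sketch in Python, with hypothetical KB numbers for illustration:

```python
def hotfix_drift(node_a, node_b):
    """Return the hotfixes present on one node but not the other."""
    a, b = set(node_a), set(node_b)
    return {"only_a": sorted(a - b), "only_b": sorted(b - a)}

# Hypothetical hotfix lists exported from two cluster nodes
drift = hotfix_drift(["KB921883", "KB935839"], ["KB921883", "KB936021"])
print(drift)  # {'only_a': ['KB935839'], 'only_b': ['KB936021']}
```

Remember the supersedence caveat from item 4: two lists can differ legitimately when one node received a superseding fix, so treat any drift this turns up as a prompt to investigate, not proof of a problem.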

In part two, I'll discuss some of the common operational readiness steps that are crucial to successful clustering.

Tuesday, July 29, 2008

High Density Virtualization Environments and Windows 2008 Datacenter Edition

In November of 2007, Microsoft made an announcement I found truly shocking - customers purchasing Windows 2008 Datacenter Edition are entitled to "unlimited virtual instances per license".

Now that Hyper-V is reality, the consequences of this move are huge.

As an example - let's take a DL585 with 4 sockets and do the math on this using list price.

  • 4 processors of Datacenter Edition @ ~$3,000 per processor would be $12,000
  • 1 Windows 2003 Server Standard license costs $1,000.

The break-even point for Datacenter Edition is 12 servers (depending on the number of CALs that need to be purchased). Past that point, you effectively save money on every additional copy of Windows required.
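The break-even arithmetic above can be sketched as follows (the prices are the article's list-price examples, not current figures):

```python
import math

def break_even_guests(datacenter_per_socket, sockets, standard_price):
    """Number of Standard licenses at which one Datacenter purchase costs the same."""
    return math.ceil(datacenter_per_socket * sockets / standard_price)

# DL585 example: 4 sockets of Datacenter at ~$3,000 each vs. Standard at $1,000
print(break_even_guests(3000, 4, 1000))  # → 12
```

Past this count, each additional guest that would have needed its own Standard license is covered for free.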

So you may be thinking "What is considered a high density server environment?". Lacking an official ruling on the term, I would place it at any time you are averaging more than 3 virtual machines per processor (not core). For example, a fully populated four-socket DL585 would need to have more than 12 guests to enter this category.

Going with an example of 24 guests on a host covered by the Datacenter Edition license, and the pricing from the example above, you have now saved $12,000 in budget without having to make any difficult decisions. The savings are even larger if the guests would otherwise have run Windows 2003 Enterprise.
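The savings at a given guest count work out like this (again using the article's example prices; a real comparison would also account for CALs):

```python
def datacenter_savings(guests, standard_price=1000, datacenter_total=12000):
    """License cost avoided by covering all guests with one Datacenter purchase."""
    return guests * standard_price - datacenter_total

print(datacenter_savings(24))  # → 12000: 24 Standard licenses vs. one Datacenter host
```

A negative result means the host is below break-even and individual Standard licenses would have been cheaper.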

The best news is that this doesn't just apply to Hyper-V - it also applies to VMware, Citrix Xen, and presumably any other virtualization software as well.