Set Up Server 2012 R2 HyperV Cluster for VDI (GUI) / Prerequisites


As you guys may or may not know, I’ve been working on deploying a HyperV Cluster with some pretty nice video cards for VDI, and I’ll walk you through exactly how it’s set up. There are a few prerequisites, though, and failure to follow them will definitely give you a less than optimal experience. Overall it really hasn’t been bad, although there have been some hiccups due to some less than optimal configuration on our end. When it comes to servers and Linux VMs I am definitely a VMware fanboy, but for running Windows Server, and more importantly Virtual Desktops, HyperV is certainly formidable.

  1. A pair (or more) of Enterprise Grade servers (we are using a pair of HP ProLiant DL360 G7s) – probably a good idea to make sure they use the same CPUs with the same Stepping.
  2. At least 16GB of RAM apiece – HyperV’s “Dynamic Memory” really helps limit memory usage, but you still want enough to be able to run all of your VMs on one host during a failover, so keep that in mind.
  3. CPUs that support SLAT (Second Level Address Translation) – only needed if you want to take advantage of RemoteFX, which I would highly recommend if you’re doing VDI. Intel calls it EPT (Extended Page Tables); on AMD it’s RVI / Nested Page Tables, part of AMD-V. Most reasonably recent server processors have it (a quick check is shown after this list).
  4. Some Video Cards – you want an AMD FirePro V5800 or better, or Nvidia Quadros – which are expensive as crap. See the full list here. It’s said that any DirectX 11 GPU will work, so you could use desktop video cards as well, but make sure they are beefy enough – as in 1GB of VRAM or more, depending on how many Virtual Desktops you’re running. Also make note of the power requirements: some of these newer servers come with 400-watt or so Power Supplies, and you’ll want a bit more than that depending on the load. I may be wrong, but as a rough guess you can run about 10 RemoteFX sessions per 1GB of VRAM.
  5. 3 NICs at a minimum: 1x for Public, 1x for Cluster Communication and Live Migration, and 1x for VM traffic. If you are using LAN-based storage (iSCSI, SMB), you’ll want some NICs dedicated to that as well.
  6. Shared Storage – FC, FCoE, iSCSI, or even SAS will work if you have an enclosure and some SFF-8088 SAS cards lying around.
  7. Licenses for 3x Windows Server 2012 / 2012 R2 servers (you’ll also need to set up a Remote Desktop Session Host, Remote Desktop Licensing server, Remote Desktop Web Access (optional), and Remote Desktop Connection Broker – all of these roles can live on the same server, and it can be a VM; ours is hosted right on the HyperV Cluster). You’ll also need RDS CALs. If you have an MSDN account, there are some in there that you can use (provided your licensing allows it). I would personally suggest getting your hands on Server 2012 R2 because of the compressed live migration feature – it’s awesome, and migrations happen in literally a few seconds.
  8. Active Directory / Domain and Admin Rights on that domain.
  9. Windows 7 Enterprise, Windows 8 Enterprise, or Windows 8.1 Enterprise keys for your VDI Guests. RemoteFX sadly only works with Enterprise versions.
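If you want to double-check SLAT support before committing to RemoteFX, here is a quick sketch of how I’d verify it. Run this before the HyperV role is installed (afterwards systeminfo just reports that a hypervisor is present); Sysinternals Coreinfo (coreinfo.exe -v) will also show the EPT / NPT flags directly.

```powershell
# On a host without the HyperV role, systeminfo lists SLAT support
# under the "Hyper-V Requirements" section.
systeminfo | Select-String "Second Level Address Translation"
```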

There are some things to be sure of as well – if you are using iSCSI-based shared storage, it absolutely needs to support SCSI-3 Persistent Reservation. If this check fails during Cluster Validation, you’re going to have a lot of issues with your storage – things like dropped connections or random disk failovers that take forever and disrupt traffic to your VMs.
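Once the Failover Clustering feature is on both hosts (covered below) and they can see the shared disks, you can sanity-check persistent reservation support before actually building the cluster. A sketch, assuming hosts named HV01 and HV02 (placeholder names):

```powershell
# Run only the storage category of cluster validation, which includes the
# "Validate SCSI-3 Persistent Reservation" test, against both hosts.
Test-Cluster -Node HV01, HV02 -Include "Storage"
```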

So, to go off on a bit of a tangent, there are some decent iSCSI targets you can use – Windows Storage Server 2012 / 2012 R2 will work (or, for that matter, Windows Server 2012 / 2012 R2 Standard / Datacenter) if you use the built-in iSCSI Target, and Nexenta and StarWind iSCSI SAN work as well. Nimble Storage does too, though that is not what we use for this VDI deployment (different location, though I really wish I could use it). Other iSCSI SAN vendors may work the same way; you’ll have to do some research on your particular SAN. I know that several of the free ones do not work (FreeNAS, OpenFiler, and OpenIndiana / Solaris, for example).
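For what it’s worth, if you go the Windows Server route for the target, the target side is only a handful of steps. A rough sketch on 2012 R2, run on the storage box (the path, size, target name, and initiator IPs are placeholders for whatever your environment uses):

```powershell
# Install the iSCSI Target Server role service, carve out a virtual disk,
# and expose it to both HyperV hosts by their iSCSI NIC addresses.
Install-WindowsFeature FS-iSCSITarget-Server -IncludeManagementTools
New-IscsiVirtualDisk -Path "D:\iSCSIVirtualDisks\vdi-vhdx.vhdx" -SizeBytes 500GB
New-IscsiServerTarget -TargetName "hyperv-vdi" -InitiatorIds "IPAddress:192.168.10.11","IPAddress:192.168.10.12"
Add-IscsiVirtualDiskTargetMapping -TargetName "hyperv-vdi" -Path "D:\iSCSIVirtualDisks\vdi-vhdx.vhdx"
```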

So – let’s begin. The first thing you are going to want to do after you get your hardware installed (including your Video Cards) is to get your Operating System installed. I’m going to assume you know how to do this, either at a KVM or by using something like HP’s iLO. Honestly, if you don’t know how to install your Operating System, you really shouldn’t be doing this and should leave it to someone else. As I’ve said, you can use Server 2012 Standard, Server 2012 Datacenter, Server 2012 R2 Standard, or Server 2012 R2 Datacenter. There is also the free HyperV Server 2012 R2 edition, but it’s Core-only (no GUI), so it won’t suit this walkthrough. When you are installing, choose “Server with a GUI”. The GUI really doesn’t use many resources, and at this point configuration is a lot easier than doing it all through PowerShell – I’ll be showing you how to configure this through the GUI.

After you have your OS installed, you need to install a few roles and features. The roles are:

  1. HyperV
  2. Remote Desktop Services -> Remote Desktop Virtualization Host (you don’t have to do this now, but eventually you will have to do it)

The features are:

  1. .NET Framework 3.5 Features (optional I suppose, but I usually install this because some applications require it)
  2. Failover Clustering
  3. Multipath I/O (if you’re using SAS, iSCSI, or FC shared storage)

After those are installed, reboot your server.
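If you’d rather script that than click through the Add Roles and Features wizard, the rough PowerShell equivalent looks like this (feature names are what Get-WindowsFeature reports):

```powershell
# HyperV, RD Virtualization Host, Failover Clustering, MPIO, and .NET 3.5 in one shot.
# NET-Framework-Core may need -Source <install media>\sources\sxs on 2012 / 2012 R2.
Install-WindowsFeature Hyper-V, RDS-Virtualization, Failover-Clustering, Multipath-IO, NET-Framework-Core `
    -IncludeManagementTools -Restart
```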

Next, run Windows Update a few times until you are completely up to date. Update / install all of your drivers, including the newest stable revision of your video card drivers (either from amd.com or nvidia.com). This is EXTREMELY important – old or out-of-date drivers can hurt performance.
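With the HyperV role and current drivers in place, it’s worth confirming the card actually shows up as RemoteFX-capable. A quick check with the Hyper-V module (you can also enable or disable GPUs for RemoteFX under Hyper-V Settings, on the Physical GPUs page):

```powershell
# Lists each physical GPU the host sees, including whether it is
# compatible with RemoteFX and currently enabled for RemoteFX use.
Get-VMRemoteFXPhysicalVideoAdapter | Format-List
```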

Network

Next, let’s set up your network. On my particular host, I have 6 NICs. I dedicated 3x to iSCSI, 2x to HyperV, and 1x to LAN – and my LAN is my Cluster Network as well (since very little traffic actually hits the LAN). Depending on how often you’re going to be migrating VMs, you can do it this way, or you can have a dedicated Cluster Subnet / NIC.

  1. LAN – set a static IP on your local area network (for this example we will call it 192.168.1.12 for host 1 and 192.168.1.13 for host 2 – see the sketch after this list)
  2. Cluster – set this to a static IP of something other than your LAN subnet – it doesn’t need to be resolvable on your LAN at all – it can be something like 172.16.1.1 for host 1 and 172.16.1.2 for host 2
  3. HyperV – I set this to a static IP on our network, but once you bind it to a HyperV virtual switch (without sharing it with the management OS), the TCP/IP settings on the physical NIC no longer matter.
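Scripted, the first two look something like this on host 1 (the adapter names, gateway, and DNS server here are placeholders; adjust them to whatever Get-NetAdapter shows in your environment):

```powershell
# LAN NIC: routable address, gateway, and DNS.
New-NetIPAddress -InterfaceAlias "LAN" -IPAddress 192.168.1.12 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "LAN" -ServerAddresses 192.168.1.10
# Cluster NIC: non-routed subnet, no gateway or DNS needed.
New-NetIPAddress -InterfaceAlias "Cluster" -IPAddress 172.16.1.1 -PrefixLength 24
```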

If you have spare NICs, I would suggest teaming the HyperV interfaces just for high-availability purposes. If you’re using iSCSI, you’ll also want to dedicate some NICs to that for multipathing (do NOT use your iSCSI network for Live Migration or Cluster Communication – reserve it solely for iSCSI traffic).
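If you do team the HyperV-facing NICs, the team and the external switch can be built like this (NIC and switch names are made up; Dynamic load balancing is also an option on 2012 R2):

```powershell
# Team two spare ports, then hang the external virtual switch off the team.
# -AllowManagementOS $false keeps host traffic off the VM switch.
New-NetLbfoTeam -Name "HyperV-Team" -TeamMembers "HyperV1","HyperV2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort -Confirm:$false
New-VMSwitch -Name "VM Traffic" -NetAdapterName "HyperV-Team" -AllowManagementOS $false
```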

Some suggestions on networking:

  1. Use Jumbo Frames / Jumbo Packets if your NICs / switches / storage support them. This is configurable in the NIC properties – use it on your Cluster NIC and your storage NICs (see the sketch after this list)
  2. Don’t use Jumbo Frames on your HyperV NIC(s)
  3. Disable Virtual Machine Queuing
  4. Don’t cheap out on extra NICs – use quality, server-class NICs, not just some cheap Realtek 100 Mbps card.
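A sketch of the first and third items. The jumbo frame property name and value vary by driver (“Jumbo Packet”, “Jumbo Frame”, “9014 Bytes”, “9014”, etc.), so check what the NIC actually exposes first; the adapter names are placeholders:

```powershell
# See what the driver calls its jumbo frame setting, then set it on the Cluster / storage NICs.
Get-NetAdapterAdvancedProperty -Name "Cluster" -DisplayName "Jumbo*"
Set-NetAdapterAdvancedProperty -Name "Cluster" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"
# Turn off VMQ on the adapters behind the HyperV switch.
Disable-NetAdapterVmq -Name "HyperV1","HyperV2"
```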

Storage

Depending on your storage type, you’ll want to get this set up next. As you know, I am a bit partial to iSCSI. I do have some spare SAS enclosures that I wish I had dedicated to HyperV, but our iSCSI SAN works just fine. The software / hardware specifics will be up to you to figure out (an iSCSI initiator sketch follows the list below). You’ll want a couple of different volumes:

  1. A dedicated disk for the .vhdx files. This is similar to where VMware puts its .vmdk files. You’ll need to judge how big this needs to be.
  2. A dedicated disk for the configuration files. This includes snapshot data, smart paging / saved state files, and VM metadata.
  3. A Quorum disk. This is recommended by Microsoft. A witness actually matters most on a two-node cluster like this one (it’s what keeps the cluster running when a node drops), and Microsoft’s recommendation is to have one on every cluster anyway.
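For the iSCSI side of that, connecting each host to the SAN and letting MPIO claim the paths looks roughly like this (the portal address is a placeholder; run it on both hosts over the dedicated iSCSI NICs):

```powershell
# Start the iSCSI initiator service, point it at the SAN, connect persistently with
# multipath enabled, and let the Microsoft DSM claim iSCSI devices for MPIO.
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
New-IscsiTargetPortal -TargetPortalAddress 192.168.10.100
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
Enable-MSDSMAutomaticClaim -BusType iSCSI   # requires the Multipath I/O feature; may need a reboot
```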

The reason I separate the .vhdx and configuration files is mainly performance in our particular environment. The .vhdx volume gets hit a lot harder than the configuration volume, so I have different performance and compression policies on our SAN for each one. This is totally up to you – you can either stick it all on one volume or split them up, but I would recommend splitting them up for the reason I mentioned.
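Once those volumes end up as Cluster Shared Volumes (next post), you can point the HyperV defaults at them so new VMs land in the right place. The mount points below are just placeholders for however your CSVs get numbered:

```powershell
# Default locations for new virtual hard disks and for VM configuration / checkpoint data.
Set-VMHost -VirtualHardDiskPath "C:\ClusterStorage\Volume1" `
           -VirtualMachinePath "C:\ClusterStorage\Volume2"
```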

I’ll get to configuring the Cluster here in the next post.