VCAP-DCA Lab Setup – #2 – Host Configuration

One of the first things to do is to prepare your lab environment. This, at first, was pretty complicated for me (probably because I was overcomplicating it), but it really isn’t too hard.

There are a few things that you need to do, however, if you want performance and reliable networking from within your nested lab.

The first thing I’ll go over is the VMware Workstation configuration, and from there the next post will cover getting your lab domain (Active Directory) set up.

I’m going to assume that you have all the requirements I listed in the first post (a quad core processor or better, preferably an i3, i5, or i7, 32GB of RAM, an SSD / RAID / HDD of about 200 – 400GB, and VMware Workstation installed). The screenshots are from my little lab setup and are, as I said, taken on a DL360 G5 running Windows Server 2012.

Incidentally, you don’t need much on your lab host. I installed VMware Workstation, the .NET Framework 2.0 feature (not really sure if this is a requirement – I did this more out of habit), and a web browser (preferably Chrome, but Firefox will work too; the only reason I say Chrome is preferable is that it has embedded Flash support, which is especially important if your host is going to be your Control Station).

So, on to Workstation. First, you’re going to want to change where your VMs are stored so they land on your faster / separate storage. If you are lucky enough to have your OS installed on a 1TB SSD, you can ignore this part, but for those of us who want our VMs on separate storage, this is the place to do it.

  1. Open VMware Workstation
  2. On the top menu, click “Edit” and then “Preferences”
  3. Under “Workspace”, choose a folder on the drive you want your VMs stored. As you can see in the screenshot, mine are going to be stored on the V:\ drive in a folder called VMs. Creative, I know.
  4. Click “Ok”. That part is done.

Next is the networking – I had some trouble with this at first because I was determined to make it harder than it really was. For the most part, the NAT adapter is all we will need (we will NOT be doing physical VLANs or anything like that in this particular lab, since this is basically a “lab in a box” and I am using a dumb physical switch).

Here is how I did it (and it has been working quite well for me):

  1. Open VMware Workstation
  2. Click “Edit” and then “Virtual Network Editor”
  3. I deleted all the VMNET adapters just so that I had a clean slate
  4. Add a new network by clicking “Add Network” and choose “VMnet0”. It doesn’t have to be VMnet0 – it can be any number you like – but for the purposes of this series, I’m going to assume you used VMnet0.
  5. Once it is created, choose NAT as the network type.
  6. Next, make sure that “Connect a host virtual adapter for this network” is checked (there’s a quick way to verify the host adapter exists after this list).
  7. DHCP is totally optional. I left it enabled just so that when a VM first powers on, it has an IP. I’ve read some people say to turn it off, but I don’t really care that much if it’s on – as long as you know that you’re going to end up using static IPs for your VMs anyway.
  8. Next, choose your subnet. For this particular setup, I’m going to be using one subnet for LAN and another for iSCSI / Storage. You can only have 1x NAT VMnet per host, so you really only need one.
  9. You’ll see that I also have a bridge adapter as VMnet5 – this is for my Control Station – I wanted that on dual subnets so that I could RDP from my regular network from a different computer. If you plan on doing it that way, just bridge one of your adapters in your computer / server that has physical access to the network.
  10. Click ‘OK’. That will about do it for now.
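As a quick sanity check (mentioned in step 6), you can confirm from the host side that Workstation actually created the host virtual adapter. This is just a hedged sketch assuming a Windows host like my Server 2012 box; the exact adapter description text can vary by Workstation version.

```powershell
# Quick check on the Workstation host: the "host virtual adapter" option should
# surface a VMware virtual NIC for the VMnet you added. Description text may vary.
Get-NetAdapter |
    Where-Object { $_.InterfaceDescription -like "VMware Virtual Ethernet Adapter*" } |
    Select-Object Name, InterfaceDescription, Status
```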


This is just about all it took for me to get my configuration rolling. I’ll get to getting some VMs up and running next – the next thing you’ll want on your lab network is a domain controller. You’ll hear me refer to it a lot as a DC, so if you see me mentioning something about a DC, just know that I’m talking about a domain controller.

VCAP-DCA Lab Setup – #1 – Prerequisites

I’ve been looking around for a good lab setup using either ESXi or Workstation, and despite the fact that a lot of people tell you “how to do it”, there really isn’t a solid how-to on setting up a lab for getting hands-on experience with the things you’ll need to know for the VCAP-DCA. So, that being said, I’m going to start a “This is how I did my lab” post series and walk you through exactly how I set up my lab and whether or not it worked.

So for starters, there are quite a few prerequisites, and here they are:


  1. A CPU that has some horsepower – I would say that for this setup you want at least a quad core, and the newer the better (the i3, i5, and i7 series have Intel VT). If you have an older processor, that’s fine too; it just means that you won’t be able to run 64-bit VMs on nested ESXi (which is what we are going to use – nested ESXi). This isn’t a huge deal though, because running VMs is the easy part of this – not to mention that 32-bit VMs will be fine for the lab.
  2. RAM – You’ll want at least 16 gigs; 32 gigs is preferable, and anything above that is even better.
  3. Storage – You’ll want at least 200GB of pretty fast storage. You could probably get by with a single 7200 RPM disk, but I promise you that it will not run very well, and that your labs will remind you of the labs at VMworld. You’ll probably want an SSD or at least some type of RAID array for your lab. The faster the disks, the better as well (10k, 15k). I think 200GB is the minimum; you’ll probably want closer to 400GB.

You’ll need some software, and we can probably get away with “trial” versions of everything, provided you work at it every day (since most trials, especially VMware, will only last 60 days), but you’ll still need some ISOs to do all this with, and you’ll want those before you start the lab. I’ll lay those out really quick and why you need them.


  1. Windows Server (at least 2008 R2 x86 or x64). This will be used as a domain controller. You’ll want Active Directory integration when we start working with things later. It only takes a few minutes to set up Active Directory, so you are best off just doing it.
  2. Windows Server (at least 2008 R2 x64). This will be for vCenter Server. We will need vCenter Server.
  3. Windows 7 or 8 (x86 or x64) as your “Control Center”. This will basically be your management box. You can use the physical host if you want, though I am going to whip up a VM for it.


  1. vCenter Server (Standard). We’ll need this. Obviously.
  2. vMA (vSphere Management Assistant). This is an OVA you can download and deploy.
  3. VMware Workstation – I’ll be doing this all on Workstation 10, but if you want to run a nested lab on ESXi, you absolutely can; it may just be set up slightly differently (especially the networking).
  4. VMware vSphere PowerCLI – this gets installed on your Control Center box (there’s a quick connection sketch after these software lists).
  5. VMware ESXi 5.5.0 – latest revision. We’ll use update manager to get it all up to date, so the latest one you can download from VMware will work.


  1. Your choice of OpenFiler or FreeNAS.
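Just to show where PowerCLI fits in once the lab is up: the snippet below is a minimal, hedged sketch. The vCenter name is made up for illustration, and obviously nothing here works until vCenter and a host or two actually exist.

```powershell
# Minimal PowerCLI sketch from the Control Center box. "vcenter.lab.local" is a
# hypothetical name for this lab - substitute your own vCenter and credentials.
Connect-VIServer -Server vcenter.lab.local -Credential (Get-Credential)

# List the nested ESXi hosts and their basic state to confirm everything is talking.
Get-VMHost | Select-Object Name, ConnectionState, Version, MemoryUsageGB

Disconnect-VIServer -Confirm:$false
```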

For this particular setup, I’m going to use an older HP DL360 G5 (that I just happen to have lying around) with 2x Intel Xeon 5430s (quad core, so 8 cores total), 48 GB of RAM, 2x 10k 146GB disks in RAID 1 for the OS, and 4x 146GB disks in RAID 5. Yes, RAID 5 is not the best choice for writes, but I am using a little program called PrimoCache as a RAM write cache. If you have spare memory and want a RAM write cache, I would highly suggest getting that program – it’s optional, so I’m not going to call it a requirement, but it definitely helps.

For software, I’ll be using VMware Workstation 10, OpenFiler, and Server 2012 for all Windows Server based stuff. I’ll get to configuring these in the next post, but this is what you will need at this point to get started.

I’ll get to the configuration in the next post.



GhostBSD and BSD for the Future

I’ve started helping out with a BSD distribution, contributing some code and site administration – the distribution is called GhostBSD. I can’t say I have ever offered my help to a distribution and its developers like this before, but quite frankly, I believe in this project, and I believe in it for a few reasons:

1. It’s a really forward-thinking, progressive system. Normally when people think about BSD and Unix, they think of a somewhat unstable and unpopular distribution with very little support and no GUI – GhostBSD installs MATE right out of the box and makes it very user friendly. It’s distributed as a “desktop” system, but the resource usage is so low that I really think that, even with a GUI, it would make an extremely viable server distribution. I plan on migrating some of our Apache servers over to GhostBSD once 4.0 stable is released.

2. The support is growing, big time.

3. The new package manager (pkg / pkgng) is awesome in FreeBSD 10. In fact, it’s so good that I think it’s up there with the ease of Ubuntu’s apt-get.

4. It’s really minimal – it’s not a bloated distro with a ton of stuff bogging down the system. It’s really lightweight and user friendly.

5. It’s extremely easy to use – once again, people tend to think of BSD and Unix as hard to use and not supported – the installer is a genius GUI called GBI (Graphical BSD Installer) that helps you with partitioning, installation, the works.

I would really encourage people to give it a shot – I’ve found that VMware Tools are a little finicky on it (in fact, the official VMware Tools are not supported on FreeBSD 10 yet, and you have to use open-vm-tools).

SQL Server Reporting Services

We have had an SSRS server kicking around for several months and have not done much with it, and being a bit of a noob with SQL Server (I’m really not too much of a noob – we had a definitive use case for SSRS, we just hadn’t gotten around to setting it up), I didn’t quite know what SSRS was / is capable of. I have to say this – the tool itself is awesome.

I’ll get in to some new how-tos and cool things you can do with it here in the next couple of weeks.

Heartbleed – Ruh Roh

Well, this is a huge ruh roh… for just about anyone on the internet (either server side or client side). There is a huge “implementation bug” in a bunch of currently deployed OpenSSL packages. There are numerous operating systems affected – CentOS, Red Hat, Oracle, Debian, Ubuntu, SuSE / OpenSuSE… and quite honestly, the more I read about this thing, the uglier it gets. They are calling it “Heartbleed” – a play on the TLS Heartbeat extension, which is where the exploit originates.

So, first off, you can’t just update OpenSSL and call it good. That’s not how this thing works. Secondly, just about anyone using a modern operating system as their web server is probably affected.

Here is where it gets icky… it’s not just you. In order to completely solve this and make sure people are not eavesdropping on your encrypted sessions, you’re probably going to need to revoke and rekey your SSL certs. As I said, this affects everyone who uses OpenSSL, and that includes the CAs (GoDaddy, Comodo, etc…).

According to the bug report, it’s considered an “implementation bug”, or basically a programming mistake.

Anyway, it’s a huge issue, so you should probably figure out if you’re affected and start working on patching.

More info here:

Deduplication vs Compression

When we were in the process of looking at storage devices (SANs), I had a question that I had to logically figure out – and that was the idea of having data deduplicated or compressed. This is something that is gaining popularity both on storage devices and on operating systems.

For anybody who is unfamiliar with compression who might be reading this, compression is basically crunching data down – in a nutshell, it means encoding data in such a way that it uses fewer bits than the original file. There is more to say about compression (lossy versus lossless, for example), but I would suggest looking that up, as other people have covered it far better than I can here. Compression can also be done in two ways – there is compressing a file after it has been written, and there is in-line (realtime) compression (which is what most storage boxes would do).

Deduplication basically eliminates / removes duplicate copies of repeating data. This usually happens at the block level. There is also inline deduplication and “post processing” deduplication. Inline deduplication happens as the data is being written – and with most storage appliances, I have found that this requires quite a bit of memory. Some claim this is 1GB of RAM per 1TB of data; I think that is grossly underestimated, though it really depends on your appliance. If you think about how inline deduplication works, it basically checks an index of block fingerprints (hashes) for a match before it even writes the block, and if the block is a duplicate, it only writes a reference to the existing block that contains the data. This means that if you have lots of similar data, you could potentially save a LOT of space.
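To make the block-fingerprint idea concrete, here’s a purely illustrative PowerShell sketch – this is not how any particular appliance implements dedup (real systems use persistent fingerprint indexes, variable block sizes, and so on), and the file path is made up.

```powershell
# Illustrative only: hash fixed-size 4KB blocks of a file and count how many are
# unique vs. duplicates that could be stored as references to an existing block.
$blockSize = 4KB
$seen      = @{}      # fingerprint -> already seen
$unique    = 0
$dupes     = 0
$sha       = [System.Security.Cryptography.SHA256]::Create()

$stream = [System.IO.File]::OpenRead("C:\Lab\sample.vhdx")   # hypothetical file
$buffer = New-Object byte[] $blockSize
while (($read = $stream.Read($buffer, 0, $blockSize)) -gt 0) {
    $hash = [BitConverter]::ToString($sha.ComputeHash($buffer, 0, $read))
    if ($seen.ContainsKey($hash)) { $dupes++ } else { $seen[$hash] = $true; $unique++ }
}
$stream.Close()

"Unique blocks: $unique   Duplicate blocks (would be stored as references): $dupes"
```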

So on to the nitty gritty. I’ve found that deduplication is excellent for data that just sits there – in other words, it’s not being accessed with immediate frequency. Good candidates that I’ve used deduplication on are backup volumes, VDI Deployments (which I’ll get to in a moment), and other storage where you have lots of potential duplicate blocks (I suppose you could say duplicate items here too, but most dedupe is block level, meaning it dedupes duplicate blocks).

Most deduplication appliances provide inline deduplication. There are some that are post-process (such as Windows Server’s Deduplication feature, which actually works pretty well), but it really depends on your usage. Post-process requires more space, because the data basically sits there until either a schedule kicks off the dedupe job or the system calms down enough to run it in the background (background dedupe). Generally, background or scheduled dedupe for post-process also creates a lot of disk activity (thrashing) – see the image below for the overnight dedupe processing:


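For what it’s worth, the Windows Server post-process flavor mentioned above is driven by a handful of cmdlets. This is a rough sketch, assuming the Data Deduplication feature is available and that D: is the kind of cold backup / VDI-image volume that dedupes well (the volume letter and settings are just examples).

```powershell
# Rough sketch of Windows Server post-process dedup on a volume D: (adjust to taste).
Install-WindowsFeature FS-Data-Deduplication

Enable-DedupVolume -Volume "D:"
Set-DedupVolume    -Volume "D:" -MinimumFileAgeDays 3    # only touch files that have sat for a few days

# Normally the optimization job runs on a schedule (that's the overnight thrashing),
# but you can kick one off by hand and then check the savings.
Start-DedupJob   -Volume "D:" -Type Optimization
Get-DedupStatus  -Volume "D:" | Select-Object Volume, SavedSpace, OptimizedFilesCount
```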
Compression puts a bit more stress on the processor and perhaps memory as well since the processor is what is re-encoding the data.

So… on to VDI and deduplication – at face value it seems like a great idea. I have done it before thinking it would be a great idea, but for some reason, both on Nexenta and on Windows Server (Windows iSCSI Target with Deduplication enabled), it absolutely brings the volumes to a crawl. At the time, I didn’t have time to do much research on it, because the complaints were coming in and I was just being quick about getting the VMs on to a volume that was not deduped, so perhaps this needs a bit more research.

All in all, I’m a bit of a bigger fan of compression because I’d rather the disks not be hit so hard. Today’s processors can handle most types of compression without much problem (unless you’re using Gzip-8 or 9) and decompression of well written compression algorithms can happen at RAM speed on most multi-core systems (which can be pretty fast).
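If you want to put a very rough number on that claim for your own hardware, here’s an unscientific sketch – it times gzip (via .NET’s GZipStream) over 100 MB of in-memory data. Random bytes are close to a worst case for a compressor, and LZ4 is far faster than gzip, so treat the result as a floor rather than a benchmark.

```powershell
# Unscientific compression throughput test: gzip 100 MB of random data in memory.
$data = New-Object byte[] (100MB)
(New-Object System.Random).NextBytes($data)       # random data barely compresses (worst case)

$elapsed = Measure-Command {
    $out = New-Object System.IO.MemoryStream
    $gz  = New-Object System.IO.Compression.GZipStream($out, [System.IO.Compression.CompressionMode]::Compress)
    $gz.Write($data, 0, $data.Length)
    $gz.Dispose()
    $out.Dispose()
}

"{0:N0} MB/s on this box" -f (100 / $elapsed.TotalSeconds)
```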

This is an image of Nexenta using LZ4 compression – it’s not being hit too hard (this particular storage box has about 15 VMs on it that are at least fairly active).


Set Up Server 2012 R2 HyperV Cluster for VDI (GUI) / Prerequisites


As you guys may or may not know, I’ve been working on deploying a HyperV cluster with some pretty nice video cards for VDI. I’ll walk you through exactly how this is set up. There are a couple of prerequisites, though, and failure to follow these will definitely give you a less than optimal experience. It really hasn’t been bad, though there have been some hiccups due to some less than optimal configuration. When it comes to servers and Linux VMs, I am definitely a VMware fanboy, but for running Windows Server, and more importantly Virtual Desktops, HyperV is certainly formidable.

  1. A pair (or more) of Enterprise Grade servers (we are using a pair of HP ProLiant DL360 G7s) – probably a good idea to make sure they use the same CPUs with the same Stepping.
  2. I would think at least 16GB of RAM apiece – HyperV’s “Dynamic Memory” really helps limit memory usage, but you still want enough to be able to run all your VMs on one host, so keep that in mind.
  3. CPUs that support SLAT (Second Level Address Translation) – only if you want to take advantage of RemoteFX, which I would highly recommend if you’re doing VDI. It’s called EPT (Extended Page Tables) on Intel processors and RVI / NPT on AMD processors. Most newer processors have this technology.
  4. Some video cards – you want AMD FirePro V5800 or better, or Nvidia Quadros – which are expensive as crap. See the full list here. It’s said that any DirectX 11 GPU will work, so you could use desktop video cards as well, but make sure they are beefy enough – as in 1GB of VRAM or more (depending on how many Virtual Desktops). Also make note of the power requirements. Some of these new servers come with 400 or so watt power supplies, and you’ll want a bit more than that depending on the load. I think that you could run about 10 or so RemoteFX sessions per 1GB of VRAM. I may be wrong there, but that is just a rough guess.
  5. 3 NICs at a minimum. 1x for Public, 1x for Cluster Communication and Live Migration, and 1x for VM traffic. If you are using LAN based storage (iSCSI, SMB) you’ll want some NICs dedicated to that
  6. Shared Storage – Either FC, FCoE, iSCSI, SAS will even work if you have an enclosure and some SFF 8088 SAS cards laying around.
  7. Licenses for 3x Windows Server 2012 / 2012  R2 Servers (you’ll also need to set up a Remote Desktop Session Host, Remote Desktop Licensing Server, Remote Desktop Web Access (optional), and Remote Desktop Connection Broker. All of these roles can be on the same server, and it can be a VM – ours is hosted right on the HyperV Cluster). You’ll also need a license for some CALs. If you have an MSDN Account – there are some in there that you can use (provided your licensing allows it). I would personally suggest getting your hands on Server 2012 R2 because of the whole “compressed migration” feature. That is awesome – migrations happen in literally a few seconds.
  8. Active Directory / Domain and Admin Rights on that domain.
  9. Windows 7 Enterprise, Windows 8 Enterprise, or Windows 8.1 Enterprise keys for your VDI Guests. RemoteFX sadly only works with Enterprise versions.

There are some things to be sure of as well – if you are using iSCSI based shared storage, it absolutely needs to support SCSI-3 Persistent Reservation. If this check fails during Cluster Validation, you’re going to have a lot of issues with your storage – things like dropped connections or random disk failovers that take forever and disrupt traffic to your VMs.
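Once the Failover Clustering feature is installed (that’s coming up below), you can run that validation from PowerShell before you ever build the cluster. A quick sketch – the host names HV01 and HV02 are made up – and the storage tests are exactly where SCSI-3 Persistent Reservation problems show up.

```powershell
# Run cluster validation from either host (Failover Clustering tools installed).
# The "Storage" tests include the SCSI-3 Persistent Reservation checks.
Test-Cluster -Node HV01, HV02 -Include "Storage", "Inventory", "Network", "System Configuration"
```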

So, to go off on a bit of a tangent, there are some decent iSCSI targets you can use – Windows Storage Server 2012 / 2012 R2 will work (or, for that matter, Windows Server Standard / Datacenter 2012 / 2012 R2 with the iSCSI Target role), Nexenta works, and so does StarWind iSCSI SAN. Nimble Storage does as well, though it’s not what we use for our VDI deployment (different locations, though I really wish I could use it). Other iSCSI SAN vendors may work the same; you’ll have to do some research on your particular SAN. I know that several of the free ones do not work (FreeNAS, OpenFiler, OpenIndiana, or Solaris, for example).

So – let’s begin. The first thing you are going to want to do after you get your hardware installed (including your video cards) is to get your Operating System installed. I’m going to assume you know how to do this, either by using a KVM or by using something like iLO from HP. I think that if you don’t know how to install your Operating System, you really shouldn’t be doing this and you should be leaving it to someone else. As I’ve said, you can use Server 2012 Standard, Server 2012 Datacenter, Server 2012 R2 Standard, or Server 2012 R2 Datacenter. There is also a free HyperV Server 2012 R2 edition, but since it has no GUI, it doesn’t fit this particular walkthrough. When you are installing, choose the “Server with a GUI” install option. The GUI really doesn’t use that many resources, and configuration is a lot easier than doing it all through PowerShell at this point – I’ll be showing you how to configure this through the GUI.

After you have your OS installed, you need to install a few roles and features (there’s a PowerShell equivalent after the lists below). The roles are:

  1. HyperV
  2. Remote Desktop Services -> Remote Desktop Virtualization Host (you don’t have to do this now, but eventually you will have to do it)

The features are:

  1. .NET Framework 3.5 Features (optional I suppose, but I usually install this because some applications require it)
  2. Failover Clustering
  3. Multipath I/O (if you’re using SAS, iSCSI, or FC shared storage)

After those are installed, reboot your server.
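If you’d rather script the roles and features than click through Server Manager, this is roughly the PowerShell equivalent. Run it on each host; the feature names assume Server 2012 / 2012 R2, and .NET 3.5 may ask for the install media as its source.

```powershell
# Install the roles and features from the lists above, then reboot.
Install-WindowsFeature -Name Hyper-V, RDS-Virtualization, NET-Framework-Core, Failover-Clustering, Multipath-IO `
    -IncludeManagementTools -Restart
```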

Next, run Windows Update a few times until you are all up to date. Update / install all your drivers, including the newest stable revision of your video card drivers (straight from AMD or NVIDIA). This is EXTREMELY important. If you’re using old or out of date drivers, it could affect performance.


Next, let’s set up your network. On my particular host, I have 6 NICs. I dedicated 3x to iSCSI, 2x to HyperV, and 1x to LAN – and my LAN is my Cluster Network as well (since very little traffic actually hits the LAN). Depending on how often you’re going to be migrating VMs, you can do it this way, or you can have a dedicated Cluster Subnet / NIC.

  1. LAN – set this to a static IP on your local area network, one address per host (there’s a quick PowerShell sketch after this list)
  2. Cluster – set this to a static IP on a subnet other than your LAN – it doesn’t need to be resolvable on your LAN at all; just give each host its own address on that private subnet
  3. HyperV – I set this to a static IP on our network, but I believe when you add it as a HyperV switch, these settings no longer matter.
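Here’s the sketch mentioned in item 1, for host 1. The adapter names and addresses are entirely made up – I’m assuming you’ve renamed the NICs to “LAN” and “Cluster” and that 192.168.1.0/24 and 10.10.10.0/24 are your LAN and cluster subnets.

```powershell
# LAN NIC: routable address, gateway, and DNS (example addresses only).
New-NetIPAddress -InterfaceAlias "LAN" -IPAddress 192.168.1.21 -PrefixLength 24 -DefaultGateway 192.168.1.1
Set-DnsClientServerAddress -InterfaceAlias "LAN" -ServerAddresses 192.168.1.10

# Cluster NIC: host-to-host only, so no gateway and no DNS.
New-NetIPAddress -InterfaceAlias "Cluster" -IPAddress 10.10.10.1 -PrefixLength 24
```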

If you have spare NICs, I would suggest teaming the HyperV interfaces just for high availability purposes. If you’re using iSCSI, you’ll also want to dedicate some NICs to that as well for multipathing (do NOT use your iSCSI network for Live Migration or Cluster Communication – reserve your iSCSI network for solely iSCSI traffic).

Some suggestions on networking:

  1. Use Jumbo Frames / Jumbo Packets if your NIC / Switch / Storage supports them. This is configurable in the NIC Properties – use this for your Cluster NIC and for your storage NIC
  2. Don’t use Jumbo Frames on your HyperV NIC(s)
  3. Disable Virtual Machine Queuing
  4. Don’t cheap out on extra NICs – use quality, server class NICs, not just some cheap Realtek 100Mbps NIC.
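Suggestions 1 and 3 above are easy to script as well. Another hedged sketch – the NIC names are made up, and the exact “Jumbo Packet” property name and its values vary by driver, so check what your adapter actually exposes first.

```powershell
# See what the driver calls the jumbo frame property and what values it accepts.
Get-NetAdapterAdvancedProperty -Name "iSCSI1" -DisplayName "Jumbo Packet"

# Enable jumbo frames on the storage and cluster NICs (value text is driver-specific).
Set-NetAdapterAdvancedProperty -Name "iSCSI1", "iSCSI2", "Cluster" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"

# Disable Virtual Machine Queues on the NICs backing the HyperV switch.
Disable-NetAdapterVmq -Name "HyperV1", "HyperV2"
```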


Depending on your storage type, you’ll want to get this set up next. As you know, I am a bit partial to iSCSI. I do have some spare SAS Enclosures that I wish I would have dedicated to HyperV, but our iSCSI SAN works just fine. Depending on your software / hardware configuration, this will be up to you to figure out. You’ll want a couple of different volumes:

  1. A dedicated disk for the .vhdx files. This would be similar to where VMware puts its .vmdk files. You need to judge how big this is going to be.
  2. A Dedicated Disk for the Configuration Files. This includes snapshot data, virtual swap, and VM Metadata.
  3. A quorum / witness disk. This is recommended by Microsoft. It matters most on a cluster with an even number of nodes (like this 2-node one), but Microsoft’s recommendation is to have it on every cluster.

The reason I separate the .vhdx and configuration files is mainly for performance reasons in our particular environment. The .vhdx volume seems to get hit a lot harder than the configuration volume, so I have different performance and compression policies on our SAN for each one. This is totally up to you – you can either stick it all on one volume or split them up. I would recommend splitting them up, solely for the reason I mentioned.
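Since I mentioned being partial to iSCSI: connecting each host to the SAN is only a few commands once MPIO is installed. Another hedged sketch – the discovery address 10.20.20.50 is made up, and your SAN may want vendor-specific MPIO settings on top of this.

```powershell
# Make sure the iSCSI initiator service is running, and let MPIO claim iSCSI disks.
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Point at the SAN's discovery portal and connect to the targets with multipathing.
New-IscsiTargetPortal -TargetPortalAddress 10.20.20.50
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
```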

I’ll get to configuring the Cluster here in the next post.

Nimble Storage – An In Depth Review – Conclusions

All in all, about 7 months after our cutover to our SAN, we have absolutely zero regrets. The thing is still as fast as it was on day one, despite us pushing quite a few IOPS and quite a lot of data through it. We are not the busiest shop in the world by far, but we do use our storage quite a bit for both reads and writes.

Just a little bit of info about our particular environment – we currently have 27 volumes and about 60 VMs on our Nimble CS240. Several virtual and physical instances of SQL Server have their databases, logs, backups, etc… on this SAN and are being hit constantly (production databases with certain Real Estate data), we have about 14 million photos on a volume, and our cache hit ratio sits right around 85%. We probably could use an upgrade to a CS240 x2 (and probably will – doubling the cache from 640GB to 1.2TB) sometime in the near future.

Our server infrastructure is HP Blades in a C3000 Chassis with HP / Cisco Branded Gbe2c Blade Switches (all four interconnects) with the uplinks in LACP configuration to a pair of stacked Dell PowerConnect switches. We followed Nimble’s Network Configuration Guidelines almost to the letter. We use Multipathing in ESXi and Windows, and it works just as expected.


These are the uplinks from the 4x Interconnect Switches on the Blade Enclosure to the Dell Switches.

Everything is using Jumbo Packets, and iSCSI is on its own separate subnet. The GUI is amazingly simple – I would think that even an inexperienced person could set it up and create and map volumes.


Overall, we are extremely happy with it. It certainly was not the cheapest option out there, but it has been well, well worth it in my opinion. We replaced 18U worth of SAN equipment (not to mention the Fiber switches and Fiber infrastructure we removed) – I’ve heard of other companies replacing half racks or more of old equipment. Our power usage is down to roughly a twelfth of what our previous controllers / enclosures / switches drew.


Redgate SQL Monitor

It’s really been a while since I’ve written anything about SQL Server, and I suppose it’s about time for one of those tools that really makes the life of a database administrator a lot easier – and that is Redgate SQL Monitor. Red Gate recently released SQL Monitor 4.0, a bit of an upgrade from the old 3.x versions, which now includes advanced query tracing.

In a nutshell, what SQL Monitor does is connect to your SQL Server (it needs its own database for stats and such, as well as a sysadmin account for reading database stats), and it reports all sorts of statistics on your SQL Server – disk and processor queues, RAM usage, database hits, etc… All of this is divided into two main areas – Alerts and Analysis. It’s an application that provides a web server (runs on Windows) and has a nice little web GUI that is somewhat configurable.

Alerts provide all kinds of tunable alerts – ranging from job failures to deadlocks to long running queries to even things like table blocking. You can set up email alerting as well and get emailed these alerts as they happen. At first, though, I would suggest not enabling the email alerts, especially if you have never monitored your server before, because this thing will send you all kinds of emails about things it finds (log backups overdue, index fragmentation, etc…). As I mentioned, these alerts are tunable, which is really handy. We have one job that runs in a loop for 8 hours at a time – when we first started monitoring, SQL Monitor would constantly send us emails about a long running query, since, let’s face it, 8 hours is a long running query. You can tune that, though, and basically tell the alert “Hey, it’s okay if it runs for 8 hours, but if it deviates from that by more than say… 5%, then send us an alert”. These alerts look annoying at first, but once you sift through them, there is a lot of very, very useful data.

An example of that is that we had been having about 2 or 3 deadlocks per day. For the longest time, we weren’t sure what was causing them – we could view the first part of the query, but it was so generic that it didn’t provide enough insight into what was causing the deadlock to give us anything meaningful. SQL Monitor was able to give us the whole query and the job / stored proc that was causing the deadlock, and that, in turn, let us figure out how to fix it (a few indexes here and there fixed that issue).

Analysis is also absolutely invaluable – I spend about 75% of my time doing systems and storage administration, and a big part of that is performance tuning / getting the best possible performance out of our equipment and servers. This includes things like query and SQL job tuning. We use tools like LogicMonitor, which provides WMI monitoring of the physical hardware, and Nagios / Opsview, which provides us with alerts on hardware usage, but nothing that really gives us specifics (and by that I mean deep specifics such as stats over time, cache hits, database size, log size, full scans, etc…) about SQL Server. SQL Monitor fills that gap really nicely. It provides in-depth analysis on a daily, weekly, monthly, and even yearly basis (the SQL Monitor database gets quite big if you keep data for that long, however; we keep SQL Monitor data for 2 months and the database sits around 30 gigs).


I suppose one of the other awesome things I should mention is that it is totally cluster aware. Our previous installation was a 3-node cluster; our current one is a 2-node cluster. It monitors both nodes, but of course only monitors the SQL Server instance on the node that owns the SQL Server resources (disks, etc…). They have a free trial available (14 days), so you can give it a shot.

The price tag is somewhat hefty, but if you are running a production database that either your internal users rely on or that serves a lot of external users, the data and stats it provides are invaluable compared to other tools.