Nimble Storage – An In-Depth Review – Post 1


Recently I have had the pleasure of working with a Nimble CS240 array. Needless to say, I am quite pleased with it, and I’m going to do a pretty thorough write-up here for those on the fence looking for a new SAN or looking to do a “rip and replace”. I’m going to go in depth on everything we went through to make this decision, why we made it, what we did, and most importantly, how it is working for us.

1. History

It’s a bit of a long story, but it was coming time to upgrade our SQL Server from 2008 R2 to SQL Server 2012. We have (had) in our production environment a Compellent SC30. The Compellent is a tiered storage device that relies on sheer spindle count and drive speed to push out its performance. There can be several tiers that are utilized for different scenarios – for example, Tier 1 is generally 10k or 15k SAS drives in RAID 10, while Tier 3 would be something more along the lines of 7200 RPM SATA drives in RAID 5. The bread and butter is that writes go to Tier 1 and reads are served from Tier 3, and then at night (preferably), data progression moves the newly written data down to Tier 3 for reading. Generally speaking this seems like a pretty good system except for one major flaw: disk thrashing.

You see, about 8 months ago we suddenly had several (and by several I mean about 5 of our 36) drives fail. Tier 1, Tier 3 – both were failing in very short succession, almost to the point of sweating bullets. Thankfully, Dell (who owns Compellent now) has very good support for these arrays, and we had our replacement drives the next day.
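To make the mechanics concrete, here’s a rough sketch of how I understand data progression conceptually – this is my own illustrative Python model, not Compellent’s actual code: new writes land on the fast tier, reads come from whichever tier holds the data, and a nightly job demotes blocks that haven’t been written recently down to the slow tier.

```python
# Hypothetical model of tiered storage with nightly data progression.
# Illustrative only -- not how Compellent actually implements it.
import time

DEMOTE_AFTER_SECS = 24 * 3600  # demote blocks not written in the last day

tier1 = {}  # block_id -> (data, last_write_time); fast RAID 10 SAS
tier3 = {}  # block_id -> data; slower RAID 5 SATA

def write_block(block_id, data):
    """All new writes go to the fast tier."""
    tier3.pop(block_id, None)           # invalidate any stale Tier 3 copy
    tier1[block_id] = (data, time.time())

def read_block(block_id):
    """Reads are served from whichever tier holds the block."""
    if block_id in tier1:
        return tier1[block_id][0]
    return tier3[block_id]

def nightly_data_progression():
    """Move cold blocks down to Tier 3 to free up Tier 1 capacity."""
    now = time.time()
    for block_id, (data, written) in list(tier1.items()):
        if now - written > DEMOTE_AFTER_SECS:
            tier3[block_id] = data
            del tier1[block_id]
```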

[Image: hard disk platter illustrating fragmented data]

In any event, this SAN used all of these drives for random reads and random writes, which means that when data is written to the drives, it goes wherever the controller can find some free space to put that sector. This SAN was extremely multipurpose: it housed SQL Server databases, VMware virtual machines, physical servers connected to LUNs, and so on. All of these servers were performing reads and writes against the controller, and since the data is randomly written, it’s randomly read as well.

The image above shows data “fragmentation”, or rather, what fragmentation would look like on a physical drive platter. In order to read File A, the spindle head has to travel from the innermost part of the drive out to the outermost part, back in, and so on – and because of that, those hard drives are working intensely hard just to read and write data.
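If you want a feel for just how much extra work that is, here’s a quick back-of-the-envelope simulation (purely illustrative numbers, nothing measured from our array) comparing the head travel needed to read a contiguous file versus a randomly scattered one:

```python
# Toy model: total track-to-track head travel for a contiguous file
# vs. one fragmented randomly across the platter.
import random

TRACKS = 100_000          # tracks on our imaginary platter
FILE_EXTENTS = 64         # extents making up "File A"

def head_travel(extent_tracks):
    """Sum of the distances the head must cover, reading extents in order."""
    pos, travel = 0, 0
    for track in extent_tracks:
        travel += abs(track - pos)
        pos = track
    return travel

contiguous = list(range(50_000, 50_000 + FILE_EXTENTS))
scattered = [random.randrange(TRACKS) for _ in range(FILE_EXTENTS)]

print("contiguous head travel:", head_travel(contiguous))   # tiny
print("scattered head travel: ", head_travel(scattered))    # enormous
```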

In any event, with a sour taste in our mouths from the dying drives, we attempted to create a SQL Server 2012 box with a LUN. Needless to say, the SC30 couldn’t accommodate this due to its age and 32-bit disk structure. So… time to upgrade controllers – but they were very expensive.

So, we started looking around at some newer technology.

2. Application Usage

We are about 95% virtual and about 5% physical (our only physical servers are SQL Server boxes); however, I would estimate that SQL Server accounts for at least 50% of our I/O (and it’s probably 75% read, 25% write on that). We utilize VMware ESXi 5.1 (at the moment), SQL Server 2008 R2 (soon to be 2012), and some media volumes for images. We needed at least 7 TB of usable capacity and were expecting to grow by at least another 4 TB in the very near future.

3. Research

We met with a lot of vendors – companies like Tegile, Nexsan, Dell, Nimbus (insanely expensive for a full-SSD SAN, but hey, it was fun), and a few others. Let me lay out the major contenders and why they were contenders.

Tegile: We really liked Tegile and had a good, long meeting with them. It seemed like their big thing was SSD cache, compression, and deduplication, all of which seemed attractive to us because we needed the read performance (SQL, anybody?) and because deduplication and compression could potentially save us a lot of space. The array utilizes two onboard controllers in an active/active configuration.
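For the curious, here’s a tiny sketch of how block-level deduplication and compression work in general – my own simplified illustration, not Tegile’s actual implementation: blocks are keyed by a content hash so identical blocks are stored once, and each unique block is compressed before it hits disk.

```python
# Generic block-level dedup + compression sketch (not Tegile's code).
import hashlib
import zlib  # stand-in for gzip/lzjb-style compression

store = {}      # content hash -> compressed unique block
block_map = []  # logical block index -> content hash

def write_block(data: bytes):
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:            # only store unique content once
        store[digest] = zlib.compress(data)
    block_map.append(digest)

def read_block(index: int) -> bytes:
    return zlib.decompress(store[block_map[index]])

# Duplicate blocks (think cloned VMs) cost almost nothing:
write_block(b"A" * 4096)
write_block(b"A" * 4096)   # dedup hit; no new storage consumed
print(len(store), "unique block(s) stored for 2 logical blocks")
```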

The good: Variable compression (gzip, lzjb), deduplication, SSD cache, and price. I’m not sure if I’m allowed to post pricing here, but I’ll say, in very general terms, that the quote came in at around $40,000 for a 22 TB array.

The bad: They basically treat their SSDs like regular hard drives, more or less following the procedure below:

[Image: Tegile read and write procedure]

As you can see, there are a lot of writes to the SSD. This is alright because they utilize eMLC-based solid state drives, but still, it treats the SSDs as “fast spindle cache”. Quite frankly, they are not fast spindle cache – they are SSDs. After a lot of thought and research, we decided against this, both because it was basically the same system we were leaving (drive thrashing) and because of how we felt about all those writes to the SSD cache.
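Here’s roughly what bothered us, sketched out. This is my reading of the diagram as a generic write-back SSD cache – not Tegile’s actual code path: every single write lands on the SSD before it is destaged to spindle, so the SSDs absorb the entire write workload.

```python
# Generic write-back SSD cache sketch (my interpretation, not Tegile's).
ssd_cache = {}   # block_id -> data; holds both dirty and promoted blocks
dirty = set()    # blocks on SSD not yet flushed to spindle
spindles = {}    # block_id -> data on rotating disk

def write(block_id, data):
    """Write-back: acknowledge as soon as the block is on SSD."""
    ssd_cache[block_id] = data
    dirty.add(block_id)

def read(block_id):
    """Read from SSD on a cache hit; otherwise promote from spindle."""
    if block_id in ssd_cache:
        return ssd_cache[block_id]
    data = spindles[block_id]
    ssd_cache[block_id] = data   # populate the read cache too
    return data

def flush():
    """Background destage of dirty blocks from SSD to spindles."""
    for block_id in list(dirty):
        spindles[block_id] = ssd_cache[block_id]
        dirty.discard(block_id)
```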