Nimble Storage – An In Depth Review – Post 3


The Decision: We kept coming back to Nimble as the point of comparison for everything. We would be like “Oh, we really love this about Tegile… but Nimble does it this way”, or “Well, we really like how EMC does this, but Nimble does it this way…”. Eventually it got to the point where seriously evaluating Nimble was something we really needed to do.

And now… why we chose Nimble: Here it goes. I’ll do my best to write what I remember and what I have seen. When we first started looking around for a new SAN, a friend came to mind – a friend who had previously worked at a different SAN company. I called him for some unbiased advice, mainly wanting to know if he had an opinion on what was better – 10Gb iSCSI or 8Gb Fibre Channel. He suggested we set up a little meeting and have a chat. Prior to this, I had only heard bits and pieces about Nimble Storage. He came in and told us that both Fibre Channel and iSCSI have their place in a particular environment, but then went on to his sales pitch. My friend is not a sales guy; he is a SAN engineer. He didn’t sell the product to us – the product sold itself.

Basically, he walked us through the entire SAN, from the chassis all the way down to the file-level functions.

So… first things first. The Nimble CS series is actually housed in a slightly modified Supermicro 6036ST-6LR chassis. The enclosure has 16 drive bays (4x SSD, 12x HDD), with the solid state drives in the middle of the enclosure. Why did they choose that? One thing you’re going to learn here is that there is a reason for everything Nimble does. The spinning drives can create vibration, and vibration is no good for a hard drive – so the HDDs sit toward the sides of the enclosure, closer to the rack-mount screws, which provide more stability and support for those drives, while the SSDs, which have no moving parts and don’t mind vibration, take the middle bays. I suppose in theory it would help with drive vibration. Obviously I have never field tested this, but to me it seems like a valid claim.


Look familiar? The Nexsan NST series and the Tegile Zebi series are both housed in the same chassis. I’m not sure how they position their SSDs; they didn’t make a point of telling us when we met with them, and they didn’t show us when we saw their physical products.

Anyway, the chassis has two hot-swappable controllers in the back, in an Active/Passive configuration. Most SANs are Active/Active, so for those of you who are uncomfortable with this, here in a nutshell is how the controllers work. As data is written, it is first written to NVRAM, which is mirrored across the controllers – this means that if Controller A drops out in the middle of a write, Controller B already has the exact same data in its NVRAM, and therefore nothing is lost. Failovers are absolutely seamless; we have done several test failovers and there was zero interruption. The controllers are housed in the back of the unit and slide out. They can be upgraded at any time should your infrastructure need it (e.g., you can go from a CS240 to a CS440 just by swapping controllers), and this can be done hot.
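To make the mirrored-NVRAM idea concrete, here is a minimal sketch in Python. This is my own hypothetical illustration, not Nimble’s actual code – the class and method names are invented – but it shows the principle described above: a write is only acknowledged to the host after it exists in the NVRAM of both controllers, so a failover never loses an acknowledged write.

```python
class Controller:
    """A hypothetical controller holding its own NVRAM staging area."""

    def __init__(self, name):
        self.name = name
        self.nvram = {}  # block address -> data


class MirroredWritePath:
    """Sketch of an Active/Passive pair with mirrored NVRAM writes."""

    def __init__(self, active, standby):
        self.active = active
        self.standby = standby

    def write(self, address, data):
        # Stage the write in the active controller's NVRAM...
        self.active.nvram[address] = data
        # ...and mirror it to the standby BEFORE acknowledging,
        # so both controllers hold the exact same data.
        self.standby.nvram[address] = data
        return "ack"  # the host only sees the ack once both copies exist

    def failover(self):
        # Promote the standby. Its NVRAM already contains every
        # acknowledged write, so nothing is lost mid-flight.
        self.active, self.standby = self.standby, self.active


# Usage: write, fail over, and the acknowledged data is still there.
a, b = Controller("A"), Controller("B")
san = MirroredWritePath(a, b)
san.write(0x10, b"payload")
san.failover()
assert san.active.nvram[0x10] == b"payload"
```

The key design point is the ordering: the acknowledgment comes only after both NVRAM copies are in place, which is why the controller that takes over sees exactly what the failed one saw.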

The CASL architecture: Before moving on to another post and going into this in depth, if you have time, I would suggest watching this video:

You might think it’s all just marketing propaganda, but it seriously is revolutionary.