Veeam vs. Unitrends

In our business, we take backup, recovery, and replication pretty seriously. We have a variety of backup utilities, storage appliances, and devices dedicated as replica datastores. Currently, we use a combination of Veeam and Unitrends. I am very partial to one of the two, but I'll go through each of them and their pros and cons, because they both do some things very well and both have a few things that could be improved. Veeam happens to be the one I am partial to, so this article will pit them against each other and explain, in my usual rambling way, why I prefer Veeam over Unitrends.

Unitrends comes in a few different packages – you can buy an appliance from them (which is what we have), and they also now offer Unitrends Enterprise Backup, which is basically a VM that is almost the same thing as the appliance, except that you use shared storage (or datastore storage) for your backup repository. A few things Unitrends really excels at are compression and deduplication (it has native dedupe). It's Linux (CentOS) based and uses a lot of custom-built packages to back up data, with Postgres as the database that holds all the backup metadata. You can map iSCSI, NFS, or CIFS volumes as a backup repository (at least with the VM – the appliance does not allow this, or at least our license does not). One of the things Unitrends shines at is that it can back up basically anything – SQL Server, virtual machines (Hyper-V or VMware), and physical machines (bare metal or individual disks). You can also create bare metal restore media (CD / DVD) and restore a physical server that way.

Veeam, on the other hand, is installed on a Windows machine and requires a SQL Server database (it will install SQL Express if you don't have a full SQL Server infrastructure). Veeam does an extremely good job at compressing VMs (a very high compression ratio, at least compared to Unitrends). What's also pretty nice is that it's VERY fast – we back up VMs over a WAN connection (a 20-meg line) and for the most part they take less than 2 minutes apiece. Some take a little longer than that, but for the most part they are very fast, even over higher-latency and slower networks. In contrast to Unitrends, Veeam really only backs up virtual machines (Hyper-V or VMware, depending on your license). One of the main things I really like about Veeam is the GUI and the program interface – it's straightforward, clean, and easy to use. Unitrends, in contrast, has a really hokey Flash-based web UI. Things are buried in all kinds of odd places, and it's just not as clean and straightforward as the Veeam interface.

So, that said, here is the comparison:

Veeam Pros:

1. Solid GUI
2. Fast backups (using change block tracking)
3. Awesome compression
4. Solid Feature Set
5. Robust backup options (Incremental, Reversed Incremental, Full)
6. Custom Backup Repositories (as in physical disk, SAN to SAN, etc…)
7. Replication is included with the license
8. You can install it just about anywhere – a VM, a physical machine – though I would recommend a physical machine because, depending on your compression ratios, CPU usage can get a little heavy
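Item 2's changed block tracking is the reason incrementals are so fast: only blocks that changed since the last run get copied. Here's a toy sketch of the idea in Python – real CBT is done by the hypervisor keeping a dirty-block map, not by hashing, so the hashing below is just a stand-in for illustration:

```python
import hashlib

BLOCK_SIZE = 4  # tiny blocks for illustration; real CBT tracks much larger extents

def block_hashes(data: bytes) -> list:
    """Fingerprint each fixed-size block of a disk image."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old_hashes, new_hashes):
    """Return indices of blocks that differ since the last backup."""
    return [i for i, (a, b) in enumerate(zip(old_hashes, new_hashes)) if a != b]

full = b"AAAABBBBCCCCDDDD"   # disk state at the full backup
incr = b"AAAAXXXXCCCCDDDD"   # one block changed since then

delta = changed_blocks(block_hashes(full), block_hashes(incr))
# only block 1 changed, so the incremental copies 4 bytes instead of 16
```

That difference between "copy the whole disk" and "copy one block" is exactly why our 2-minute WAN backups are possible.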

Veeam Cons:

1. Backing up Physical Machines is not an option at this point
2. Change Block Tracking doesn’t play nice with other backup appliances (to be fair, this isn’t Veeam’s problem, this is more of a VMware problem)
3. Price – it’s pretty expensive – but depending on what you need backed up, it’s worth it

Unitrends Pros:

1. Backs up just about anything (HyperV VMs, VMware VMs, Physical Windows, Linux, Solaris, etc… servers)
2. Available as either a VM or a physical appliance (though the physical appliance has to come from Unitrends – you can't just use your own hardware)
3. Restores are pretty quick

Unitrends Cons:

1. Unintuitive and unattractive GUI
2. Does not play nice with any kind of Replicas or VM Clones
3. Bare Metal restores are painfully slow (over network)
4. Reporting is not so hot – when there is a problem, it gives you only a “Job Failed” email, with no description as to why. To find out why, you have to go digging.
5. Pretty expensive and licensing fees are convoluted

Like I said at the beginning, I was at first a little hesitant about Veeam – mainly because of the expense – but with the constant improvements and updates that come by way of patches, it's really an awesome application. I would choose it over Unitrends if I had to decide today.

Nimble Storage – An In Depth Review – Post 5

Hardware and Network considerations: The first thing you need to know is that Nimble is 100% iSCSI based. There is no Fibre Channel option – it's either 1Gb Ethernet or 10Gb Ethernet. At the time we were looking into this, we were using Fibre Channel on our Compellent, and I had always assumed Fibre Channel was superior to Ethernet for storage because of the latencies involved.

I've found over time that this is not always the case. There is always a bottleneck somewhere in storage networking – the disk array may be too slow to perform all the requested writes, the network may be saturated with read or write requests, the server doing the writing may be queuing disk writes, or the storage device may simply have a hard time keeping up with all the I/O being asked of it. Any number of things can cause a slowdown.

I was very curious as to why Nimble would have such a fast storage device (as they claimed, and it turned out to be true) and not offer something like 8Gb or 16Gb Fibre Channel – to me, it felt like they were shooting themselves in the foot, especially by offering a model with 4 single-gigabit NICs (for a total bandwidth of 4 Gbps, or about 475 megabytes per second). After we had the chat with the engineer, we found that it's really not all about storage bandwidth – it's about the number of operations happening. For example, if you have 3,000 operations happening in a given second and your storage device does not have the capacity to handle that, it will begin to get saturated and utilization will go up. Our network utilization is never that high on the Nimble because of how fast it processes those transactions, or IOs.
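The engineer's point is easy to check with a bit of arithmetic. Taking the 3,000 IOPS figure above and assuming a typical-ish 8 KB I/O size (my number, not Nimble's), the wire bandwidth that load actually generates is tiny compared to even 4x 1GbE:

```python
def required_mbps(iops: int, block_size_kb: int) -> float:
    """Approximate throughput in megabits/s generated by a given IOPS load."""
    return iops * block_size_kb * 8 / 1000  # KB/s -> kilobits/s -> megabits/s

# 3,000 IOPS of 8 KB random I/O (a database-ish workload; block size is my assumption)
load = required_mbps(3000, 8)   # 192 Mbps -- under 5% of a 4 Gbps bundle
```

So the array runs out of ability to *process* operations long before the network runs out of bandwidth, which is why skipping Fibre Channel isn't the handicap I assumed it was.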

There are a few things you REALLY need to do to take full advantage of that, though, and the big one is multipathing. Multipathing makes a HUGE difference in performance. Nimble has a couple of "Best Practices" that make it super efficient and very fast even over 1Gb Ethernet. If you use 10Gb Ethernet, I don't think you're going to have to worry about network bottlenecking. There is also a really handy Connection Manager coming with the GA of Nimble OS 2.0.

Here is what I would suggest: you need a good switch. Whether 1Gb or 10Gb, you need a good switch (L2 at the very least – if you just get a dumb switch, you're not going to get the performance you want) – something newer that supports Jumbo Frames, Unicast Storm Control, STP / RSTP, buffered ports, etc. Netgear has some pretty good cheap ones, Dell has some decent ones, HP has some good ones, and of course Cisco has some good ones. Don't cheap out on the switch.

SNMP Monitoring with Observium

Observium is a pretty nifty tool I stumbled upon while we were consolidating some of our monitoring, trying to save on offsite monitoring costs. We wanted to be able to keep past data for reference as well as have current data. Observium sort of fit the bill, and on a whim I just whipped up a server to check it out.

Before we go much further, I'll just talk about the server requirements – it requires Apache, MySQL, PHP, SNMP (net-snmp), RRDTool, and a few other monitoring plugins. For hardware, you could probably get away with a VM with 2 cores and 2 GB of RAM to start, though scaling will definitely come into play. MySQL does not have to be on the same server either – you can definitely put it on another box. At first I tried installing this on CentOS 6.4 (CentOS 6.5 has since come out and I have not tried it), but some of the graphs were not displaying properly. After that I tried Debian Wheezy and things worked much better – so I would suggest using Debian even if you're a RHEL / CentOS shop (like we are).
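The reason Observium can keep "past data for reference" without the database growing forever is RRDTool: old samples get consolidated into coarser averages instead of being kept raw. A hypothetical sketch of that averaging step (RRDTool does this internally with its AVERAGE consolidation function; this is just the idea, not its code):

```python
def consolidate(samples, factor):
    """Average every `factor` raw samples into one archived point, the way an
    RRD's AVERAGE consolidation keeps years of history in fixed space.
    Assumes len(samples) is a multiple of factor, for simplicity."""
    return [sum(samples[i:i + factor]) / factor
            for i in range(0, len(samples), factor)]

raw = [10, 20, 30, 40, 50, 60]     # e.g. six 5-minute samples
archive = consolidate(raw, 3)      # two coarser points survive long-term
```

You lose fine detail on old data, but the graphs stay fast and the storage footprint stays flat.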

A few other notes before we go over using it – when I first started using it, it was a 100% free / open source project updated straight from their SVN repository. Since then, it has forked into a "Community Edition" and a "Professional Edition". The "Professional Edition" will set you back about $160 (which I don't have), but I would have to say it is totally worth it, and sooner or later we will most likely buy it.

The Community Edition is released twice a year and carries with it all the updates committed to the "Professional Edition" repository. The only thing it really lacks is an alerting mechanism – it's for monitoring only. According to their documentation, the Professional Edition does have alerting built in.

A few things it monitors really well: switches, firewalls, VMware (ESXi hosts), and Linux. Where it falls short is monitoring specifics – for example, you can't really monitor MySQL without doing a bunch of work with an agent (which I find to be more trouble than it's worth, especially since tools like Nagios / Nagvis do that pretty well without a bunch of supplemental configuration).

Anyway, on to some screenies:


Some switches and other equipment are supported MUCH better than others (for example, this Dell PowerConnect has a ton of information, but a Netgear switch that I also monitor has very little).


Other things will monitor more than just network interfaces – CPU / Memory / Disk I/O:

I have found it to provide some pretty valuable insight into day-to-day operations, and I look at it at least once a day for a read on our network health.

Nimble Storage – An In Depth Review – Post 4


Disk structure and CASL: The system itself is set up in a somewhat different way – all spinning disks are in RAID 6 (dual parity) with one spare (meaning you could technically lose up to 2 disks before you really had to start sweating bullets). The SSDs are not in any kind of array – and that bears repeating: they are not in any type of array. Putting them in any kind of protected RAID array (anything other than RAID 0) means you start to lose capacity. Each SSD is individually used to the fullest of its capacity for nothing but cached reads. So the question arises: "what if an SSD dies?" Well, not much happens, to be honest – the space that was used for cache is lost, and the data in that cache is lost, but it's a read-only cache, meaning the data already lives on the spinning disks; only the cached copy is gone. The data deemed worthy of cache will be re-pushed to cache as its worthiness is noted.
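That "lose an SSD, lose nothing" property falls directly out of the read-through design: writes land on disk, and the cache only ever holds copies. A toy sketch of the behavior (my own illustration, not Nimble's implementation):

```python
class ReadCache:
    """Toy read-through cache: every write lands on 'disk' (the backing dict);
    the cache only ever holds copies, so dropping it loses no data."""
    def __init__(self):
        self.disk = {}
        self.cache = {}

    def write(self, key, value):
        self.disk[key] = value           # writes bypass the read cache entirely

    def read(self, key):
        if key not in self.cache:        # miss -> fetch from disk, populate cache
            self.cache[key] = self.disk[key]
        return self.cache[key]

    def lose_ssd(self):
        self.cache.clear()               # simulate a dead cache SSD

c = ReadCache()
c.write("block7", b"data")
c.read("block7")                 # block is now "hot" and cached
c.lose_ssd()                     # cache SSD dies...
survivor = c.read("block7")      # ...the read just falls back to disk
```

The only cost of the dead SSD is that the next read is a slow disk read until the data earns its way back into cache.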

So, on to the data write operations. I made up a little flow chart that shows just how data is written to disk and the process it goes through.
A few other little tidbits on writing data – compression is decent. I personally think LZ4 is a decent compression algorithm – in the past I've used several in SAN infrastructure, mainly LZJB (pretty close to LZ4), the standard GZIP levels, and LZMA (which goes for a much higher ratio at a much higher CPU cost). LZ4 strikes a pretty good balance between compression and CPU usage – obviously, the higher the compression, the more CPU it takes to compress the data. Using things like Nexenta, I prefer the GZIPs over LZJB – mainly because the CPU usage is not out of line (except when you use GZIP 8+, when it starts getting a little high).

According to some Nimble documentation, compression happens at about 300 MB/s per CPU core. LZ4 (like most Lempel-Ziv based compressors) has awesome decompression speed – it can happen at RAM speed in most cases (1 GB/s+). Compression is optional, though it is enabled by default. We have shut it off on several volumes – namely the Veeam volume, since Veeam compresses so well already that we weren't gaining anything by having it on, and our media server, since photos are already compressed and don't offer much savings. For things like VMs and SQL Server, you can expect to see a 1.5 – 2.5x compression ratio – SQL Server especially compresses well (2.25x+).

You'd think that with compression like that you'd notice a performance hit while compressing and decompressing data, but we have not noticed any – only huge improvements.
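A compression ratio is just original size over stored size, and measuring one shows why we turned compression off on the Veeam and photo volumes. LZ4 isn't in Python's standard library, so this sketch uses zlib as a stand-in; random bytes stand in for already-compressed data like JPEGs or Veeam backup files:

```python
import os
import zlib

def ratio(data: bytes) -> float:
    """Original size divided by compressed size (>1 means it shrank)."""
    return len(data) / len(zlib.compress(data))

sqlish = b"INSERT INTO orders VALUES (1, 'widget');" * 200  # repetitive, like DB pages
jpegish = os.urandom(8000)                                  # stands in for photos / Veeam output

r1 = ratio(sqlish)    # highly repetitive data compresses very well
r2 = ratio(jpegish)   # already-compressed data: ratio ~1.0, pure wasted CPU
```

Compressing data that's already compressed buys you nothing and still burns cycles, which is exactly why those two volumes run with it off.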

Next are the read operations – once again, a little flow chart:

So, let's talk about cache for a second (accelerated reads). Any "hot" data is written to SSD cache, and the system serves that hot data from cache, so it responds really quickly. Obviously, reads are a lot faster coming from SSD than HDD – RAID 6 reads aren't horrible, especially sequential ones, but SSD is still generally much faster: in the realm of 6 – 10 milliseconds on HDD vs. 200 microseconds (about 0.2 milliseconds) on SSD.
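Those latency figures make it easy to see what a good cache hit rate buys. A quick back-of-the-envelope using the rough numbers above (0.2 ms SSD, 8 ms as a middle-of-the-road HDD figure; the 90% hit rate is an illustrative assumption, not a measured one):

```python
def effective_latency_ms(hit_rate, ssd_ms=0.2, hdd_ms=8.0):
    """Average read latency for a given cache hit rate."""
    return hit_rate * ssd_ms + (1 - hit_rate) * hdd_ms

cold = effective_latency_ms(0.0)   # everything from spinning disk: 8.0 ms
warm = effective_latency_ms(0.9)   # 90% of reads served from SSD: ~0.98 ms
```

Even at 90% hits, average read latency drops roughly 8x, which matches the "feels like an all-flash array" experience for hot working sets.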

I may have previously mentioned that these use standard MLC SSDs (multi-level cell solid state). Generally speaking, in the enterprise, the "best" SSDs go in this order: SLC > eMLC > MLC > TLC, with TLC being the worst by quite a long shot. SLCs, generally speaking, can endure far more writes than their MLC counterparts. If you want more info on why that is, read this. Honestly, when the engineer told us it was just plain MLC, I thought: wow, this product seems so genius – why throw MLC in there? I asked him that question, and he gave me some pretty good answers, the main one being that it all comes down to how you use it. SLC SSDs endure random writes very well (by a lot – we're talking 100,000+ writes per cell). Since MLC drives store 2 bits per cell, the cells can potentially wear out faster, so Nimble converts random writes into sequential writes, which minimizes "write amplification" – basically spammy scattered writes to cells – and thereby prolongs the life of the SSD. Data on the SSDs is compressed and "indexed" with metadata, which also speeds up cached reads.
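That random-to-sequential conversion is the log-structured trick at the heart of CASL. The sketch below is my own simplified illustration of the concept, not Nimble's code: incoming writes to scattered logical offsets are appended to one sequential log region, and an index remembers where each logical block actually landed:

```python
def coalesce(random_writes, log_start=0):
    """Append incoming random (logical_offset, data) writes to one sequential
    log region, keeping an index of logical offset -> location in the log.
    This is the log-structured idea that spares MLC cells scattered rewrites."""
    index, log = {}, bytearray()
    for logical_offset, data in random_writes:
        index[logical_offset] = log_start + len(log)  # where this write landed
        log += data                                   # strictly sequential append
    return index, bytes(log)

# three writes aimed at scattered logical offsets...
writes = [(900, b"C"), (100, b"A"), (500, b"B")]
index, log = coalesce(writes)
# ...leave the media as ONE sequential stripe; the index preserves logical placement
```

The flash never sees the scatter: it sees one big sequential write, which both reduces write amplification and lets cheap MLC last far longer than its raw endurance numbers suggest.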

Next post, I’ll get in to networking and such.

Server 2012 R2 and Hyper-V Generation 2

Well, I have to admit that I was pleasantly surprised by Server 2012 and the improvements to HyperV. The major improvements that I have noticed since I started using it are these:

1. Live Migrations between cluster hosts are insanely fast. This is due to compressing the data in flight between the two hosts. You notice a slight bump on the processors (maybe going from 2% utilization to 12%), but it's not enough to hurt anything. I'll be the first to say this feature is amazing – migrations take literally a few seconds, and these are VMs with 6 GB of dynamic memory (using about 2 GB).

2. Generation 2 virtual machines. I haven't used this much yet; some of the features look great and some don't. The main ones are booting from a SCSI disk and shared SCSI disks – that means you can create virtual clusters with extreme ease. The problem with Generation 2 VMs is that the RemoteFX stack is missing. That's not a problem if you're hosting servers, but if you are doing a VDI deployment (like myself), it's a bit of a problem.

3. Deduped Cluster Shared Volumes. This is something I've been waiting for – dedupe in Windows is awesome. In my little VDI environment, I created each individual VM (not using a template), tailored for each user. This is good and bad – it means each user retains their settings, but it also means that when you thin provision and then use 30 GB of space in one VM, that's 30 GB used on the volume. This is not a problem for a good SAN, since a good SAN will compress that to about half the size, but when you have 20 or 30 VMs, it still adds up. Enter deduped Cluster Shared Volumes: since most of the VMs are Windows 8 or Windows 8.1, they each carry 10 – 15 GB of exactly the same data. Dedupe stores that data only once instead of once per VM, and whenever a read asks for one of those chunks, it reads the single deduped copy.

4. GPT / UEFI boot – this isn't a huge thing, since most VMs don't need a 2 TB+ boot disk, but for those that do, you're in luck.
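The dedupe win from point 3 is easy to sketch: store each unique chunk once, keyed by its content hash, and let every VM reference the chunk by hash. A toy illustration (real Windows dedupe works on variable-size chunks in the filesystem; this just shows the accounting):

```python
import hashlib

def dedupe(chunks):
    """Store each unique chunk once; logical references are just hashes."""
    store, refs = {}, []
    for chunk in chunks:
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)   # first copy is stored, duplicates are free
        refs.append(h)
    return store, refs

# three VDI VMs with identical OS blocks, plus one unique user-profile block
chunks = [b"win8-os-block"] * 3 + [b"user-profile"]
store, refs = dedupe(chunks)
# 4 logical chunks, only 2 physically stored
```

Scale that up to 20 or 30 VMs sharing 10 – 15 GB of identical OS data and the physical savings on the CSV become enormous.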

All in all, a lot of big improvements.

The Windows 8.1 Upgrade Experience (for Enterprise)

I have to admit, I’ve tried to be on Microsoft’s side through this whole Windows 8 thing. I’ve been using it for quite a while, and though there are some things I don’t particularly care for, I really don’t find it to be that horrifyingly bad of an OS. Now, that being said, the upgrade process to Windows 8.1 for an Enterprise user (or even a MAK / KMS key) is pathetic.

First off, Windows 8.1 was presented as a "free" upgrade to Windows 8. Free it is, but actually performing the upgrade is not as easy as running Windows Update. Microsoft says the update is available in the Windows Store… unless, of course, you are running Windows 8 Enterprise or activated with a MAK or KMS key. So, after waiting for it to show up in the store, which it never did, I went on the prowl to find out where the update was.

That's when I found out what I just mentioned. The only way to do the upgrade is an "in place upgrade" using the ISO. I checked the VLSC for an ISO and was surprised to find that it's not there (at least not 8.1 Enterprise). So much for that problem-free upgrade.

I finally found an ISO for doing the upgrade and completed it – however, the system then demanded a new product key. This is a total fail in my opinion and a pathetic release cycle.


Nimble Storage – An In Depth Review – Post 3


The Decision: We kept coming back to Nimble as the comparison point for everything. We would say, "Oh, we really love this about Tegile… but Nimble does it this way," or "We really like how EMC does this, but Nimble does it this way…". Eventually it got to the point where it was something we really needed to do.

And now… why we chose Nimble. Here it goes – I'll do my best to write what I remember and what I have seen. When we first started looking around for a new SAN, a friend came to mind – a friend who previously worked at a different SAN company. I called him for some unbiased advice, mainly wanting to know whether he had an opinion on which was better: 10Gb iSCSI or 8Gb Fibre Channel. He suggested we do a little meeting and have a chat. Before this, I had only heard bits about Nimble Storage. He came in and told us that both Fibre Channel and iSCSI have their place in a particular environment, but then went on to his sales pitch. My friend is not a sales guy – he is a SAN engineer. He didn't sell the product to us; the product sold itself to us.

Basically, he showed us the entire SAN, from the chassis all the way down to the file-level functions.

So… first things first. The Nimble CS series is actually housed in a slightly modified Supermicro 6036ST-6LR chassis. The enclosure has 16 drive bays (4x SSD, 12x HDD), with the solid state drives in the middle of the enclosure. Why did they choose that? One thing you're going to learn here is that there is a reason for everything Nimble does. They put the SSDs in the middle because the spinning drives can create vibration, and vibration is no good for a hard drive – placing the hard drives closer to the screws on the rack-mount sides provides more stability and support for them. I suppose in theory it helps with drive vibration. Obviously I have never field-tested this, but it seems like a valid claim.


Look familiar? The Nexsan NST series and Tegile Zebi series are both housed in the same chassis. I'm not sure how they position their SSDs – when we met with them, they did not make a point of telling us, and when we saw their physical products, they did not show us.

Anyway, the chassis has two hot-swappable controllers in the back in an Active / Passive configuration. Most SANs are Active / Active, and for those of you who are uncomfortable with that, here in a nutshell is how the controllers work: as data is written, it is first written to NVRAM, which is mirrored across controllers. This means that if Controller A drops out in the middle of a write, Controller B already has the exact same data in its NVRAM, and nothing is lost. The failovers are absolutely seamless – we have done several test failovers and there is zero interruption. The controllers slide out from the back of the unit and can be upgraded at any time should your infrastructure need it (i.e., you can go from a CS240 to a CS440 just by swapping controllers). This can be done hot.
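The reason an Active / Passive pair can fail over mid-write without losing anything is that a write isn't acknowledged until it exists in both controllers' NVRAM. A toy sketch of that invariant (my own illustration of the concept, not Nimble's implementation):

```python
class MirroredNVRAM:
    """Toy active/passive controller pair: every acknowledged write exists in
    both controllers' NVRAM, so the passive side can take over mid-stream."""
    def __init__(self):
        self.active, self.passive = [], []

    def write(self, block):
        self.active.append(block)
        self.passive.append(block)   # mirrored BEFORE the write is acknowledged
        return "ack"

    def failover(self):
        # active controller dies; the passive side already holds identical state
        self.active, survivor = None, self.passive
        return survivor

nv = MirroredNVRAM()
nv.write(b"sector-42")
recovered = nv.failover()   # the in-flight write survives the controller loss
```

Because the mirror happens before the ack, there is no window in which a host believes data is safe while only one controller holds it.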

The CASL architecture: Before going on to another post and going in to this in depth, if you have time, I would suggest watching this video:

You might think it’s all just marketing propaganda, but it seriously is revolutionary.

Nimble Storage – An In Depth Review – Post 2


Nexsan: The Nexsan array we looked at is actually housed in the same chassis as the Nimble array, so the look and configuration are quite similar – though there is a really large part missing: CASL, which I'll get to in post 3. It's a hybrid array (meaning it leverages standard spindle disks plus SSDs as cache). This is very similar to Tegile, except there are user-defined RAID levels (only 5 and 6), and it's quite accommodating – you can choose how many SSDs you want as cache and how many standard disks you want.

The good: It’s extremely flexible. Multi-protocol (CIFS, NFS, iSCSI, FC).

The bad: The caching seems extremely similar to Tegile's. Data is cached as it is read or written – this provides fast reads and writes at the expense of SSD lifetime. Because it's user-configurable, I would think most people would choose standard MLC SSDs rather than eMLC, and they will most likely run into problems because of that (especially in write-intensive environments). No compression or deduplication to speak of.

Dell (Compellent): We looked at upgrading our controllers. This provided us a way to keep our existing disks and infrastructure, only with newer controllers.

The good: We wouldn’t have had to rip out the entire infrastructure.

The bad: Price. The price is a massive con – so massive that the controllers alone cost 3/4 of the way to an all-new SAN. No SSD caching to speak of. Tiered storage burns tons of I/O on reads and writes for data progression. Licensing costs. Power usage is off the hook – right now our old Compellent array is estimated to be drawing 2,500 – 3,000 watts: 2x controllers at 500 watts apiece and 3x disk enclosures at 500 watts apiece, and this is literally not a joke, we monitor our power usage. Disk failures are rampant due to the drives being thrashed by data reads and writes.

Nimbus Data: Nimbus is a full-out all-flash array built on SSDs. It's insanely fast.

The good: Crazy high I/O.

The bad: The price is totally out of range, and this is absolute overkill for what we need. We push about 3,500 – 5,000 bursted IOPS; the Nimbus array pushes 1 million IOPS. I will also say this – I tend to despise companies that hire really gorgeous girls to sell you products (who really don't know jack), and this is exactly what Nimbus does. I know this blog is called Dorkfolio and that I am a bit of a nerd, but just because I am a nerd and some gorgeous girl tries to sell me something doesn't mean I'll buy it. Nimbus employs some of the most attractive girls I've ever seen at a SAN booth, all to lure in the nerds. I buy a product based on whether the product sells itself to me, not on whether an extremely attractive girl is trying to sell it to me.

We thought about building our own SAN based on Nexenta: Nexenta is a SAN OS built on Solaris and ZFS. It's a pretty robust SAN solution. It takes a bit of work to set up, but once set up, it would work… or so I thought.

The good: We could build it for a lot less than we could buy it from a SAN vendor. RAID-Z.

The bad: No support other than the person who designed and built it. Reporting is sketchy at best (i.e., if a drive fails, you're not going to know about it immediately). The SSD cache is used like a fast spindle disk (once again, like Tegile). ZFS is 95% stable – and in a production environment, 95% is not good enough.


Yes, this is really replacing 12U of Compellent Equipment.

Nimble Storage – An In Depth Review – Post 1


Recently I have had the pleasure of working with a Nimble CS240 array. Needless to say, I am quite pleased with it, and I'm going to do a pretty thorough write-up here for those on the fence looking for a new SAN or looking to do a "rip and replace". I'm going to go in depth on everything we went through to make this decision, why we made it, what we did, and most importantly, how it is working for us.

1. History

It's a bit of a long story, but it was coming time to upgrade our SQL Server from 2008 R2 to SQL Server 2012. We had, in our production environment, a Compellent SC30. The Compellent is a tiered storage device that relies on pure spindle count and drive speed to push out its performance. There can be several tiers utilized for different scenarios – for example, Tier 1 is generally 10k or 15k SAS drives in RAID 10, while Tier 3 would be something more along the lines of 7200 RPM SATA drives in RAID 5. The bread and butter is that you can perform reads from Tier 3 and writes to Tier 1, and then at night (preferably), data progression moves the newly written data down to Tier 3 for reading. Generally speaking, this seems like a pretty good system, except for one major flaw: disk thrashing. About 8 months ago, we suddenly had several (and by several I mean about 5 of our 36) drives fail – Tier 1, Tier 3, both failing in very short proximity to each other, almost to the point of sweating bullets. Thankfully, Dell (who owns Compellent now) has very good support, and we had our replacement drives the next day.


In any event, this SAN used all of these drives for random reads and random writes, which means that when data is written, it is written wherever the controller can find free space to put that sector. This SAN was extremely multipurpose – it housed SQL Server databases, VMware virtual machines, physical servers connected to LUNs, etc. All of these servers perform reads and writes against the controller, and since the data is randomly written, it's randomly read as well.

The image on the left shows data "fragmentation" – or rather, what fragmentation would look like on a physical drive platter. To read File A, the spindle head has to go from the innermost part of the drive out to the outermost part, back in, and so on – and because of that, those hard drives work intensely hard just to read and write data.
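You can put a rough number on that head travel. The sketch below (illustrative track positions I made up, not real geometry) just sums how far the head has to move to read a file's blocks in order, comparing a contiguous layout with a scattered one:

```python
def total_seek(track_positions):
    """Sum of head movements needed to read blocks in file order,
    given each block's radial track position on the platter."""
    return sum(abs(b - a) for a, b in zip(track_positions, track_positions[1:]))

sequential = [10, 11, 12, 13]    # file laid out contiguously
fragmented = [10, 90, 15, 85]    # the same four blocks scattered across the platter

s = total_seek(sequential)   # 3 units of head travel
f = total_seek(fragmented)   # 225 units for the very same data
```

Same file, same four blocks, 75x the mechanical work – multiply that by every random read and write from every server on the array and it's no surprise the drives were getting thrashed to death.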

In any event, with the sour taste of dying drives still in our mouths, we attempted to create a SQL Server 2012 box with a LUN. Needless to say, the SC30 was too old to accommodate this due to its age and 32-bit disk structure. So… time to upgrade controllers – but they were very expensive.

So, we started looking around at some newer technology.

2. Application Usage

We are about 95% virtual and 5% physical (our only physical servers are SQL Server boxes); however, I would estimate that SQL Server accounts for at least 50% of our I/O (probably 75% read, 25% write). We utilize VMware ESXi 5.1 (at the moment), SQL Server 2008 R2 (soon to be 2012), and some media volumes for images. We needed at least 7 TB of usable capacity and expected to grow by at least another 4 TB in the very near future.

3. Research

We met with a lot of vendors – companies like Tegile, Nexsan, Dell, Nimbus (insanely expensive for a full SSD SAN, but hey, it was fun), and a few others. Let me lay out the major contenders and why they were contenders.

Tegile: We really liked Tegile and had a good, long meeting with them. Their big things seemed to be SSD cache, compression, and deduplication, all of which were attractive to us because we needed the read performance (SQL, anybody?) and because deduplication and compression could potentially save us a lot of space. It utilizes 2x onboard controllers in an Active / Active configuration.

The good: Variable compression (gzip, lzjb), deduplication, SSD cache, price. I'm not sure if I'm allowed to post pricing here, but I'll say, with a lot of generality, that the quote came in around $40,000 for a 22 TB array.

The bad: They basically treat their SSDs like regular hard drives, more or less following the procedure below:

Tegile Read and Writes

As you can see, there are a lot of writes to the SSDs – which is alright, since they utilize eMLC-based solid state drives, but it still treats the SSDs as "fast spindle cache". Quite frankly, they are not fast spindle cache – they are SSDs. After a lot of thought and research, we decided against it, both because this is basically the same system we were leaving (drive thrashing) and because of how we felt about all those writes to SSD cache.

Hyper-V Virtual Machine Slow Network Transfers

I found a strange issue today affecting Hyper-V when you are using a Broadcom NIC as the vSwitch NIC: network transfers from within a VM to another part of the network are crazy slow.

This is a “bug” in the Broadcom driver and has to do with a network feature on Broadcom NICs.

To fix it, just disable "Virtual Machine Queues" in the driver's Advanced Configuration dialog – do this on the dedicated physical NIC, NOT on the vEthernet adapter.