
StarWind v5.8 Testing–SSD on PERC6/I with DeDupe

So I had an opportunity to test StarWind v5.8 on my SAN. Specifically, on a 4x120GB SSD RAID5 on a PERC6/I, with DeDupe enabled. I could afford the SSDs, but obviously they don’t provide a lot of space. If I can use DeDupe, then the home lab gets a heck of a boost. In a lab, most of the VMs are clones with very little added beyond roles or features; the base is largely the same.

So first we’ll start with a clone of my 2008 R2 SP1 x64 template:

clip_image001

Here we see the NW-TEST1 VM, powered off, so there is no logging or anything else going on. 44.12GB provisioned, 17.71GB used on disk.

clip_image002

Datastore browser would be inclined to agree.

clip_image003

This one makes less sense. 12.4GB on disk? That’s nice, but not what I’d expect. Note which files have grown.
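As an aside, the logical vs. actually-allocated size of a thin VMDK can be checked straight from the ESXi shell, which helps when the different views don’t line up. A quick sketch – the datastore path is a placeholder and the -flat.vmdk name assumes default disk naming:

# Logical (provisioned) size of the thin disk
ls -lh /vmfs/volumes/<datastore>/NW-TEST1/NW-TEST1-flat.vmdk
# Blocks actually allocated on the VMFS volume
du -h /vmfs/volumes/<datastore>/NW-TEST1/NW-TEST1-flat.vmdk

The 12.4GB on the StarWind host, on the other hand, is presumably just Explorer’s “size on disk” for the image file.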

Now I’m going to copy another VM. This copy will be FROM StarWind TO StarWind, partly to see how it copes with simultaneous reads and writes. Then we’ll look at DeDupe. I’m cloning the SAME NW-TEST1 to NW-TEST2. To be complete, I’ll use a Customization Profile, let it join the domain and do its thing, and then I’ll shut it down.

clip_image004

Note though – this ALSO doesn’t make much sense. We agree 45GB is provisioned. But 299-277 = 22GB, which matches neither the 17GB in vSphere, nor the 12GB reported on the StarWind host, nor the 18GB or so on the Datastore. It might if it included vswap, but the VM isn’t running.

NW-ESXi1 is the host, and while it’s cloning I’ll grab some screen shots:

clip_image005

Only using 3x1GbE iSCSI. No Jumbo Frames.

clip_image006

3x1GbE on the Initiator, 4x1GbE on the Target = 12 connections. Set to Round Robin, and IOPS=1
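For reference, the 12 connections come from binding each iSCSI vmkernel port to the software iSCSI adapter, giving one path per initiator NIC per target portal (3 x 4 = 12). A minimal sketch, assuming ESXi 5.x esxcli syntax; the vmhba and vmk names are placeholders:

# Bind each iSCSI vmkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
esxcli iscsi networkportal add --nic vmk2 --adapter vmhba33
esxcli iscsi networkportal add --nic vmk3 --adapter vmhba33
# Confirm the bindings
esxcli iscsi networkportal list --adapter vmhba33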

clip_image007

Or so I thought it was set to IOPS=1. Turns out the StarWind LUN is an “eui.xxxxx” rather than the “naa.xxxxx” my script looks for. NOW I have changed it to IOPS=1. However, based on the chart, it still doesn’t seem to be using the extra NICs from the host. Perhaps I need to rescan.
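That isn’t my actual script, but a minimal ESXi 5.x shell sketch of the fix looks like this – loop over both naa.* and eui.* devices so the StarWind LUNs don’t get skipped:

# Set Round Robin and IOPS=1 on every naa.* and eui.* device (tighten the grep if you only want SAN LUNs)
for dev in $(esxcli storage nmp device list | grep -E '^(naa|eui)\.'); do
  esxcli storage nmp device set --device $dev --psp VMW_PSP_RR
  esxcli storage nmp psp roundrobin deviceconfig set --device $dev --type iops --iops 1
done
# Rescan so any new paths show up
esxcli storage core adapter rescan --all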

So the VM is finished cloning, and booting up to run the first time OOBE and join the domain. While it is doing that, let’s check the datastore:

clip_image008

Makes sense, and there’s the .vswp we expect if the VM is powered on.

clip_image009

Not really what I expected. You’ll note we’ve gone from 277GB free at the beginning of the clone to 263GB – a difference of 14GB. Except that the VM is 17GB, and with the .vswp it should be more. Plus, while StarWind DeDupes, I would expect VMware to be blissfully ignorant of the actual disk usage, since it doesn’t know whether the back end is Thin, Reclaimed, DeDuped, Compressed, etc. Only the layer between the LUN and the disks should know that.

clip_image010

So now we’re down to 241GB – another 22GB gone. Still strange, but let’s go with it.

clip_image011

Not entirely expected, but who’s to complain.

Based on this, I now have 3 copies of a 44GB provisioned, 17GB allocated VM, taking up 13.5GB. I’m going to make an NW-TEST4 just to be complete.

clip_image012

We’re getting good usage of our NICs, so the iSCSI multipathing is working right.

clip_image013

219GB free – 80GB used for 4x17GB VMs, with some overhead. This makes sense.
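If you’d rather track the free space from the ESXi shell than from the client, assuming ESXi 5.x:

# Show VMFS datastores with capacity and free space
esxcli storage filesystem list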

clip_image014

Looks like because they’re exact clones, they DeDupe pretty well. Shocking :)

clip_image015

Can’t complain too much. Perhaps the trick is to create a 1TB DeDupe volume and then monitor it.

So now let’s do a little testing inside the VM – this is only one VM running, but on a volume where 4 of them are now DeDuped. Tests are with the HD Tune trial, with no changes from defaults:

clip_image016

Not so bad.

clip_image017

Above from NW-TEST5 on NW-SAN2-SAS

clip_image018

Above from NW-VC1 on NW-IX2-200D with 1x1GbE NIC and no mirroring (to get the full 2x1TB data space on the little box)

NW-TEST4 on NW-SAN2-SSDDD volume:

clip_image019

NW-TEST5 on NW-SAN2-SAS volume:

clip_image020

NW-VC1 on NW-IX2-200D:

clip_image021

I’m not entirely sure I understand the similarity between the two StarWind volumes – I’d expect the SSD to be much faster than the SAS. But they’re both on the same PERC6/I controller, so perhaps that’s the limiting factor keeping the SSD from pulling ahead, or it’s where both get a bump from caching and policies.
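One way to sanity-check that theory is to look at the controller’s cache policy per virtual disk. A sketch, assuming the LSI MegaCli utility is installed on the StarWind host (adapter and virtual disk numbering will vary):

# Show the current cache policy (Write Back/Write Through, Read Ahead) for all virtual disks
MegaCli -LDGetProp -Cache -LAll -aAll
# Full virtual disk details, including stripe size and state
MegaCli -LDInfo -LAll -aAll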

For completeness, I decided to run Atto Bench32:

clip_image022 clip_image023 clip_image024

Really kind of interesting. If you look only at the Atto benchmarks, which show maximum sequential throughput, you’ll conclude that the SSD “just isn’t worth it” in almost all cases – except when you go back up and look at the IOPS delivered at the lower block sizes, especially 4K. Heck, if you look at the IX2, you might even think it’s ‘better’ at the 0.5-2K block sizes.
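For rough context (illustrative numbers, not taken from these charts): throughput = IOPS x block size, so 10,000 IOPS at a 4K block size is only about 40MB/sec, while 40MB/sec of 1MB sequential transfers works out to just 40 IOPS. A sequential-throughput chart can therefore make an SSD look ordinary even while it’s servicing far more of the small I/Os that actually matter to a pile of VMs.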

clip_image025

Also take a look at the Read/Write latency. The SAS volume’s maximum latency is just about twice the SSD’s, and the averages are close to the same. Over the long term, with many running VMs, you’ll feel that.

clip_image026

What THIS shows me is that we’re probably only seeing one NIC’s worth of traffic between the host and the SAN. That poor IX2 :( What I don’t understand is how the host sees only this much traffic to the datastore, yet Atto can report so much more. HD Tune is showing >120MB/sec on the file tests as well, at the higher block sizes. Perhaps the VMware chart is blunting it based on polling frequency?
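One way to see whether the chart is just averaging away the peaks is to watch the host live with esxtop, which can sample far more often than the vSphere Client performance charts:

# From the ESXi shell (or resxtop from the vMA)
esxtop
# Press 'n' for the network view, 'd' for disk adapter, 'u' for disk device
# Press 's' and enter 2 to drop the refresh interval to 2 seconds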

So my next test will be to take my existing VMs and *clone* them to the NW-SAN1-SSD volume. This way, if it runs out of space or anything, I’m good to go. I can also afford to have them all turned off to do so. If everything looks good, I can turn them on and purge the old copies. What I want to see, though, is the total size of the VMs vs the total size on disk on the StarWind DeDupe volume. To do that, I’ll be purging the existing volume and making a new LUN that is larger than the physical volume on the StarWind side.
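The ESXi-side half of that comparison can be pulled from the shell; the StarWind-side half is just the image file’s size on disk on the host. A sketch, assuming the datastore mounts under its label:

# Allocated space per VM folder on the DeDuped datastore
du -sh /vmfs/volumes/NW-SAN1-SSD/*/
# Total allocated space for the whole datastore
du -sh /vmfs/volumes/NW-SAN1-SSD/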

What StarWind needs, based on this observation and my last few months with the full version:

· A “click here to apply TCP settings” box – one shouldn’t need to get it from a forum

· Similarly, a list of “best practices” settings for ESXi hosts.

· Details on what they like on or off in the switching fabric. Not all vendors play the same.

· Reference architectures with 1 switch, 2 separate switches, 2 switches stacked, 1-4 1GbE NICs, etc. would be very handy.

· The ability to grow a LUN – so far as I can tell, this isn’t there yet – but maybe I haven’t found it.

· STATS! There should be some ability to pull better statistics out of the console. What is my DeDupe ratio?

· Notification for drive space. I had an issue with a previously DeDuped LUN that may or may not have been StarWind’s fault – likely not. But it sure would be nice if you could set StarWind up to warn you when a volume is filling, and/or have some sort of graceful shutdown while you still have some space to play with. I think this would need VAAI on the ESXi side, though – which they also need.
