
Lab NetApp (FAS2020)–First setup

Recently I picked up an older NetApp FAS2020 with a DS14MK2 shelf holding 7x 250GB SATA drives, to use for testing and learning toward my NCDA/NCIE certifications. In my day job, one of the things we're looking at is migrating our VMware clusters from FCP block storage to 10GbE NFS/iSCSI, with 1GbE in the dev lab. We have recently started using the Storage Efficiency (dedupe) features of our filers and have been seeing good results. What I wanted to see with this setup at home was what the differences between the protocols looked like. This post focuses on that.

First, a little about the hardware configuration. As this was a storage-efficiency test, performance and optimization didn't matter as much (those will come in later tests). I also didn't have the network ready for NIC teaming or LAGs, so these tests all use 1GbE: iSCSI on my iSCSI subnet and NFS on my data subnet, neither of which currently uses jumbo frames. With 2 drives used by NetApp for the filer OS, I had 5 remaining. These went into a RAID-DP aggregate with 2 parity disks and 1 spare, ultimately leaving me with only 2 data disks. For the tests we're doing, that doesn't hurt anything. An aggregate was created that used all of this space.
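
For reference, a hedged sketch of how that disk layout would be built from the Data ONTAP 7-mode CLI — the aggregate name here is a placeholder of mine, not from the original setup:

```
# 4 disks -> one RAID-DP raid group (2 data + 2 parity);
# the 5th remaining disk is left unassigned as the hot spare
aggr create aggr1 -t raid_dp 4
aggr status -r aggr1   # verify the raid group layout and spare count
```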

Two volumes were then created at 220GB. Details:

  • /vol/VOL_VMWARE_01 (Fractional Reserve = 0% for volumes containing LUNs, as per best practice; thin provisioned)
  • /vol/VOL_VMWARE_02_NFS (175GB, thin provisioned by default)
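
Assuming 7-mode and the placeholder aggregate name aggr1 from above, the equivalent CLI would look roughly like this (sizes as described in the post):

```
vol create VOL_VMWARE_01 aggr1 220g
vol options VOL_VMWARE_01 fractional_reserve 0   # best practice for volumes containing LUNs
vol options VOL_VMWARE_01 guarantee none         # thin provision the volume

vol create VOL_VMWARE_02_NFS aggr1 175g
vol options VOL_VMWARE_02_NFS guarantee none
```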

Within each of these, a qtree was created, so that if I ever wanted to do qtree mirroring, I could:

  • /vol/VOL_VMWARE_01/QT_VMWARE_01
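
On the CLI that's simply (with a matching qtree in the NFS volume):

```
qtree create /vol/VOL_VMWARE_01/QT_VMWARE_01
qtree create /vol/VOL_VMWARE_02_NFS/QT_VMWARE_02_NFS
```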

Then a LUN and an NFS export were created to make them usable from vSphere:

  • /vol/VOL_VMWARE_01/QT_VMWARE_01/LUN_VMWARE_01 (175GB, thin provisioned)
  • /vol/VOL_VMWARE_02_NFS/QT_VMWARE_02_NFS (exported, root=r/w)
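
On the iSCSI side the LUN also has to be mapped to an igroup before the host can see it, and on the NFS side the qtree gets exported. A sketch of the 7-mode commands — the igroup name, host IQN, and IP are placeholders of mine, not from the original setup:

```
# iSCSI: thin-provisioned LUN (-o noreserve), typed for VMware alignment
lun create -s 175g -t vmware -o noreserve /vol/VOL_VMWARE_01/QT_VMWARE_01/LUN_VMWARE_01
igroup create -i -t vmware ig_esx01 iqn.1998-01.com.vmware:esx01   # placeholder IQN
lun map /vol/VOL_VMWARE_01/QT_VMWARE_01/LUN_VMWARE_01 ig_esx01

# NFS: export the qtree read/write with root access for the ESX host (placeholder IP)
exportfs -p rw=192.168.1.50,root=192.168.1.50 /vol/VOL_VMWARE_02_NFS/QT_VMWARE_02_NFS
```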

Next, we make sure dedupe is enabled and started, and set its schedule to run every hour rather than only at midnight:
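
In 7-mode this is done with the `sis` commands; a sketch assuming my volume names (repeat for the NFS volume):

```
sis on /vol/VOL_VMWARE_01
sis config -s sun-sat@0-23 /vol/VOL_VMWARE_01   # hourly, every day, instead of the default sun-sat@0 (midnight)
sis start -s /vol/VOL_VMWARE_01                 # -s scans the data already on the volume
sis status /vol/VOL_VMWARE_01
```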

From there, a datastore was created for both volumes on a vSphere host:


I then copied 3 of my standard lab VMs to it:

  • NW-DELLOME1 – Dell OpenManage – has a bunch of downloads, plus scanning and inventory of the Dell hardware, etc.
  • NW-VC1 – a clone of my vCenter Server
  • NW-WDS1 – a clone of my WDS/PXE server, with a number of ISOs and such present

As you can see, I tried to pick VMs that were not simply clones of the template with 50KB added.

Then I wanted to see what the storage efficiency looked like. As we know, NFS is thin by default. I honestly can't tell you why, in the screenshot above, the iSCSI VMFS datastore shows 11GB more used than the NFS datastore. If I had to guess, NFS was able to 're-thin' portions of the thin-provisioned VMs during the copy, which iSCSI simply can't do until a space-reclaim process is run inside the VM.


Both are showing results. Again, NFS shows less space used, which matches the datastore display in vSphere. But NFS also shows more space saved, and thus a higher savings percentage.

Now something I found strange at first – 34/46 ≠ 43%; that ratio works out to 73.9%. The explanation appears to be that Data ONTAP reports the savings percentage against the total logical data, not against the space still used: saved / (used + saved) = 34 / (46 + 34) ≈ 43%. (Tellingly, the ~79GB my original math pointed to is almost exactly used + saved.) Regardless of the math, the important takeaways seem to be these:

  • NFS ‘appears’ to get marginally better usage and dedupe. This could be down to the way the VMs were cloned, though, or perhaps an active vSwap file, etc. I’ve checked and don’t see any of this, but it must be something.
  • Either protocol is going to get you about the same results on the NetApp side – IF the volume/LUN for your iSCSI block VMFS is also thin provisioned.
  • Also, all of my VMs were thin provisioned. I do need to retry all of this with more prep, and perhaps a number of 100GB LUNs, and see what happens.
Categories: FAS2020, NetApp
