Archive for the ‘PernixData’ Category

Selected as a PernixPro!

So this post is a bit late – it’s been a busy month.  Right around the time that PernixData released FVP v1.5’s latest updates, they also released their list of PernixPros.  This program is similar in concept to VMware’s vExpert, EMC’s EMC Elect, etc. – just, obviously, for those who spread the word about PernixData.  And I guess this now includes me!

If you’re a home lab user with slow (iSCSI/block – you’re unlikely to have FC/FCoE) storage, then give a trial of FVP a try – it will likely drastically change the amount of time you can be productive in your lab.  The irony for me is that I started using it for my lab, and THEN started wondering how I could use it for production environments.  What can I say, I’m selfish that way 🙂  I’m able to do 2-3x more lab work, which means I can learn and test that much more.

Don’t forget there’s an upcoming release with new features such as NFS and DAS support, plus RAM (so you can try it out without needing to buy SSDs).  If you have the capability, reach out to just about anyone from PernixData and see if it’s possible to get involved with any betas they do, or even just a trial when it gets GA’d – you’re sure to be pleased…

Categories: Hardware, Home Lab, PernixData

Updating PernixData FVP to v1.5.0.3

In a previous post I did a quick write-up on installing PernixData FVP v1.5 GA on vSphere v5.5.  Now that v1.5.0.3 is out, I wanted to capture my notes on doing the upgrade.

Normally when I do my posts, it’s to make a quick cheat sheet of an overly complex and wordy manual.  I feel a little guilty when I do a post like this for PernixData products – the documentation is short, to the point, and complete.  I could almost end with “Just read the Upgrade Guide” and be done with it.  Still, I like to keep these as my own notes in case there is something I need to refer back to next time.

The upgrade process looks like this:

  • Transition all VMs to Write Through caching mode
  • Upgrade the FVP Management Server – in place, preserving the database and settings
  • Upgrade the hosts to FVP v1.5.0.3 – there is a manual copy/SSH process, or I like to use VUM to automate it.
  • Transition all VMs back to whatever caching mode they were in previously.
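For the manual route in the third step, the copy/SSH process might look roughly like this – a sketch only, and the host name and offline bundle filename are assumptions (use whatever your FVP download is actually called).  It’s printed as a dry run; drop the echoes to execute:

```shell
# Sketch of the manual per-host upgrade (host name and bundle filename
# are assumptions). Put the host in maintenance mode first, and reboot
# afterwards if the install summary asks for it.
HOST="esxi01"
BUNDLE="PernixData-host-extension-vSphere5.5.0.zip"

# Copy the offline bundle over, then install it with esxcli.
echo "scp ${BUNDLE} root@${HOST}:/tmp/"
echo "ssh root@${HOST} 'esxcli software vib install -d /tmp/${BUNDLE}'"
```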

Let’s give it a go…..

(If you use DPM to keep your lab in reduced power mode, now would be a great time to disable it or set it to manual mode – make sure all hosts are up and running.)

If you’re using vSphere v5.5, then FVP is accessible via the vSphere Web Client.



My previous installation of the v1.5 GA release has expired, as I didn’t have a licence.  So keep this in mind – your licensing must be in place.


Note your current versions – this gives you a way to confirm later what version was and is now loaded after your upgrade(s).
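One way to confirm the host-side version outside the Web Client – assuming SSH is enabled, and with a placeholder host name – is to list the installed VIBs.  Shown as a dry run:

```shell
# Sketch: list installed VIBs and filter for the FVP host extension,
# before and after the upgrade ("esxi01" is a placeholder name).
HOST="esxi01"
CHECK="esxcli software vib list | grep -i pernix"
echo "ssh root@${HOST} \"${CHECK}\""   # drop the echo to run it for real
```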

1) Upgrade the FVP Management Server.


Simply run the installer, and choose YES to upgrade.  It really is just a splash screen and a finish screen – it’s that easy. 


I did, however, get this error. 


After a quick restart of the FVP and vSphere Web Client services, all is well.  It may have just needed a few minutes; I was impatient.  Either way, incredibly minor, in my opinion.

2) Configure VMware Update Manager (VUM) with the new update.

My first post on FVP v1.5 GA included how to do this; the process is the same.  You’re just updating the actual extension in use.


3) Transition all VMs to Write Through caching mode.

Understandably, since the caching software is distributed across hosts, and we’re upgrading those hosts one at a time in a rolling fashion, we should temporarily disable write caching to keep things healthy – just for the duration of the upgrade.


The easiest way to do this is to navigate to vCenter –> Flash Clusters –> <Cluster_Name> –> Manage tab –> Datastores and select the datastore(s).  Click EDIT, and the dialog will ask you to select the policy: either Write Back + number of peer hosts, or Write Through.  Choose Write Through and press OK.

4) Remediate your hosts.


In VUM, scan for and then update your applicable hosts.  Note that in my screenshot, I have a host that’s suspended, but that’s normal for my environment. 

5) Transition your VMs back to Write Back.


That’s really all there is to it.  As always, I’m pretty impressed by how easy this product is to work with.

Next up – integration with Veeam Backup & Replication, to ensure cache-consistent backups…  tune in soon!

Categories: Home Lab, PernixData, vSphere

PernixData announces new features!

April 23, 2014

Wow, so this is what I get for hiding under a rock for a day.  As you already know, I really like PernixData’s FVP product, and think it performs near miracles.  It’s simple, does what it says it does, and like most things I like about virtualization, seems almost like magic.

It seems that today at SFD5, new features were made public.  Full details can be found on their blog.  I obviously couldn’t give as good or as accurate information as they can – but I can summarize the points that seem impressive to me:

  • FVP can now cluster RAM as well as Flash/SSD:

    This gives you two options for tiers, and you can use either or both as required.  In my mind, one of the nice things about this is the ability to deploy and test it quickly.

    In my environments I tend to prefer diskless 1U/2U rack servers, or blades if the quantity of servers warrants it.  They’re easy enough to add a 2.5” SSD or two to without a lot of cost.  But you still have to spend something (unless you have them lying around).  The diskless units I have all boot from SD/USB and have hot-swap drive blanks, not hot-swap spares – so we’d have to pick up some hot-swap carriers.  Then there’s a trip to the co-located facility to install them.  None of this is the end of the world.  But being able to use surplus RAM on a 256GB-512GB host that might be using 40-65% of its capacity – THAT I can do any time.

    This extra tier can obviously be much faster than SSD, which is also a nice benefit.  Different environments will have different needs: some might find it easier to add a PCIe SSD, while in a blade environment 2.5” SSDs might be easiest.  Or, depending on your hosts, maybe you have surplus RAM.  Either way, you now have all the options available.

    Granted, I have an exceptionally small lab – only about 20 VMs.  But when I’ve run disk-heavy benchmarks, I’ve only seen about 50-60GB used on my SSDs.  As FVP destages to disk as quickly as possible, I don’t often see high usage – so it’s fair to suggest that the quantity required won’t always be high.  Your mileage will vary, of course.

  • Storage Protocol Agnostic:

    FVP now supports any supported storage protocol – iSCSI, FC, FCoE, local DAS, and NFS.  As I use both NetApp iSCSI and NFS in my lab, it’s nice to have the option to use either.  This will greatly expand the options for those who might not otherwise have been able to utilize FVP when it was only supported on block SAN.

  • Network Compression: 

    Obviously, FVP uses the network to replicate the cache and make it fault tolerant across hosts.  It can now use network compression to optimize this traffic.  This is dynamically calculated, and can be enabled or disabled as required.

  • Topology Aware Replica Groups:

    The example given by PernixData is the ability to keep failure domains INSIDE a particular boundary.  You could specify that only the hosts within a certain blade chassis should be peers, to keep latency low.  That certainly makes sense.  I might also consider it a way to protect AGAINST single failure domains: while a single blade chassis is very unlikely to fail, you may want your replicas on another chassis, to protect against loss of the entire chassis.

No matter your environment, there are lots of good use cases here.  If FVP was great before, it’s in a new league now.

Can’t wait to give it a shot in the lab – and get more RAM!

Categories: PernixData, vSphere

PernixData FVP v1.5 GA on vSphere v5.5 First Look

March 15, 2014

So one of my most recent posts was about fixing my UUID issue on my Dell C6100 series server.  Of course, what prompted that initially and identified the problem was PernixData’s FVP product – way back in the 0.9 beta, if I recall.  Now that I’ve gotten that solved, of course, I wanted to give FVP a try again.

So out go some e-mails to PernixData with a request for a download (go request a trial yourself – you’ll like it…).  A quick chat with Chris Floyd (@phloider) and Peter Chang (@virtualbacon) gets me set up with the trial again.  However, a quick look says “…vSphere v5.0 and v5.1…”  Well, that’s no good – I’m on v5.5.0 U1 (of course, why not be an early adopter 🙂).  So that looks like it’s out of the question.  Then they tell me the new version is supposed to GA on Monday, March 17.  Well, I can wait that long, I figure.  That lasted until about 7PM on Friday, at which point I went to download the beta anyway.


I wasn’t up on the current version number (I hadn’t been keeping track – what with the UUID issue, why disappoint myself further that my hardware didn’t like their software), so I went ahead and downloaded the ‘beta’ figuring I’d give it a try.  Not 10 minutes later I get an e-mail from Chris with a subject line of “New plans for the weekend…”, the body of which stated: “You were the first person to download 1.5 GA. Let me know what you think.”

Well dammit.  I’m not waiting till Monday now 🙂

First, nothing in this post should supersede what’s in the documentation – which is actually really good.  This is my notes version, and cheat sheet.  If you follow my notes and didn’t read their documentation at all – that’s on you.  With that said… let’s begin!


1) Install and configure the Management Server


I’ve chosen to install this in my lab on my vCenter server using the same svcVMware AD account.  Run PernixData FVP Management Server – 1.5.03869.0.exe and start the installation.


This really is the first screen that isn’t “Next, Next, Finish-y”. 


I’ve opted to use the same SQL_EXPRESS instance used by my vCenter Server – probably not the best way to go in production, but it works well enough here.


Next we tell the FVP Management Server how it should be found on the network.


And then click INSTALL.


A JRE?  Yeah, go ahead and install that too if it’s needed.


2) Configure FVP


Next, you’d normally install the plug-in.  The vSphere Client plug-in for FVP v1.5 is only for vSphere v5.0 or v5.1.  For v5.5, the plug-in lives in the vSphere Web Client – and there’s nothing to do, as the installer added it to vCenter Server.


So log in to the vSphere Web Client and click on vCenter.  You’ll see a PernixData FVP section at the bottom.  Click on FLASH CLUSTERS.




Name your cluster and select the cluster you want to attach it to.  Click OK.


Next you’ll see the Getting Started tab.  Click on the MANAGE tab.


It will show FLASH DEVICES.  Click ADD DEVICE.  You’ll quickly get prompted that you’re a fool and haven’t installed the software on the hosts.  Duly noted. 


3) (should have been 2) Add the FVP Extensions to the host(s)


Installation is either via uploading to the host and installing via SSH, or via VUM – which is “Experimental” at this stage.  However, I would like to see the VUM method work, as it is more automated, so let’s give that a try.


In the vSphere Client, browse to HOME –> SOLUTIONS –> UPDATE MANAGER.  Click on the PATCH REPOSITORY tab.  Click IMPORT PATCHES.


Browse to where you’ve unpacked your FVP v1.5 software, and select the ESXi v5.5 update.  Click NEXT.  You may get prompted to install/accept/ignore a certificate – do so.




I’d never seen patches not show up right away, but apparently my vCenter was busy.  Watch the RECENT TASKS pane to ensure the patches are confirmed imported.


Then confirm by entering PERNIX in the search box.

Click on the BASELINES AND GROUPS tab, and click CREATE on the BASELINE side.


Name your baseline and select HOST EXTENSION.  Click NEXT.


Search for Pernix, click the down arrow to add it to the lower window, and click NEXT.


On the READY TO COMPLETE screen, click FINISH.


If you have a Baseline Group, you may want to add the Extension to it.  Click COMPLIANCE VIEW in the upper right to return to your hosts and clusters view.  Select your cluster and click SCAN to check for required updates.


Click REMEDIATE.  Then select only the EXTENSIONS BASELINE and select the PERNIXDATA FVP v1.5 GA baseline.  Check all applicable hosts and click NEXT.


Click NEXT, NEXT, then set your remediation options.  I like to disable removable media and set my retries to every 1 minute with 33 retries – largely because it’s easy to type/change with one hand.  Click NEXT.


Choose whatever remediation options make you happy and click NEXT and FINISH.  Then wait for the magic to happen.


4) NOW configure FVP 🙂


Now that you’ve added the extensions, let’s go back to adding devices:


Only 2 of my 4 hosts are showing up right now – that’s fine.  I’m going to add my Kingston V300 120GB SSDs (here’s hoping they work and are on the HCL), and click OK.


Now that the devices show up, click on DATASTORES/VMs.


Next we’ll click ADD DATASTORE.


Only one of my datastores is iSCSI, and FVP only accelerates block devices – FCP, FCoE, or iSCSI – and obviously no local DAS datastores either.  So select the appropriate datastore (iSCSI, in my lab’s case) and caching method (Write Through or Write Back) and click OK.  As I want maximum performance, I’m going to choose Write Back.


Except when I try that, it tells me all my hosts need to be ready.  So I’ll finish my FVP Extension installations and then retry.  Okay, and there we go 🙂


Now we can not only select Write Back, but also select the write redundancy.  For Write Back to be safe, we need a mirror of that cache on another host, in case the host holding the primary cache fails.  For my lab, HOST+1 is more than enough.


Understandably, it will take a little time for VMs to start caching, and then for that cache to populate on the additional nodes.  Here you can see some VMs are CONFIGURED for Write Back, but have a current status of Write Through.


If we click on MONITOR and PERFORMANCE, we can start to see some stats on what’s happening.  Note that my lab isn’t very busy, so we shouldn’t expect to see much.


We can see the IOPS as well. 

So let’s log into a VM on the datastore and run a benchmark.  I’ll use Atto Bench32, which is what I use for quick-and-dirty throughput tests.  Note that this is not a good IOPS test, but it does give a decently quick indication of performance and health.
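If your test VM happens to be Linux rather than Windows, a rough equivalent of that quick-and-dirty check (my own choice of tool, nothing FVP-specific) is a dd write/read pass:

```shell
# Quick-and-dirty sequential throughput check (not an IOPS test, same
# caveat as Atto). conv=fdatasync makes dd wait for the write to land;
# the read-back is flattered by the page cache unless caches are dropped.
TESTFILE="/tmp/fvp-bench.bin"
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f "$TESTFILE"
```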


Here you can see some pretty amazing numbers.  At 4.0KB, we’re seeing 2.5x write and 2x read numbers.  By the 16.0KB block size, it’s not even fair any more.  That’s not bad for a couple of $70 SSDs.


But let’s look at what the FVP console gives us.  First, we get a wealth of metrics that the vSphere performance monitor alone doesn’t provide.  You can clearly see that the VM was able to observe almost 9,000 IOPS – which is nothing to bitch about.


So based on this, I’m pretty happy.  I do have to do more testing, get some tweaks in, and better understand the settings.  But clearly I’m going to be able to push the lab a little harder. 


Observations and Conclusion:


For my needs, in my lab, speed is critical.  While I’m by no means business centric, “time is money” and the faster the equipment is, the more things I can do, which means the more I can test and the more I can learn.  I already know how to watch progress bars – so anything I can do to reduce that, will maximize my time.

Secondly, this is pretty amazing for the cost of 4x $70 120GB SSDs.  Would you use this class of consumer-grade MLC in production with FVP?  Probably (hopefully) not.  But you could make an argument to do so – just treat them like printer toner cartridges and replace them periodically, as long as one didn’t fail at the worst time or the swaps didn’t eat up too much effort.

Clearly, I’ve solved the C6100 duplicate UUID/Service Tag problem 🙂

I’ll be doing additional testing in a bit.  But after hearing I was the first to download the GA code, I wanted to be the first to get something up about it.  Hopefully this will help someone else get started up quickly and easily. 

It’s late – time for bed.  But this post was a long time coming – damned C6100 UUIDs…

Categories: C6100, ESXi, PernixData, SSD, vSphere

HOWTO: Dell C6100 FRU / UUID Update–FINALLY!

March 13, 2014

So this post has been a LONG time coming, and I’m pretty sure I’m good to go now.

As you know, the Dell C6100 is a great 4-node-in-2U chassis, which works really well for a compact home lab (if you can stand the noise).  vSphere likes it, Hyper-V likes it – what’s to complain about?

Then I tried the beta of PernixData FVP.  It worked as advertised, was a simple installation, did what it was supposed to – kind of.  I noticed that it seemed like only the very last node I rebooted was the one with FVP running on it.  I did some tests, did some more installations, and watched as the next host I rebooted became the only one with the software running. 

So, given it was beta, I reached out to support – and support from PernixData was great.  Given all the troubleshooting I’d done, I gave them all the information I could find: screenshots, logs, processes, steps, and sequences.  I’ll be damned if they didn’t come back pretty quickly with a suggestion – I must have duplicate UUIDs on the hosts.  Bollocks, I say – ESXi has been happy, no complaints, no worries, whatever do you mean?

Support says “browse to https://<host>/mob/?moid=ha-host&doPath=hardware%2esystemInfo and confirm the UUID string is different on each host”.  No problem:





Well I’ll be damned –

uuid string "4c4c4544-0038-5410-8030-b4c04f4d4c31"
On all 4 nodes.  Okay, so that IS my problem.

VMware even has a KB on it.  Not that this is a “whitebox”, but it certainly is an OEM custom build, by definition.  So we’ll go with that.

See, on a C6100 you have a typical Dell Service Tag – e.g. ABC123A for the chassis.  But each ‘sled’ has a .# after it, so you’ll have ABC123A.1, ABC123A.2, ABC123A.3, and ABC123A.4.  It turns out this makes ESXi assign the same UUID to every node.  Some Googling tells me this is apparently an issue for SCVMM and SCSM as well.  As DCS never really intended these systems to end up in “Enterprise” or “Home Lab” hands, but rather with very large cloud providers, there was no reason to care.  And fair enough – it didn’t have any impact on my normal vSphere lab.
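Clicking through the MOB on four sleds gets tedious; a quicker sweep from a workstation might look like this – a sketch only, with placeholder host names, assuming SSH is enabled on each node.  Shown as a dry run:

```shell
# Sketch: pull each node's hardware UUID from the ESXi shell instead of
# the MOB (the four host names are placeholders for the C6100 sleds).
for HOST in esxi01 esxi02 esxi03 esxi04; do
  echo "ssh root@${HOST} 'vim-cmd hostsvc/hostsummary | grep -i uuid'"
done
```

If all four print the same string, you’ve got the duplicate problem.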

Now.  How the heck do you update it?  The BIOS doesn’t give you an option.  Some posts on the internet suggest you could upload a new BIOS and specify it then, but that didn’t work out.  Dell was no help – and I don’t fault them one bit.  The system is used, off warranty, and owned by someone it wasn’t intended to be supported by.  That’s fully on me, I have no complaints.  But I still wanted it fixed. 🙂

I spend a lot of time at a certain forum, and it’s a good place for a wealth of C6100 information.  A thread there about these issues caught my attention, and one particular post by TehSuk stood out.  Apparently you can just run the Windows version of IPMIUTIL.exe with the following options:

ipmiutil.exe fru -s %newassettag%

Reboot, and you’re good to go.  No such luck.  See, the user in question notes that he’s a Windows shop – that doesn’t help with ESXi.  So I tried making a DTK bootable ISO from Dell using some information they had, but that wasn’t working: various issues, from the methods being written a while back and not supported on Windows 8 (which took me a bit to figure out was my problem), to the tools having trouble creating a 32-bit ISO on a 64-bit system due to environment variables, DLLs not found, etc.  Nothing the end of the world, but I didn’t like that path.

Then I remembered that you can use IPMIUTIL.exe across a network.  I’d had no luck when I tried months ago, so why would it work now?  Other than that I’ve since spent more time playing with the utility.



ipmiutil.exe fru -N <hostname/IP> -U <user> -P <password>

This was able to get me a listing which included “Product Serial Num”.  So could I use the same “fru -s %SERNUM%” suggested by TehSuk?

ipmiutil.exe fru -s AAAAAA3 -N <hostname/IP> -U <user> -P <password>


Sure enough, it changes “Product Serial Number” to AAAAAA3.  So let’s reboot and find out what it says.
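To hit the first three sleds in one pass, something like this loop would do it – the BMC addresses, credentials, and AAAAAA# serials are all placeholders, not my real values.  Shown as a dry run:

```shell
# Sketch: give each of the first three sleds a unique FRU serial.
# BMC IPs, the admin credentials, and the serial scheme are placeholders.
for i in 1 2 3; do
  echo "ipmiutil fru -s AAAAAA${i} -N 10.0.0.1${i} -U admin -P password"
done
```

Node 4 can keep its original tag, since it no longer conflicts with anything.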

After updating the first 3 nodes, and checking the MOB link, looks like we have success:









No need to change it – leave it with the original Service Tag, as it no longer conflicts. 


So in the end, all you’re going to need is IPMIUTIL.  Run the above IPMIUTIL.exe FRU commands, and you should be good to go.  I haven’t checked whether PernixData FVP now works better for me, as it’s late – but here’s hoping it does.  If nothing else, the UUIDs are now different, as they should be!

BTW, please don’t read any of this as though I was disappointed with PernixData FVP – heck, if anything, they helped me find this issue and pointed me in the right direction, and I wanted their software to work because my testing showed it made an AMAZING difference.  I’m looking forward to retrying the software across all 4 nodes.