PernixData announces new features!

Wow, so this is what I get for hiding under a rock for a day.  As you already know, I really like PernixData’s FVP product, and think it performs near miracles.  It’s simple, does what it says it does, and, like the best things in virtualization, seems almost like magic.

Today at SFD5 (Storage Field Day 5), new features were made public.  Full details can be found on their blog (http://blog.pernixdata.com/a-new-era-of-server-side-storage-intelligence/).  I obviously couldn’t give as good or as accurate information as they can – but I can summarize the points that seem most impressive to me:

  • FVP can now cluster RAM as well as Flash/SSD:

    This gives you two options for tiers, and FVP can use either or both as required.  In my mind, one of the nice things about this is the ability to deploy and test it quickly.

    In my environments I tend to prefer diskless 1U/2U rack servers, or blades if the quantity of servers warrants it.  They’re easy enough to add a 2.5” SSD or two to, without a lot of cost.  But you still have to spend something (unless you have drives lying around).  The diskless units I have all boot from SD/USB and have hot swap drive blanks, not hot swap spares – so we’d have to pick up some hot swap carriers.  Then there’s a trip to the co-located facility to install them.  None of this is the end of the world.  But being able to use surplus RAM on a 256GB-512GB host that might only be using 40-65% of its capacity – THAT I can do any time.  (A 512GB host at 60% utilization still has roughly 200GB free, which is a lot of potential cache.)

    This extra tier can obviously be much faster than SSD, which is also a nice benefit.  Different environments will have different needs.  Some might find it easier to add a PCI-e SSD.  In a blade environment, 2.5” SSDs might be easiest.  Or, depending on your hosts, maybe you have surplus RAM.  Either way, you have all the options available now.

    Granted, I have an exceptionally small lab, only about 20 VMs.  But when I’ve run disk-heavy benchmarks, I’ve only seen about 50-60GB used on my SSDs.  As FVP destages to disk as quickly as possible, I don’t often see high usage – so it’s fair to suggest that the quantity of cache required won’t always be high.  Your mileage will vary, of course.

  • Storage Protocol Agnostic:

    FVP now works with ANY supported storage – iSCSI, FC, FCoE, local DAS, and NFS.  As I use both NetApp iSCSI and NFS in my lab, it’s nice to have the option to use either.  This greatly expands the options for those who couldn’t use FVP while it only supported block SAN storage.

  • Network Compression: 

    Obviously, FVP uses the network to replicate the cache and make it fault tolerant across hosts.  It now has the ability to compress this traffic.  Whether compression is worthwhile is calculated dynamically, and it can be enabled or disabled as required.  (A rough sketch of the idea follows this list.)

  • Topology Aware Replica Groups:

    The example given by PernixData is the ability to keep failure domains INSIDE a particular boundary.  You could specify that only the hosts within a certain blade chassis should be peers, to keep latency low.  That certainly makes sense.  I might also consider it a way to protect AGAINST a single failure domain.  While a single blade chassis is very unlikely to fail, you may want your replicas written to another chassis, to protect against the loss of an entire chassis.  (The second sketch below shows both policies.)
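
Since I promised a sketch above: here’s a minimal illustration of what “dynamically calculated” compression could look like, assuming a simple compress-and-compare policy.  To be clear, this is just my own reading of the concept using Python’s zlib – not PernixData’s actual implementation – and every name and constant in it is my own invention.

```python
import os
import zlib

# Hypothetical sketch: compress each replication payload, but only ship
# the compressed form when it actually saves enough to be worth the CPU.
COMPRESS_LEVEL = 1    # favour low CPU cost over compression ratio (assumption)
MIN_SAVINGS = 0.10    # require at least 10% savings to bother (assumption)

def prepare_payload(data: bytes) -> tuple[bool, bytes]:
    """Return (compressed?, payload) for one write being replicated to a peer."""
    compressed = zlib.compress(data, COMPRESS_LEVEL)
    if len(compressed) <= len(data) * (1 - MIN_SAVINGS):
        return True, compressed
    return False, data    # incompressible data (e.g. encrypted) goes as-is

# A zero-filled block compresses well; random data does not.
for block in (b"\x00" * 4096, os.urandom(4096)):
    used, payload = prepare_payload(block)
    print(f"compressed={used}: {len(block)} -> {len(payload)} bytes")
```

The nice property of a policy like this is that it degrades gracefully: traffic that doesn’t compress just gets sent raw, so the worst case is a little wasted CPU rather than inflated replication traffic.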
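
And for the replica groups, here’s an equally hypothetical sketch of topology-aware peer selection, assuming hosts are tagged with the chassis they live in.  The Host class, pick_peers, and the host names are all mine, purely for illustration – the same_domain flag switches between the low-latency policy and the chassis-loss-protection policy.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    chassis: str    # the failure domain this host lives in (assumption)

def pick_peers(source: Host, hosts: list[Host],
               replicas: int, same_domain: bool) -> list[Host]:
    """Choose replica peers for `source` according to the domain policy."""
    candidates = [h for h in hosts
                  if h is not source
                  and (h.chassis == source.chassis) == same_domain]
    if len(candidates) < replicas:
        raise RuntimeError("not enough hosts satisfy the replica policy")
    return candidates[:replicas]

hosts = [Host("esx1", "chassisA"), Host("esx2", "chassisA"),
         Host("esx3", "chassisB"), Host("esx4", "chassisB")]

# Keep the replica inside the chassis (low latency)...
print([h.name for h in pick_peers(hosts[0], hosts, 1, same_domain=True)])   # ['esx2']
# ...or force it to another chassis (survive losing the whole chassis).
print([h.name for h in pick_peers(hosts[0], hosts, 1, same_domain=False)])  # ['esx3']
```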

No matter your environment, there are lots of good use cases here.  If FVP was great before, it’s in a new league now.

Can’t wait to give it a shot in the lab – and get more RAM!

Categories: PernixData, vSphere