
Does ALL of your data need to be on a SAN? Using DAS for uptime.

Are you still using SANs for everything? Should you be? This is something I’ve talked about with a few of my peers while brainstorming ways to do things more efficiently. It doesn’t mean it’s right, best practice, or good in all situations. But it’s worth throwing up for discussion and to make people think about design differently.

See, I like virtualization. I think the magic that comes from vMotion and svMotion and similar technology is pretty great. But it also comes with costs. Outside of licensing, the next biggest cost is usually storage. You never have enough; even if you had more money, it’s not always easy to add more, you may not be able to get the right storage, or it might not get you what you need. Many companies will do tiers of storage – Gold/Silver/Bronze, etc. But it’s usually all kept on the SAN – maybe tiers on the same SAN/NAS (I’m just going to say SAN for the purposes of this discussion), or maybe the lower tier is on an older unit or BrandX or what-have-you. If the data NEEDS to be on the SAN, that’s fine. But the question to ask is: DOES it need to be on the SAN at all?

I’ve been in a few environments and seen different options. In one, all hosts had local disks. If you were doing SAN maintenance, you offloaded all the data from the SAN to local disk with svMotion, and then did your SAN maintenance. This, to me, was mind-boggling – you pay for enterprise features, five-nines uptime, and online live upgrades, and then… run scared every time? This wasn’t for me. In my current environment, we perform quarterly outages where two of us are trusted to take apart or change whatever is required (pre-approved and peer-reviewed, of course) from 8PM to 8AM once a quarter – provided we don’t break anything, and when we hand it back, it’s running. This doesn’t always result in full outages, but it allows for the possibility. While things like SAN or fabric maintenance won’t necessarily take down anything, this is also a good time to ensure that VMware Tools, VMware virtual hardware, Windows Updates, etc., get done when they couldn’t otherwise be done transparently. This works very well, and I like the process. But of course, the question will often come up from somewhere: “well, is it a FULL outage?” or “Can SOME of the data stay up?”, etc.

The issue here is one of application resiliency. It’s not really a SAN, fabric, or vSphere HA/vMotion/DRS issue. The Windows system may or may not have been architected in a way that makes maintenance transparent. It could be a lack of NLB of some sort, no clustering, or older applications – we all have them. But for those things that DO have application-based high availability – how can you leverage that? Exchange DAGs, SQL AlwaysOn AGs, Domain Controllers, web sites, file shares that aren’t on NAS, etc. The ways I’ve come up with or heard include:

· Build two separate clusters, put one half of the application in each. This handles issues with the cluster itself, and anything internal to the application. But it’s quite likely these clusters are in the same rack – on the same power, network/storage networks, and SAN. So… you likely haven’t bought much if your SAN upgrade results in a panic.

· Use redundant network/storage networking and SANs. HAH. Yeah, that’s not in the budget. But it certainly would give you two silos that you would only do maintenance on one at a time, and if you knocked something over, you’d be okay.

· Don’t use a SAN, use DAS. Now your HA can be in a much smaller unit – perhaps on a 2U box with internal disks, on separate power. THIS is what I want to take a look at.

The obvious caveats that will be pointed out:

· “One box can’t possibly run my entire data center” – correct. But am I trying to keep “everything” up, or just “core services”? I probably don’t care if the mail archiving solution is up, as long as Exchange is still up for Outlook/OWA/ActiveSync users.

· “You’ll never get enough storage for everything” – correct. Again, though, I’m not trying to keep a duplicate of everything we’ve consolidated onto the enterprise Tier 1 SAN.

· “You’ll never get the performance out of it – the IOPS will kill you” – probably. But like that mini spare in your trunk, it’s just trying to get you to the service station, not take you coast to coast on a road trip. The same applies here – you just need to ensure that services ARE up. Also, we have options such as PCIe SSD and vFlash or other third-party solutions to leverage SSD caching/acceleration. If it’s good enough for modern/hybrid SANs…

· “But your network can still take you down” – yes, it could. An option is to connect this host to different switches and/or ports on your core/distribution/firewall, and have it function segregated. Design what will work for YOUR environment.

· “But you’ll lose the ‘SAN magic’ like snapshots, deduplication, compression, etc.” – you will. And I’m going to suggest you don’t care. You’d still have your “primary node” on the SAN. THAT system gets all these benefits. Why pay for them twice, only to store the copy in the same location, with the same exposure to SAN failure, admin operator error, etc.?

· “Without shared disk, you get no LUNs, so you can’t do clustering” – correct, if what you want to do is clustering that requires shared disks. There are many modern applications that don’t require it – I’ll talk about them later.

So why am I bringing this up? Well, from numbers I’ve crunched and seen from peers, “Enterprise SAN Storage” costs “about $6/GB”. You get benefits like thin provisioning, deduplication, compression, snapshots, and replication, so the effective cost likely comes down, but let’s assume that number to be true. Let’s also assume that your typical vSphere rack server for your environment is around $10K without disks, booting from SD. Also, that a similar box, but outfitted with 12x 3.5” 3TB NL-SAS 7200RPM disks, is around $20K. You might do RAID6 or RAID50, with a hot spare or two. So we’ll assume you get 8 usable disks’ worth of capacity – roughly 23TB. Now we have some numbers to play with.

(You could also use a 25x 2.5” chassis with 1.2TB 10K SAS disks if you needed speed over capacity, with an understandable cost jump – you’d still see about 24TB with RAID50 as 3x 8-disk RAID5 stripes and a single hot spare – but 3100+ IOPS vs 650+. External disk shelves could also be used.)
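To sanity-check those capacity and IOPS figures, here’s a rough back-of-envelope sketch. The per-spindle numbers (~80 IOPS for a 7.2K NL-SAS disk, ~130 for a 10K SAS disk) and the spindle counts are my own assumptions, not vendor specs – they simply land near the ~23-24TB and 650+/3100+ ballparks quoted above.

```python
# Back-of-envelope capacity/IOPS math for the two DAS chassis options above.
# Assumed per-spindle figures (mine, not vendor specs): ~80 IOPS for a 7.2K
# NL-SAS disk, ~130 for a 10K SAS disk; RAID write penalty, controller cache,
# and filesystem overhead are ignored.

def usable_tb(data_disks: int, disk_tb: float) -> float:
    """Raw capacity of the data spindles, before formatting overhead."""
    return data_disks * disk_tb

def rough_read_iops(active_spindles: int, iops_per_spindle: int) -> int:
    """Aggregate random-read IOPS across the spindles serving I/O."""
    return active_spindles * iops_per_spindle

# Option 1: 12x 3.5" 3TB NL-SAS, RAID6/RAID50 plus hot spares -> ~8 data disks
print(usable_tb(8, 3.0), rough_read_iops(8, 80))
# -> 24.0 TB raw (~23TB usable), 640 IOPS (the "650+" ballpark)

# Option 2: 25x 2.5" 1.2TB 10K SAS, 3x 8-disk RAID5 + 1 hot spare -> 21 data disks,
# with 24 non-spare spindles serving reads
print(usable_tb(21, 1.2), rough_read_iops(24, 130))
# -> ~25 TB raw (~24TB usable), 3120 IOPS (the "3100+" ballpark)
```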

First, the difference between the $10K for the base host and $20K for the host with disks is all you should factor in. We can reasonably say that without the disks, this would just be another cluster host, running some of your cluster workload – including these “application HA” systems – either way, we were buying the node. If we go with internal disks, some of the workload could be DAS-based, and some could even still be cluster-based and connected to the SAN. We take that $10K and divide it by the 23TB to get a cost per GB – about $0.43/GB. That same 23TB at $6/GB for SAN space would be about $141,312. I bet I have your attention now. Now that we know what it costs, we can start talking about ways you could use it…
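And here is that cost arithmetic spelled out. The $10K disk delta, ~23TB usable, and $6/GB SAN figure are the assumptions from above; everything else is straight math.

```python
# Cost-per-GB comparison: DAS disk delta vs. enterprise SAN space.
das_delta_cost = 10_000        # $20K loaded host minus $10K diskless host
usable_gb = 23 * 1024          # ~23TB usable -> 23,552 GB
san_cost_per_gb = 6.0          # "about $6/GB" enterprise SAN figure

das_cost_per_gb = das_delta_cost / usable_gb
san_equivalent = usable_gb * san_cost_per_gb

print(f"DAS: ${das_cost_per_gb:.2f}/GB")             # ~ $0.42/GB ($0.43 if you call 23TB 23,000 GB)
print(f"Same space on SAN: ${san_equivalent:,.0f}")  # $141,312
```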

Some ideas on how this system could be used – either for survivability or just for reduced cost during normal operations:

· Secondary Domain Controller

· Exchange DAG Mailbox role holder

· Exchange HUB/CAS role holder

· “Mail Archive” – e.g., Enterprise Vault, Exchange Enterprise Archiving (this is one of the ‘non-outage’ examples, where you would use this as “Tier X-1”)

· Print Server – maybe you just svMotion this over during maintenance; it’s likely quite small

· Windows Server 2012+ DHCP failover with load balancing

· SQL Servers that utilize SQL Server 2012 Enterprise AlwaysOn Availability Groups

· SQL Servers pre-SQL Server 2012 that use active/passive mirroring with log shipping that you would manually sync and cut over to

· Windows File Shares that could be a DFS-R/DFS-N replica of what’s on the NAS (could be another example of a ‘non-outage’ use)

· Secondary RADIUS, PKI, NAC, NAP, WSUS, etc, servers

· IIS servers that sit behind some sort of load balancer

· Network Monitoring Services?

· Phone/Unified Communications/Lync/etc. secondary node

As you can see, that’s a pretty decent list of services that you could likely say “These WILL be up during maintenance”. You could do their patching/updates/maintenance the weekend before or after the normal quarterly outage, and no one would know.

The critical piece here though, is ensuring that your applications are resilient. Too many times I’ve seen things like:

· Clusters that have non-cluster-aware services/apps pointing at them that don’t tolerate a cluster failover (which negates the benefit)

· NLBs that _balance_ two or more hosts, but aren’t built for failure – only normal operation. So if a service stops serving the correct web page and only shows a “404 not found” page – well, it replied with a web page, so the service must be healthy! (Not so good for the ‘user experience’ – see the sketch after this list.)

· NLBs or clusters where, for some reason, something was installed that only runs on one node. It’s not cluster- or NLB-aware, and there isn’t even a second copy of it to fire up manually. Suddenly, your ability to do rolling maintenance is gone.
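On that middle point, a health check has to validate content, not just the fact that something answered. Here is a minimal, illustrative sketch – the URL and marker string are hypothetical placeholders, and a real NLB or hardware load balancer has its own monitor syntax – but the logic is the point: a node is only healthy if it returns HTTP 200 AND the page you actually expect, so a “404 not found” page fails the check instead of passing it.

```python
# A minimal content-aware health check (illustrative only).
# web01.example.local and the "ORDER-APP-OK" marker are hypothetical placeholders.
import urllib.request
import urllib.error

def node_is_healthy(url: str, expected_marker: str, timeout: float = 5.0) -> bool:
    """Healthy = HTTP 200 AND the expected content, not merely *a* response."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            if resp.status != 200:
                return False
            body = resp.read(65536).decode("utf-8", errors="replace")
            return expected_marker in body
    except (urllib.error.URLError, OSError):
        # 404/500 error pages, timeouts, refused connections, TLS errors -> unhealthy
        return False

if __name__ == "__main__":
    # e.g. have your monitor pull a node out of the pool when this returns False
    print(node_is_healthy("http://web01.example.local/healthcheck", "ORDER-APP-OK"))
```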

Virtualization and modern datacenters bring a lot of ‘magic’ to the table. But it’s largely for “infrastructure”. If your applications and/or OS aren’t built to give your clients seamless connectivity, all the redundant controllers, power, hosts, etc., in the world won’t help you at all when an admin reboots the wrong machine or a BSOD takes it out. You NEED to build for that.

Another use case for this “DAS silo” is what people SAY is archiving, but isn’t. What I mean is when they take things from some share and move them into a folder on the same share called “ARCHIVE”, but it’s still in the same path. Or maybe it’s some sort of link to another path, but it’s on the same NAS. Or on another NAS, but it’s just as expensive as the first one! I’ve seen this with general user documents, and with Exchange mail archiving solutions. I’ve been in meetings where I’ve asked questions like “So can the archive go offline at all?” (no) “Can the archive be slower?” (no) “Can the archive be in another location?” (no) “Can the archive have the path name change at all?” (no) – which only lets me draw the conclusion that most users just want to append the word “-archive” to a folder/file name and pat themselves on the back. But that doesn’t help us, the data center administrators who need to manage it, and it’s why it ends up staying on the same tier. Usually because there just isn’t another tier available, and adding one would be too complex and/or costly.

But what if you could actually MOVE that Archive data? Take your Exchange Archive or Enterprise Vault and put it on DAS? It’s still getting backed up. It’s still available. It’s just got maybe slightly less availability. But if someone’s really trying to tell you that you can’t take down historical mail, deleted user data, and archived data from 2-5+ years ago, for 2 hours in the middle of the night on a weekend… we need to recheck the users/business expectations and what they’re willing to pay and contribute to this magical world they believe in.

I know this solution won’t work in all cases. But it’s a different way to think about the problem and how to solve it. The above solution has the potential to be about $130K cheaper than “doing what we have always done”. Using the $10K hosts, and assuming you have as many as 5-7 in your cluster today (that already have licensing), you may just save enough on storage to replace every compute node and half your back-end networking, for ‘free’. Assuming that’s not reasonable, another way to look at it is “for a cost of about $280/month over 3 years ($10K amortized over 36 months), I can ensure that during maintenance, we have X number of core systems still reachable at all times.” If you can’t sell $300/month so that the C-level can be sure their phones still get e-mail at 3AM, which makes them happy – you shouldn’t be selling. That’s almost in budget for my home lab… (if I had some reason to require said uptime)

You know that old adage “if you always do what you’ve always done, you’ll always get what you’ve always gotten”? Maybe it’s time to do something different…

For what it’s worth, I do this as much as possible in my home lab. A 4-node Dell C6100 is the cluster, and a single-node C2100 with 12x 3TB runs everything that can or wants to be “off cluster”. Between this, and DPM with DRS powering off most of the cluster when there’s no load, I can shut down everything but the firewall, move the C2100 onto the LAN/Trusted port, and still be 100% “up” even if I ripped apart the rest of the rack for a weekend – which tends to make upgrades a lot less stressful!

  1. ddestinyx
    April 4, 2014 at 7:41 AM

    Fantastic points! Looking at tiered storage from this new perspective really opens my eyes to the possibilities. This DAS concept should really be part of a reference architecture from the big guys so the CTOs of the world can get their warm and fuzzies that it has been validated. Great post!

    • April 4, 2014 at 7:52 AM

      Agreed! If this was posted up somewhere nice as a ‘reference’, it would probably get more thought. The problem is that everyone wants the lowest tier to be part of the SAN. So it’s usually the non-cached, non-SAS, big SATA/NL-SAS tier. If you’re going to go that far down, one needs to ask if it needs to be on the SAN at all.

      Granted, SAN gets you vMotion and HA, but if you remove that requirement, you can start doing some creative things. With everyone talking VSAN lately, I think sometimes people forget the value of two big Windows 2012+ VMs running DFS and some active/active, no-shared-disk solutions on two standalone hosts.

      I still think you need a SAN of course, and it should “bring the magic” – but not everything needs to be that fancy.

  2. hawkbox
    July 7, 2014 at 3:02 PM

    That’s a good post Avram. I’ve always been a bit torn on this one, but you make good arguments for things that especially might not warrant the cost of SAN storage. For example, I could easily load as much storage into a DAS install for my old job’s backup jobs as I could in the storage they had, and it would have been effectively as reliable, with the tape offsites factored in, for a substantial decrease in cost. Even a secondary DC that way could be fine – it might not even matter if that one goes off during host downtime.

    My last file server plan was a pair of 2012R2 boxes in DFS but I never got an opportunity to implement that.

    • August 25, 2014 at 12:42 AM

      A pair of DFS boxes would certainly work. DFS-R’s big issues are related to multi-site locking. But if you just want an active/passive sort of relationship, it can be a very good way to do it. Use DFS-R to replicate, but have DFS-N point only to the active copy. In the event of a failure, in that scenario, you’d need to update the DFS-N reference to the secondary, but then you’d have complete control over which site is being accessed. The benefit of using DFS-N is that you wouldn’t ever have to update the UNC paths, and now you have DFS-N for all the long-term benefits that brings.

