
Design Exercise–Scaling Up–Real World Example

My previous post, Design Exercise – Scaling Up vs Scaling Out, appeared to be quite popular. A friend of mine recently told me about an environment, and while I only have rough details of it, it's enough to build a practical example around a real-world environment – which I figured might be fun. He indicated that while we'd talked about the ideas in my post for years, it wasn't until this particular environment that they really hit home.

Here are the highlights of the current environment:

  • Various versions of vSphere – v3.5, v4.x, v5.x, multiple vCenters
  • 66 hosts – let's assume dual 6-core Intel 55xx/56xx (Nehalem/Westmere) CPUs
  • A quick tally suggests 48GB of RAM per host.
  • These hosts are blades, likely HP – 16 blades per chassis, so at least 4 chassis. For the sake of argument, let's say it's 64 hosts, just to keep the math nice and easy.
  • Unknown networking, but probably 2x 10GbE and 2x 4Gbit FC, with passthru modules

It might be something very much like the eBay listing below – in which case it's dual 6-core CPUs, likely using only 1GbE on the front side. That's a reasonable enough assumption for this example, especially since I'm not trying to be exact; I'm keeping it theoretical.

http://www.ebay.ca/itm/HP-c7000-Blade-Chassis-16x-BL460c-G6-2x-6-C-2-66GHz-48GB-2x-146GB-2x-Gbe2c-2x-FC-/221303055238?pt=COMP_EN_Servers&hash=item3386b0a386

I've used the HP Power Advisor (http://www8.hp.com/ca/en/products/servers/solutions.html?compURI=1439951#.VDnvBfldV8E) to determine the power load for a similarly configured chassis, with the following results:

  • 5300 VA
  • 18,000 BTU/hr
  • 26 Amps
  • 5200 Watts total
  • 2800 Watts idle
  • 6200 Watts circuit sizing
  • 6x 208V/20A C19 power outlets
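A quick sanity check on the advisor's output (a sketch; it assumes a 208V feed and the usual ~3.412 BTU/hr of cooling per watt):

    # Sanity-check the HP Power Advisor figures for one chassis.
    # Assumes a 208V feed; ~3.412 BTU/hr of cooling per watt of load.
    watts_total = 5200
    volt_amps = 5300

    btu_per_hr = watts_total * 3.412   # ~17,700 BTU/hr -> the "18,000 BTU/hr" figure
    amps_at_208v = volt_amps / 208     # ~25.5A -> the "26 Amps" figure
    print(f"{btu_per_hr:,.0f} BTU/hr, {amps_at_208v:.1f} A")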

We’ll get to that part later on. For now, let’s just talk about the hosts and the sizing.

Next, we need to come up with some assumptions.

  • The hosts are likely running at 90% memory and 30% CPU utilization, based on examples I've seen – somewhere in the realm of 2,764GB of RAM and 230 cores in use (see the sketch after this list).
  • The hosts are running 2 sockets of vSphere Enterprise Plus, with SnS – so we have 128 sockets of licences. There will be no theoretical savings on net-new licences as they’re already owned – but we might save money on SnS. There is no under-licencing that we’re trying to top up.
  • vSphere Enterprise Plus we'll assume to be ~$3,500 CAD/socket, with SnS at 20%, or about $700/socket/year.
  • The hosts are probably not licenced for Windows Server Datacenter, given the density – but who knows. Again, we're assuming the licences are owned, so no net-new savings, but there might be savings on Software Assurance.
  • We're using at least 40U of space, or a full rack, for the 4 chassis.
  • We're using 20,800 Watts (4x 5,200W), or about 20.8 kW.
  • While the original chassis are likely FC, let's assume for the moment that it's 10GbE iSCSI or NFS.

Now, let’s talk about how we can replace this all – and where the money will come from.

I just configured some Dell R630 1U rack servers, using two different memory densities to test some cost assumptions. The general and common settings are:

  • Dell R630 1U Rack server
  • 2x 750 Watt Power Supply
  • 1x 250GB SATA – just to have "a disk"
  • 10-disk 2.5" chassis – we won't be using local disks, though.
  • 1x PERC H730 – we don’t need it, but we’ll have it in case we add disks later.
  • Dual SD module
  • 4x Emulex 10GbE CNA on board
  • 2x E5-2695 v3 2.3GHz 14C/28T CPUs

With memory we get the following numbers:

  • 24x 32GB for 768GB total – $39.5K Web Price; assume a 35% discount = ~$26K
  • 24x 16GB for 384GB total – $23.5K Web Price; assume a 35% discount = ~$15.5K

The first thing we want to figure out is whether the higher memory density is cost effective. We know that 2x of the 384GB configs would come to $31K, or about $5K more than a single 768GB server. So even without bothering to factor in licencing costs, we know the denser config is cheaper. If you had to double up on vSphere, Windows Server Datacenter, Veeam, vCOPS, etc., it gets worse still. So very quickly we can make the justification to only include the 768GB configuration, and that's out of the way. However, it also tells us that if we need more density, we have some wiggle room to spend more on better CPUs with more cores/higher clocks – we can realistically spend up to ~$2.5K/CPU more and still come out even with doubling the host count at half the RAM.
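In rough numbers (a sketch using the discounted prices above, before any licencing):

    # One 768GB host vs. two 384GB hosts for the same total RAM.
    host_768gb = 26_000   # 24x 32GB, discounted
    host_384gb = 15_500   # 24x 16GB, discounted

    pair_384gb = 2 * host_384gb         # $31,000
    premium = pair_384gb - host_768gb   # ~$5,000 extra for doubling hosts
    cpu_headroom = premium / 2          # ~$2,500/CPU of room for faster parts
    print(f"premium ${premium:,}, CPU headroom ${cpu_headroom:,.0f}/socket")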

Now, how many will we need? We know from above we're "somewhere in the realm of 2,764GB of RAM and 230 cores". 230 cores / 28 cores per server means we need at least 8.2 hosts – we'll assume 9. 2,764GB of RAM requires only 3.6 hosts. But we also need to assume we'll want room for growth. Based on these numbers, let's work with the understanding that we'll want at least 10 hosts, to give us some overhead on the CPUs and room to grow. If we're wrong, we have lots of spare room for labs, DEV/TEST, finally building redundancy, expanding poorly performing VMs, etc. No harm in that.
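A quick sketch of the host-count math, using the in-use totals from above:

    import math

    # Hosts needed against the in-use totals (28 cores, 768GB per R630).
    hosts_for_cpu = math.ceil(230 / 28)     # 9 hosts (8.2 rounded up)
    hosts_for_ram = math.ceil(2764 / 768)   # 4 hosts (3.6 rounded up)
    hosts_to_buy = max(hosts_for_cpu, hosts_for_ram) + 1   # +1 for growth = 10
    print(hosts_for_cpu, hosts_for_ram, hosts_to_buy)

That makes the cost math fairly easy as well: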

  • $260K – 10x Dell R630’s with 768GB
  • $0 – licence savings from buying net new

We've now cost the company $260K and, so far, haven't shown any savings or justification. Even just based on hardware refresh and lifecycle costs, this is probably a doable number – it works out to $7.2K/month over 36 months.

What if we could get some of that money back? Let’s find some change in the cushions.

  • Licence SnS savings. We know we only need 20 sockets now to licence 10 hosts, so we can potentially let the other 108 sockets lapse. At $700/socket/year, this results in a savings of $75,600 per year, or ~$227K over 36 months – 87% of our purchase cost for the new equipment. We only need to find $33K now (see the payback sketch after this list).
  • Power savings. The Dell Energy Smart Solution Advisor (http://essa.us.dell.com/dellstaronline/Launch.aspx/ESSA?c=us&l=en&s=corp) suggests that each server will require 456 Watts, 2.1 Amps, and 1,600 BTU/hr of cooling – so the new solution draws ~4,560 Watts against the old 20,800 Watts. I pay $0.085/kWh here, so I'll use that number. (In the co-location facilities I'm familiar with, you're charged per power whip, not by usage – but as this environment is on site, we can assume they're charged only as used.) We've now saved another ~$1K/month, or $36K over 36 months – bringing us to $263K saved on a $260K purchase. How am I doing so far?
  • Rack space – we're down from 40U to 10U of space. Probably no cost savings here, but we can reuse the space.
  • Operational maintenance – we are now doing firmware, patching, upgrades, host configuration, etc., across 10 systems instead of 64. Whether that time works out to 1 or 12 hours per server per year, we're now doing ~84% less of it. Perhaps now we'll find the time to actually DO that maintenance.
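Pulling the two big savings together (a sketch; the socket counts, the 456W-per-server figure, and my $0.085/kWh rate are all assumptions from above):

    # Payback over 36 months: lapsed SnS plus usage-billed power.
    sockets_lapsed = 128 - 20                              # 108 sockets
    sns_savings = sockets_lapsed * 700 * 3                 # ~$227K

    old_kw = 4 * 5.2                                       # four chassis at 5,200W
    new_kw = 10 * 0.456                                    # ten R630s at 456W
    hours_3yr = 24 * 365 * 3
    power_savings = (old_kw - new_kw) * hours_3yr * 0.085  # ~$36K

    print(f"${sns_savings + power_savings:,.0f} recovered vs a $260K spend")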

So based on nothing more than power and licence *maintenance*, we've managed to recover all the costs. We have also drastically consolidated the environment, and can likely "finally" get around to migrating all the VMs into a single vSphere v5.5+ environment, retiring the v3.5/v4.x mixed configuration that was probably left that way due to "lack of time and effort".

We also need to consider the "other" ancillary things we're likely forgetting as benefits. Every one of these things that a site of this size might have represents a potential savings – either in net-new licencing or in maintenance:

  • vCloud Suite vs vSphere
  • vCOPS
  • Veeam or some other backup product, per socket/host
  • Windows Server Datacenter
  • SQL Server Enterprise
  • PernixData host based cache acceleration
  • PCIe/2.5” SSD’s for said caching

Maybe the site already has all of these things; maybe they're looking at them for next year's budget. If they have them, they can't reduce their licence counts, but they could drop the surplus SnS/maintenance. If they're planning for them, they now need ~84% less licencing. My friends in sales for these vendors won't like me very much for this, I'm sure – but they'd also be happy to have the solution be sellable, implemented, and a success story, which is always easier when you don't need as many licences.

I always like to provide more for less. The costs are already a wash, what else could we provide? Perhaps this site doesn’t have a DR site. Here’s an option to make that plausible:

  • $260K – 10x R630’s for the DR site
  • $0K – 20 sockets of vSphere Enterprise Plus – we'll just reuse some of the surplus licencing. We will need to keep paying SnS, though.
  • $15K – 20 sockets of vSphere Enterprise SnS
  • $40K – Pair of Nexus 5548 switches? It's been a while since I looked at pricing.

Spend ~$315K and you have most of a DR environment – at least the big part. You still have no storage, power, racks, etc., but you're far closer. This is a much better use of the same original dollars. The reason this part of the example works is that the licences already exist – we're not buying net-new. The question from the bean-counters, of course, will be "so what are we going to do, just throw them away???"
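The DR spend in one quick tally (a sketch; the $15K SnS line is 20 sockets at $700/year, rounded up, and the switch price is my guess):

    # Rough DR-site budget, reusing surplus vSphere licences.
    dr_hosts = 260_000     # 10x R630 at 768GB
    dr_licences = 0        # reuse 20 of the 108 surplus sockets
    dr_sns = 15_000        # keep paying SnS on those 20 sockets
    dr_switches = 40_000   # pair of Nexus 5548s (a guess)
    print(f"${dr_hosts + dr_licences + dr_sns + dr_switches:,}")   # $315,000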

Oh. Right. I totally forgot. Resale. :)

http://www.ebay.ca/itm/HP-C7000-Blade-Enclosure-16xBL460C-G6-Blades-2xSix-Core-2-66GHZ-X5650-64GB-600GB-/271584371114?pt=COMP_EN_Servers&hash=item3f3bb0a1aa

There aren't many C7000/BL460c systems listed as "Sold" on eBay, but the one above sold for ~$20K Canadian. Let's assume you chose to sell the equipment to a VAR that specializes in refurbishing – they're likely to give you 50% of that value. That's another $10K/chassis, or $40K for the 4 chassis.

As I re-read the above, I realize something. We need 9 hosts to meet the CPU requirement, but we'd end up with 7,680GB of RAM where we only really require 2,764GB today. Dropping each host to 16x 32GB (512GB) brings the cost down to ~$31K Web Price, or ~$20K with the 35% discount. At a savings of $6K/server, we'd end up with 5,120GB of RAM – just about double what we use today, so still lots of room to scale up – and we'll save another $60K today. In the event that we ever require the full capacity, we can easily purchase the extra 8x 32GB per host at a later date, likely at a discount as prices drop over time. However, the original discount is often not applied to parts-and-accessories pricing on a smaller deal, so check whether it actually is a savings.

How would you like a free SAN? :) Or 10 weeks of training at $6K each? I assume you have people on your team who could benefit from some training? Better tools? Spend your money BETTER! Better yet, spend the money you're entrusted to be the steward of better – it's not your money, treat it with respect.
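The revised memory math in one place (a sketch using the discounted prices above):

    # Dropping each host from 24x to 16x 32GB DIMMs (512GB/host).
    host_768gb, host_512gb = 26_000, 20_000          # discounted prices
    fleet_savings = 10 * (host_768gb - host_512gb)   # $60K saved today
    total_ram_gb = 10 * 512                          # 5,120GB, ~2x today's use
    print(f"save ${fleet_savings:,}, keep {total_ram_gb:,} GB of RAM")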

A re-summary of the numbers:

  • +$200K – 10x R630’s with 512GB today
  • +$0K – net-new licencing for vSphere Enterprise Plus
  • -$227K – 108 sockets of vSphere SnS we can drop, over 3 years.
  • -$36K – Power savings over 3 years
  • -$40K – Resale of the original equipment

Total: $103K to the good.
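Or, tallied as a quick sketch:

    # Running total from the summary above, in $K (savings positive).
    summary_k = {
        "10x R630 with 512GB": -200,
        "net-new vSphere licencing": 0,
        "108 sockets of SnS dropped (3yr)": 227,
        "power savings (3yr)": 36,
        "resale of old chassis": 40,
    }
    print(f"${sum(summary_k.values())}K to the good")   # $103K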

 

Footnote: I came back thinking about power. The co-location facility I've dealt with charges roughly:

  • $2000/month for a pair of 208V/30A circuits
  • $400/month for a pair of 110V/15A circuits
  • $Unknown for a pair of 20A circuits, unfortunately.

I got to thinking about what this environment would need – but also what it has. In my past, I've seen a single IBM BladeCenter chassis using 4x 208V/30A circuits, even though it could have been divided up better. So let's assume the same inefficiency was done here. Each HP c-Class chassis at 25.4A would require 3 pairs, or 12 pairs for the total configuration – somewhere in the area of $24,000/month in power. Yikes! Should it be less? Absolutely. But it likely isn't, based on the horrible things I've seen – probably people building as though they're charged by usage and not by drop.

The 10x rack servers, if I switch them to 110V instead of 208V, need about 3.5A each – spread across both circuits of a pair. I think that's at max load, but let's be fair and say you wouldn't put more than 3 of them (10.5A) on a 15A circuit. So you need 4x $400 pairs, for $1,600/month in power. Alternatively, you could put them all on a single 208V/30A pair at ~21A total, for $2,000/month. If you could, this would be the better option, as it lets you use only one pair of PDUs and leaves surplus capacity for growth, top-of-rack switching, etc.
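The circuit math, sketched out (the 3-servers-per-15A-circuit rule is my own conservatism, as above):

    import math

    # Old: four chassis at ~25.4A each, 3 pairs of 208V/30A circuits apiece.
    old_monthly = (4 * 3) * 2000               # $24,000/month

    # New: ten R630s at ~3.5A on 110V, at most 3 per 15A circuit pair...
    per_pair = math.floor(10.5 / 3.5)          # 3 servers per pair
    new_110v = math.ceil(10 / per_pair) * 400  # 4 pairs -> $1,600/month
    # ...or all ten on a single 208V/30A pair at ~21A total.
    new_208v = 1 * 2000                        # $2,000/month
    print(old_monthly, new_110v, new_208v)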

So potentially you're also going from $24K to $2K/month in power. For the sake of argument, let's assume I'm way wrong on the blades and they're using half that, or $12K – you're still saving $10K/month, or $360K over 36 months. Did you want a free SAN for your DR site, maybe? Just don't also count the earlier usage-based power savings, or you're double dipping (see the sketch below).
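That adjustment is where the new total comes from – swap the usage-billed $36K for the per-drop $360K:

    # Adjusted running total, in $K: drop the usage-based power savings,
    # add the per-drop co-location savings instead.
    new_total_k = 103 - 36 + 360
    print(f"${new_total_k}K to the good")   # $427K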

(New) Total: $427K to the good – AFTER getting your new equipment. 

Hi. I just saved you half a million bucks.
