Archive for the ‘RackSwitch’ Category

IBM RackSwitch–40GbE comes to the lab!

May 20, 2015 3 comments

Last year, I had a post about 10GbE coming to my home lab.  This year, 40GbE comes!

This definitely falls into the traditional “too good to pass up” category.  A company I’m doing work for picked up a couple of these, and there was enough of a supply that I was able to get my hands on a pair for a reasonable price.  Reasonable, at least, after liquidating the G8124’s from last year.  (Drop me a line, they’re available for sale!)

Some quick high-level details on these switches, summarized from the IBM/Lenovo Redbooks:

  • 1U, fully Layer 2 and Layer 3 capable
  • 4x 40GbE QSFP+ and 48x 10GbE SFP+
  • 2x power supplies, fully redundant
  • 4x fan modules, also hot-swappable
  • Mini-USB to serial console cable (dear god, how much I hate this non-standard part)
  • Supports 1GbE Copper Transceiver – no issues with Cisco GLC-T= units so far
  • Supports Cisco Copper TwinAx DAC cabling at 10GbE
  • Supports 40GbE QSFP+ cables from 10GTek
  • Supports virtual stacking, allowing for a single management unit

Front panel of the RackSwitch G8264

Everything else generally falls into line with the G8124.  Where those are listed as “Access” switches, these are listed as “Aggregation” switches.  Truly, I’ll probably NEVER have any need for this many 10GbE ports in my home lab, but I’ll also never run out.  Equally, I now have switches that match production in one of my largest environments, so I can get good and familiar with them.

I’m still on the fence about the value of the stacking.  While these are largely going to be used for iSCSI or NFS based storage, stacking may not even be required.  In fact, there’s an argument to be made for keeping them completely segregated, other than port-channels between them, so as to ensure that a bad stack command doesn’t take out both.  Also, the Implementing IBM System Networking 10Gb Ethernet Switches guide shows the following limitations:

When in stacking mode, the following stand-alone features are not supported:
Active Multi-Path Protocol (AMP)
BCM rate control
Border Gateway Protocol (BGP)
Converged Enhanced Ethernet (CEE)
Fibre Channel over Ethernet (FCoE)
IGMP Relay and IGMPv3
Link Layer Discovery Protocol (LLDP)
Loopback Interfaces
MAC address notification
Port flood blocking
Protocol-based VLANs
Router IDs
Route maps
sFlow port monitoring
Static MAC address addition
Static multicast
Uni-Directional Link Detection (UDLD)
Virtual NICs
Virtual Router Redundancy Protocol (VRRP)

That sure seems like a lot of limitations.  At a glance, I’m not sure anything there is the end of the world, but it sure is a lot to give up. 
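If I do skip stacking, the fallback design is just independent switches joined by an LACP port-channel, the same pattern I used for the ISL on the G8124’s.  A minimal sketch – ports 17-18 and key 100 here are placeholders I picked, not a tested config:

# Hypothetical inter-switch LACP port-channel – adjust ports and key to suit

interface port 17-18

lacp mode active

lacp key 100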

At this point, I’m actually considering filling a number of ports with GLC-T’s and using that for 1GbE.  A ‘waste’, perhaps, but if it means I can recycle my 1GbE switches, that’s an additional savings.  If anyone has a box of them they’ve been meaning to get rid of, I’d be happy to work something out. 

Some questions that will likely get asked, that I’ll tackle in advance:

  • Come on, seriously – they’re data center 10/40GbE switches.  YES, they’re loud.  They’re not, however, unliveable.  They do quiet down a bit after warm-up; during POST they run everything at 100% duty cycle.  But make no mistake, you’re not going to put one of these under the OfficeJet in your office, hook your NAS up to it, and not shoot yourself. 
  • Power is actually not that bad.  These are pretty green, and drop power to unlit ports.  I haven’t hooked up a Kill-a-Watt to them, but will tomorrow.  They’re on par with the G8124’s based on the amp display on the PDU’s I have them on right now. 
  • Yes, there are a couple more.  To give you a ballpark, if you check eBay for a Dell PowerConnect 8024F and think that’s doable – then you’re probably going to be interested.  You’d lose the 4x 10GBaseT combo ports, but you’d gain 24x 10GbE and 4x 40GbE.
  • I’m not sure yet if there are any 40GbE-compatible HBAs – I just haven’t looked into it.  I’m guessing the Mellanox ConnectX-3 might do it.  Really though, even at 10GbE, you’re not saturating that without a ton of disk IO. 

More to come as I build out various configurations for these and come up with what seems to be the best option for a couple of C6100 hosts. 

Wish me luck!

Categories: Hardware, Home Lab, IBM, RackSwitch

Got 10GbE working in the lab–first good results

October 2, 2014 12 comments

I’ve done a couple of posts recently on some IBM RackSwitch G8124 10GbE switches I’ve picked up.  While I have a few more to come with the settings I finally got working and how I figured them out, I have had some requests from a few people as to how well it’s all working.   So a very quick summary of where I’m at and some results…

What is configured:

  • 4x ESXi hosts running ESXi v5.5 U2 on a Dell C6100 4 node
  • Each node uses the Dell X53DF dual 10GbE Mezzanine cards (with mounting dremeled in, thanks to a DCS case)
  • 2x IBM RackSwitch G8124 10GbE switches
  • 1x Dell R510 running Windows 2012 R2 and StarWind SAN v8, with both an SSD+HDD VOL and a 20GB RAMDisk based VOL, using a BCM57810 2pt 10GbE NIC

IOMeter against the RAMDisk VOL, configured with 4 workers, 64 threads each, 4K 50% Read/50% Write, 100% Random:


StarWind side:


Shows about 32,000 IOPS

And an Atto Bench32 run:


Those numbers seem a little high.

I’ll post more details once I’ve had some sleep – I had to get something out, I was excited!

Soon to come are some details on the switches, covering iSCSI configuration without any LACP other than for inter-switch traffic using the ISL/VLAG ports, as well as a “First Time, Quick and Dirty Setup for StarWind v8”.  I needed something in the lab that could actually DO 10GbE, and had to use SSD and/or RAM to give it enough ‘go’ to see if the 10GbE was working at all.
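As a preview of the iSCSI side, the per-switch setup is really just an untagged storage VLAN on each fabric, along the lines of the following – the VLAN ID and port range are placeholders, not my actual config:

# Hypothetical iSCSI fabric A – switch B would get its own VLAN/subnet

vlan 100

name "iSCSI-A"

member 1-4

interface port 1-4

pvid 100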

I wonder what these will look like with some PernixData FVP as well…

UPDATED – 6/10/2015 – I’ve been asked for photos of the work needed to Dremel in the 10GbE Mezz cards on the C6100 server – and have done so!

HOWTO: IBM RackSwitch G8124 – Stacking and Port Channels

September 26, 2014 Leave a comment

Welcome to a work in progress! I fully suspect I’ll end up having to circle around and update some of this as I actually get more opportunity to test. I’m still working on some infrastructure in the lab to let me test these switches to their fullest, but in the meantime I’m trying to figure out how to get them set up the way I would if I had them at a client site. In general, this means supporting stacking or vPC LACP Port Channels, and connectivity to Cisco Nexus 5548’s.

I managed to find a PDF that shows just such a configuration:

The first figure covers a scenario with teamed NIC’s, with either a Windows host or vSphere ESXi with vDS and LACP:


The second option shows how one might do it with individual non-teamed NIC’s:


The importance of these slides is that they confirm:

  • Cisco Nexus vPC connectivity is certainly a valid use case.
  • The IBM/BNT/Blade terminology for vPC is vLAG – I can live with that.

What isn’t shown on THESE slides is some model information:

  • IBM G8000 48x 1GbE switches DO support stacking
  • IBM G8052 52x 1GbE switches do NOT support stacking, but support vLAG
  • IBM G8124 24x 10GbE switches do NOT support stacking, but support vLAG
  • IBM Virtual Fabric 10GbE BladeChassis switches DO support stacking

So there goes my hope for stacking. Not really the end of the world, if it supports vPC(vLAG). So with that in mind, we’ll move on.

I did manage to find a fellow who’s documented the VLAG and VRRP configuration on similar switches:

So with some piecing together, I get, for Switch 2 (Switch 1 was already configured):

# Configure the LACP Trunk/Port-Channel to be used for the ISL, using ports 23 and 24

interface port 23-24


lacp mode active

# Set the LACP key to 200

lacp key 200

pvid 4094



# Configure VLAN 4094 for the ISL VLAN and move the ports into it.

vlan 4094


name "VLAN 4094"

member 23-24


# Set a new STPG of 20 with STP disabled

no spanning-tree stp 20 enable

# Add ports 23 and 24 to said STPG

interface port 23-24

no spanning-tree stp 20 enable


# Create the VLAN and IP Interface

interface ip 100

# Remember that this is on Switch2, so it is using IP2

# Change this when configuring Switch1

ip address

# Assign this IP interface's subnet configuration to VLAN 4094

vlan 4094




# Configure the vLAG

vlag tier-id 10

# Indicate that the ISL VLAN is 4094

vlag isl vlan 4094

# As we’re on Switch2, this IP will be for Switch1 as the Peer

vlag hlthchk peer-ip

# Specify that same LACP ISL key of 200

vlag isl adminkey 200

# Enable the VLAG

vlag enable


If all goes well, you’ll see:


Sep 25 22:58:02 NW-IBMG8124B ALERT vlag: vLAG Health check is Up

Sep 25 22:58:11 NW-IBMG8124B ALERT vlag: vLAG ISL is up

Now, the questions I have for this:

  • How do I create an actual vLAG – say, using port 20 on both switches?

  • What traffic is passing on this vLAG ISL? Is this just a peer-configuration check, or is it actually passing data? I’m going to assume it’s functioning as a TRUNK ALL port, but I should probably sift through the docs.

  • When will I have something configured that can use this?

Expect me to figure out how to configure the first of these in the next few days. It can’t be that much harder. In the meantime, I’m also building up a HDD+SSD StarWind SAN in a host with 2x 10GbE SFP+ that should let me configure port channels all day long. For now, I don’t really need them, so it might be a bit before I come back to this. Realistically, for now, I just need iSCSI, which doesn’t really want any LACP – just each switch/path in its own subnet/VLAN/fabric, with individual target/initiator NICs, unteamed. So as soon as I get a device up that can handle 10GbE traffic, I’ll be testing that!
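For the first question, my untested reading of the docs is that a host-facing vLAG is just a matching LACP group on each switch, tied to a vLAG instance by its adminkey. Something like the following on BOTH switches – port 20 and key 300 are my placeholders, and I haven’t verified the vlag adminkey syntax yet:

# Put the host-facing port into an LACP group

interface port 20

lacp mode active

lacp key 300

# Bind that LACP adminkey to a vLAG instance

vlag adminkey 300 enable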

HOWTO: IBM RackSwitch G8124 – Initial Configuration

September 23, 2014 2 comments

With the acquisition of my new G8124F 10GbE switches, we need to look at the basic configuration. This is going to include general switch management that will be generic to any switch, such as:

  • Setting hostname and management IP on the OoB interface
  • DNS, SysLog, NTP
  • Management users
  • Confirming we can back up the config files to a TFTP server
  • RADIUS – I expect to need a HOWTO of its own, largely because I’m going to have to figure out what the RADIUS Server side requires

Information we’ll need:

Top Switch:

  • Hostname: NW-IBMG8124A
  • IP:
  • MGMT_A: NW-PC6248_1/g39 – VLAN 1 – Access
  • p24 -> NW-IBMG8124B/p24
  • p23 -> NW-IBMG8124B/p23
  • p01 -> NW-ESXI04 vmnic5

Bottom Switch:

  • Hostname: NW-IBMG8124B
  • IP:
  • MGMT_A: NW-PC6248_1/g39 – VLAN 1 – Access
  • p24 -> NW-IBMG8124A/p24
  • p23 -> NW-IBMG8124A/p23

Common Information:

  • Subnet:
  • Gateway:
  • DNS1:
  • DNS2:
  • NTP:
  • SysLog:

Manual Links:

What you can tell from the above is that ports 23/24 are linked together with a pair of Cisco passive DAC SFP+ TwinAx cables. Port 1 on the top switch is connected to an unused 10GbE port on an ESXi host so we can do some basic testing. Both switches have their MGTA ports connected to my current Dell PowerConnect 6248 switches, on ports {Top/Bottom}/g39 respectively, with no VLAN trunking. This won’t really matter for the basic configuration we’re doing now, but it will once we start configuring data ports vs. simply management interfaces.

1) Initial Login:

I was going to use my Digi CM32 and an RJ45 cable and converter to connect to the DB9; however, the cable and my converters are both female and I have no serial gender changers on hand. So instead, I opted to use two serial ports on two ESXi hosts, and connect the COM port to a VM. Note, you will have to power down the VM to do so, and it will prevent vMotion, etc. I’m using disposable VM’s I use for benchmarking and testing, so this isn’t a concern. Port speeds are whatever the default PuTTY assumes – 9600,8,N,1, I’m sure.


First, the hard part. The default login is “admin”, with a password of “admin”.

2) Enter configuration:


The first thing you’ll notice, is that so far, this feels very Cisco like. To get started, we enter the “enable” mode and then “conf t” to configure from the terminal.



configure terminal

3) Let’s confirm our running configuration:


Yup. That’s pretty reset to factory.


show running-config

4) As per the manual, we’ll set up the management IP’s on both switches:


Page 44 suggests the following commands:

interface ip-mgmt address

interface ip-mgmt netmask

interface ip-mgmt enable

interface ip-mgmt gateway

interface ip-mgmt gateway enable

However, as you can see above, it appears that the version of the firmware I’m running has two options for “interface ip-mgmt gateway” – address w.x.y.z and enable. So the actual commands are:


interface ip-mgmt address

interface ip-mgmt netmask

interface ip-mgmt enable

interface ip-mgmt gateway address

interface ip-mgmt gateway enable


You can expect to see a message like the above when the link comes up. In my case, this was because I didn’t configure the Dell PC6248’s until after doing this step.

5) Set the hostname:



hostname NW-IBMG8124B

We can set the hostname. Note that it changes immediately.

6) Now would be a good time to save our work:


Just like on a Cisco, we can use:

wr mem


copy running-config startup-config

Note the prompt above – because the switch was restored to factory defaults, it is booting in a special mode that bypasses any existing configurations. This is why it confirms whether you want your next boot to go to the current running/startup config.

7) Set NTP server(s):


You will need to configure at least the “primary-server”, if not also the “secondary-server”, with an IP address as well as the PORT on the switch that will do the communication. In my case, I’ll be letting the mgta-port connect out, but this could easily be a data port on the switch as well. Do note that it requires an IP address, so you won’t be able to use a DNS name, unfortunately. Then, enable the NTP functionality.


ntp primary-server mgta-port

ntp enable

You’ll note I made a typo, and used the wrong IP. That actually worked out well for the documentation:


When I changed the IP, you can see console immediately displays that it has updated the time.

This is also a good time (pun intended) to set up your timezone. You can use the “system timezone” command to be prompted via menus to select your numbered timezone. As I had no clue what my number might be for Alberta (DST-7?), I ran through the wizard – then checked the running config:


There we go. The command to set America/Canada/Mountain-Alberta as your timezone:

system timezone 93

8) Setup an admin user:


User access is a little different from a Cisco switch. Here we need to set the name, enter a password, give the user a level, and then enable the user. Note that you cannot enter the password at the command line – it will interactively prompt you. So there’s no point entering any password in the config.


access user 10 name nwadmin

access user 10 password

access user 10 level administrator

access user 10 enable

The running-config shows the password command as:

access user 10 password "f2cbfe00a240aa00b396b7e361f009f2402cfac143ff32cb09efa7212f92cef2"

Which suggests you must be able to provide the password at the command line, non-interactively.

It is worth noting the built in “administrator” account has some specialty to it. To change this password you would use:

access user administrator-password <password>

Setting the password to blank (null) will disable the account. Similar also exists for “operator-password” for the “oper” account, but it is disabled by default.

9) Setup SSH:

At this point, the switches are on the network, but I’m still configuring them via serial console. If we attempt to connect to them, we’ll realize that SSH doesn’t work but Telnet does – which is generally expected.



ssh port 22

ssh enable

You should now be able to connect as the user you just created, AS WELL AS the default user – admin with a password of admin.

10) Disable Telnet

Now that we’ve configured SSH, let’s get rid of telnet. There is no equivalent “telnet disable”, but you can use “no …” commands.



no access telnet enable

Note that my active Telnet connections were closed, and this is indicated on the console.

11) Set SNMP:

My SNMP needs are basic – I largely use it for testing monitoring and management products. So we’ll just set a basic Read Only and Read Write community, and we’ll set it for SNMP v2 which is the most common:



snmp location "NetWise Lab"

snmp name NW-IBMG8124B

snmp read-community "nw-ro"

snmp write-community "nw-rw"

snmp version v1v2v3

access snmp read-only

access snmp read-write

NOTE: The SNMP name will change the HOSTNAME, and should not include quotes. This makes me believe it is assumed to match the hostname, which is what most people set it to anyway.

12) Configure HTTPS access:

Some people like HTTPS configuration access; some see it as a security risk. I’ll enable it so I have the option of seeing what it looks like.



access https enable

If there is no self-signed certificate, it will generate one.

13) Configure DNS

It would be nice if we could get DNS for hostname resolution. Nothing is worse than having to remember IP’s.



ip dns primary-server mgta-port

ip dns secondary-server mgta-port

ip dns domain-name

14) Configure Spanning Tree

Any good switch should do some manner of Spanning Tree. As these will be my storage switches, we’ll ensure they are set to protect against loops, and also set to Rapid Spanning Tree (RSTP).



spanning-tree loopguard

spanning-tree mode rstp

15) Configure SysLog:


This is pretty simple – we point it at the IP and tell it to use the mgta-port.


logging host 1 address mgta-port

logging host 1 severity 7

logging log all

What is nice is that you can define two of them, by specifying “host 2”.
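Assuming “host 2” follows the same syntax as “host 1” (I haven’t configured a second target yet; A.B.C.D stands in for the address), that would look like:

logging host 2 address A.B.C.D mgta-port

logging host 2 severity 7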

16) Backup the running config:


Configuring the switch isn’t a lot of good if you don’t back up the configuration. So we’ll make a copy of the config to our TFTP server.


copy running-config tftp address filename NW-IBMG8124B_orig.cfg mgta-port

It is worth noting that it does support standard FTP as well, if you desire.
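I haven’t tried the FTP path myself, but presumably it mirrors the TFTP syntax with “ftp” substituted, and will prompt for credentials (A.B.C.D standing in for the server address):

copy running-config ftp address A.B.C.D filename NW-IBMG8124B_orig.cfg mgta-port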

So if we take all of the above and put the commands together, we get:


conf t

interface ip-mgmt address

interface ip-mgmt netmask

interface ip-mgmt enable

interface ip-mgmt gateway address

interface ip-mgmt gateway enable

hostname NW-IBMG8124A

copy running-config startup-config

ntp primary-server mgta-port

ntp enable

access user 10 name nwadmin

access user 10 password "f2cbfe00a240aa00b396b7e361f009f2402cfac143ff32cb09efa7212f92cef2"

access user 10 level administrator

access user 10 enable

#access user administrator-password <ChangeMe>

ssh port 22

ssh enable

no access telnet enable

snmp location "NetWise Lab"

snmp name NW-IBMG8124A

snmp read-community "nw-ro"

snmp write-community "nw-rw"

snmp version v1v2v3

access snmp read-only

access snmp read-write

access https enable

ip dns primary-server mgta-port

ip dns secondary-server mgta-port

ip dns domain-name

spanning-tree loopguard

spanning-tree mode rstp

logging host 1 address mgta-port

logging host 1 severity 7

logging log all

We now have a basically working switch, from a management perspective.  Next will be to get it passing some actual data!


Some other interesting commands:

While poking around with the (conf t) “list” command, which will show you all the command options, I found some interesting ones:

boot cli-mode ibmnos-cli

boot cli-mode iscli

boot cli-mode prompt

The ISCLI is the industry-standard, Cisco-like CLI, which is why it seems familiar. The other option is IBMNOS-CLI, which is… probably painful.


boot configuration-block active

boot configuration-block backup

boot configuration-block factory

Here is how we can tell the switch to reset itself or boot clean. It’s not immediately clear to me how this would be better than “erase startup-config” and “reload”, but it’s there.


boot schedule friday hh:mm

boot schedule monday hh:mm

boot schedule saturday hh:mm

boot schedule sunday hh:mm

boot schedule thursday hh:mm

boot schedule tuesday hh:mm

boot schedule wednesday hh:mm

I can’t think of a lot of times I’ve wanted to schedule the reboot of switches on a weekly basis. Or reasons why I’d need to, on a good switch. But… maybe it’s to know that it WILL reboot when the time comes? If you reboot it weekly, then you might not be so timid to do so after the uptime is 300+ days and no one remembers if this is the switch that has startup issues?


interface ip-mgta address A.B.C.D A.B.C.D A.B.C.D enable

Not sure why I’d want multiple IP’s on the management interface – but you can.

interface ip-mgta dhcp

In case you want to set your management IP’s to DHCP. Which sounds like a fun way to have a bad day someday…


ldap-server backdoor

Not sure what on earth this does


ldap-server domain WORD

ldap-server enable

ldap-server primary-host A.B.C.D mgta-port

ldap-server secondary-host A.B.C.D mgta-port

Need to look into what LDAP supports


logging console severity <0-7>

logging console

Sets up how much is logged to the console


logging host 1 address A.B.C.D mgta-port

Configures syslog via the mgta-port


logging log all

Logs everything, but you can do very granular enablement.


radius-server backdoor

Not sure what on earth this does

radius-server domain WORD

radius-server enable

radius-server primary-host A.B.C.D mgta-port

radius-server secondary-host A.B.C.D mgta-port

I’ll need to find the appropriate commands for both the switches as well as the RADIUS server to enable groups.


virt vmware dpg update WORD WORD <1-4094>

virt vmware dpg vmac WORD WORD

virt vmware dvswitch add WORD WORD WORD

virt vmware dvswitch add WORD WORD

virt vmware dvswitch addhost WORD WORD

virt vmware dvswitch adduplnk WORD WORD WORD

virt vmware dvswitch del WORD WORD

virt vmware dvswitch remhost WORD WORD

virt vmware dvswitch remuplnk WORD WORD WORD

virt vmware export WORD WORD WORD

I understood the switch was virtualization aware – but this is going to need some deeper investigation!

Categories: Hardware, Home Lab, IBM, RackSwitch

IBM RackSwitch–10GbE comes to the lab.

September 20, 2014 3 comments

I’ve recently come into some IBM RackSwitch switches in both the 1GbE and 10GbE varieties, which I’m hoping will work out well for the home lab. While they’re neither the Dell 8024F’s nor the Cisco Nexus 5548’s that I have the most experience with – they’re significantly cheaper. I’m going to be doing a series of posts on getting these units running over the next week or two, but wanted to collect some notes first for those who might like some background on these switches.

First, let’s get some information links out there:

IBM G8000 48 port 1GbE 1U Switch with dual uplink module options:

IBM G8124 24 port 10GbE SFP+ 1U Switch

First, a little background on these switches, as you’ve probably never heard of them. BLADE Network Technologies was a company that came from the remnants of Nortel’s blade-chassis networking division. It was eventually purchased by IBM, and its products became the IBM RackSwitch and BNT lines, often seen used in IBM BladeCenter chassis. So when you go looking for information, you may have to dig around and switch names to find what you’re looking for. The links above should certainly help you out.

So why would someone want some non-Cisco, HP, Juniper, or Dell switching? Cost. They’re not very popular, and that works for us. Normally, the more popular enterprise equipment was, the lower its resale price off-lease or on eBay. However, when no one has ever heard of it at all – that tends to make it cheap too. Like:

$1000 USD for the G8124 24pt 10GbE switch

$160 USD for the G8000 48pt 1GbE switch

What made me prefer these over the Dells I’ve been using? Well, I can find a Dell PowerConnect 8024F 24pt 10GbE switch for about $1500 if I get lucky. I do like them, as they have 4x RJ45 1000/10000 combo ports, and standard RJ45-based console ports. But the power supplies are often $350, and the $1500 switches seldom come with two. The G8124 comes with both power supplies, and supports stacking, for $1000. How can I pass that up?

The PowerConnect 6248’s have always treated me well, and support CX4-based stacking in the rear, which removes the need to tie up any front ports. They also support a 2pt 10GbE SFP+ module that I can never find cheap. With the CX4 stacking module and cables, they’re typically worth about $600 each. Understandably, the G8000 at $160 was a steal. Especially when you consider that they come with dual power supplies.

So the hope is to take:

1x PC8024F working, single power supply (~$1300)

1x PC8024F non-working, single power supply (~$500)

2x PC6248 working, single power supply, CX4 stacking (~$500)

Or about $2800 in switching

And turn it into:

2x IBM G8124 (~$1300 landed in CAD)

2x IBM G8000 (~$250 landed in CAD)

For about $3100 in switching.

The hope, of course, was to get dual redundant 10GbE switching, and have everything have dual power supplies, for a near wash.

Then I ran into a snag. It turns out these stupid switches require some “mini-USB B to DB9” console cable. It’s the same as the IBM VirtualFabric BladeChassis switches use – I could tell based on some units one of my clients had. However, they manage theirs via the chassis/AMM and not the console, and didn’t have cables available. So I had to dig some up. If you’re looking for them, you might find this PDF – – which will help. It’s even so kind as to list some part numbers:

  • Blade Part: BN-SB-SRL-CBL

  • IBM FRU/CRU: 43X0510

Except, it turns out that FRU isn’t an orderable number, and you’ll get bounced all over IBM if you’re looking for it. You do have two orderable numbers you can try:

  • 90Y9338 – mini-USB B to DB9 only

  • 90Y9462 – mini-USB B to DB9 and mini-USB B to RJ45 for use with a Cisco console; this is the kit that comes with the VirtualFabric switches.

You’ll find out that no one stocks this cable, and no one is even sure where to order it from. I randomly googled for nights while watching TV until I found a place that said they could. And 3 weeks later, I had some cables. They work!

So where am I at now? I have completed the basic configuration and documentation of the G8124’s, and verified they work. I need to get them configured for some networking – primarily stacking, and then simple 10GbE so I can use them as vMotion and PernixData interconnects on my ESXi hosts. iSCSI is coming, but right now all my iSCSI is 1GbE, so I’d have to connect the SANs via SFP+ to 1GbE RJ45 transceivers. I DO have XFP HBA’s for my NetApp FAS3040’s, so with any luck, and some fibre cables, I can make that work out. What I’d really like to do is find some of the 2pt SFP+ modules for the G8000, and then I think I could get fancy.

If nothing else, I’m learning IBM Networking, and having some fun.

More to come soon!

Categories: Hardware, Home Lab, IBM, RackSwitch