Archive

Archive for the ‘Home Lab’ Category

C6100 IPMI Issues with vSphere 6

July 15, 2015 Leave a comment

So I’m not 100% certain whether the issues I’m having on my C6100 server are vSphere 6 related.  But I’ve seen similar issues before in my lab, so it may be one of a few things.

After a recent upgrade, I noticed that some of my VMs seemed “slow” – which is hard to quantify.  Then this morning I woke up to having internet but no DNS, so I knew my DC was down.  The hosts were up, though.  So I gave them a hard boot, connected to the IPMI KVM, and watched the startup – only to see “loading ipmi_si_drv…” and it just sitting there.

In the past, this seemed to be related to a failing SATA disk, and the solution was to pop it out – which helped temporarily until I replaced the disk outright.  But these are new drives.  Trying the same here did not work, though I only tried the spinning disks and not the SSDs.  Rather than mess around, I thought I’d see if I could at least disable IPMI in order to troubleshoot.

Turns out, I wasn’t alone – though the existing reports just weren’t specific to vSphere 6:

https://communities.vmware.com/message/2333989

http://www.itblah.com/installing-or-upgrading-to-vmware-vsphere-hypervisor-5-esxi-5-using-the-interactive-method/

https://xuri.me/2014/12/06/avoid-vmware-esxi-loading-module-ipmi_si_drv.html

That last one is the option I took:

  • Press SHIFT+O during the Hypervisor startup
  • Append “noipmiEnabled” to the boot args

Which got my hosts up and running. 

I haven’t done any deeper troubleshooting, nor have I permanently disabled the IPMI with the options of:

Manually turn off or remove the module by turning the option “VMkernel.Boot.ipmiEnabled” off in vSphere or using the commands below:

# Do a dry run first:
esxcli software vib remove --dry-run --vibname ipmi-ipmi-si-drv
# Remove the module:
esxcli software vib remove --vibname ipmi-ipmi-si-drv
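For reference, the VMkernel.Boot.ipmiEnabled option mentioned above can apparently also be flipped from the ESXi shell rather than the vSphere client.  A minimal sketch – I haven’t run this on these hosts yet, so treat the exact invocation as an assumption to verify on your build:

```shell
# Check the current value of the IPMI boot/kernel setting:
esxcli system settings kernel list -o ipmiEnabled
# Turn it off so ipmi_si_drv is skipped at boot (takes effect after a reboot):
esxcli system settings kernel set -s ipmiEnabled -v FALSE
```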

We’ll see what comes when I get more time…

Categories: C6100, ESXi, Home Lab, vSphere

Modifying the Dell C6100 for 10GbE Mezz Cards

June 11, 2015 3 comments

In a previous post, Got 10GbE working in the lab – first good results, I talked about getting 10GbE working with my Dell C6100 series.  Recently, a commenter asked me if I had any pictures of the modifications I had to make to the rear panel to make these 10GbE cards work.  As I have another C6100 I recently acquired (yes, I have a problem…) that needs the mods, it seems only prudent to share the steps I took in case it helps someone else.

First a little discussion about what you need:

  • Dell C6100 with the non-removable rear panel plate (e.g. the DCS cases)
  • Dell X53DF/TCK99 2 Port 10GbE Intel 82599 SFP+ Adapter
  • Dell HH4P1 PCI-E Bridge Card

You may find the Mezz card under either part number – it seems that the X53DF replaced the TCK99.  Perhaps one is the P/N and one is the FRU or some such.  But you NEED that little PCI-E bridge card.  It is usually included, but pay special attention to the listing to ensure it is.  What you DON’T really need is the mesh back plate on the card – you can get it bare.

[Photos]

Shown above are the 2pt 10GbE SFP+ card in question, and also the 2pt 40GbE Infiniband card.  Above them both is the small PCI-E bridge card.

[Photo]

You want to remove the two screws to remove the backing plate on the card.  You won’t be needing it, and you can set it aside.  The screws attach through the card and into the bracket, so once removed, reinsert the screws to the bracket to keep from losing them.

[Photo]

Here we can see the back panel of the C6100 sled.  Ready to go for cutting.

[Photos]

You can use the factory rear plate as a template against the back panel.  Here you can see where to line it up and mark the cuts you’ll be making.  Note that the bracket will of course sit higher up on the unit, so you’ll have to adjust your horizontal lines accordingly.

[Photos]

If we look to the left, we can see the source of the problem that causes us to have to do this work.  The back panel here is not removable, and wraps around the left corner of the unit.  On systems with the removable plate, this simply unscrews and the panel attached to the card slots in.  On the right-hand side you can see the two screws that would attach the panel and card in that case.

[Photo]

Here’s largely what we get once the cuts are complete.  Perhaps you’re better with a Dremel than I am.  Note that the vertical cuts can be tough depending on the size of the cutting disk you have, as the sled-release bar may interfere.

[Photos]

You can now attach the PCI-E bridge card to the Mezz card, and slot it in.  I found it easiest to come in at about a 20 degree angle, slot the 2 ports into the cut-outs, then drop the PCI-E bridge into the slot.  When it’s all said and done, you’ll find it pretty secure and good to go.

That’s really about it.  Not a whole lot to it, and if you have it all in hand, you’d figure it out pretty quickly.  This is largely to show where my cut lines ended up compared to the actual cuts, and where adjustments could be made to make the cuts tighter if you wanted.  Also, if you’re planning to order but are not sure if it works or is possible, then this should help out quite a bit.

Some potential vendors I’ve had luck with:

http://www.ebay.com/itm/DELL-X53DF-10GbE-DUAL-PORT-MEZZANINE-CARD-TCK99-POWEREDGE-C6100-C6105-C6220-/181751541002? – accepted $60 USD offer.

http://www.ebay.com/itm/DELL-X53DF-DUAL-PORT-10GE-MEZZANINE-TCK99-C6105-C6220-/181751288032?pt=LH_DefaultDomain_0&hash=item2a513890e0 – currently lists for $54 USD, I’m sure you could get them for $50 without too much negotiating.

Categories: C6100, Dell, Hardware, Home Lab

EVALExperience now includes vSphere 6!

June 3, 2015 Leave a comment

I know I’ve had a lot of local VMUG members, as well as members of the forums I frequent, asking when vSphere 6, vCenter 6, and ESXi 6 would be available as part of EVALExperience – understandably, people are anxious to get their learning, labbing, and testing on.

I’m happy to announce that it looks like it’s up.  If you head over to the VMUG page found at http://www.vmug.com/p/cm/ld/fid=8792 you’ll note that they show:

NEW! vSphere 6 and Virtual SAN 6 Now Available!

Of course, if you’ve signed up with VMUG, you should be getting the e-mail I just received as well.  I’m not certain whether it goes to all VMUG members, or only those who are already EVALExperience subscribers.

What is now included, as per the e-mail blast is:

NOW AVAILABLE! VMware recently announced the general availability of VMware vSphere 6, VMware Integrated OpenStack and VMware Virtual SAN 6 – the industry’s first unified platform for the hybrid cloud! EVALExperience will be releasing the new products and VMUG Advantage subscribers will be able to download the latest versions of:

  • vCenter Server Standard for vSphere 6
  • vSphere with Operations Management Enterprise Plus
  • vCloud Suite Standard
  • Virtual SAN 6
  • *New* Virtual SAN 6 All Flash Add-On
It is worth noting that the product download has been updated and upgraded.  They do call out that the old product and keys will no longer be available.  I can understand why, as part of this is to help members stay current.  But it would be nice if you could use the N-1 version for a year of transition, etc.  Not everyone can cut over immediately, and some people use their home labs to mirror the production environment at work so they can come home and try something they couldn’t at the office.

Some questions I’ve had asked, and the answers I’m aware of:

  1. How many sockets are included?  The package includes 6 sockets for 3 hosts.
  2. Are the keys 365 days or dated expiry?  I understand they’re dated expiry, so if you install a new lab 2 weeks before the end of your subscription, you’ll see 14 days left, not 364.
  3. What about VSAN?  There had previously been a glitch which gave only one host’s worth of licences – which clearly does not work.  This has been corrected.

Just a friendly reminder, as a VMUG leader, to look into the VMUG Advantage membership.  As always, VMUG membership itself is free – come on down and attend a local meeting (the next Edmonton VMUG is June 16 and you can sign up here – http://www.vmug.com/p/cm/ld/fid=10777).

In addition, your VMUG Advantage subscriber benefits include:  

  • FREE Access to VMworld 2015 Content
  • 20% discount on VMware Certification Exams & Training Courses (If you need/want a $3500 course plus a $225 exam – $3725 total – then spending $200 or so on VMUG Advantage to bring your cost to $2800+$180=$2980 is a great way to save $745.  This is the pitch you should be giving your employer.)
  • $100 discount on VMworld 2015 registration (This is the only “stackable” discount for VMworld.  Pre-registration/early-bird pricing ends on June 8th, I believe.)
  • 35% discount on VMware’s Lab Connect
  • 50% discount on Fusion 7 Professional
  • 50% discount on VMware Player 7 Pro
  • 50% discount on VMware Workstation 11
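Working the certification/training example out loud (same numbers as the bullet above):

```shell
course=3500 exam=225                           # list prices from the example
full=$((course + exam))                        # 3725 at full price
disc=$((course * 80 / 100 + exam * 80 / 100))  # 20% off each = 2800 + 180
echo "full=\$$full discounted=\$$disc saved=\$$((full - disc))"
# → full=$3725 discounted=$2980 saved=$745
```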
Happy labbing, and if you’re local, hope to see you on June 16!

    IBM RackSwitch–40GbE comes to the lab!

    May 20, 2015 3 comments

    Last year, I had a post about 10GbE coming to my home lab (https://vnetwise.wordpress.com/2014/09/20/ibm-rackswitch10gbe-comes-to-the-lab/).  This year, 40GbE comes! 

    This definitely falls into the traditional “too good to pass up” category.  A company I’m doing work for picked up a couple of these, and there was enough of a supply that I was able to get my hands on a pair for a reasonable price.  Reasonable, at least, after liquidating the G8124’s from last year.  (Drop me a line – they’re available for sale!)

    Some quick high level on these switches, summarized from the IBM/Lenovo RedBooks (http://www.redbooks.ibm.com/abstracts/tips1272.html?open):

    • 1U Fully Layer 2 and Layer 3 capable
    • 4x 40GbE QSFP+ and 48x 10GbE SFP+
    • 2x power supply, fully redundant
    • 4x fan modules, also hot swappable.
    • Mini-USB to serial console cable (dear god, how much I hate this non-standard part)
    • Supports 1GbE Copper Transceiver – no issues with Cisco GLC-T= units so far
    • Supports Cisco Copper TwinAx DAC cabling at 10GbE
    • Supports 40GbE QSFP+ cables from 10GTek
    • Supports virtual stacking, allowing for a single management unit

    Front panel of the RackSwitch G8264

    Everything else generally falls into line with the G8124.  Where those are listed as “Access” switches, these are listed as “Aggregation” switches.  Truly, I’ll probably NEVER have any need for this many 10GbE ports in my home lab, but I’ll also never run out.  Equally, I now have switches that match production in one of my largest environments, so I can get good and familiar with them.

    I’m still on the fence about the value of the stacking.  While these are largely going to be used for ISCSI or NFS based storage, stacking may not even be required.  In fact, there’s an argument to be made for keeping them completely segregated other than the port-channels between them, so as to ensure that a bad stack command doesn’t take out both.  Also, the Implementing IBM System Networking 10Gb Ethernet Switches guide shows the following limitations:

    When in stacking mode, the following stand-alone features are not supported:
    Active Multi-Path Protocol (AMP)
    BCM rate control
    Border Gateway Protocol (BGP)
    Converged Enhanced Ethernet (CEE)
    Fibre Channel over Ethernet (FCoE)
    IGMP Relay and IGMPv3
    IPv6
    Link Layer Detection Protocol (LLDP)
    Loopback Interfaces
    MAC address notification
    MSTP
    OSPF and OSPFv3
    Port flood blocking
    Protocol-based VLANs
    RIP
    Router IDs
    Route maps
    sFlow port monitoring
    Static MAC address addition
    Static multicast
    Uni-Directional Link Detection (UDLD)
    Virtual NICs
    Virtual Router Redundancy Protocol (VRRP)

    That sure seems like a lot of limitations.  At a glance, I’m not sure anything there is end of the world, but it sure is a lot to give up. 

    At this point, I’m actually considering filling a number of ports with GLC-T’s and using that for 1GbE.  A ‘waste’, perhaps, but if it means I can recycle my 1GbE switches, that’s an additional savings.  If anyone has a box of them they’ve been meaning to get rid of, I’d be happy to work something out. 

    Some questions that will likely get asked, that I’ll tackle in advance:

    • Come on, seriously – they’re data center 10/40GbE switches.  YES, they’re loud.  They’re not, however, unliveable.  They do quiet down a bit after warm up – they run everything at 100% duty cycle during POST.  But make no mistake, you’re not going to put one of these under the OfficeJet in your office, hook your NAS up to it, and not shoot yourself.
    • Power is actually not that bad.  These are pretty green, and drop power to unlit ports.  I haven’t hooked up a Kill-A-Watt to them, but will tomorrow.  They’re on par with the G8124’s, based on the amp display on the PDU’s I have them on right now.
    • Yes, there are a couple more. ;)  To give you a ballpark: if you check eBay for a Dell PowerConnect 8024F and think that’s doable, then you’re probably going to be interested.  You’d lose the 4x 10GBase-T combo ports, but you’d gain 24x 10GbE and 4x 40GbE.
    • I’m not sure yet if there are any 40GbE compatible HBAs – I just haven’t looked into it.  I’m guessing a Mellanox ConnectX-3 might do it.  Really though, even at 10GbE, you’re not saturating that without a ton of disk IO.

    More to come as I build out various configurations for these and come up with what seems to be the best option for a couple of C6100 hosts. 

    Wish me luck!

    Categories: Hardware, Home Lab, IBM, RackSwitch

    Got 10GbE working in the lab–first good results

    October 2, 2014 12 comments

    I’ve done a couple of posts recently on some IBM RackSwitch G8124 10GbE switches I’ve picked up.  While I have a few more to come with the settings I finally got working and how I figured them out, I have had some requests from a few people as to how well it’s all working.   So a very quick summary of where I’m at and some results…

    What is configured:

    • 4x ESXi hosts running ESXi v5.5 U2 on a Dell C6100 4 node
    • Each node uses the Dell X53DF dual 10GbE Mezzanine cards (with mounting dremeled in, thanks to a DCS case)
    • 2x IBM RackSwitch G8124 10GbE switches
    • 1x Dell R510 running Windows 2012 R2 and StarWind SAN v8, with both an SSD+HDD VOL and a 20GB RAMDisk based VOL, using a BCM57810 2pt 10GbE NIC

    Results:

    IOMeter against the RAMDisk VOL, configured with 4 workers, 64 threads each, 4K 50% Read/50% Write, 100% Random:

    [Screenshot: IOMeter results]

    StarWind side:

    [Screenshot: StarWind performance view]

    Shows about 32,000 IOPS
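As a quick sanity check on what that means for the wire (back-of-the-napkin, decimal units – assuming all the IO really was 4K):

```shell
iops=32000 block=4096                 # 4K blocks, per the IOMeter profile
bytes=$((iops * block))               # bytes per second
echo "$((bytes / 1000000)) MB/s ~= $((bytes * 8 / 1000000)) Mb/s"
# → 131 MB/s ~= 1048 Mb/s – barely a tenth of a 10GbE link at 4K random
```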

    And an Atto Bench32 run:

    [Screenshot: Atto Bench32 results]

    Those numbers seem a little high.

    I’ll post more details once I’ve had some sleep – I had to get something out, I was excited. :)

    Soon to come are some details on the switches – an ISCSI configuration with no LACP other than the inter-switch traffic on the ISL/VLAG ports – as well as a “First time, Quick and Dirty Setup for StarWind v8”.  I needed something in the lab that could actually DO 10GbE, and had to use SSD and/or RAM to give it enough ‘go’ to see whether the 10GbE was working at all.

    I wonder what these will look like with some PernixData FVP as well…

    UPDATED – 6/10/2015 – I’ve been asked for photos of the work needed to Dremel in the 10GbE Mezz cards on the C6100 server – and have done so!  https://vnetwise.wordpress.com/2015/06/11/modifying-the-dell-c6100-for-10gbe-mezz-cards/

    HOWTO: IBM RackSwitch G8124 – Stacking and Port Channels

    September 26, 2014 Leave a comment

    Welcome to a work in progress. :)  I fully suspect I’ll end up having to circle around and update some of this as I get more opportunity to test.  I’m still working on some infrastructure in the lab to let me test these switches to their fullest, but in the meantime I’m trying to figure out how to set them up the way I would at a client site.  In general, this means supporting stacking or vPC LACP Port Channels, and connectivity to Cisco Nexus 5548’s.

    I managed to find a PDF that shows just such a configuration: http://www.fox-online.cz/ibm/systemxtraining/soubory/czech-2013-bp-final_slawomir-slowinski.pdf

    The first figure covers a scenario with teamed NIC’s, with either a Windows host or vSphere ESXi with vDS and LACP:

    [Diagram]

    The second option shows how one might do it with individual non-teamed NIC’s:

    [Diagram]

    The importance of these slides is that they confirm:

    • Cisco Nexus vPC connectivity is certainly a valid use case.
    • The IBM/BNT/Blade terminology for vPC is vLAG – I can live with that

    What isn’t shown on THESE slides is some model information:

    • IBM G8000 48x 1GbE switches DO support stacking
    • IBM G8052 52x 1GbE switches do NOT support stacking, but support vLAG
    • IBM G8124 24x 10GbE switches do NOT support stacking, but support vLAG
    • IBM Virtual Fabric 10GbE BladeChassis switches DO support stacking

    So there goes my hope for stacking. Not really the end of the world, if it supports vPC(vLAG). So with that in mind, we’ll move on.

    I did manage to find a fellow who’s documented the VLAG and VRRP configuration on similar switches: http://pureflexbr.blogspot.ca/2013/10/switch-en4093-vlag-and-vrrp-config.html

    So with some piecing together, I get, for Switch 2 (Switch 1 was already configured):

    # Configure the LACP Trunk/Port-Channel to be used for the ISL, using ports 23 and 24
    interface port 23-24
    tagging
    lacp mode active
    # Set the LACP key to 200
    lacp key 200
    pvid 4094
    exit
    !
    # Configure VLAN 4094 for the ISL VLAN and move the ports into it.
    vlan 4094
    enable
    name "VLAN 4094"
    member 23-24
    !
    # Set a new STPG of 20 with STP disabled
    no spanning-tree stp 20 enable
    # Add ports 23 and 24 to said STPG
    interface port 23-24
    no spanning-tree stp 20 enable
    exit
    # Create the VLAN and IP Interface
    interface ip 100
    # Remember that this is on Switch2, so it is using IP2
    # Change this when configuring Switch1
    ip address 10.0.100.252 255.255.255.0
    # Configure this subnet for VLAN 4094
    vlan 4094
    enable
    exit
    !
    # Configure the vLAG
    vlag tier-id 10
    # Indicate that the ISL VLAN is 4094
    vlag isl vlan 4094
    # As we’re on Switch2, this IP will be for Switch1 as the Peer
    vlag hlthchk peer-ip 10.0.100.251
    # Specify that same LACP ISL key of 200
    vlag isl adminkey 200
    # Enable the vLAG
    vlag enable
    !

    If all goes well, you’ll see:

    [Screenshot]

    Sep 25 22:58:02 NW-IBMG8124B ALERT vlag: vLAG Health check is Up
    Sep 25 22:58:11 NW-IBMG8124B ALERT vlag: vLAG ISL is up

    Now, the questions I have for this:

    • How do I create an actual vLAG – say, using port 20 on both switches?

    • What traffic is passing on this vLAG ISL?  Is it just a peer-configuration health check, or is it actually passing data?  I’m going to assume it’s functioning as a TRUNK ALL port, but I should probably sift through the docs.

    • When will I have something configured that can use this? :)

    Expect me to figure out how to configure the first in the next few days. It can’t be that much harder. In the meantime, I’m also building up a HDD+SSD StarWind SAN in a host with 2x 10GbE SFP+ that should let me configure port channels all day long. For now, I don’t really need them, so it might be a bit before I come back to this. Realistically, for now, I just need ISCSI, which doesn’t really want any LACP, just each switch/path to be in its own subnet/VLAN/fabric, with individual target/initiator NIC’s, unteamed. So as soon as I get a device up that can handle 10GbE traffic, I’ll be testing that!
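For the first question, here’s my untested guess, pieced together from the same docs as the ISL config above – the port number (20) and LACP key (1000) are placeholders of my choosing.  On BOTH switches:

    # Make port 20 an LACP port with a shared admin key
    interface port 20
    tagging
    lacp mode active
    lacp key 1000
    exit
    # Tell vLAG that this admin key forms a cross-switch LAG with the peer
    vlag adminkey 1000 enable

If that’s right, a host or downstream switch LACP-bonded to port 20 on each G8124 should see a single logical port-channel.  I’ll confirm once I have something that can actually use it.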

    HOWTO: IBM RackSwitch G8124 – Initial Configuration

    September 23, 2014 6 comments

    With the acquisition of my new G8124F 10GbE switches (https://vnetwise.wordpress.com/2014/09/20/ibm-rackswitch10gbe-comes-to-the-lab/), we need to look at the basic configuration.  This is going to include general switch management that will be generic to any switch, such as:

    • Setting hostname and management IP on the OoB interface
    • DNS, SysLog, NTP
    • Management users
    • Confirming we can back up the config files to a TFTP server
    • RADIUS – I expect to need a HOWTO of its own, largely because I’m going to have to figure out what the RADIUS Server side requires

    Information we’ll need:

    Top Switch:

    • Hostname: NW-IBMG8124A
    • IP: 10.0.0.94
    • MGMT_A: NW-PC6248_1/g39 – VLAN 1 – Access
    • p24 -> NW-IBMG8124B/p24
    • p23 -> NW-IBMG8124B/p23
    • p01 -> NW-ESXI04 vmnic5

    Bottom Switch:

    • Hostname: NW-IBMG8124B
    • IP: 10.0.0.95
    • MGMT_A: NW-PC6248_1/g39 – VLAN 1 – Access
    • p24 -> NW-IBMG8124A/p24
    • p23 -> NW-IBMG8124A/p23

    Common Information:

    • Subnet: 255.255.255.0
    • Gateway: 10.0.0.1
    • DNS1: 10.0.0.11
    • DNS2: 10.0.0.12
    • NTP: 10.0.0.11
    • SysLog: 10.0.0.10

    Manual Links:

    What you can tell from above, is that ports 23/24 are linked together with a pair of Cisco passive DAC SFP+ TwinAx cables. Port 1 on the top switch is connected to an unused 10GbE port on an ESXi host so we can do some basic testing. Both switches have their MGTA ports connected to my current Dell PowerConnect 6248 switches, on ports {Top/Bottom}/g39 respectively, with no VLAN trunking. This won’t really matter for the basic configuration we’re doing now, but it will once we start configuring data ports vs simply management interfaces.

    1) Initial Login:

    I was going to use my Digi CM32 with an RJ45 cable and converter to connect to the DB9; however, the cable and my converters are all female, and I have no serial gender changers on hand.  So instead, I opted to use the serial ports on two ESXi hosts, and connect the COM port to a VM.  Note that you will have to power down the VM to do so, and it will prevent vMotion, etc.  I’m using disposable VM’s I use for benchmarking and testing, so this isn’t a concern.  Port speeds are whatever PuTTY assumes by default – 9600,8,N,1, I’m sure.

    [Screenshot]

    First, the hard part.  The default login is “admin” with no password.

    2) Enter configuration:

    [Screenshot]

    The first thing you’ll notice is that, so far, this feels very Cisco-like.  To get started, we enter “enable” mode and then “conf t” to configure from the terminal.

    Command:

    enable

    configure terminal

    3) Let’s confirm our running configuration:

    [Screenshot]

    Yup. That’s pretty reset to factory.

    Command:

    show running-config

    4) As per the manual, we’ll set up the management IP’s on both switches:

    [Screenshot]

    Page 44 suggests the following commands:

    interface ip-mgmt address 10.0.0.94

    interface ip-mgmt netmask 255.255.255.0

    interface ip-mgmt enable

    interface ip-mgmt gateway 10.0.0.1

    interface ip-mgmt gateway enable

    However, as you can see above, it appears that the version of the firmware I’m running has two options for “interface ip-mgmt gateway” – address w.x.y.z and enable. So the actual commands are:

    Commands:

    interface ip-mgmt address 10.0.0.94

    interface ip-mgmt netmask 255.255.255.0

    interface ip-mgmt enable

    interface ip-mgmt gateway address 10.0.0.1

    interface ip-mgmt gateway enable

    [Screenshot]

    You can expect to see a message like the above when the link comes up. In my case, this was because I didn’t configure the Dell PC6248’s until after doing this step.

    5) Set the hostname:

    [Screenshot]

    Command:

    hostname NW-IBMG8124B

    We can set the hostname. Note that it changes immediately.

    6) Now would be a good time to save our work:

    [Screenshot]

    Just like on a Cisco, we can use:

    wr mem

    or

    copy running-config startup-config

    Note the prompt above – because the switch was restored to factory defaults, it is booting in a special mode that bypasses any existing configurations.  This is why it asks to confirm whether you want your next boot to use the current running/startup config.

    7) Set NTP server(s):

    [Screenshot]

    You will need to configure at least the “primary-server” if not also the “secondary-server” with an IP address as well as the PORT on the switch that will do the communication. In my case, I’ll be letting the mgta-port connect out, but this could easily be a data port on the switch as well. Do note that it requires an IP address, so you won’t be able to use DNS names such as “ntp1.netwise.ca”, unfortunately. Then, enable the NTP functionality.

    Command:

    ntp primary-server 10.0.0.11 mgta-port

    ntp enable

    You’ll note I made a typo, and used the wrong IP. That actually worked out well for the documentation:

    [Screenshot]

    When I changed the IP, you can see the console immediately display that it has updated the time.

    This is also a good time (pun intended) to set up your timezone.  You can use the “system timezone” command to be prompted via menus to select your numbered timezone.  As I had no clue what my number might be for Alberta (DST-7?), I ran through the wizard – then checked the running config:

    [Screenshot]

    There we go.  The command to set America/Canada/Mountain-Alberta as your timezone:

    system timezone 93

    8) Setup an admin user:

    [Screenshot]

    User access is a little different from a Cisco switch.  Here we need to set the name, enter a password, give the user a level, and then enable the user.  Note that you cannot enter the password at the command line – it will prompt you interactively.  So there’s no point putting a password in the config.

    Commands:

    access user 10 name nwadmin

    access user 10 password

    access user 10 level administrator

    access user 10 enable

    The running-config shows the password command as:

    access user 10 password "f2cbfe00a240aa00b396b7e361f009f2402cfac143ff32cb09efa7212f92cef2"

    Which suggests you must be able to provide the password at the command line, non-interactively.

    It is worth noting the built in “administrator” account has some specialty to it. To change this password you would use:

    access user administrator-password <password>

    Setting the password to blank (null) will disable the account. Similar also exists for “operator-password” for the “oper” account, but it is disabled by default.

    9) Setup SSH:

    At this point, the switches are on the network, but I’m still configuring them via serial console. If we attempt to connect to them, we’ll realize that SSH doesn’t work but Telnet does – which is generally expected.

    [Screenshot]

    Commands:

    ssh port 22

    ssh enable

    You should now be able to connect as the user you just created, AS WELL AS the default user – admin with a password of admin.

    10) Disable Telnet

    Now that we’ve configured SSH, let’s get rid of telnet. There is no equivalent “telnet disable”, but you can use “no …” commands.

    [Screenshot]

    Commands:

    no access telnet enable

    Note that my active Telnet sessions were closed, as indicated on the console.

    11) Set SNMP:

    My SNMP needs are basic – I largely use it for testing monitoring and management products. So we’ll just set a basic Read Only and Read Write community, and we’ll set it for SNMP v2 which is the most common:

    [Screenshot]

    Commands:

    snmp location "NetWise Lab"

    snmp name NW-IBMG8124B

    snmp read-community "nw-ro"

    snmp write-community "nw-rw"

    snmp version v1v2v3

    access snmp read-only

    access snmp read-write

    NOTE: The SNMP name will change the HOSTNAME, and should not include quotes.  This makes me believe it is ASSUMED to be the hostname – which is what most people set it to anyway.

    12) Configure HTTPS access:

    Some people like HTTPS configuration access; some see it as a security risk.  I’ll enable it so I have the option of seeing what it looks like.

    [Screenshot]

    Commands:

    access https enable

    If there is no self signed certificate, it will generate one.

    13) Configure DNS

    It would be nice if we could get DNS for hostname resolution. Nothing is worse than having to remember IP’s.

    [Screenshot]

    Commands:

    ip dns primary-server 10.0.0.11 mgta-port

    ip dns secondary-server 10.0.0.12 mgta-port

    ip dns domain-name netwise.ca

    14) Configure Spanning Tree

    Any good switch should do some manner of Spanning Tree.  As these will be my storage switches, we’ll ensure they are set to protect against loops, and also set Rapid Spanning Tree (RSTP).

    [Screenshot]

    Command:

    spanning-tree loopguard

    spanning-tree mode rstp

    15) Configure SysLog:

    [Screenshot]

    This is pretty simple, we simply point it at the IP and tell it to use the mgta-port.

    Command:

    logging host 1 address 10.0.0.10 mgta-port

    logging host 1 severity 7

    logging log all

    What is nice is that you can define a second one by specifying “host 2”.

    16) Backup the running config:

    [Screenshot]

    Configuring the switch isn’t much good if you don’t back up the configuration.  So we’ll make a copy of the config to our TFTP server.

    Command:

    copy running-config tftp address 10.0.0.48 filename NW-IBMG8124B_orig.cfg mgta-port

    It is worth noting that it does support standard FTP as well, if you desire.

    So if we take all of the above and put the commands together, we get:

    enable
    conf t
    interface ip-mgmt address 10.0.0.94
    interface ip-mgmt netmask 255.255.255.0
    interface ip-mgmt enable
    interface ip-mgmt gateway address 10.0.0.1
    interface ip-mgmt gateway enable
    hostname NW-IBMG8124A
    copy running-config startup-config
    ntp primary-server 10.0.0.11 mgta-port
    ntp enable
    access user 10 name nwadmin
    access user 10 password "f2cbfe00a240aa00b396b7e361f009f2402cfac143ff32cb09efa7212f92cef2"
    access user 10 level administrator
    access user 10 enable
    #access user administrator-password <ChangeMe>
    ssh port 22
    ssh enable
    no access telnet enable
    snmp location "NetWise Lab"
    snmp name NW-IBMG8124A
    snmp read-community "nw-ro"
    snmp write-community "nw-rw"
    snmp version v1v2v3
    access snmp read-only
    access snmp read-write
    access https enable
    ip dns primary-server 10.0.0.11 mgta-port
    ip dns secondary-server 10.0.0.12 mgta-port
    ip dns domain-name netwise.ca
    spanning-tree loopguard
    spanning-tree mode rstp
    logging host 1 address 10.0.0.10 mgta-port
    logging host 1 severity 7
    logging log all

    We now have a basically working switch, from a management perspective.  Next will be to get it passing some actual data!

     

    Some other interesting commands:

    While poking around with the (conf t) “list” command, which shows you all the command options, I found some interesting ones:

    boot cli-mode ibmnos-cli

    boot cli-mode iscli

    boot cli-mode prompt

    The ISCLI is the industry-standard, “Is Cisco-Like” CLI, which is why it seems familiar.  The other option is the IBMNOS-CLI, which is… probably painful.

     

    boot configuration-block active

    boot configuration-block backup

    boot configuration-block factory

    Here is how we can tell the switch to reset itself or boot clean.  It’s not immediately clear to me how this would be better than “erase startup-config” and “reload”, but it’s there.

     

    boot schedule friday hh:mm

    boot schedule monday hh:mm

    boot schedule saturday hh:mm

    boot schedule sunday hh:mm

    boot schedule thursday hh:mm

    boot schedule tuesday hh:mm

    boot schedule wednesday hh:mm

    I can’t think of many times I’ve wanted to schedule the reboot of switches on a weekly basis – or reasons why I’d need to, on a good switch.  But… maybe it’s to know that it WILL reboot when the time comes?  If you reboot it weekly, then you won’t be so timid about doing so after the uptime hits 300+ days and no one remembers if this is the switch that has startup issues.

     

    interface ip-mgta address A.B.C.D A.B.C.D A.B.C.D enable

    Not sure why I’d want multiple IP’s on the management interface – but you can.

    interface ip-mgta dhcp

    In case you want to set your management IP’s to DHCP. Which sounds like a fun way to have a bad day someday…

     

    ldap-server backdoor

    Not sure what on earth this does

     

    ldap-server domain WORD

    ldap-server enable

    ldap-server primary-host A.B.C.D mgta-port

    ldap-server secondary-host A.B.C.D mgta-port

    Need to look into what LDAP supports

     

    logging console severity <0-7>

    logging console

    Sets up how much is logged to the console

     

    logging host 1 address A.B.C.D mgta-port

    Configures syslog via the mgta-port

     

    logging log all

    Logs everything, but you can do very granular enablement.

     

    radius-server backdoor

    Not sure what on earth this does

    radius-server domain WORD

    radius-server enable

    radius-server primary-host A.B.C.D mgta-port

    radius-server secondary-host A.B.C.D mgta-port

    I’ll need to find the appropriate commands for both the switches as well as the RADIUS server to enable groups.

     

    virt vmware dpg update WORD WORD <1-4094>

    virt vmware dpg vmac WORD WORD

    virt vmware dvswitch add WORD WORD WORD

    virt vmware dvswitch add WORD WORD

    virt vmware dvswitch addhost WORD WORD

    virt vmware dvswitch adduplnk WORD WORD WORD

    virt vmware dvswitch del WORD WORD

    virt vmware dvswitch remhost WORD WORD

    virt vmware dvswitch remuplnk WORD WORD WORD

    virt vmware export WORD WORD WORD

    I understood the switch was virtualization aware – but this is going to need some deeper investigation!

    Categories: Hardware, Home Lab, IBM, RackSwitch