HOWTO: Reconfigure a NetApp FAS2020HA Pair to use internal disks vs external DS14MK2 shelf

I recently was able to obtain a set of twelve 300GB SAS disks for a FAS2020, which I happen to have in my home lab. While DS14MK2 shelves are great for expansion, one of my personal goals is to keep my home lab within the 24U rack I have. That meant finding a way to run the NetApp FAS2020HA in 2U on internal disks, instead of 5U for a chassis with no internal drives plus at least one DS14MK2 (2U + 3U). 3U isn’t a lot of space in a 42U enterprise rack, but in 24U it goes pretty quickly.

This is the process I used to move my aggregates and volumes, including the root vol0, from the external shelf to the internal disks. Note, though, that this covers only that portion and not migrating the data volumes (e.g., NFS volumes, LUN volumes, etc.). There are many ways to do this, but because I was uncertain and fiddling around, I opted to allow myself to be completely disruptive if need be, and offloaded all of my VMs on NFS to local ESXi disks for the interim. Better safe than sorry.

1) A good portion of this is going to need Advanced mode, so:

FILERA> priv set advanced

FILERB> priv set advanced

2) If the disks are detected as having come from another filer (i.e., they still contain foreign aggregates), then you may need to:

a) Get the status of the aggregates: aggr status

b) Look for any foreign aggregates, which will usually show up with a name like “aggr0(1)”

c) Destroy those foreign aggregates: aggr destroy aggr0(1)

3) Get your list of disks:

FILERA*> disk show

DISK      OWNER               POOL   SERIAL NUMBER
--------- ------------------- -----  --------------------
0a.26     FILERB(135048xxx)   Pool0  A81G4Mxx
0a.23     FILERB(135048xxx)   Pool0  A81H4Lxx
0a.19     FILERB(135048xxx)   Pool0  A81GMVxx
0c.00.1   FILERA(135022xxx)   Pool0  3LM3LNFW00009834Lxxx
0c.00.8   FILERA(135022xxx)   Pool0  3LM3LNYE00009834Lxxx
0c.00.3   FILERA(135022xxx)   Pool0  3LM3SXDY00009833Gxxx
0c.00.5   FILERA(135022xxx)   Pool0  3LM3SBH400009834Mxxx
0c.00.6   FILERA(135022xxx)   Pool0  3LM3SASD00009833Kxxx
0c.00.10  FILERA(135022xxx)   Pool0  3LM3LP0Y00009833Kxxx
0c.00.7   FILERA(135022xxx)   Pool0  3LM3SAPE00009833Kxxx
0c.00.2   FILERA(135022xxx)   Pool0  3LM3LMB300009834Lxxx
0c.00.0   FILERA(135022xxx)   Pool0  3LM3SAN800009833Kxxx
0c.00.9   FILERA(135022xxx)   Pool0  3LM3LPZA00009834Lxxx
0c.00.4   FILERA(135022xxx)   Pool0  3LM3LPA000009834Lxxx
0c.00.11  FILERA(135022xxx)   Pool0  3LM3SAHF00009834Lxxx

The internal drives show up as 0c.00.0 through 0c.00.11 on my system (the 0a.* disks above are on the external DS14MK2 shelf). The exact device names will depend on how many shelves you have and how they are configured.
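If disk.auto_assign has been turned off, or newly inserted disks haven't been picked up yet, it can also be handy to list only the unowned disks. In 7-mode this should do it:

FILERA*> disk show -n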

4) Assign the disks to the filer heads. You may assign individual disks or all of them. If you have two heads and are doing a 10/2 disk split, then the following could be done:

FILERA*> disk remove_ownership 0c.00.10

Disk 0c.00.10 will have its ownership removed

Note: Disks may be automatically assigned to this node, since option disk.auto_assign is on.

Volumes must be taken offline. Are all impacted volumes offline(y/n)?? y

FILERA*> disk remove_ownership 0c.00.11

Disk 0c.00.11 will have its ownership removed

Note: Disks may be automatically assigned to this node, since option disk.auto_assign is on.

Volumes must be taken offline. Are all impacted volumes offline(y/n)?? y

Note the message – because “disk.auto_assign” is on, this controller will quickly seize the disks again. You either need to do the next part on FILERB very quickly, or turn the option off to give yourself some breathing room:

FILERA> options disk.auto_assign off
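With no value, the options command just reports the current setting, so you can confirm it took (and turn it back on once you're done shuffling disks):

FILERA> options disk.auto_assign

FILERA> options disk.auto_assign on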

Then on FILERB, grab those two now-unowned disks and assign them (since they are the only unowned disks, “all” does the trick here):

FILERB*> disk assign all -o FILERB

Fri Mar 22 00:22:13 MDT [FILERB: diskown.changingOwner:info]: changing ownership for disk 0c.00.10 (S/N 3LM3LP0Y00009833xxx) from unowned (ID -1) to FILERB (ID 135048xxx)

Fri Mar 22 00:22:13 MDT [FILERB: diskown.changingOwner:info]: changing ownership for disk 0c.00.11 (S/N 3LM3SAHF00009834Lxxx) from unowned (ID -1) to FILERB (ID 135048xxx)

5) Create the RAID-4 aggregate on FILERB:

FILERB*> aggr create aggr2 -t raid4 -d 0c.00.11 0c.00.10

aggregate has been created with 1 disk added to the aggregate. 1 more disk needs
to be zeroed before addition to the aggregate. The process has been initiated
and you will be notified via the system log as the remaining disks are added.
Note however, that if system reboots before the disk zeroing is complete, the
volume won’t exist.

Then confirm it is built:

FILERB*> aggr status aggr2 -r

Aggregate aggr2 (creating, raid4, initializing) (block checksums)
  Plex /aggr2/plex0 (offline, empty, active)

Targeted to traditional volume or aggregate but not yet assigned to a raid group

RAID Disk Device    HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)    Phys (MB/blks)
--------- ------    ------------- ---- ---- ---- ----- --------------    --------------
pending   0c.00.10  0c  0     10  SA:A  0   SAS  15000 272000/557056000  280104/573653840 (zeroing, 3% done)
pending   0c.00.11  0c  0     11  SA:A  0   SAS  15000 272000/557056000  280104/573653840

6) Create the RAID-DP aggregate on FILERA, using 9 of the 10 disks and leaving one as a hot spare:

FILERA*> aggr create aggr2 -t raid_dp -d 0c.00.1 0c.00.2 0c.00.3 0c.00.4 0c.00.5 0c.00.6 0c.00.7 0c.00.8 0c.00.9

aggregate has been created with 5 disks added to the aggregate. 4 more disks need
to be zeroed before addition to the aggregate. The process has been initiated
and you will be notified via the system log as the remaining disks are added.
Note however, that if system reboots before the disk zeroing is complete, the
volume won’t exist.

And confirm it is built:

FILERA*> aggr status aggr2 -r

Aggregate aggr2 (creating, raid_dp, initializing) (block checksums)
  Plex /aggr2/plex0 (offline, empty, active)

Targeted to traditional volume or aggregate but not yet assigned to a raid group

RAID Disk Device   HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)    Phys (MB/blks)
--------- ------   ------------- ---- ---- ---- ----- --------------    --------------
pending   0c.00.1  0c  0     1   SA:B  0   SAS  15000 272000/557056000  280104/573653840
pending   0c.00.2  0c  0     2   SA:B  0   SAS  15000 272000/557056000  280104/573653840
pending   0c.00.3  0c  0     3   SA:B  0   SAS  15000 272000/557056000  280104/573653840
pending   0c.00.4  0c  0     4   SA:B  0   SAS  15000 272000/557056000  280104/573653840
pending   0c.00.5  0c  0     5   SA:B  0   SAS  15000 272000/557056000  280104/573653840 (zeroing, 1% done)
pending   0c.00.6  0c  0     6   SA:B  0   SAS  15000 272000/557056000  280104/573653840 (zeroing, 1% done)
pending   0c.00.7  0c  0     7   SA:B  0   SAS  15000 272000/557056000  280104/573653840 (zeroing, 1% done)
pending   0c.00.8  0c  0     8   SA:B  0   SAS  15000 272000/557056000  280104/573653840
pending   0c.00.9  0c  0     9   SA:B  0   SAS  15000 272000/557056000  280104/573653840 (zeroing, 1% done)
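Since 0c.00.0 was intentionally left out of the aggregate, it should now show up as a hot spare on FILERA. You can confirm that with:

FILERA*> aggr status -s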

7) Once the above is complete, you’ll have a new “aggr2” on each controller. Next, we need to migrate off the external shelf so that everything lives on the internal disks. To do that, we’ll move the “vol0” root volume from the existing “aggr0” over to “aggr2”, so that “aggr0” can later be removed. The detail for this can be found in NetApp KB 1010097 – https://kb.netapp.com/support/index?page=content&id=1010097.
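One step from that KB worth calling out explicitly: before running the ndmpcopy below, each controller needs a new root-sized volume on its new aggregate to copy into. The name “vol0new” matches what’s used in the following steps; the 16g size is just an example, so check the minimum root volume size for your platform and Data ONTAP release:

FILERA> vol create vol0new aggr2 16g

FILERB> vol create vol0new aggr2 16g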

8) Turn on the NDMP daemon:

FILERA> ndmpd on

FILERB> ndmpd on

9) Use NDMP to copy the /etc and /home folders:

FILERA> ndmpcopy /etc /vol/vol0new/etc

FILERA> ndmpcopy /home /vol/vol0new/home

FILERB> ndmpcopy /etc /vol/vol0new/etc

FILERB> ndmpcopy /home /vol/vol0new/home

10) Terminate CIFS. If CIFS is left running, it will automatically update the CIFS shares to point at the renamed volume. We don’t want that; we want the shares to keep using the “vol0” path, which will become the new vol0 on the new aggregate.
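On 7-mode, stopping CIFS on each controller is simply:

FILERA> cifs terminate

FILERB> cifs terminate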

11) Rename the volumes to make everything ‘proper’ again, and set the new one as the root volume:

FILERA> vol rename vol0 vol0old

FILERA> vol rename vol0new vol0

FILERA> vol options vol0 root

FILERB> vol rename vol0 vol0old

FILERB> vol rename vol0new vol0

FILERB> vol options vol0 root

12) Update CIFS so it knows about the new vol0:

FILERA> cifs restart

FILERA> cifs shares

FILERA> exportfs

FILERB> cifs restart

FILERB> cifs shares

FILERB> exportfs

13) Reboot both filers so the new root volume takes effect. If you’re in a maintenance window, you can do both together. To do it non-disruptively, reboot one at a time and let CF takeover/giveback carry you through:

FILERA> reboot

FILERB(takeover)> cf giveback

FILERB> reboot

FILERA(takeover)> cf giveback

Seriously though, don’t reboot your controllers unless it’s okay to do so, even if “some random internet guy” said it’s okay. :)
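Once both controllers are back and healthy, and you’ve confirmed nothing you need is left on the external shelf, the old root volumes and the old aggregate can be cleaned up so the DS14MK2 can be retired. Something along these lines (the names are the ones from this walkthrough; repeat on FILERB):

FILERA> vol offline vol0old

FILERA> vol destroy vol0old

FILERA> aggr offline aggr0

FILERA> aggr destroy aggr0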
