Sunday, February 23, 2020

Configuring SCSI3 Disk-Based I/O Fencing

Below are the steps to enable disk-based I/O fencing and verify the fencing configuration. You then observe registrations and reservations on both the coordinator disks and the data disks.
This article contains the following:
  • Fencing configuration pre-checks
  • Configuring VCS for I/O fencing
  • I/O fencing configuration verification
  • Verifying data disks for I/O fencing
Fencing configuration pre-checks
Perform several pre-checks before configuring I/O fencing.
1. Navigate to the /etc/sysconfig directory and examine the vxfen file to determine how I/O fencing is started and stopped at system startup and shutdown.

Note: These checks apply to both cluster systems but, strictly speaking, do not need to be performed on sys2 unless otherwise noted.
cd /etc/sysconfig
ls -l vxfen
-rw-r--r-- 1 root root 3396 Jun 16 20:24 vxfen
more vxfen
#
# This file is sourced from /opt/VRTSvcs/vxfen/bin/vxfen.
#
# Set the two environment variables below as follows:
#
# 1 = start or stop vxfen
# 0 = do not start or stop vxfen
#
VXFEN_START=0
VXFEN_STOP=0
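To confirm the same settings on both cluster systems in one pass, a minimal sketch (assuming passwordless ssh between the nodes, which this lab uses later) is:
# Show the start/stop settings on each node
for node in sys1 sys2; do
  echo "=== $node ==="
  ssh $node "grep -E '^VXFEN_(START|STOP)=' /etc/sysconfig/vxfen"
done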
2. Navigate to the /etc directory and recursively list the I/O fencing configuration files whose names start with vxfen. List the contents of any subdirectories found.
cd /etc
ls -lR vxfen*
vxfen.d:
total 28
-r-xr--r-- 1 root root 1274 Jun 2 17:17 README
drwxr-xr-x 2 root root 4096 Jun 16 20:22 script
-rwxr--r-- 1 root root 4662 Jun 2 17:17 vxfenmode_cps
-r-xr--r-- 1 root root 323 Jun 2 17:17 vxfenmode_disabled
-r-xr--r-- 1 root root 323 Jun 2 17:17 vxfenmode_majority
-r-xr--r-- 1 root root 506 Jun 2 17:17 vxfenmode_scsi3_dmp
vxfen.d/script:
total 44
-r-xr--r-- 1 root root 64744 Jun 2 17:17 vxfen_scriptlib.sh
3. Is the I/O fencing GAB port open?

GAB port b is not open.
gabconfig -a
GAB Port Memberships
====================================================
Port a gen 8da501 membership 01
Port h gen 8da504 membership 01
4. Determine the value of the cluster attribute named UseFence.
UseFence value: NONE

haclus -value UseFence
NONE
5. Use the vxfenadm command to display the I/O fencing status. Use the vxfenconfig command to determine whether there are any coordination points in use.
vxfenadm -d
VXFEN vxfenadm ERROR V-11-2-1101 Open failed for device: /dev/vxfen
vxfenconfig -l
VXFEN vxfenconfig ERROR V-11-2-1002 Open failed for device: /dev/vxfen with error 2
Note: These errors are expected when I/O fencing has not been configured.
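For convenience, the pre-checks above can be rerun as one short script; this sketch simply repeats the same commands and captures the expected errors:
# I/O fencing pre-check summary (run as root)
echo "--- GAB port memberships ---"; gabconfig -a
echo "--- UseFence cluster attribute ---"; haclus -value UseFence
echo "--- vxfen driver status ---"; vxfenadm -d 2>&1
echo "--- coordination points ---"; vxfenconfig -l 2>&1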
Configuring VCS for I/O fencing

Here you configure VCS for SCSI3 disk-based I/O fencing.
Log in to sys1 as root/pwd.
1. Navigate to the /opt/VRTS/install directory and list the contents.

cd /opt/VRTS/install
ls -l
total 16
drwxr-x--- 3 root root 4096 Jun 16 20:24 bin
-rwxr-x--- 1 root root 1666 Jun 16 20:24 installer
drwxr-xr-x 11 root root 4096 Jun 16 20:57 logs
-rwxr-x--- 1 root root 131 Jun 16 20:24 minstaller
-rwxr-x--- 1 root root 1829 Jun 16 20:24 showversion

2. Use the installer script with the fencing option to begin configuring VCS for I/O fencing. Use the following information.

  • Configure disk-based fencing.
  • Acknowledge that VCS will be restarted.
  • Confirm that you have SCSI3 PR enabled disks.
    ./installer -fencing
  1. To enter the name of one system in the VCS cluster for which you would like to configure I/O fencing, type: sys1
  2. Observe the system checks.
    Checking communication on sys1 …………………………. Done
    Checking release compatibility on sys1 …………………… Done
    Checking InfoScale Enterprise installation on sys1 .Version 7.3.0.000
  3. Observe the cluster information verification
    Cluster information verification:
    Cluster Name: west
    Cluster ID Number: 5
    Systems: sys1 sys2
  4. To configure I/O fencing on the cluster, type: y
  5. Observe the cluster verification checks.
    Checking communication on sys1 …………………………. Done
    Checking release compatibility on sys1 ………………….. Done
    Checking InfoScale Enterprise installation on sys1 .Version 7.3.0.000
    Checking communication on sys2 ………………………….. Done
    Checking release compatibility on sys2 …………………… Done
    Checking InfoScale Enterprise installation on sys2 .Version 7.3.0.000
    Checking configured component …………………………. Done
  6. To configure disk-based fencing, type: 2
  7. To acknowledge that fencing requires a restart of VCS and to continue, type: y
  8. To confirm that you have SCSI3 PR enabled disks, type: y

 3. Continue configuring VCS for I/O fencing using the following information.

  • Create a new disk group using the disks:
    • emc0_d10
    • emc0_d11
    • emc0_d12
  • Disk group name: westfendg
  • Fencing disk policy: dmp
  • Stop VCS and apply fencing configuration on all nodes at this time.
  1. To create a new disk group, type: 1
  2. To initialize more disks as VxVM disks, type: y
  3. Observe the list of disks which can be initialized as VxVM disks.
    1) emc0_dd6
    2) emc0_dd7
    3) emc0_dd8
    4) emc0_dd9
    5) emc0_d10
    6) emc0_d11
    7) emc0_d12
    ……
    ……
  4. If necessary, press Enter to view all available disks.
  5. To enter the disk options, separated by spaces, type: 5 6 7
  6. Observe these disks being initialized and then listed for selection.
  7. To select an odd number of disks (at least three) to form a disk group, type: 1 2 3
  8. To enter the new disk group name, type: westfendg
  9. To continue with the selected disk group, type: y
  10. To confirm that the I/O fencing configuration is correct, type: y
  11. To stop VCS and apply fencing configuration on all nodes at this point, type: y
  12. Observe the steps taken to complete the I/O fencing configuration.
    Stopping VCS on sys1……………………….Done
    Stopping VCS on sys2……………………….Done
    Starting Fencing on sys1……………………Done
    Starting Fencing on sys2……………………Done
    Updating main.cf with fencing……………….Done
    Starting VCS on sys1……………………….Done
    Starting VCS on sys2……………………….Done
  13. Type n to avoid configuring Coordination Point Agent on the client cluster.
    Do you want to configure Coordination Point Agent on the client cluster? y,n,q n
    I/O Fencing configuration………………… Done
  14. Type n to avoid sending information to Veritas.

4. Complete the configuration of VCS for I/O fencing by reviewing the summary file. Optionally, examine the log file.

To view the summary file, type: y
Note: Press Enter as necessary to view the output.
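The installer writes its summary and log files under /opt/VRTS/install/logs, the directory listed in step 1. A quick way to spot the files from the most recent run (a convenience sketch only):
ls -lt /opt/VRTS/install/logs | head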

I/O fencing configuration verification

Here you verify the I/O fencing configuration.
Log in to sys1 as root/pwd.
 1. Navigate to the /etc/sysconfig directory and examine the vxfen file to determine how I/O fencing is now configured to be started and stopped at system startup and shutdown.

Note: These checks should be performed on both cluster systems, but for purposes of this lab, checking them on just sys1 is okay unless otherwise directed.
Will I/O fencing be started on system startup? Will I/O fencing be stopped at system shutdown?
I/O fencing will be started on system boot and stopped on system shutdown.
cd /etc/sysconfig
more vxfen
#
# This file is sourced from /opt/VRTSvcs/vxfen/bin/vxfen.
# Set the two environment variables below as follows:
#
# 1 = start or stop vxfen
# 0 = do not start or stop vxfen
#
VXFEN_START=1
VXFEN_STOP=1

2. Navigate to the /etc directory and list the I/O fencing configuration files whose names start with vxfen. Display the contents of each file. Ignore the vxfen.d subdirectory, as it was examined earlier. Which file contains the name of the coordinator disk group used during I/O fencing configuration? What is in the vxfentab file?
The vxfendg file contains the name of the coordinator disk group. The
vxfentab file lists the coordinator disks for this configuration.

cd /etc
ls -ld vxfen*
drwxr-xr-x 3 root root 4096 Jun 16 20:22 vxfen.d
-rw-r--r-- 1 root root 10 Jun 24 13:44 vxfendg
-r-xr--r-- 1 root root 506 Jun 24 13:44 vxfenmode
-rw-r--r-- 1 root root 306 Jun 24 13:44 vxfentab
more vxfendg
westfendg
more vxfenmode
#
# vxfen_mode determines in what mode VCS I/O Fencing should work.
#
# available options:
# scsi3 - use scsi3 persistent reservation disks
# customized - use script based customized fencing
# majority - use majority based fencing
# disabled - run the driver but don't do any actual fencing
#
vxfen_mode=scsi3
#
# scsi3_disk_policy determines the way in which I/O Fencing communicates with the coordination disks.
#
# available options:
# dmp - use dynamic multipathing
#
scsi3_disk_policy=dmp
more vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/emc0_d10 …
/dev/vx/rdmp/emc0_d11 …
/dev/vx/rdmp/emc0_d12 …
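As a quick cross-check (a sketch only), confirm that the coordinator disk group named in /etc/vxfendg matches the devices listed in /etc/vxfentab:
# Coordinator disk group name recorded during fencing configuration
cat /etc/vxfendg
# Disks belonging to that (deported) disk group
vxdisk -o alldgs list | grep "$(cat /etc/vxfendg)"
# Device paths the vxfen driver uses, with comment lines stripped
grep -v '^#' /etc/vxfentab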

3. Determine if the I/O fencing GAB port is open.

gabconfig -a
GAB Port Memberships
=========================================================
Port a gen 8da501 membership 01
Port b gen 8da507 membership 01
Port h gen 8da50a membership 01

4. Determine the value of the cluster attribute named UseFence.

haclus -value UseFence
SCSI3
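Because the installer reported "Updating main.cf with fencing", you can also confirm the attribute directly in the cluster configuration file; the sketch below assumes the standard VCS configuration path:
grep -i usefence /etc/VRTSvcs/conf/config/main.cf
# Expected: UseFence = SCSI3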

5. Use the vxfenadm command to display I/O fencing status and use the vxfenconfig command to determine if there are any devices in use as coordination disks or coordination points.

vxfenadm -d
I/O Fencing Cluster Information:
================================
Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp
Cluster Members:
* 0 (sys1)
1 (sys2)
RFSM State Information:
node 0 in state 8 (running)
node 1 in state 8 (running)
vxfenconfig -l
I/O Fencing Configuration Information:
======================================
Single Disk Flag : 0
Count : 3
Disk List
Disk Name Major Minor Serial Number Policy
/dev/vx/rdmp/emc0_d12 201 176 512345600000000B dmp
/dev/vx/rdmp/emc0_d11 201 32 512345600000000A dmp
/dev/vx/rdmp/emc0_d10 201 160 5123456000000009 dmp
Note: The order of the command output may differ from what is displayed in this sample output.

6. Display the state of the coordinator disk group on sys1.
vxdisk -o alldgs list | grep westfendg
emc0_d10 auto:cdsdisk - (westfendg) online
emc0_d11 auto:cdsdisk - (westfendg) online
emc0_d12 auto:cdsdisk - (westfendg) online
7. Display the state of the coordinator disk group on sys2. Is it imported or deported on all cluster nodes?
The westfendg disk group is deported on all cluster nodes.

ssh sys2 vxdisk -o alldgs list | grep westfendg
emc0_d10 auto:cdsdisk - (westfendg) online
emc0_d11 auto:cdsdisk - (westfendg) online
emc0_d12 auto:cdsdisk - (westfendg) online

8. From sys1, use the vxdisk list emc0_d10 | grep flags command to confirm that the coordinator flag is set. The same flag is set on all coordinator disks and on the coordinator disk group; a loop that checks all three disks is sketched after the output below.

vxdisk list emc0_d10 | grep flags
flags: online ready private autoconfig coordinator
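A short loop (disk names as in this lab) to check the flag on all three coordinator disks at once:
for disk in emc0_d10 emc0_d11 emc0_d12; do
  echo "=== $disk ==="
  vxdisk list $disk | grep flags
done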

9. Use the vxdisk path | grep emc0_d10 command to determine how many DMP I/O paths there are to each coordinator disk, using the emc0_d10 coordinator disk as an example. Then, display the registrations placed on the coordinator disks. How many registrations are placed on each disk? Are any reservations placed on the coordinator disks?
There are two I/O paths per disk. Since there are two systems in the cluster, there are 2 x 2 = 4 registrations on each coordinator disk. No reservations are placed on the coordinator disks.

vxdisk path | grep emc0_d10
sdv emc0_d10 - - ENABLED
sdw emc0_d10 - - ENABLED
vxfenadm -s all -f /etc/vxfentab
Device Name: /dev/vx/rdmp/emc0_d10
Total Number Of Keys: 4
key[0]:
[Numeric Format]: 86,70,48,48,48,53,48,48
[Character Format]: VF000500
*[Node Format]: Cluster ID: 5 Node ID: 0 Node Name: sys1
key[1]:
[Numeric Format]: 86,70,48,48,48,53,48,48
[Character Format]: VF000500
*[Node Format]: Cluster ID: 5 Node ID: 0 Node Name: sys1
key[2]:
[Numeric Format]: 86,70,48,48,48,53,48,49
[Character Format]: VF000501
*[Node Format]: Cluster ID: 5 Node ID: 1 Node Name: sys2
key[3]:
[Numeric Format]: 86,70,48,48,48,53,48,49
[Character Format]: VF000501
*[Node Format]: Cluster ID: 5 Node ID: 1 Node Name: sys2
Device Name: /dev/vx/rdmp/emc0_d11
Total Number Of Keys: 4
key[0]:
[Numeric Format]: 86,70,48,48,48,53,48,48
[Character Format]: VF000500
*[Node Format]: Cluster ID: 5 Node ID: 0 Node Name: sys1
key[1]:
[Numeric Format]: 86,70,48,48,48,53,48,48
[Character Format]: VF000500
*[Node Format]: Cluster ID: 5 Node ID: 0 Node Name: sys1
key[2]:
[Numeric Format]: 86,70,48,48,48,53,48,49
[Character Format]: VF000501
*[Node Format]: Cluster ID: 5 Node ID: 1 Node Name: sys2
key[3]:
[Numeric Format]: 86,70,48,48,48,53,48,49
[Character Format]: VF000501
*[Node Format]: Cluster ID: 5 Node ID: 1 Node Name: sys2
Device Name: /dev/vx/rdmp/emc0_d12
Total Number Of Keys: 4
key[0]:
[Numeric Format]: 86,70,48,48,48,53,48,48
[Character Format]: VF000500
*[Node Format]: Cluster ID: 5 Node ID: 0 Node Name: sys1
key[1]:
[Numeric Format]: 86,70,48,48,48,53,48,48
[Character Format]: VF000500
*[Node Format]: Cluster ID: 5 Node ID: 0 Node Name: sys1
key[2]:
[Numeric Format]: 86,70,48,48,48,53,48,49
[Character Format]: VF000501
*[Node Format]: Cluster ID: 5 Node ID: 1 Node Name: sys2
key[3]:
[Numeric Format]: 86,70,48,48,48,53,48,49
[Character Format]: VF000501
*[Node Format]: Cluster ID: 5 Node ID: 1 Node Name: sys2
vxfenadm -r all -f /etc/vxfentab
Device Name: /dev/vx/rdmp/emc0_d10
Total Number Of Keys: 0
No keys…
Device Name: /dev/vx/rdmp/emc0_d11
Total Number Of Keys: 0
No keys…
Device Name: /dev/vx/rdmp/emc0_d12
Total Number Of Keys: 0
No keys…
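The registration count is simple arithmetic: DMP paths per disk multiplied by the number of cluster nodes (2 x 2 = 4 here), and each coordinator key encodes the cluster ID and node ID ("VF" + cluster ID 0005 + node ID 00 for sys1). The sketch below checks the arithmetic and decodes a key; it uses this lab's disk and node names and is a convenience only:
# DMP paths to one coordinator disk (expect 2 in this lab)
paths=$(vxdisk path | grep -c emc0_d10)
nodes=2    # sys1 and sys2
echo "Expected registrations per coordinator disk: $((paths * nodes))"
# Compare with what vxfenadm reports for that disk
vxfenadm -s /dev/vx/rdmp/emc0_d10 | grep "Total Number Of Keys"
# Decode a key's numeric format into ASCII (86,70,48,48,48,53,48,48 -> VF000500)
echo "86,70,48,48,48,53,48,48" | awk -F, '{for (i = 1; i <= NF; i++) printf "%c", $i + 0; print ""}'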

Verifying data disks for I/O fencing
1. Determine the system where the appsg service group is online. Confirm that the loopydatadg disk group is imported.

Note: The appsg service group should be online on sys1. If not, switch the appsg service group to sys1.
hagrp -state appsg
#Group Attribute System Value
appsg State sys1 |ONLINE|
appsg State sys2 |OFFLINE|
vxdisk -o alldgs list | grep loopy
emc0_dd5 auto:cdsdisk loopydatadg01 loopydatadg online

2. Use the vxfenadm command to display the I/O fencing reservations and registrations on the loopy data disk. Why do the numbers of registrations and reservations differ from those seen on the coordinator disks in the previous exercise?
The data disks are part of regular Volume Manager disk groups and so are imported on only one system at a time (unlike shared cluster Volume Manager disk groups, which can be imported on multiple cluster systems at once). Therefore, I/O fencing creates registrations only for the node on which the disk group is imported, with one registration per DMP path on that node. In this lab environment, there are two registrations for the two paths on the one node on which the disk group is imported. Only a single reservation is needed regardless of the number of DMP paths; I/O fencing sets this reservation for the node on which the disk group is imported. The coordinator disk group is not used as a writable disk group, so I/O fencing does not create reservations on the coordinator disks, but it does manage registrations from multiple nodes, as noted earlier.

vxfenadm -s /dev/vx/rdmp/emc0_dd5
Reading SCSI Registration Keys…
Device Name: /dev/vx/rdmp/emc0_dd5
Total Number Of Keys: 2
key[0]:
[Numeric Format]: 65,86,67,83,0,0,0,0
[Character Format]: AVCS
Use only the numeric format to perform operations. The key has null characters which are represented as spaces in the character format.
[Node Format]: Cluster ID: unknown Node ID: 0 Node Name: sys1
key[1]:
[Numeric Format]: 65,86,67,83,0,0,0,0
[Character Format]: AVCS
Use only the numeric format to perform operations. The key has null characters which are represented as spaces in the character format.
[Node Format]: Cluster ID: unknown Node ID: 0 Node Name: sys1
vxfenadm -r /dev/vx/rdmp/emc0_dd5
Reading SCSI Reservation Information…
Device Name: /dev/vx/rdmp/emc0_dd5
Total Number Of Keys: 1
Key[0]:
Reservation Type: SCSI3_RESV_WRITEEXCLUSIVEREGISTRANTSONLY
Key Value [Numeric Format]: 65,86,67,83,0,0,0,0
Key Value [Character Format]: AVCS

Log in to sys2 as the root user.
3. From sys2, use the vxfenadm command to display the I/O fencing registrations and reservation on the loopy data disk even though the disk group containing the data disk is not imported on sys2. Are there any differences in the output compared to that on sys1?
There are no differences in the output.
vxfenadm -s /dev/vx/rdmp/emc0_dd5
Reading SCSI Registration Keys…
Device Name: /dev/vx/rdmp/emc0_dd5
Total Number Of Keys: 2
key[0]:
[Numeric Format]: 65,86,67,83,0,0,0,0
[Character Format]: AVCS
Use only the numeric format to perform operations. The key has null characters which are represented as spaces in the character format.
[Node Format]: Cluster ID: unknown Node ID: 0 Node Name: sys1
key[1]:
[Numeric Format]: 65,86,67,83,0,0,0,0
[Character Format]: AVCS
Use only the numeric format to perform operations. The key has null characters which are represented as spaces in the character format.
[Node Format]: Cluster ID: unknown Node ID: 0 Node Name: sys1
vxfenadm -r /dev/vx/rdmp/emc0_dd5
Reading SCSI Reservation Information…
Device Name: /dev/vx/rdmp/emc0_dd5
Total Number Of Keys: 1
Key[0]:
Reservation Type: SCSI3_RESV_WRITEEXCLUSIVEREGISTRANTSONLY
Key Value [Numeric Format]: 65,86,67,83,0,0,0,0
Key Value [Character Format]: AVCS
4. Switch the appsg service group to sys2 and use the vxfenadm command to display the I/O fencing registrations and reservation on the loopy data disk. Now that the disk group containing the data disk is imported, are there any differences in the output from the previous step?
The registrations and reservation change to reflect that the disk group is
imported on sys2.

hagrp -state appsg
#Group Attribute System Value
appsg State sys1 |ONLINE|
appsg State sys2 |OFFLINE|
hagrp -switch appsg -to sys2
hagrp -state appsg
#Group Attribute System Value
appsg State sys1 |OFFLINE|
appsg State sys2 |ONLINE|
Note: Do not continue until appsg is online on sys2.
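If you prefer to have the shell block until the switch completes instead of re-running hagrp -state, the standard VCS wait option can be used; this is only a convenience sketch, and the 300-second timeout is arbitrary:
hagrp -wait appsg State ONLINE -sys sys2 -time 300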
vxdisk -o alldgs list | grep loopy
emc0_dd5 auto:cdsdisk loopydatadg01 loopydatadg online
vxfenadm -s /dev/vx/rdmp/emc0_dd5
Reading SCSI Registration Keys…
Device Name: /dev/vx/rdmp/emc0_dd5
Total Number Of Keys: 2
key[0]:
[Numeric Format]: 66,86,67,83,0,0,0,0
[Character Format]: BVCS
Use only the numeric format to perform operations. The key has null characters which are represented as spaces in the character format.
[Node Format]: Cluster ID: unknown Node ID: 1 Node Name: sys2
key[1]:
[Numeric Format]: 66,86,67,83,0,0,0,0
[Character Format]: BVCS
Use only the numeric format to perform operations. The key has null characters which are represented as spaces in the character format.
[Node Format]: Cluster ID: unknown Node ID: 1 Node Name: sys2
vxfenadm -r /dev/vx/rdmp/emc0_dd5
Reading SCSI Reservation Information…
Device Name: /dev/vx/rdmp/emc0_dd5
Total Number Of Keys: 1
Key[0]:
Reservation Type: SCSI3_RESV_WRITEEXCLUSIVEREGISTRANTSONLY
Key Value [Numeric Format]: 66,86,67,83,0,0,0,0
Key Value [Character Format]: BVCS
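Comparing the sys1 and sys2 outputs, the first character of the data-disk key changes with the node ID ('A' for node 0, 'B' for node 1), followed by "VCS" and null padding. A one-line sketch to decode the numeric format (awk's %c prints the character for a numeric value in common awk implementations):
echo "66,86,67,83,0,0,0,0" | awk -F, '{for (i = 1; i <= NF; i++) if ($i + 0 > 0) printf "%c", $i + 0; print ""}'
# Prints BVCS, the key registered by node 1 (sys2)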

For any queries, you can email me at samiappsdba@gmail.com.