Thursday, November 28, 2019

Veritas InfoScale Enterprise 7.3 Cluster Setup

Step-by-step installation of Veritas InfoScale Enterprise 7.3 on the two-node cluster machines ed-olvrtslin1 and ed-olvrtslin2 (referred to as sys1 and sys2 in the steps below) using the Common Product Installer (CPI), with configuration of Storage Foundation and High Availability (SFHA). The 'west' cluster, made up of both nodes, is then placed under Veritas InfoScale Operations Manager (VIOM) control.

Installing InfoScale Enterprise using the Common Product Installer (CPI)

1. Navigate to and display a long listing of the /software/is/is73/rhel7_x86_64 directory.
cd /software/is/is73/rhel7_x86_64
ls -l
2. Begin the installation and configuration of InfoScale Enterprise 7.3 on sys1 and sys2 as follows:

  1. Start the installer script and select I for the Install a Product option.
    ./installer
  2. Select 4 for the Veritas InfoScale Enterprise option.
  3. Type y when asked Would you like to configure InfoScale Enterprise after installation?.
  4. Select 3 for Storage Foundation and High Availability (SFHA).
  5. Type y to agree to the terms of the End User License Agreement (EULA).
  6. Type the names of the two lab systems, sys1 and sys2, when prompted.
  7. Observe that the following checks complete successfully:
  • System communications
  • Release compatibility
  • Installed product
  • Platform version
  • Prerequisite patches and rpms
  • File system free space
  • Configured component
  • Product prechecks

    Note: After the system checks complete successfully, if the installer prompts you to synchronize the systems with an NTP server, select mgt as the NTP server; otherwise, continue with step i below.
    i. When prompted, type y to stop InfoScale Enterprise processes now.
    j. Observe the list of packages being installed. The software installation may take up to 5 minutes to complete.
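    Note: If you want a quick sanity check that the InfoScale packages landed on a node before continuing, a simple rpm query (standard RHEL tooling, not part of the Veritas installer) lists them; the VRTS prefix is how the packages are named in this release:
    # List the installed Veritas (VRTS*) packages and count them
    rpm -qa | grep '^VRTS' | sort
    rpm -qa | grep -c '^VRTS'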

    3. When package installation completes, continue with the cluster configuration as follows:

    1. Select 2 for the Enable keyless licensing and complete system licensing later option.
    2. Select 4 for Veritas InfoScale Enterprise when asked which product you would like to register.
    3. Type n to avoid configuring I/O fencing in enabled mode.
      Do you want to configure I/O Fencing in enabled mode? n
    4. When prompted, press Enter to continue.
    5. Enter west as the unique cluster name.
      Veritas InfoScale Enterprise 7.3 Install Program
      sys1 sys2
      To configure VCS for InfoScale Enterprise the following information is required:
      A unique cluster name
      Two or more NICs per system used for heartbeat links
      A unique cluster ID number between 0-65535
      One or more heartbeat links are configured as private links
      You can configure one heartbeat link as a low-priority link
      NetworkManager services are stopped on all systems
      All systems are being configured to create one cluster.
      Enter the unique cluster name: [q,?] west
    4.  When asked to configure heartbeat links for the cluster, provide the following input step by step.

    1. Type 1 to configure heartbeat links using LLT over Ethernet.
      1) Configure heartbeat links using LLT over Ethernet
      2) Configure heartbeat links using LLT over UDP
      3) Configure heartbeat links using LLT over RDMA
      4) Automatically detect configuration for LLT over Ethernet
      b) Back to previous menu
      How would you like to configure heartbeat links? 1-4,b,q,? 1
    2. Type ens225 as the NIC for the first private heartbeat link on sys1.
    3. Type ens256 as the NIC for the second private heartbeat link on sys1.
    4. Type n to avoid configuring the third private heartbeat link.
    5. Type n to avoid configuring an additional low-priority heartbeat link.
    6. Type y to use the same NICs for private heartbeat links on all systems.
    7. Type 100 when prompted for a unique cluster ID.
    8. Type n when asked whether to check if the cluster ID is in use by another cluster.
    9. Verify the information you provided and type y if it is correct.

      Cluster information verification:
      Cluster Name: west
      Cluster ID Number: 100
      Private Heartbeat NICs for sys1:
      link1=ens225
      link2=ens256
      Private Heartbeat NICs for sys2:
      link1=ens225
      link2=ens256
      Is this information correct? y,n,q,? y
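      Note: Before confirming, it can help to verify on both nodes that the NICs chosen for the heartbeat links exist and are up. A minimal check using standard Linux commands (the interface names are the ones used in this environment):
      # Check the heartbeat NICs on the local node and on sys2
      ip link show ens225
      ip link show ens256
      ssh sys2 ip link show ens225
      ssh sys2 ip link show ens256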
     5. Continue the installation and configuration of InfoScale Enterprise by configuring the virtual IP address of the cluster, secure mode, users, SMTP notifications and SNMP notifications as follows:
    1. Type y to configure the Virtual IP.
    2. Type ens162 when prompted to enter the NIC for Virtual IP of the cluster to use on sys1.
    3. Type y to specify ens162 as the public NIC used by all systems.
    4. Type 10.10.2.51 when asked to specify the Virtual IP address for the Cluster.
    5. Press Enter to accept the default 255.255.255.0 as the NetMask for IP 10.10.2.51.
    6. Type y to confirm that the Cluster Virtual IP information is correct.
    7. Type y to configure the VCS cluster in secure mode.
    8. Type n to avoid granting read access to everyone.
    9. Type n to avoid providing any user groups that you would like to grant read access to.
    10. Type 1 to select the Configure the cluster in secure mode option (this is secure mode but not with FIPS).
    11. Type n to avoid configuring SMTP notification.
    12. Type n to avoid configuring SNMP notification.
    13. Type n to avoid configuring the Global Cluster Option.

      6. When prompted, type y to stop InfoScale Enterprise processes now.

      7. Observe the installation as InfoScale Enterprise processes are stopped, the SFHA configuration is performed, and the processes are started.

      8. Type n to avoid sending the installation information to Veritas.

      9. Complete the installation and configuration of InfoScale Enterprise by recording the directory within /opt/VRTS/install/logs that contains the installer log files, summary file, and response file, and by viewing the summary file. The directory name is of the form installer-yyyymmddhhmmxxx.
      a. Notice that the installer log files, summary file, and response file are saved at:
      /opt/VRTS/install/logs/installer-yyyymmddhhmmxxx
      b. Type y to view the summary file.
      c. To scroll through the summary file, press Enter multiple times.
      Note: The CPI ends at this point and control is returned to the login shell.
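      Note: If you did not record the directory name, a generic shell one-liner (plain Linux, not a Veritas command) lists the most recent installer log directory:
      # Show the newest installer log directory under /opt/VRTS/install/logs
      ls -td /opt/VRTS/install/logs/installer-* | head -1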
        10. Navigate to the log directory, list the directory contents, and examine the *.log0 log file and any other files you wish to view.

      cd /opt/VRTS/install/logs/installer-yyyymmddhhmmxxx
      Note: Use the directory you recorded in the previous step in place of installer-yyyymmddhhmmxxx.
      ls
        
      installer-yyyymmddhhmmxxx.log#
      installer-yyyymmddhhmmxxx.metrics
      installer-yyyymmddhhmmxxx.response
      installer-yyyymmddhhmmxxx.summary
      installer-yyyymmddhhmmxxx.tunables
      install.package.system
      start.SFHAprocess.system
      stop.SFHAprocess.system
      grep had installer-yyyymmddhhmmxxx.log0
      0 14:19:43 had stopped successfully on sys1
      0 14:19:43 had stopped successfully on sys2
      0 14:21:55 had started successfully on sys1
      0 14:21:55 had started successfully on sys2
      Note: Examine any other files using the more, cat or grep commands.
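      Note: The .response file listed above records the answers given during this installer run. As a rough sketch, assuming the -responsefile option behaves the same way in this release, the file can be reused to repeat the installation and configuration non-interactively (the path below is a placeholder for the directory recorded earlier):
      # Re-run the installer with the recorded answers instead of interactive prompts
      ./installer -responsefile /opt/VRTS/install/logs/installer-yyyymmddhhmmxxx/installer-yyyymmddhhmmxxx.response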
     11. Use the lltstat -c command on both systems to display the Low Latency Transport (LLT) configuration and notice the differences in node and name. Then, use the lltstat -nvv active command on sys1 only.
    lltstat -c
    LLT configuration information:
    node: 0
    name: sys1
    cluster: 100
    Supported Protocol Version(s) : 6.0
    nodes: 0 - 127
    max nodes: 128
    max ports: 32
    links: 2
    mtu: 1452
    max sdu: 66560
    max iovecs: 100
    broadcast HB: 0
    use LLT ARP: 1
    checksum level: 10
    response hbs: 4
    trace buffer size: 10 KB
    trace level: 1
    strict source check: 0
    max threads: 4
    rexmit factor: 10
    link fail detect level: 0
    max buffer pool size: 1GB
    max rreqs: 500
    max advertised buffers: 1000
    advertised buffer size: 65536
    max send work requests: 1000
    max receive work requests: 10000
    max CQ size: 10000
    multiports configured: 4
    ssh sys2 lltstat -c
    LLT configuration information:
    node: 1
    name: sys2
    cluster: 100
    Supported Protocol Version(s) : 6.0
    nodes: 0 - 127
    max nodes: 128
    max ports: 32
    links: 2
    mtu: 1452
    max sdu: 66560
    max iovecs: 100
    broadcast HB: 0
    use LLT ARP: 1
    checksum level: 10
    response hbs: 4
    trace buffer size: 10 KB
    trace level: 1
    strict source check: 0
    max threads: 4
    rexmit factor: 10
    link fail detect level: 0
    max buffer pool size: 1GB
    max rreqs: 500
    max advertised buffers: 1000
    advertised buffer size: 65536
    max send work requests: 1000
    max receive work requests: 10000
    max CQ size: 10000
    multiports configured: 4
    lltstat -nvv active
    LLT node information:
    Node State Link Status Address
    * 0 sys1         OPEN
                          ens225   UP   00:15:5D:01:80:07
                          ens256   UP   00:15:5D:01:80:08
      1 sys2         OPEN
                          ens225   UP   00:15:5D:01:80:0F
                          ens256   UP   00:15:5D:01:80:10
    Note: The MAC addresses you observe in your environment may be different.

    12. Use the gabconfig -a command to display Group Membership and Atomic Broadcast (GAB) status.
    gabconfig -a
    GAB Port Memberships
    =========================================================
    Port a gen 8ca401 membership 01
    Port h gen 8ca404 membership 01
    Note: Port a shows GAB membership and port h shows VCS engine (HAD) membership; a membership value of 01 indicates that both node 0 and node 1 are members.

    13. Use the hastatus -sum command to display cluster summary status.
    hastatus -sum
    -- SYSTEM STATE
    -- System               State          Frozen
    A  sys1                 RUNNING        0
    A  sys2                 RUNNING        0

    -- GROUP STATE
    -- Group            System      Probed   AutoDisabled   State
    B  ClusterService   sys1        Y        N              ONLINE
    B  ClusterService   sys2        Y        N              OFFLINE
    Note: The ClusterService service group is configured to be able to run on both systems, but is ONLINE on only one system.
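     Note: Because the ClusterService group carries the cluster virtual IP configured earlier, a quick way to confirm where it is online is to query the group state with the standard hagrp command and check that the virtual IP answers (the NIC and IP values are the ones used in this setup):
     # Show on which system the ClusterService group is online
     hagrp -state ClusterService
     # Confirm the cluster virtual IP is plumbed and reachable
     ip addr show ens162 | grep 10.10.2.51
     ping -c 2 10.10.2.51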
     14. Examine the LLT configuration by displaying the contents of the /etc/llttab and /etc/llthosts files.

    cat /etc/llttab
    set-node sys1
    set-cluster 100
    link ens225 eth-00:15:5d:01:80:07 - ether - -
    link ens256 eth-00:15:5d:01:80:08 - ether - -
    cat /etc/llthosts
    0 sys1
    1 sys2

    15. Examine the GAB configuration and verify that the number of systems in the cluster matches the value for the -n flag in the /etc/gabtab file.

    cat /etc/gabtab
    /sbin/gabconfig -c -n2
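    Note: The -n2 value passed to gabconfig should match the number of systems listed in /etc/llthosts. A quick cross-check with ordinary shell commands:
    # Number of cluster systems known to LLT
    wc -l < /etc/llthosts
    # Seed value GAB waits for before forming the cluster
    cat /etc/gabtab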

    16. Verify the VCS cluster name and system names by examining the /etc/VRTSvcs/conf/config/main.cf file.
    more /etc/VRTSvcs/conf/config/main.cf
    include "OracleASMTypes.cf"
    include "types.cf"
    include "Db2udbTypes.cf"
    include "MultiPrivNIC.cf"
    include "OracleTypes.cf"
    include "PrivNIC.cf"
    include "SybaseTypes.cf"
    cluster west (
    ClusterAddress = "10.10.2.51"
    SecureClus = 1
    )
    system sys1 (
    )
    system sys2 (
    )
    group ClusterService (
    SystemList = { sys1 = 0, sys2 = 1 }
    AutoStartList = { sys1, sys2 }
    OnlineRetryLimit = 3
    ... (output truncated; press Enter or the space bar in more to view the rest of the file)
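    Note: Whenever you inspect or hand-edit main.cf, the configuration syntax can be checked with the standard VCS hacf utility before it is used:
    # Verify the syntax of the VCS configuration in the config directory
    hacf -verify /etc/VRTSvcs/conf/config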

Running a post-installation check

Now perform a post-installation check on the installation and configuration of Veritas InfoScale Enterprise on sys1 and sys2 that was completed in the steps above.

Log in to sys1 as root.
     
     1. Navigate to the /opt/VRTS/install directory that contains the InfoScale 7.3 installer script.
    cd /opt/VRTS/install  

    2. Perform a CPI post-installation check of the InfoScale Enterprise 7.3 systems.
    1. Start the installer script.
      ./installer
    2. Select O for the Perform a Post-Installation Check option.
    3. Accept the default system names of sys1 sys2 when prompted.
    4. Observe that the following checks complete successfully:
      • Status of packages
      • Status of processes and drivers
      • LLT configurations
      • Clusterid configuration
      • GAB, HAD, VXFEN status
      • Disk, Diskgroup, and volume status
      • VxFS status
      If you discover any issues, report them at this time.
    5. Observe that the Veritas InfoScale Enterprise postcheck did not complete successfully, and note the Skipped result for vxfen status and the Failed result for Disk status.
    6. Review the warning messages to determine the reason for the failed status for disk.
      The post installation check displays a warning for disk status because some disks are not in online or online shared state on both systems. The warning can be ignored.
    7. View the summary file, if desired.
      Note: The CPI ends at this point and control is returned to the login shell.
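    Note: The same check can also be launched directly from the command line; the CPI installer accepts a -postcheck option, so the menu selection above can be skipped if preferred:
    # Run the post-installation check non-interactively against both systems
    ./installer -postcheck sys1 sys2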
     3. Perform a version check of the installed InfoScale Enterprise 7.3 systems. Start the installer script with the -version option. Specify the sys1 and sys2 system names.
    • Answer n when prompted to view available upgrade options.
    • Do not version check additional systems.
    a. ./installer -version sys1 sys2
    Platform of sys1:
    Linux RHEL 7.0 x86_64
    Installed product(s) on sys1:
    InfoScale Enterprise - 7.3 - Licensed
    Product:
    InfoScale Enterprise - 7.3 - Licensed
    Packages:
    Installed Required packages for InfoScale Enterprise 7.3:
    #PACKAGE #VERSION
    Summary:
    Packages:
    26 of 26 required InfoScale Enterprise 7.3 packages installed
    Installed Patches for InfoScale Enterprise 7.3:
    None
    Platform of sys2:
    Linux RHEL 7.0 x86_64
    Installed product(s) on sys2:
    InfoScale Enterprise - 7.3 - Licensed
    Product:
    InfoScale Enterprise - 7.3 - Licensed
    Packages:
    Installed Required packages for InfoScale Enterprise 7.3:
    #PACKAGE #VERSION
    Summary:
    Packages:
    26 of 26 required InfoScale Enterprise 7.3 packages installed
    Installed Patches for InfoScale Enterprise 7.3:
    None
    b. Would you like to view Available Upgrade Options? y,n,q n
    c. Do you want to version check additional systems? y,n,q n
    Please visit https://sort.Veritas.com for more information.
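    Note: Because keyless licensing was selected during configuration, the license level can be confirmed at any time with the standard Veritas licensing utilities (installed under /opt/VRTSvlic/bin):
    # Display the keyless licenses enabled on this node
    /opt/VRTSvlic/bin/vxkeyless display
    # Display a detailed license report
    /opt/VRTSvlic/bin/vxlicrep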

    Adding cluster systems to VIOM as managed hosts

    Now add sys1 and sys2 to Veritas InfoScale Operations Manager (VIOM) as managed hosts.
    Log in to the sys1 server as root.

    1. Start the Firefox Web browser.
    firefox &

    2. Connect to the VIOM management console using the https://mgt.samiora.blogspot.com:14161 URL.
     
    3. Log in using root as the user name and train as the password.

    4. Click the Settings icon.

    5. Click Host.

    6. Click Add Hosts (upper left) and, from the drop-down list, select Agent.

    7. Ensure that the None option is selected for installing the managed host package on the host before adding it to the Management Server (a quick way to confirm the managed host package is already present on the nodes is sketched after this procedure). Add sys1.example.com using root as the user name and train as the password. Click Finish.

    8. Wait while the system is prepared and added. Click Close.

    9. Add sys2.example.com in the same manner by repeating Steps 6-8.

    10. Note the managed host version (MH Version) for the newly added hosts. Wait until the Discovery State changes to Successful for both sys1 and sys2.

    11. Return to the VIOM Home page.

    12. Click Availability. From the Availability perspective, select Data Center in the left pane and click the Clusters tab in the right pane to verify that the west cluster is Healthy.

    13. Under Data Center, expand Uncategorized Clusters > west > Systems. Note the Running state for HA Service Status.


    14. Log out of the VIOM management console and close the Firefox window.
    1. From the top right corner of the Veritas InfoScale Operations Manager console, from the Welcome (root) drop-down menu, select Logout.
    2. Close the browser window.
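    Note: The None option in step 7 of the procedure above relies on the VIOM managed host components already being present on the cluster nodes. A minimal check, assuming the VRTSsfmh package and its xprtld daemon (which listens on port 5634 by default) are what VIOM discovers:
    # Confirm the managed host package and its communication daemon on both nodes
    rpm -q VRTSsfmh
    ps -ef | grep '[x]prtld'
    ss -ltn | grep 5634
    ssh sys2 rpm -q VRTSsfmh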
     
    For any queries on Veritas InfoScale cluster setup, email me at samiappsdba@gmail.com