Tuesday, December 31, 2019

Configuring an Oracle Service Group in Veritas InfoScale for High Availability

The following steps are required to configure an Oracle single-instance database and a local listener under VCS control. The Oracle binaries, database, and networking are assumed to be pre-installed and configured.
  • A: Verifying the Oracle configuration
  • B: Preparing storage and network resources for the Oracle service group
  • C: Testing the Oracle database manually
  • D: Configuring Oracle under VCS control
  • E: Running a virtual fire drill and switching the Oracle service group
  • F: (Optional) Oracle monitoring
Here we will cover only D, i.e., configuring Oracle under VCS control.
Log in to sys1 as root.
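The steps below assume that the VCS configuration is already open in read-write mode and that the orasg service group, with the disk group, volume, mount, NIC, and IP resources prepared in part B, already exists. A quick sanity check before starting (a sketch):
    # Open the VCS configuration for writing (required before hares -add/-modify)
    haconf -makerw
    # Confirm that the orasg service group exists and list its current resources
    hagrp -list | grep orasg
    hagrp -resources orasg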
  1. Add a resource of type Netlsnr named oralisten to the orasg service group.
    Note: You must be the root user.
    hares -add oralisten Netlsnr orasg
    VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
  2. Modify the Critical, Owner, Home and TnsAdmin resource attributes for the oralisten resource using the following information.
    • Set Critical to 0 (zero).
    • Set Owner to oracle.
    • Set Home to /u01/app/oracle/product/12.1.0/dbhome_1.
    • Set TnsAdmin to /u01/app/oracle/product/12.1.0/dbhome_1/network/admin.
      hares -modify oralisten Critical 0
      hares -modify oralisten Owner oracle
      hares -modify oralisten Home /u01/app/oracle/product/12.1.0/dbhome_1
      hares -modify oralisten TnsAdmin /u01/app/oracle/product/12.1.0/dbhome_1/network/admin
  3. Display the resource attribute values for the oralisten resource to confirm your input.
    hares -display oralisten | more
    #Resource Attribute System Value
    oralisten Group global orasg
    oralisten Type global Netlsnr
    oralisten AutoStart global 1
    oralisten Critical global 0
    oralisten Enabled global 0

    oralisten Home global /u01/app/oracle/product/12.1.0/dbhome_1
    oralisten Listener global LISTENER

    oralisten Owner global oracle

    oralisten TnsAdmin global /u01/app/oracle/product/12.1.0/dbhome_1/network/admin …
  4. Enable the oralisten resource. Bring the oralisten resource online on sys1 and wait for it to come online. Display the state of the oralisten resource and use the ps -ef | grep tnslsnr command to confirm the resource is online.
    hares -modify oralisten Enabled 1
    hares -value oralisten Enabled
    1
    hares -online oralisten -sys sys1
    hares -state oralisten
    #Resource Attribute System Value
    oralisten State sys1 ONLINE
    oralisten State sys2 OFFLINE
    ps -ef | grep tnslsnr
    oracle 21572 1 0 08:24 ? 00:00:00 /u01/app/oracle/product/12.1.0/dbhome_1/bin/tnslsnr LISTENER -inherit
    root 22051 2198 0 08:24 pts/0 00:00:00 grep tnslsnr
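    Note (optional): the listener can also be queried directly with lsnrctl as the oracle user; the services shown depend on what the database has registered with the listener.
    su - oracle -c 'export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1; $ORACLE_HOME/bin/lsnrctl status LISTENER'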
  5. Save the VCS configuration, but do not close it.
    haconf -dump
  6. Add a CDB resource of type Oracle named oracdb1 to the orasg service group.
    hares -add oracdb1 Oracle orasg
    VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
  7. Modify the Critical, Sid, Owner, Home and StartUpOpt resource attributes for the oracdb1 resource using the following information.
    • Set Critical to 0 (zero).
    • Set Sid to paycdb1.
    • Set Owner to oracle.
    • Set Home to /u01/app/oracle/product/12.1.0/dbhome_1.
    • Set StartUpOpt to STARTUP.
    hares -modify oracdb1 Critical 0
    hares -modify oracdb1 Sid paycdb1
    hares -modify oracdb1 Owner oracle
    hares -modify oracdb1 Home /u01/app/oracle/product/12.1.0/dbhome_1
    hares -modify oracdb1 StartUpOpt STARTUP
  8. Add a PDB resource of type Oracle named orapdb1 to the orasg service group.
    hares -add orapdb1 Oracle orasg
    VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
  9. Modify the Critical, Sid, Owner, Home and PDBName resource attributes for the orapdb1 resource using the following information.
    • Set Critical to 0 (zero).
    • Set Sid to paycdb1.
    • Set Owner to oracle.
    • Set Home to /u01/app/oracle/product/12.1.0/dbhome_1.
    • Set PDBName to paypdb1
      hares -modify orapdb1 Critical 0
      hares -modify orapdb1 Sid paycdb1
      hares -modify orapdb1 Owner oracle
      hares -modify orapdb1 Home /u01/app/oracle/product/12.1.0/dbhome_1
      hares -modify orapdb1 PDBName paypdb1
  10. Display the resource attribute values for the oracdb1 resource to confirm your input.
    hares -display oracdb1 | more
    #Resource Attribute System Value
    oracdb1 Group global orasg
    oracdb1 Type global Oracle
    oracdb1 AutoStart global 1
    oracdb1 Critical global 0
    oracdb1 Enabled global 0

    oracdb1 Home global /u01/app/oracle/product/12.1.0/dbhome_1

    oracdb1 Owner global oracle

    oracdb1 ShutDownOpt global IMMEDIATE
    oracdb1 Sid global paycdb1
    oracdb1 StartUpOpt global STARTUP
  11. Display the resource attribute values for the orapdb1 resource to confirm your input.
    hares -display orapdb1 | more
    #Resource Attribute System Value
    orapdb1 Group global orasg
    orapdb1 Type global Oracle
    orapdb1 AutoStart global 1
    orapdb1 Critical global 0
    orapdb1 Enabled global 0

    orapdb1 Home global /u01/app/oracle/product/12.1.0/dbhome_1

    orapdb1 Owner global oracle
    orapdb1 PDBName global paypdb1

    orapdb1 ShutDownOpt global IMMEDIATE
    orapdb1 Sid global paycdb1
    orapdb1 StartUpOpt global STARTUP_FORCE
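    Note: both oracdb1 and orapdb1 use the same Sid (paycdb1) because the PDB resource is monitored through the CDB instance and is distinguished only by its PDBName attribute. The Sid should also match the instance registered on the system; a quick cross-check, assuming a standard /etc/oratab entry was created for this instance:
    # The oratab entry for paycdb1 should reference the same ORACLE_HOME configured above
    grep paycdb1 /etc/oratab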
  12. Enable the oracdb1 and orapdb1 resources. Bring the oracdb1 resource online on sys1 and wait for it to come online. Display the state of the oracdb1 resource and use the ps -ef | grep oracle command to confirm the resource is online.
    hares -modify oracdb1 Enabled 1
    hares -value oracdb1 Enabled
    1
    hares -modify orapdb1 Enabled 1
    hares -value orapdb1 Enabled
    1
    hares -online oracdb1 -sys sys1
    hares -state oracdb1
    #Resource Attribute System Value
    oracdb1 State sys1 ONLINE
    oracdb1 State sys2 OFFLINE
    ps -ef | grep oracle
    root 15755 15727 0 07:32 pts/1 00:00:00 su - oracle
    oracle 15756 15755 0 07:32 pts/1 00:00:00 -bash
    oracle 21572 1 0 08:24 ? 00:00:00 /u01/app/oracle/product/12.1.0/dbhome_1/bin/tnslsnr LISTENER -inherit
    oracle 22643 1 0 08:29 ? 00:00:00 ora_pmon_paycdb1
    ……
    oracle 22952 1 0 08:29 ? 00:00:00 ora_j010_paycdb1
    oracle 22954 1 0 08:29 ? 00:00:00 ora_j011_paycdb1
    oracle 22956 1 0 08:29 ? 00:00:00 ora_j012_paycdb1
    oracle 22958 1 0 08:29 ? 00:00:00 ora_j013_paycdb1
    oracle 22968 1 0 08:29 ? 00:00:00 ora_j014_paycdb1
    root 22982 2198 0 08:30 pts/0 00:00:00 grep --color=auto oracle
  13. Bring the orapdb1 resource online on sys1 and wait for it to come online. Display the state of the orapdb1 resource and use the ps -ef | grep oracle command to confirm the resource is online.
    hares -online orapdb1 -sys sys1
    hares -state orapdb1
    #Resource Attribute System Value
    orapdb1 State sys1 ONLINE
    orapdb1 State sys2 OFFLINE
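    Note (optional): the PDB can also be checked from SQL*Plus as the oracle user, assuming OS authentication is available for the oracle account; paypdb1 should be reported in READ WRITE mode while orapdb1 is online.
    su - oracle -c 'export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 ORACLE_SID=paycdb1; echo "show pdbs" | $ORACLE_HOME/bin/sqlplus -s / as sysdba'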
  14. Save the VCS configuration, but do not close it.
    haconf -dump
  15. Use the hares command to set the following resource dependencies.
    • oralisten requires oraip
    • oralisten requires oracdb1
    • orapdb1 requires oracdb1
    • oracdb1 requires oradatamnt
    • oracdb1 requires oraarchivemnt
    • oracdb1 requires oraredomnt
      hares -link oralisten oraip
      hares -link oralisten oracdb1
      hares -link orapdb1 oracdb1
      hares -link oracdb1 oradatamnt
      hares -link oracdb1 oraarchivemnt
      hares -link oracdb1 oraredomnt
      hares -dep | grep orasg
      orasg oraarchivemnt oraarchivevol
      orasg oraarchivevol oradg
      orasg oracdb1 oraredomnt
      orasg oracdb1 oraarchivemnt
      orasg oracdb1 oradatamnt
      orasg oradatamnt oradatavol
      orasg oradatavol oradg
      orasg oraip oranic
      orasg oralisten oracdb1
      orasg oralisten oraip
      orasg orapdb1 oracdb1
      orasg oraredomnt oraredovol
      orasg oraredovol oradg
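      Once the configuration is dumped (next step), these links are written to /etc/VRTSvcs/conf/config/main.cf as requires statements. A sketch of the expected dependency section for the orasg group, derived from the hares -dep output above:
      oraarchivemnt requires oraarchivevol
      oraarchivevol requires oradg
      oracdb1 requires oraarchivemnt
      oracdb1 requires oradatamnt
      oracdb1 requires oraredomnt
      oradatamnt requires oradatavol
      oradatavol requires oradg
      oraip requires oranic
      oralisten requires oracdb1
      oralisten requires oraip
      orapdb1 requires oracdb1
      oraredomnt requires oraredovol
      oraredovol requires oradg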
  16. Save the VCS configuration, but do not close it.
    haconf -dump
  17. Display the orasg service group resources that are set to non-critical.
    hares -list Critical=0 Group=orasg
    oracdb1 sys1
    oracdb1 sys2
    oralisten sys1
    oralisten sys2
    orapdb1 sys1
    orapdb1 sys2
  18. Use the hares command to set the identified resources to critical.
    hares -modify oralisten Critical 1
    hares -modify oracdb1 Critical 1
    hares -modify orapdb1 Critical 1
  19. Confirm that all orasg service group resources are now set to critical.
    hares -list Critical=1 Group=orasg
    oraarchivemnt sys1
    oraarchivemnt sys2
    oraarchivevol sys1
    oraarchivevol sys2
    oracdb1 sys1
    oracdb1 sys2
    oradatamnt sys1
    oradatamnt sys2
    oradatavol sys1
    oradatavol sys2
    oradg sys1
    oradg sys2
    oraip sys1
    oraip sys2
    oralisten sys1
    oralisten sys2
    oranic sys1
    oranic sys2
    orapdb1 sys1
    orapdb1 sys2
    oraredomnt sys1
    oraredomnt sys2
    oraredovol sys1
    oraredovol sys2
    Note: IMF is supported only for traditional and container databases; it is not supported for PDB resources, which is why IMF monitoring is disabled for the orapdb1 resource in the following steps.
  20. Override the IMF attribute for the PDB resource orapdb1:
    hares -override orapdb1 IMF
  21. Verify if the IMF attribute is overridden using the following command:
    hares -value orapdb1 IMF
    Mode 3 MonitorFreq 5 RegisterRetryLimit 3
  22. Modify the Mode value to 0:
    hares -modify orapdb1 IMF -update Mode 0
  23. Verify the Mode value using the following command:
    hares -value orapdb1 IMF
    Mode 0 MonitorFreq 5 RegisterRetryLimit 3
  24. Save and close the VCS configuration.
    haconf -dump -makero
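At this point the completed service group can be verified as a whole and, optionally, switched between nodes (part E of the overall procedure). A minimal sketch, assuming sys2 is able to host the group:
    # Group state on both nodes plus an overall cluster summary
    hagrp -state orasg
    hastatus -sum
    # Optional switch test (part E): move orasg to sys2 and back
    hagrp -switch orasg -to sys2
    hagrp -state orasg
    hagrp -switch orasg -to sys1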

Thursday, November 28, 2019

Veritas InfoScale Enterprise 7.3 Cluster Setup

Step-by-step installation of Veritas InfoScale Enterprise 7.3 on the two-node cluster machines ed-olvrtslin1 and ed-olvrtslin2 (referred to as sys1 and sys2 below) using the Common Product Installer (CPI), followed by configuration of Storage Foundation and High Availability (SFHA). The resulting cluster, named west, is then placed under Veritas InfoScale Operations Manager (VIOM) control.

Installing InfoScale Enterprise using the Common Product Installer (CPI)

1. Navigate to and display a long listing of the /software/is/is73/rhel7_x86_64 directory.
cd /software/is/is73/rhel7_x86_64
ls -l
2. Begin the installation and configuration of InfoScale Enterprise 7.3 on sys1 and sys2 as follows:

  1. Start the installer script. Select I for Install a Product option.
    ./installer
  2. Select 4 for Veritas InfoScale Enterprise option.
  3. Type y when asked Would you like to configure InfoScale Enterprise after installation?.
  4. Select 3 for Storage Foundation and High Availability (SFHA).
  5. Type y to agree to the terms of the End User License Agreement (EULA).
  6. Type the names of the two lab systems, sys1 and sys2, when prompted.
  7. Observe that the following checks complete successfully:
  • System communications
  • Release compatibility
  • Installed product
  • Platform version
  • Prerequisite patches and rpms
  • File system free space
  • Configured component
  • Product prechecks

    Note: After the system checks complete successfully, if the installer prompts to synchronize the systems with an NTP server, select mgt as the NTP server; otherwise continue with the next step.
  8. When prompted, type y to stop InfoScale Enterprise processes now.
  9. Observe the list of packages being installed. The software installation may take up to 5 minutes to complete.

    3. When package installation completes, continue with the cluster configuration as follows:

    1. Select 2 for the Enable keyless licensing and complete system licensing later option.
    2. Select 4 for Veritas InfoScale Enterprise when asked which product you would like to register.
    3. Type n to avoid configuring I/O fencing in enabled mode.
      Do you want to configure I/O Fencing in enabled mode? n
    4. When prompted, press Enter to continue.
    5. Enter west as the unique cluster name.
      Veritas InfoScale Enterprise 7.3 Install Program
      sys1 sys2
      To configure VCS for InfoScale Enterprise the following information is required:
      A unique cluster name
      Two or more NICs per system used for heartbeat links
      A unique cluster ID number between 0-65535
      One or more heartbeat links are configured as private links
      You can configure one heartbeat link as a low-priority link
      NetworkManager services are stopped on all systems
      All systems are being configured to create one cluster.
      Enter the unique cluster name: [q,?] west
    4.  When asked to configure heartbeat links for the cluster, provide the following input step by step.

    1. Type 1 to configure heartbeat links using LLT over Ethernet.
      1) Configure heartbeat links using LLT over Ethernet
      2) Configure heartbeat links using LLT over UDP
      3) Configure heartbeat links using LLT over RDMA
      4) Automatically detect configuration for LLT over Ethernet
      b) Back to previous menu
      How would you like to configure heartbeat links? 1-4,b,q,? 1
    2. Type ens225 as the NIC for the first private heartbeat link on sys1.
    3. Type ens256 as the NIC for the second private heartbeat link on sys1.
    4. Type n to avoid configuring the third private heartbeat link.
    5. Type n to avoid configuring an additional low-priority heartbeat link.
    6. Type y for using the same NICs for private heartbeat links on all systems.
    7. Type 100 when prompted for a unique cluster ID.
    8. Type n to skip checking whether the cluster ID is in use by another cluster.
    9. Verify the information you provided by typing y if the provided information is correct.

      Cluster information verification:
      Cluster Name: west
      Cluster ID Number: 100
      Private Heartbeat NICs for sys1:
      link1=ens225
      link2=ens256
      Private Heartbeat NICs for sys2:
      link1=ens225
      link2=ens256
      Is this information correct? y,n,q,? y
     5. Continue the installation and configuration of InfoScale Enterprise by configuring the virtual IP address of the cluster, secure mode, users, SMTP notifications and SNMP notifications as follows:
    1. Type y to configure the Virtual IP.
    2. Type ens162 when prompted to enter the NIC for Virtual IP of the cluster to use on sys1.
    3. Type y to specify ens162 as the public NIC used by all systems.
    4. Type 10.10.2.51 when asked to specify the Virtual IP address for the Cluster.
    5. Press Enter to accept the default 255.255.255.0 as the NetMask for IP 10.10.2.51.
    6. Type y to confirm that the Cluster Virtual IP information is correct.
    7. Type y to configure the VCS cluster in secure mode.
    8. Type n to avoid granting read access to everyone.
    9. Type n when asked to provide any user groups that should be granted read access.
    10. Type 1 to select the Configure the cluster in secure mode option (this is secure mode but not with FIPS).
    11. Type n to avoid configuring SMTP notification.
    12. Type n to avoid configuring SNMP notification.
    13. Type n to avoid configuring the Global Cluster Option.

      6. When prompted, type y to stop InfoScale Enterprise processes now.

      7. Observe the installation as InfoScale Enterprise processes are stopped, the SFHA configuration is performed, and the processes are started.

      8. Type n to avoid sending the installation information to Veritas.

      9. Complete the installation and configuration of InfoScale Enterprise by recording the directory within the /opt/VRTS/install/logs directory that contains the installer log files and viewing the summary file.

      Directory within the /opt/VRTS/install/logs directory that contains the log files, summary file and response file:
      The directory will be in the form installer-yyyymmddhhmmxxx
      a. Notice that the installer log files, summary file, and response file are saved at:
      /opt/VRTS/install/logs/installer-yyyymmddhhmmxxx
      b. Type y to view the summary file.
      c. To scroll through the summary file, press Enter multiple times.
      Note: The CPI ends at this point and control is returned to the login shell.
        10. Navigate to the log directory, list the directory contents, and examine the *.log0 log file and any other files you wish to view.

      cd /opt/VRTS/install/logs/installer-yyyymmddhhmmxxx
      Note: Use the directory you recorded in the previous step in place of installer-yyyymmddhhmmxxx.
      ls
        
      installer-yyyymmddhhmmxxx.log#
      installer-yyyymmddhhmmxxx.metrics
      installer-yyyymmddhhmmxxx.response
      installer-yyyymmddhhmmxxx.summary
      installer-yyyymmddhhmmxxx.tunables
      install.package.system
      start.SFHAprocess.system
      stop.SFHAprocess.system
      grep had installer-yyyymmddhhmmxxx.log0
      0 14:19:43 had stopped successfully on sys1
      0 14:19:43 had stopped successfully on sys2
      0 14:21:55 had started successfully on sys1
      0 14:21:55 had started successfully on sys2
      Note: Examine any other files using the more, cat or grep commands.
     11. Use the lltstat -c command on both systems to display Low Latency Transport (LLT) status and notice the differences in node and name. Then, use the lltstat -nvv active command only on sys1.
    lltstat -c
    LLT configuration information:
    node: 0
    name: sys1
    cluster: 100
    Supported Protocol Version(s) : 6.0
    nodes: 0 - 127
    max nodes: 128
    max ports: 32
    links: 2
    mtu: 1452
    max sdu: 66560
    max iovecs: 100
    broadcast HB: 0
    use LLT ARP: 1
    checksum level: 10
    response hbs: 4
    trace buffer size: 10 KB
    trace level: 1
    strict source check: 0
    max threads: 4
    rexmit factor: 10
    link fail detect level: 0
    max buffer pool size: 1GB
    max rreqs: 500
    max advertised buffers: 1000
    advertised buffer size: 65536
    max send work requests: 1000
    max receive work requests: 10000
    max CQ size: 10000
    multiports configured: 4
    ssh sys2 lltstat -c
    LLT configuration information:
    node: 1
    name: sys2
    cluster: 100
    Supported Protocol Version(s) : 6.0
    nodes: 0 - 127
    max nodes: 128
    max ports: 32
    links: 2
    mtu: 1452
    max sdu: 66560
    max iovecs: 100
    broadcast HB: 0
    use LLT ARP: 1
    checksum level: 10
    response hbs: 4
    trace buffer size: 10 KB
    trace level: 1
    strict source check: 0
    max threads: 4
    rexmit factor: 10
    link fail detect level: 0
    max buffer pool size: 1GB
    max rreqs: 500
    max advertised buffers: 1000
    advertised buffer size: 65536
    max send work requests: 1000
    max receive work requests: 10000
    max CQ size: 10000
    multiports configured: 4
    lltstat -nvv active
    LLT node information:
    Node                 State    Link    Status   Address
    * 0 sys1             OPEN
                                  ens225   UP      00:15:5D:01:80:07
                                  ens256   UP      00:15:5D:01:80:08
      1 sys2             OPEN
                                  ens225   UP      00:15:5D:01:80:0F
                                  ens256   UP      00:15:5D:01:80:10
    The MAC addresses you observe in your environment may be different.

    12. Use the gabconfig -a command to display Group Membership and Atomic Broadcast (GAB) status. Port a indicates GAB membership and Port h indicates the VCS engine (had).
    gabconfig -a
    GAB Port Memberships
    =========================================================
    Port a gen 8ca401 membership 01
    Port h gen 8ca404 membership 01

    13. Use the hastatus -sum command to display cluster summary status.
    hastatus -sum
    -- SYSTEM STATE
    -- System State Frozen
    A sys1 RUNNING 0
    A sys2 RUNNING 0
    -- GROUP STATE
    -- Group System Probed AutoDisabled State
    B ClusterService sys1 Y N ONLINE
    B ClusterService sys2 Y N OFFLINE
    Note: The ClusterService service group is configured to be able to run on both systems, but is ONLINE on only one system.
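    As a quick cross-check, the node on which ClusterService is ONLINE should also have the cluster virtual IP configured earlier (10.10.2.51 on ens162) plumbed; a sketch:
    hagrp -state ClusterService
    ip addr show ens162 | grep 10.10.2.51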
     14. Examine the LLT configuration by displaying the contents of the /etc/llttab and /etc/llthosts files.

    cat /etc/llttab
    set-node sys1
    set-cluster 100
    link ens225 eth-00:15:5d:01:80:07 - ether - -
    link ens256 eth-00:15:5d:01:80:08 - ether - -
    cat /etc/llthosts
    0 sys1
    1 sys2

    15. Examine the GAB configuration and verify that the number of systems in the cluster matches the value for the -n flag in the /etc/gabtab file.

    cat /etc/gabtab
    /sbin/gabconfig -c -n2
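    As a simple cross-check, the -n value should equal the number of node entries in /etc/llthosts; a sketch:
    # Two entries (sys1 and sys2) are expected for this two-node cluster
    wc -l < /etc/llthosts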

    16. Verify the VCS cluster name and system names by examining the /etc/VRTSvcs/conf/config/main.cf file.
    more /etc/VRTSvcs/conf/config/main.cf
    include "OracleASMTypes.cf"
    include "types.cf"
    include "Db2udbTypes.cf"
    include "MultiPrivNIC.cf"
    include "OracleTypes.cf"
    include "PrivNIC.cf"
    include "SybaseTypes.cf"
    cluster west (
    ClusterAddress = "10.10.2.51"
    SecureClus = 1
    )
    system sys1 (
    )
    system sys2 (
    )
    group ClusterService (
    SystemList = { sys1 = 0, sys2 = 1 }
    AutoStartList = { sys1, sys2 }
    OnlineRetryLimit = 3
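      The cluster name, cluster address, secure-mode setting, and system names can also be confirmed from the command line once the cluster is running; a sketch using the haclus and hasys commands:
      haclus -value ClusterName
      haclus -value ClusterAddress
      haclus -value SecureClus
      hasys -list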

     Running a post-installation check
    Now perform a post-installation check on the installation and configuration of Veritas InfoScale Enterprise on sys1 and sys2 that was completed in the steps above.

    Log in to sys1 as root.
     
     1. Navigate to the /opt/VRTS/install directory that contains the InfoScale 7.3 installer script.
    cd /opt/VRTS/install  

    2. Perform a CPI post-installation check of the InfoScale Enterprise 7.3 systems.
    1. Start the installer script.
      ./installer
    2. Select O for the Perform a Post-Installation Check option.
    3. Accept the default system names of sys1 sys2 when prompted.
    4. Observe the following checks as they are performed:
      • Status of packages
      • Status of processes and drivers
      • LLT configurations
      • Clusterid configuration
      • GAB, HAD, VXFEN status
      • Disk, Diskgroup, and volume status
      • VxFS status
      If you discover any issues, report them at this time.
    5. Observe that the Veritas InfoScale Enterprise postcheck did not complete successfully and note the Skipped result for vxfen status and Failed for Disk status.
    6. Review the warning messages to determine the reason for the failed status for disk.
      The post installation check displays a warning for disk status because some disks are not in online or online shared state on both systems. The warning can be ignored.
    7. View the summary file, if desired.
      Note: The CPI ends at this point and control is returned to the login shell.
     3. Perform a version check of the installed InfoScale Enterprise 7.3 systems. Start the installer script with the -version option. Specify the sys1 and sys2 system names.
    • Answer n when prompted to view available upgrade options.
    • Do not version check additional systems.
    a. ./installer -version sys1 sys2
    Platform of sys1:
    Linux RHEL 7.0 x86_64
    Installed product(s) on sys1:
    InfoScale Enterprise – 7.3 - Licensed
    Product:
    InfoScale Enterprise – 7.3 - Licensed
    Packages:
    Installed Required packages for InfoScale Enterprise 7.3:
    #PACKAGE #VERSION
    Summary:
    Packages:
    26 of 26 required InfoScale Enterprise 7.3 packages installed
    Installed Patches for InfoScale Enterprise 7.3:
    None
    Platform of sys2:
    Linux RHEL 7.0 x86_64
    Installed product(s) on sys2:
    InfoScale Enterprise – 7.3 - Licensed
    Product:
    InfoScale Enterprise – 7.3 - Licensed
    Packages:
    Installed Required packages for InfoScale Enterprise 7.3:
    #PACKAGE #VERSION
    Summary:
    Packages:
    26 of 26 required InfoScale Enterprise 7.3 packages installed
    Installed Patches for InfoScale Enterprise 7.3:
    None
    b. Would you like to view Available Upgrade Options? y,n,q n
    c. Do you want to version check additional systems? y,n,q n
    Please visit https://sort.Veritas.com for more information.

    Adding cluster systems to VIOM as managed hosts

    Now you add sys1 and sys2 to Veritas InfoScale Operations Manager as managed hosts.
    Log in to the sys1 server as root.

    1. Start the Firefox Web browser.
    firefox &

    2. Connect to the VIOM management console using the https://mgt.samiora.blogspot.com:14161 URL.
     
     3. Log in using root as the user name and train as the password.

    4. Click the Settings icon.

    5. Click Host.

    6. Click Add Hosts (upper left) and, from the drop-down list, select Agent.

    7. Ensure that the None option is selected for installing the managed host package on the host before adding it to the Management Server. Add sys1.example.com using root as the user name and train as the password. Click Finish.

    8. Wait while the system is prepared and added. Click Close.

    9. Add sys2.example.com in the same manner by repeating Steps 6-8.

    10. Note the managed host version (MH Version) for the newly added hosts. Wait until the Discovery State changes to Successful for both sys1 and sys2.

    11. Return to the VIOM Home page.

    12. Click Availability. From the Availability perspective, select Data Center in the left pane and click the Clusters tab in the right pane to verify that the west cluster is Healthy.

    13. Under Data Center, expand Uncategorized Clusters > west > Systems. Note the Running state for HA Service Status.


    14. Log out of the VIOM management console and close the Firefox window.
    1. From the top right corner of the Veritas InfoScale Operations Manager console, from the Welcome (root) drop-down menu, select Logout.
    2. Close the browser window.
     
    For any queries on Veritas InfoScale cluster setup, email me at samiappsdba@gmail.com