Step-by-step installation of Veritas InfoScale Enterprise 7.3 on the two-node cluster machines ed-olvrtslin1 and ed-olvrtslin2 using the Common Product Installer (CPI), followed by configuration of Storage Foundation and High Availability (SFHA). In the walkthrough below the nodes appear under their lab names sys1 and sys2 and the cluster is configured under the name west; the resulting cluster is then placed under Veritas InfoScale Operations Manager (VIOM) control.
Installing InfoScale Enterprise using the Common Product Installer (CPI)
1. Navigate to and display a long listing of the /software/is/is73/rhel7_x86_64 directory.
cd /software/is/is73/rhel7_x86_64
ls -l
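Before starting the interactive installation, it can be useful to run the CPI pre-installation check and to confirm that the NIC names used later in this guide exist on both nodes. This is only a sketch: it assumes the installer's -precheck option is available in this release and that the public and heartbeat NICs are ens162, ens225, and ens256 as on the lab systems.
# Optional CPI pre-installation check of both lab systems
./installer -precheck sys1 sys2
# Confirm that the public and heartbeat NICs exist on both nodes
for nic in ens162 ens225 ens256; do ip link show "$nic"; done
ssh sys2 'for nic in ens162 ens225 ens256; do ip link show "$nic"; done'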
2. Begin the installation and configuration of InfoScale Enterprise 7.3 on sys1 and sys2 as follows:
- Start the installer script and select I for the Install a Product option.
./installer
- Select 4 for the Veritas InfoScale Enterprise option.
- Type y when asked Would you like to configure InfoScale Enterprise after installation?.
- Select 3 for Storage Foundation and High Availability (SFHA).
- Type y to agree to the terms of the End User License Agreement (EULA).
- Type the names of the two lab systems, sys1 and sys2, when prompted.
- Observe that the following checks complete successfully:
- System communications
- Release compatibility
- Installed product
- Platform version
- Prerequisite patches and rpms
- File system free space
- Configured component
- Product prechecks
Note: After the system checks complete successfully, the installer may prompt you to synchronize the systems with an NTP server; if it does, select mgt as the NTP server, otherwise continue with sub-step i below (a quick manual time check is sketched after sub-step j).
i. When prompted, type y to stop InfoScale Enterprise processes now.
j. Observe the list of packages being installed. The software installation may take up to 5 minutes to complete.
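Relating to the NTP note above, a quick manual check of the clocks and of root SSH access between the nodes (which the System communications check depends on) can be done with plain shell commands. This is just a sketch; it assumes passwordless root SSH from sys1 to sys2 is already in place or will be set up when the installer prompts for it.
# Compare the system clocks on both nodes; a large skew can trigger the NTP prompt
date; ssh sys2 date
# Verify that root can reach sys2 over SSH without a password prompt
ssh -o BatchMode=yes sys2 uname -n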
3. When package installation completes, continue with the cluster configuration as follows:
- Select 2 for the Enable keyless licensing and complete system licensing later option.
- Select 4 for Veritas InfoScale Enterprise when asked which product you would like to register.
- Type n to avoid configuring I/O fencing in enabled mode (fencing can be enabled later; see the sketch after this configuration list).
Do you want to configure I/O Fencing in enabled mode? n
- When prompted, press Enter to continue.
- Enter west as the unique cluster name.
Veritas InfoScale Enterprise 7.3 Install Program
sys1 sys2
To configure VCS for InfoScale Enterprise the following information is required:
A unique cluster name
Two or more NICs per system used for heartbeat links
A unique cluster ID number between 0-65535
One or more heartbeat links are configured as private links
You can configure one heartbeat link as a low-priority link
NetworkManager services are stopped on all systems
All systems are being configured to create one cluster.
Enter the unique cluster name: [q,?] west
- Type 1 to configure heartbeat links using LLT over Ethernet.
1) Configure heartbeat links using LLT over Ethernet
2) Configure heartbeat links using LLT over UDP
3) Configure heartbeat links using LLT over RDMA
4) Automatically detect configuration for LLT over Ethernet
b) Back to previous menu
How would you like to configure heartbeat links? 1-4,b,q,? 1
- Type ens225 as the NIC for the first private heartbeat link on sys1.
- Type ens256 as the NIC for the second private heartbeat link on sys1.
- Type n to avoid configuring the third private heartbeat link.
- Type n to avoid configuring an additional low-priority heartbeat link.
- Type y to use the same NICs for private heartbeat links on all systems.
- Type 100 when prompted for a unique cluster ID.
- Type n to skip checking whether the cluster ID is in use by another cluster.
- Type y to confirm that the cluster information you provided is correct.
…
Cluster information verification:
Cluster Name: west
Cluster ID Number: 100
Private Heartbeat NICs for sys1:
link1=ens225
link2=ens256
Private Heartbeat NICs for sys2:
link1=ens225
link2=ens256
Is this information correct? y,n,q,? y
- Type y to configure the Virtual IP.
- Type ens162 when prompted for the NIC that the Cluster Virtual IP will use on sys1.
- Type y to specify ens162 as the public NIC used by all systems.
- Type 10.10.2.51 when asked to specify the Virtual IP address for the Cluster.
- Press Enter to accept the default 255.255.255.0 as the NetMask for IP 10.10.2.51.
- Type y to confirm that the Cluster Virtual IP information is correct.
- Type y to configure the VCS cluster in secure mode.
- Type n to avoid granting read access to everyone.
- Type n to avoid specifying any user groups to grant read access to.
- Type 1 to select the Configure the cluster in secure mode option (this configures secure mode without FIPS).
- Type n to avoid configuring SMTP notification.
- Type n to avoid configuring SNMP notification.
- Type n to avoid configuring the Global Cluster Option.
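I/O fencing was declined earlier in this configuration. It can be enabled after the installation once coordinator disks or CP servers are available. The following is a sketch only, assuming the CPI installer's -fencing option and the vxfenadm utility behave as described in the product documentation:
# Re-run the CPI later to configure I/O fencing interactively
/opt/VRTS/install/installer -fencing
# Display the current fencing mode (reports an error if the fencing
# driver is not running)
vxfenadm -d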
6. When prompted, type y to stop InfoScale Enterprise processes now.
7. Observe the installation as InfoScale Enterprise processes are stopped, the SFHA configuration is performed, and the processes are started.
8. Type n to avoid sending the installation information to Veritas.
9. Complete the installation and configuration of InfoScale Enterprise by recording the directory within the /opt/VRTS/install/logs directory that contains the installer log files and viewing the summary file.
Directory within the /opt/VRTS/install/logs directory that contains the log files, summary file, and response file: the directory name is of the form installer-yyyymmddhhmmxxx.
a. Notice that the installer log files, summary file, and response file are saved at:
/opt/VRTS/install/logs/installer-yyyymmddhhmmxxx
b. Type y to view the summary file.
c. To scroll through the summary file, press Enter multiple times.
Note: The CPI ends at this point and control is returned to the login shell.
10. Examine the installer log files.
cd /opt/VRTS/install/logs/installer-yyyymmddhhmmxxx
Note: Use the directory you recorded in the previous step in place of installer-yyyymmddhhmmxxx.
ls
installer-yyyymmddhhmmxxx.log0
installer-yyyymmddhhmmxxx.metrics
installer-yyyymmddhhmmxxx.response
installer-yyyymmddhhmmxxx.summary
installer-yyyymmddhhmmxxx.tunables
install.package.system
start.SFHAprocess.system
stop.SFHAprocess.system
grep had installer-yyyymmddhhmmxxx.log0
…
0 14:19:43 had stopped successfully on sys1
0 14:19:43 had stopped successfully on sys2
0 14:21:55 had started successfully on sys1
0 14:21:55 had started successfully on sys2
Note: Examine any other files using the more, cat, or grep commands. The response file can also be reused, as sketched below.
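The response file recorded by the installer captures every answer given during this run. As a sketch (assuming the CPI -responsefile option behaves as documented), it can be reviewed, edited, and replayed to repeat the same installation unattended; substitute the directory you recorded above for the placeholder.
# Replay a recorded installation non-interactively; review and edit the
# response file first (system names, cluster name, IP addresses)
/software/is/is73/rhel7_x86_64/installer -responsefile \
  /opt/VRTS/install/logs/installer-yyyymmddhhmmxxx/installer-yyyymmddhhmmxxx.response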
11. Use the lltstat command to verify the LLT configuration and the status of the heartbeat links on both systems.
lltstat -c
LLT configuration information:
node: 0
name: sys1
cluster: 100
Supported Protocol Version(s) : 6.0
nodes: 0 - 127
max nodes: 128
max ports: 32
links: 2
mtu: 1452
max sdu: 66560
max iovecs: 100
broadcast HB: 0
use LLT ARP: 1
checksum level: 10
response hbs: 4
trace buffer size: 10 KB
trace level: 1
strict source check: 0
max threads: 4
rexmit factor: 10
link fail detect level: 0
max buffer pool size: 1GB
max rreqs: 500
max advertised buffers: 1000
advertised buffer size: 65536
max send work requests: 1000
max receive work requests: 10000
max CQ size: 10000
multiports configured: 4
ssh sys2 lltstat -c
LLT configuration information:
node: 1
name: sys2
cluster: 100
Supported Protocol Version(s) : 6.0
…(the remaining values match the sys1 output above)
lltstat -nvv active
LLT node information:
Node           State    Link     Status    Address
* 0 sys1       OPEN
                        ens225   UP        00:15:5D:01:80:07
                        ens256   UP        00:15:5D:01:80:08
  1 sys2       OPEN
                        ens225   UP        00:15:5D:01:80:0F
                        ens256   UP        00:15:5D:01:80:10
Note: The MAC addresses you observe in your environment may be different.
12. Use the gabconfig -a command to display Group Membership and Atomic Broadcast (GAB) status.
gabconfig -a
GAB Port Memberships
=========================================================
Port a gen 8ca401 membership 01
Port h gen 8ca404 membership 01
13. Use the hastatus -sum command to display cluster summary status.
hastatus -sum
-- SYSTEM STATE
-- System           State        Frozen
A  sys1             RUNNING      0
A  sys2             RUNNING      0
-- GROUP STATE
-- Group            System    Probed    AutoDisabled    State
B  ClusterService   sys1      Y         N               ONLINE
B  ClusterService   sys2      Y         N               OFFLINE
Note: The ClusterService service group is configured to be able to run on both systems, but is ONLINE on only one system.
14. Examine the LLT configuration by displaying the contents of the /etc/llttab and /etc/llthosts files.
cat /etc/llttab
set-node sys1
set-cluster 100
link ens225 eth-00:15:5d:01:80:07 - ether - -
link ens256 eth-00:15:5d:01:80:08 - ether - -
cat /etc/llthosts
0 sys1
1 sys2
15. Examine the GAB configuration and verify that the number of systems in the cluster matches the value of the -n flag in the /etc/gabtab file.
cat /etc/gabtab
/sbin/gabconfig -c -n2
16. Verify the VCS cluster name and system names by examining the /etc/VRTSvcs/conf/config/main.cf file.
more /etc/VRTSvcs/conf/config/main.cf
include "OracleASMTypes.cf"
include "types.cf"
…
include "Db2udbTypes.cf"
include "MultiPrivNIC.cf"
include "OracleTypes.cf"
include "PrivNIC.cf"
include "SybaseTypes.cf"
cluster west (
    ClusterAddress = "10.10.2.51"
    SecureClus = 1
    )
system sys1 (
    )
system sys2 (
    )
group ClusterService (
    SystemList = { sys1 = 0, sys2 = 1 }
    AutoStartList = { sys1, sys2 }
    OnlineRetryLimit = 3
…
Running a post-installation check
Now perform a post-installation check of the installation and configuration of Veritas InfoScale Enterprise on sys1 and sys2 that was completed in the steps above.
Log in to sys1 as root.
1. Navigate to the /opt/VRTS/install directory that contains the InfoScale 7.3 installer script.
cd /opt/VRTS/install
2. Perform a CPI post-installation check of the InfoScale Enterprise 7.3 systems.
- Start the installer script.
./installer
- Select O for the Perform a Post-Installation Check option.
- Accept the default system names of sys1 sys2 when prompted.
- Observe that the following checks complete successfully:
- Status of packages
- Status of processes and drivers
- LLT configurations
- Clusterid configuration
- GAB, HAD, VXFEN status
- Disk, Diskgroup, and volume status
- VxFS status
- Observe that the Veritas InfoScale Enterprise postcheck did not complete successfully, and note the Skipped result for the VxFEN status check and the Failed result for the Disk status check.
- Review the warning messages to determine the reason for the failed status for disk.
The post-installation check displays a warning for disk status because some disks are not in the online or online shared state on both systems. The warning can be ignored (a manual disk check is sketched after the note below).
- View the summary file, if desired.
Note: The CPI ends at this point and control is returned to the login shell.
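To investigate the Disk status warning from the postcheck yourself, the VxVM disk listing can be compared on both nodes, as sketched below; disk and disk group names depend on your storage.
# List VxVM disks and the disk groups they belong to on each node;
# disks that are not online or online shared show up here
vxdisk -o alldgs list
ssh sys2 vxdisk -o alldgs list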
3. Check the version information of the installed product as follows:
- Run the installer script with the -version option on sys1 and sys2.
- Answer n when prompted to view available upgrade options.
- Do not version check additional systems.
a. ./installer -version sys1 sys2
…
Platform of sys1:
Linux RHEL 7.0 x86_64
Installed product(s) on sys1:
InfoScale Enterprise – 7.3 - Licensed
Product:
InfoScale Enterprise – 7.3 - Licensed
Packages:
Installed Required packages for InfoScale Enterprise 7.3:
#PACKAGE #VERSION
…
Summary:
Packages:
26 of 26 required InfoScale Enterprise 7.3 packages installed
Installed Patches for InfoScale Enterprise 7.3:
None
Platform of sys2:
Linux RHEL 7.0 x86_64
Installed product(s) on sys2:
InfoScale Enterprise – 7.3 - Licensed
Product:
InfoScale Enterprise – 7.3 - Licensed
Packages:
Installed Required packages for InfoScale Enterprise 7.3:
#PACKAGE #VERSION
…
Summary:
Packages:
26 of 26 required InfoScale Enterprise 7.3 packages installed
Installed Patches for InfoScale Enterprise 7.3:
None
b. Would you like to view Available Upgrade Options? y,n,q n
c. Do you want to version check additional systems? y,n,q n
Please visit https://sort.veritas.com for more information.
Adding cluster systems to VIOM as managed hosts
Now you add sys1 and sys2 to Veritas InfoScale Operations Manager as managed hosts.
1. Start the Firefox Web browser.
firefox &
2. Connect to the VIOM management console using the https://mgt.samiora.blogspot.com:14161 URL.
3. Log in using root as the user name and train as the password.
4. Click the Settings icon.
5. Click Host.
6. Click Add Hosts (upper left) and from the drop-down list, select Agent.
7. Ensure that the None option is selected for installing the managed host package on the host before adding it to the Management Server. Add sys1.example.com using root as the user name and train as the password. Click Finish.
8. Wait while the system is prepared and added. Click Close.
9. Add sys2.example.com in the same manner by repeating Steps 6-8.
10. Note the managed host version (MH Version) for the newly added hosts. Wait until the Discovery State changes to Successful for both sys1 and sys2 (a command-line check is sketched at the end of this section).
11. Return to the VIOM Home page.
12. Click Availability. From the Availability perspective, select Data Center in the left pane and click the Clusters tab in the right pane to verify that the west cluster is Healthy.
13. Under Data Center, expand Uncategorized Clusters > west > Systems. Note the Running state for HA Service Status.
14. Log out of the VIOM management console and close the Firefox window.
- From the top right corner of the Veritas InfoScale Operations Manager console, from the Welcome (root) drop-down menu, select Logout.
- Close the browser window.
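If a host does not reach the Successful discovery state, it can help to confirm from the command line that the VIOM managed host package and its communication daemon are present on that node. This is a sketch and assumes the managed host component is the VRTSsfmh package with its xprtld daemon, as in typical VIOM deployments:
# Confirm the managed host package is installed and its daemon is running
rpm -q VRTSsfmh
ps -ef | grep [x]prtld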
For any queries on Veritas InfoScale cluster setup, email me at samiappsdba@gmail.com