Here you reconfigure the loopy application in VCS, replacing its resource of type Process with a resource of type Application, and then examine IMF monitoring for the new Application resource.
This article contains the following:
- Adding a resource of type Application
- Testing the resource
- IMF and Application agent monitoring options
Adding a resource of type Application
Here you remove the existing appproc resource of type Process and replace it with a resource named appapp of type Application. The appfoo resource of type FileOnOff is removed as well.
Log in to sys1 as the root user.
1. Verify that the appsg service group is online on sys1. Switch it if it is not.
hagrp -state appsg
#Group Attribute System Value
appsg State sys1 |OFFLINE|
appsg State sys2 |ONLINE|
Note: If the appsg service group is not online on sys1, perform the following steps. Otherwise, skip to Step 2.
hagrp -switch appsg -to sys1
hagrp -state appsg
#Group Attribute System Value
appsg State sys1 |ONLINE|
appsg State sys2 |OFFLINE|
2. Open the cluster configuration for update.
haconf -makerw
3. Take the appproc and appfoo resources offline, either individually or by determining the resource dependencies and using the -parentprop switch to take both resources offline in one command. Wait for both resources to go offline.
hares -offline appproc -sys sys1
hares -state appproc
#Resource Attribute System Value
appproc State sys1 OFFLINE
appproc State sys2 OFFLINE
hares -offline appfoo -sys sys1
hares -state appfoo
#Resource Attribute System Value
appfoo State sys1 OFFLINE
appfoo State sys2 OFFLINE
Note: The following commands show the alternative method: determining the resource dependencies and using the -parentprop switch to take both resources offline in one command. You do not need to perform these commands if you have already taken the resources offline.
hares -dep
#Group Parent Child
ClusterService notifier csgnic
ClusterService webip csgnic
appsg appfoo appmnt
appsg appip appnic
appsg appmnt appvol
appsg appproc appip
appsg appproc appfoo
appsg appvol appdg
hares -offline -parentprop appfoo -sys sys1
hares -state appproc
#Resource Attribute System Value
appproc State sys1 OFFLINE
appproc State sys2 OFFLINE
hares -state appfoo
#Resource Attribute System Value
appfoo State sys1 OFFLINE
appfoo State sys2 OFFLINE
4. Disable the appproc and appfoo resources.
hares -modify appproc Enabled 0
hares -modify appfoo Enabled 0
hares -list Group=appsg Enabled=0
appfoo sys1
appfoo sys2
appproc sys1
appproc sys2
5. Unlink the appproc resource from the appip and appfoo resources. Unlink the appfoo resource from the appmnt resource.
hares -dep appproc
#Group Parent Child
appsg appproc appip
appsg appproc appfoo
hares -unlink appproc appip
hares -unlink appproc appfoo
hares -unlink appfoo appmnt
hares -dep appproc
VCS WARNING V-16-1-50034 No Resource dependencies are configured
6. Delete the appproc and appfoo resources from the cluster configuration.
hagrp -resources appsg
appdg
appfoo
appip
appmnt
appnic
appproc
appvol
hares -delete appproc
hares -delete appfoo
hagrp -resources appsg
appdg
appip
appmnt
appnic
appvol
7. Save the VCS configuration, but do not close it.
haconf -dump
8. Navigate to the /loopyfs directory, perform a long listing of the directory, and display the startloopy and stoploopy scripts. Then navigate away from /loopyfs back to /.
Note: The startloopy and stoploopy scripts will be used in the configuration of the resource of type Application.
cd /loopyfs
ls -l
total 12146
-rwxr-xr-x 1 root root 311 Nov 11 2011 loopy
-rw-r--r-- 1 root root 1023656 Jun 24 10:27 loopyout
drwxr-xr-x 2 root root 96 Oct 26 2011 lost+found
-rwxr-xr-x 1 root root 262 Oct 26 2011 startloopy
-rwxr-xr-x 1 root root 211 Oct 26 2011 stoploopy
cat startloopy
#!/bin/ksh
# $1 is SG name
SG_NAME=$1
# execute loopy in the background in the directory passed in (the SG filesystem).
/loopyfs/loopy ${SG_NAME} &
# capture the pid of loopy and store it in the loopypid file in the SG filesystem
echo $! > /loopyfs/loopypid
exit 0
cat stoploopy
#!/bin/ksh
# read the pid of loopy out of the loopypid file in the SG filesystem
PID=`cat /loopyfs/loopypid`
# Kill the pid for loopy
kill -9 $PID
# Remove the Pid file for the SG
rm /loopyfs/loopypid
exit 0
cd /
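Note: The loopy script itself is not displayed in this exercise. Purely as a hypothetical sketch, inferred from the way startloopy invokes it and from the loopyout log entries shown later in this step sequence, it behaves roughly like the following ksh loop; the actual script on the lab systems may differ.
#!/bin/ksh
# Hypothetical sketch only; not the actual lab script.
# $1 is the service group name passed in by startloopy.
SG_NAME=$1
while true
do
    # append alternating heartbeat messages to the loopyout log in the SG filesystem
    echo "`date` ${SG_NAME} Loopy is alive" >> /loopyfs/loopyout
    sleep 4
    echo "`date` ${SG_NAME} Loopy is still alive" >> /loopyfs/loopyout
    sleep 4
done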
9. Add a resource of type Application named appapp to the appsg service group.
hares -add appapp Application appsg
VCS NOTICE V-16-1-10242 Resource added. Enabled attribute must be set before agent monitors
10. Modify the following resource attributes for the appapp resource.
- Set Critical to 0 (zero).
- Set StartProgram to "/loopyfs/startloopy appsg".
Note: The double quotes are required for CLI input.
- Set StopProgram to /loopyfs/stoploopy.
- Set PidFiles to /loopyfs/loopypid.
Note: One or more of MonitorProcesses, MonitorProgram, or PidFiles can be configured. PidFiles alone is used in this exercise and will be changed in a later exercise; a hypothetical MonitorProgram sketch is shown after the commands below.
hares -modify appapp Critical 0
hares -value appapp Critical
0
hares -modify appapp StartProgram "/loopyfs/startloopy appsg"
hares -value appapp StartProgram
/loopyfs/startloopy appsg
hares -modify appapp StopProgram /loopyfs/stoploopy
hares -value appapp StopProgram
/loopyfs/stoploopy
hares -modify appapp PidFiles /loopyfs/loopypid
hares -value appapp PidFiles
/loopyfs/loopypid
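Note: As a hypothetical illustration only (not part of this exercise), a MonitorProgram script could perform a check similar to PidFiles with custom logic. The sketch below assumes the Application agent convention that an exit code of 110 reports the application online and 100 reports it offline; the file name monitorloopy is made up for this example.
#!/bin/ksh
# Hypothetical monitorloopy sketch; not used in this exercise.
# Report online (exit 110) if the PID recorded in loopypid is running, otherwise offline (exit 100).
PIDFILE=/loopyfs/loopypid
[ -f ${PIDFILE} ] || exit 100
PID=`cat ${PIDFILE}`
if kill -0 ${PID} 2>/dev/null
then
    exit 110
else
    exit 100
fi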
11. Enable the appapp resource and verify it is enabled. Bring the appapp resource online on sys1 and wait for it to come online. Display the state of the appapp resource. Use the ps -ef | grep loopy command to confirm the resource is online and verify that the PID shown in the ps command matches the value in the /loopyfs/loopypid file. Further, use the tail -f /loopyfs/loopyout command to verify that the resource is online by noting three or four logging iterations. Finally, terminate the tail command by pressing Ctrl-C.
hares -modify appapp Enabled 1
hares -value appapp Enabled
1
hares -online appapp -sys sys1
hares -state appapp
#Resource Attribute System Value
appapp State sys1 ONLINE
appapp State sys2 OFFLINE
ps -ef | grep loopy
root 18875 1 0 03:38 ? 00:00:00 /bin/ksh /loopyfs/loopy appsg
root 19410 7423 0 03:39 pts/2 00:00:00 grep --color=auto loopy
cat /loopyfs/loopypid
18875
tail -f /loopyfs/loopyout
…
Wed Jun 24 12:42:22 EDT 2015 appsg Loopy is alive
Wed Jun 24 12:42:26 EDT 2015 appsg Loopy is still alive
Wed Jun 24 12:42:30 EDT 2015 appsg Loopy is alive
Wed Jun 24 12:42:34 EDT 2015 appsg Loopy is still alive
…
Press Ctrl-C.
12. Save the VCS configuration, but do not close it.
haconf -dump
13. Link and verify the resources using the following information.
- appapp requires appmnt
- appapp requires appip
hares -link appapp appmnt
hares -link appapp appip
hares -dep appapp
#Group Parent Child
appsg appapp appip
appsg appapp appmnt
14. Save the VCS configuration, but do not close it.
haconf -dump
Testing the resource
Here you test the reconfigured appsg service group by switching it between cluster systems.
Log in to sys1 as the root user.
1. Test the appsg service group by switching it to sys2 using the -any switch. Wait until the appsg service group is fully online on sys2.
hagrp -state appsg
#Group Attribute System Value
appsg State sys1 |ONLINE|
appsg State sys2 |OFFLINE|
hagrp -switch appsg -any
hagrp -state appsg
#Group Attribute System Value
appsg State sys1 |OFFLINE|
appsg State sys2 |ONLINE|
2. Test the appsg service group by switching it back to sys1. Wait until the appsg service group is fully online on sys1.
hagrp -switch appsg -to sys1
hagrp -state appsg
#Group Attribute System Value
appsg State sys1 |ONLINE|
appsg State sys2 |OFFLINE|
3. Set the appapp resource to critical.
hares -value appapp Critical
0
hares -modify appapp Critical 1
hares -value appapp Critical
1
4. Save the VCS configuration, but do not close it.
haconf -dump
IMF and Application agent monitoring options
Here you examine the impact of setting various monitoring options for a resource of type Application under IMF.
Log in to sys1 as the root user.
1. Display the MonitorMethod resource attribute values for the appapp resource. Is the appapp resource being monitored by IMF on both systems?
You should normally observe that the resource is being monitored by IMF on both systems. However, if you recently switched the service group from sys2 to sys1 and the resource has not yet been monitored twice on sys2 using the Traditional method (based on the OfflineMonitorInterval), you may observe that the resource is still being monitored by the Traditional method on sys2.
hares -display appapp -attribute MonitorMethod
#Resource Attribute System Value
appapp MonitorMethod sys1 IMF
appapp MonitorMethod sys2 IMF
Note: If the MonitorMethod value is displayed as Traditional on any system, use the hares -probe command to probe the resource on that system and then display the MonitorMethod values again.
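For example, assuming sys2 is the system still reporting Traditional:
hares -probe appapp -sys sys2
hares -display appapp -attribute MonitorMethod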
2. Use the amfstat -t command on both systems to determine IMF monitoring. Is the appapp resource being monitored by IMF on both systems?
IMF is enabled and active for the Application agent on both systems.
amfstat -t
AMF Status Report
Process ONLINE Monitors (1):
============================
RID R_RID PID GROUP
69 65 10614 appapp
Mount ONLINE Monitors (1):
==========================
RID R_RID FSTYPE DEVICE MOUNTPOINT GROUP CONTAINER
68 54 vxfs /dev/vx/dsk/loopydatadg/loopydatavol /loopyfs appmnt <none>
DG online Monitors (1):
==========================
RID R_RID GROUP DGName
67 55 appdg loopydatadg
ssh sys2 /opt/VRTS/bin/amfstat -t
AMF Status Report
Process OFFLINE Monitors (1):
============================
RID R_RID PATH ARGV0 ARGS UID EUID GID EGID GROUP CONTAINER ACTION
58 45 /loopyfs/startloopy /loopyfs/startloopy appsg 0 <any> <any> <any> appapp <none> Allow
Note: If the DiskGroup and Mount resources have also switched to IMF monitoring on sys2, you would also see entries for those resources.
3. Use the ps -ef | grep loopy command to determine the process name for the process that is to be monitored.
Note: This string is used as one value for the MonitorProcesses resource attribute.
ps -ef | grep loopy
root 10614 1 0 06:09 ? 00:00:00 /bin/ksh /loopyfs/loopy appsg
root 15615 7423 0 22:17 pts/2 00:00:00 grep --color=auto loopy
4. Modify the appapp resource so that the PidFiles resource attribute is no longer configured and set the MonitorProcesses value to the name of the loopy process as determined in the previous step. Do so by taking the resource offline and modifying it. Save the VCS configuration and close it. Then, bring the appapp resource online.
Note: The MonitorProcesses value string must be quoted for CLI input.
hares -state appapp
#Resource Attribute System Value
appapp State sys1 ONLINE
appapp State sys2 OFFLINE
hares -offline appapp -sys sys1
hares -value appapp PidFiles
/loopyfs/loopypid
hares -modify appapp PidFiles -delete "/loopyfs/loopypid"
hares -value appapp PidFiles
hares -value appapp MonitorProcesses
hares -modify appapp MonitorProcesses "/bin/ksh /loopyfs/loopy appsg"
hares -value appapp MonitorProcesses
"/bin/ksh /loopyfs/loopy appsg"
hares -online appapp -sys sys1
hares -state appapp
#Resource Attribute System Value
appapp State sys1 ONLINE
appapp State sys2 OFFLINE
haconf -dump -makero
5. Probe the appapp resource on both systems to ensure that IMF has taken over from the initial Traditional mode startup.
hares -probe appapp -sys sys1
hares -probe appapp -sys sys2
6. Use the amfstat -t command on both systems. Is the appapp resource being monitored by IMF on both systems?
Yes, there are IMF online and offline monitors for the appapp resource.
amfstat -t
AMF Status Report
Process ONLINE Monitors (1):
============================
RID R_RID PID GROUP
82 65 11069 appapp
Mount ONLINE Monitors (1):
==========================
RID R_RID FSTYPE DEVICE MOUNTPOINT GROUP CONTAINER
68 54 vxfs /dev/vx/dsk/loopydatadg/loopydatavol /loopyfs appmnt <none>
DG online Monitors (1):
==========================
RID R_RID GROUP DGName
67 55 appdg loopydatadg
ssh sys2 /opt/VRTS/bin/amfstat -t
AMF Status Report
Process OFFLINE Monitors (2):
=============================
RID R_RID PATH ARGV0 ARGS UID EUID GID EGID GROUP CONTAINER ACTION
63 45 /loopyfs/startloopy /loopyfs/startloopy appsg 0 <any> <any> <any> appapp <none> Allow
64 45 /bin/ksh /bin/ksh /loopyfs/loopy appsg 0 <any> <any> <any> appapp <none> Allow
Mount OFFLINE Monitors (1):
===========================
RID R_RID FSTYPE DEVICE MOUNTPOINT GROUP CONTAINER
59 35 vxfs /dev/vx/dsk/loopydatadg/loopydatavol /loopyfs appmnt <none>
DG offline Monitors (1):
==========================
RID R_RID GROUP DGName
60 36 appdg loopydatadg
7. Display the MonitorMethod resource attribute values for the appapp resource. Is the appapp resource being monitored by IMF on both systems?
Yes, IMF is enabled and active for the Application agent on both systems.
hares -display appapp -attribute MonitorMethod
#Resource Attribute System Value
appapp MonitorMethod sys1 IMF
appapp MonitorMethod sys2 IMF