Monday, December 25, 2017

Protecting Oracle OHS/Apache using Oracle Clusterware 11g/12c

This month's article is about protecting an Apache application service using Oracle RAC Clusterware 12c. To do this we create an application VIP, an action script, and a resource in the Clusterware.
Here we will see the step-by-step configuration to protect the Apache application; a similar configuration can be adapted to protect Oracle OHS or any other application service.
The steps below work for both 11g and 12c RAC Clusterware software.

Step1. As the root user, verify that the Apache RPMs, httpd, httpd-devel and httpd-manual are installed on the two nodes on which Oracle clusterware is installed and configured.
grid@host01# su -
root@host01# rpm -qa | grep httpd
httpd-2.4.6-40.0.1.el7.x86_64
httpd-manual-2.4.6-40.0.1.el7.noarch
httpd-tools-2.4.6-40.0.1.el7.x86_64

Repeat on second node
root@host02# rpm -qa | grep httpd
httpd-2.4.6-40.0.1.el7.x86_64
httpd-manual-2.4.6-40.0.1.el7.noarch
httpd-tools-2.4.6-40.0.1.el7.x86_64


Step2. As the root user, start the Apache application on first node,
# apachectl start
Now access the Apache home page and verify it is working,
http://host01.samiora.blogspot.com:7777

OHS is managed by OPMN; the command-line interface to OPMN is opmnctl.
Start OPMN and all managed processes, if not already started,
# ./opmnctl startall
# ./opmnctl status -l
# ./opmnctl stopproc process-type=OHS
[appldev@host01 scripts]$ sh adopmnctl.sh status -l
You are running adopmnctl.sh version 120.0.12020000.2
Checking status of OPMN managed processes...
Processes in Instance: EBS_web_dev_OHS1
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
ias-component                    | process-type       |     pid | status   |        uid |  memused |    uptime | ports
---------------------------------+--------------------+---------+----------+------------+----------+-----------+------
EBS_web_dev                   | OHS                |    8356 | Alive    |  778125435 |  2125916 |   2:03:55 | https:4447,https:10004,http:7777

adopmnctl.sh: exiting with status 0
adopmnctl.sh: check the logfile /u01/dev/fs1/inst/apps/dev_host01/logs/appl/admin/log/adopmnctl.txt for more information ...

Verify that the OHS and Apache pages are displayed on host01, where the service is started.

Step3. Now create an action script to control the application. This script must be accessible by all nodes on which the application resource can be located.

A) As the root user, create a script on the first node called 'apache.scr' in /usr/local/bin that will start, stop, check status and clean up if the application does not exit cleanly. Make sure that the host specified in the WEBPAGECHECK variable is your first node.
root@host01# vi /usr/local/bin/apache.scr
#!/bin/bash
HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://host01.samiora.blogspot.com:80/icons/apache_pb.gif
case $1 in
'start')
    /usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
    RET=$?
    ;;
'stop')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'clean')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'check')
    /usr/bin/wget -q --delete-after $WEBPAGECHECK
    RET=$?
    ;;
*)
    RET=0
    ;;
esac
# 0: success; 1: error
if [ $RET -eq 0 ]; then
    exit 0
else
    exit 1
fi

root@host01# chmod 755 /usr/local/bin/apache.scr
root@host01# apache.scr start
Verify that the web page is now being served.
root@host01# apache.scr stop
Verify that the web page is no longer being served.
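Before registering the script with Clusterware, it can help to confirm the exit-code contract the action script must honour (exit 0 on success, exit 1 on failure). Below is a hedged, stand-alone sketch that exercises the protocol against a throwaway stub instead of the real apache.scr, so nothing needs to be installed; the stub simulates a failing 'check', as when httpd is down.

```shell
#!/bin/bash
# Exercise the Clusterware action-script protocol against a stub.
stub=$(mktemp)
cat > "$stub" <<'EOF'
#!/bin/bash
case $1 in
    start|stop|clean) exit 0 ;;
    check)            exit 1 ;;   # simulate Apache not running
    *)                exit 0 ;;
esac
EOF
chmod 755 "$stub"
for action in start stop clean check; do
    "$stub" "$action" && rc=0 || rc=$?
    echo "$action -> exit $rc"
done
```

Running this prints the exit status of each action; start, stop, and clean report 0 here, while check reports 1, which is exactly the signal Clusterware uses to decide whether to restart or fail over the resource.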

B) As root, create a script on the second node called 'apache.scr' in /usr/local/bin that will start, stop, check status and clean up if the application does not exit cleanly. Make sure that the host specified in the WEBPAGECHECK variable is your second node.
root@host02# vi /usr/local/bin/apache.scr
#!/bin/bash
HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://host02.samiora.blogspot.com:80/icons/apache_pb.gif
case $1 in
'start')
    /usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
    RET=$?
    ;;
'stop')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'clean')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'check')
    /usr/bin/wget -q --delete-after $WEBPAGECHECK
    RET=$?
    ;;
*)
    RET=0
    ;;
esac
# 0: success; 1: error
if [ $RET -eq 0 ]; then
    exit 0
else
    exit 1
fi

root@host02# chmod 755 /usr/local/bin/apache.scr
root@host02# apache.scr start
Verify that the web page is now being served.
root@host02# apache.scr stop
Verify that the web page is no longer being served.

Step4. Next, you must validate the return code of a check failure using the new script. The Apache server should NOT be running on either node. Run 'apache.scr check' and immediately test the return code by issuing an 'echo $?' command. This must be run immediately after the 'apache.scr check' command because the shell variable $? holds the exit code of the previous command run from the shell. An unsuccessful check should return an exit code of 1. You should do this on both nodes.
root@host01# apache.scr check
root@host01# echo $?
1
root@host02# apache.scr check
root@host02# echo $?
1
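The reason the echo must come immediately can be seen with a quick stand-in demonstration (here 'false' plays the part of a failing 'apache.scr check'):

```shell
#!/bin/bash
# $? holds the exit status of the most recent command and is overwritten
# by every subsequent one, so it must be read (or saved) immediately.
false || status=$?     # 'false' stands in for a failing 'apache.scr check'
echo "check returned $status"
true                   # any later command resets $?
echo "now \$? is $?"
```

The first echo reports 1, while the second reports 0, because the intervening command has already replaced the stored exit status.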

Step5. As the grid user, create a server pool for the resource called myApache_sp. This pool contains your first two hosts of the cluster and is a child of the Generic pool.
grid@host01# id
uid=502(grid) gid=54321(oinstall) groups=504(asmadmin),505(asmdba),506(asmoper),54321(oinstall)
grid@host01# . oraenv
ORACLE_SID = [grid] ? +ASM1
The Oracle base has been set to /u01/app/grid
grid@host01# /u01/app/12.2.0/grid/bin/crsctl add serverpool myApache_sp -attr "PARENT_POOLS=Generic,SERVER_NAMES=host01 host02"

Step6. Check the status of the new pool of your cluster.
grid@host01# /u01/app/12.2.0/grid/bin/crsctl status server -f
NAME=host01
STATE=ONLINE
ACTIVE_POOLS=myApache_sp Generic
STATE_DETAILS=

NAME=host02
STATE=ONLINE
ACTIVE_POOLS=myApache_sp Generic
STATE_DETAILS=

Step7. Add the Apache resource, named myApache, to the myApache_sp server pool that has Generic as its parent. This must be performed as root, because the resource listens on the default privileged port 80. Set CHECK_INTERVAL to 30, RESTART_ATTEMPTS to 2 and PLACEMENT to restricted.
root@host01# su -
root@host01# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
root@host01# /u01/app/12.2.0/grid/bin/crsctl add resource myApache -type cluster_resource -attr "ACTION_SCRIPT=/usr/local/bin/apache.scr, PLACEMENT='restricted', SERVER_POOLS=myApache_sp, CHECK_INTERVAL='30', RESTART_ATTEMPTS='2'"

Step8. View the attributes of the myApache resource with the 'crsctl status resource myApache -f' command.
root@host01# /u01/app/12.2.0/grid/bin/crsctl status resource myApache -f
NAME=myApache
TYPE=cluster_resource
STATE=OFFLINE
TARGET=ONLINE
ACL=owner:root:rwx,pgrp:root:r-x,other::r--
ACTION_FAILURE_TEMPLATE=
ACTION_SCRIPT=/usr/local/bin/apache.scr
ACTIVE_PLACEMENT=0
AGENT_FILENAME=%CRS_HOME%/bin/scriptagent
AUTO_START=restore
CARDINALITY=1
CARDINALITY_ID=0
CHECK_INTERVAL=30
CREATION_SEED=30
DEFAULT_TEMPLATE=
DEGREE=1
DESCRIPTION=
ENABLED=1
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=
ID=myApache
INSTANCE_FAILOVER=0
LOAD=1
LOGGING_LEVEL=1
NOT_RESTARTING_TEMPLATE=
OFFLINE_CHECK_INTERVAL=0
PLACEMENT=restricted
PROFILE_CHANGE_TEMPLATE=
RESTART_ATTEMPTS=2
SCRIPT_TIMEOUT=60
SERVER_POOLS=myApache_sp
START_DEPENDENCIES=
START_TIMEOUT=0
STATE_CHANGE_TEMPLATE=
STOP_DEPENDENCIES=
STOP_TIMEOUT=0
UPTIME_THRESHOLD=1h


Step9. Use the 'crsctl start resource myApache' command to start the new resource. Use the 'crsctl status resource myApache' command to confirm that the resource is online on the first node. If you like, open a browser and verify the Apache home page as shown in Step 2 above.
root@host01# /u01/app/12.2.0/grid/bin/crsctl start resource myApache
root@host01# /u01/app/12.2.0/grid/bin/crsctl status resource myApache
NAME=myApache
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on host01

Step10. Confirm that Apache is NOT running on your second node. The easiest way to do this is to check for running '/usr/sbin/httpd -k start -f /etc/httpd/conf/httpd.conf' processes with the ps command.
root@host02# ps -ef | grep -i "httpd -k"

Step11. Next, simulate a node failure on your first node by rebooting it as root. Before issuing the reboot on the first node, open a VNC session on the second node and, as the root user, run the script below so that you can monitor the failover.
monitor.sh
#!/bin/bash
while true
do
    ps -ef | grep -i "httpd -k"
    sleep 1
done

root@host01# reboot  ==> initiates a reboot, simulating a node failure.
At the same time, on the second node, run the script as the root user:
root@host02# sh monitor.sh  ==> after some time you will see the httpd service start on host02.

Step12. Verify the failover from host01 to host02 with the 'crsctl status resource myApache -t' command.
root@host02# /u01/app/12.2.0/grid/bin/crsctl status resource myApache -t
NAME           TARGET  STATE    SERVER    STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
myApache
      1        ONLINE  ONLINE   host02
Now access the Apache page on host02; it should be displayed, while on host01 it will not.
http://host02.samiora.blogspot.com

Step13. Use the 'crsctl relocate resource' command to move the myApache resource back to host01.
root@host01# /u01/app/12.2.0/grid/bin/crsctl relocate resource myApache
CRS-2673: Attempting to stop 'myApache' on 'host02'
CRS-2677: Stop of 'myApache' on 'host02' succeeded
CRS-2672: Attempting to start 'myApache' on 'host01'
CRS-2676: Start of 'myApache' on 'host01' succeeded
Now access the Apache page on host01; it should be displayed, while on host02 it will not.
http://host01.samiora.blogspot.com

For any queries on Oracle RAC 11g or RAC 12c you can email me on samiappsdba@gmail.com

Tuesday, November 28, 2017

Oracle ASM Cluster File Systems (ACFS)

ACFS is NOT yet certified to be used for Oracle E-Business Suite application files. 
https://blogs.oracle.com/stevenchan/choosing-a-shared-file-system-for-oracle-e-business-suite

OCFS2 is certified for EBS 12.2 application files. 
https://blogs.oracle.com/stevenchan/ocfs2-certified-with-ebs-122-shared-file-system-configurations
This article describes three ways to create an ASM Cluster File System (ACFS) in an Oracle 11g Release 2 RAC database. It is assumed that the RAC database is already present.

Oracle ASM Cluster File System (ACFS) is a general purpose cluster file system implemented as part of ASM. It can be used to store almost anything, including the database executables. The only things that should not be stored in ACFS are the Grid Infrastructure home and any Oracle files that can be directly stored in Oracle ASM.
ASM Configuration Assistant (ASMCA)
As the "oracle" user, switch to the ASM environment on node 1 of the RAC, then start the ASM Configuration Assistant (asmca).
[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [RAC1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
[oracle@rac1 ~]$ dbhome
/u01/app/11.2.0/grid
[oracle@rac1 ~]$ asmca
When the ASM Configuration Assistant starts, you are presented with the "ASM Instances" tab.

 Click on the "ASM Cluster File Systems" tab, then click the "Create" button.
Select "Create Volume" from the Volume list.
Enter the Volume Name and Size and click the "OK" button. Wait for the volume to be created, then click the "OK" button on the subsequent message dialog.
 The newly created volume will now be selected in the Volume list. Select the "General Purpose File System" option, enter a previously created mount point directory (or leave the suggested mount point), select the "Yes" option for Register MountPoint and click the "OK" button.
Click the "OK" button on the resulting message.
The newly created cluster file system is now listed under the "ASM Cluster File Systems" tab.

 Either perform another action, or click the "Exit" button.
At the command line on the first RAC node, navigate to the ACFS mount point and create a test file.
[oracle@rac1 data_acfsvol1]$ cd /u01/app/oracle/acfsmounts/data_acfsvol1
[oracle@rac1 data_acfsvol1]$ echo "This is a test" > test.txt
[oracle@rac1 data_acfsvol1]$ ls -al
total 80
drwxrwx--- 4 root   dba       4096 Nov 28 16:39 .
drwxr-xr-x 3 root   root      4096 Nov 28 16:24 ..
drwxr-xr-x 5 root   root      4096 Nov 28 16:24 .ACFS
drwx------ 2 root   root     65536 Nov 28 16:24 lost+found
-rw-r--r-- 1 oracle oinstall    15 Nov 28 16:39 test.txt
[oracle@rac1 data_acfsvol1]$
Check the file is present on the second RAC node.
[oracle@rac2 data_acfsvol1]$ cd /u01/app/oracle/acfsmounts/data_acfsvol1
[oracle@rac2 data_acfsvol1]$ ls -al
total 80
drwxrwx--- 4 root   dba       4096 Nov 28 16:39 .
drwxr-xr-x 3 root   root      4096 Nov 28 16:24 ..
drwxr-xr-x 5 root   root      4096 Nov 28 16:24 .ACFS
drwx------ 2 root   root     65536 Nov 28 16:24 lost+found
-rw-r--r-- 1 oracle oinstall    15 Nov 28 16:39 test.txt
[oracle@rac2 data_acfsvol1]$ cat test.txt
This is a test
[oracle@rac2 data_acfsvol1]$
So the ASM Cluster File System is working as expected.
Oracle Enterprise Manager (OEM)
Oracle Enterprise Manager provides a similar interface for interacting with ASM Cluster File Systems.
First we need to create mount points on the file system of each node for the new volume.
# mkdir -p /u01/app/oracle/acfsmounts/data_acfsvol2
# chown oracle:oinstall /u01/app/oracle/acfsmounts/data_acfsvol2
Log in to OEM, scroll to the bottom of the home page, then click on one of the ASM instances listed. On the resulting ASM screen, click on the "ASM Cluster File System" tab. You are then presented with the following screen. Click the "Create" button.
Click the "Create ASM Volume" button.
Enter the Volume Name and Size and click the "OK" button. Wait for the volume to be created.
The newly created volume will now be entered in the Volume Device field. Enter a Volume Label and the previously created mount point directory, then click the "OK" button.
The newly created volume is listed as "Dismounted". Select it and click the "Mount" button.
Accept the default node selection by clicking the "Continue" button.
Enter the Mount Point and click the "Generate Command" button.
Run the suggested command as the "root" user on all nodes, then click the "Return" button on this and the previous screen.
The new ASM Cluster File System is ready for use.

Command Line
First we need to create mount points on the file system of each node for the new volume.
# mkdir -p /u01/app/oracle/acfsmounts/data_acfsvol3
# chown oracle:oinstall /u01/app/oracle/acfsmounts/data_acfsvol3
As the "oracle" user, switch to the ASM environment on node 1 of the RAC, then connect to the ASM instance using SQL*Plus.
[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [RAC1] ? +ASM1
The Oracle base for ORACLE_HOME=/u01/app/11.2.0/grid is /u01/app/oracle
[oracle@rac1 ~]$ dbhome
/u01/app/11.2.0/grid
[oracle@rac1 ~]$ sqlplus / as sysasm
Issue the following command to create a new volume.
SQL> ALTER DISKGROUP DATA ADD VOLUME ACFSVOL3 SIZE 10G;
Diskgroup altered.
Exit the SQL*Plus session, then create a file system on the volume.
[oracle@rac1 ~]$ /sbin/mkfs -t acfs -b 4k /dev/asm/acfsvol3-301 -n "ASMVOL3"
mkfs.acfs: version                   = 11.2.0.1.0.0
mkfs.acfs: on-disk version           = 39.0
mkfs.acfs: volume                    = /dev/asm/acfsvol3-301
mkfs.acfs: volume size               = 10737418240
mkfs.acfs: Format complete.
[oracle@rac1 ~]$
Register the filesystem.
[oracle@rac1 ~]$ /sbin/acfsutil registry -f -a /dev/asm/acfsvol3-301 /u01/app/oracle/acfsmounts/data_acfsvol3
acfsutil registry: mount point /u01/app/oracle/acfsmounts/data_acfsvol3 successfully added to Oracle Registry
[oracle@rac1 ~]$
The ASM Cluster File System should now be mounted on all RAC nodes. If it is not, issue the following command on each node to mount it.
/bin/mount -t acfs /dev/asm/acfsvol3-301 /u01/app/oracle/acfsmounts/data_acfsvol3
General Points
You can unmount and mount all the ACFS locations using the following commands from the "root" user on each RAC node.
# /bin/umount -t acfs -a
# /sbin/mount.acfs -o all
Registering mount points means these file systems will automatically be mounted and unmounted at startup and shutdown respectively.
Probably the easiest interface to use is the ASM Configuration Assistant (ASMCA), but Enterprise Manager does allow you to see all the commands being run to perform each task. The easiest way to learn how to use the command line utilities is to use Enterprise Manager and click the "Show Command" button each step of the way.
For any further Questions/Support on ACFS file system, email me on samiappsdba@gmail.com

Thursday, October 12, 2017

Set Up SSH on EBS Application Nodes & RAC Database Nodes

Set Up Secure Shell SSH on EBS Application 12.2.x Tier Nodes

In a multi-node environment, adop commands are invoked by a user on the primary node. Internally, adop uses Secure Shell (ssh) to automatically execute required patching actions on all secondary nodes. You must set up passwordless SSH connectivity from the primary node to all secondary nodes.
Note: Rapid Install and Rapid Clone set up the SSH key infrastructure.

*Steps to setup SSH manually
The ssh-keygen command is used to generate a private/public key pair. The private key stays on the node from which all the remote nodes will subsequently be accessible by an ssh login that requires no password. The public key must be copied to each remote node's <User_Home_Directory>/.ssh directory.
In essence, the sequence is as follows:
1. The following command initiates creation of the key pair:
# ssh-keygen -t rsa
Note: The <Enter> key should be pressed instead of a passphrase being entered.
Generating public/private rsa key pair.
Enter file in which to save the key (/u01/user2/.ssh/id_rsa):
<Enter>
Enter passphrase:<Enter>
Enter same passphrase again:<Enter>
Your identification has been saved in /u01/user2/.ssh/id_rsa.
Your public key has been saved in /u01/user2/.ssh/id_rsa.pub.
The key fingerprint is: 16:d0:e2:dd:37:2f:8e:d5:59:3e:12:9d:2f:12:1e:5a


2. The private key is saved in <User_Home_Directory>/.ssh/id_rsa
Important: As this read-only file is used to decrypt all correspondence encrypted with the public key, its contents must not be shared with anyone.
3. The public key is saved in <User_Home_Directory>/.ssh/id_rsa.pub
4. The contents of the public key are then copied to the <User_Home_Directory>/.ssh/authorized_keys file on the systems you subsequently wish to SSH to without being prompted for a password.
# scp -pr /u01/user2/.ssh/id_rsa.pub user2@system1:/u01/user2/.ssh/authorized_keys
user2@system1's password:<Enter user2 on system1 OS user password here>
id_rsa.pub 100% 398 0.4KB/s 00:00

# ssh user2@system1
Note: If you receive the following message, it can safely be ignored: 'Warning: untrusted X11 forwarding setup failed: xauth key data not generated. Warning: No xauth data; using fake authentication data for X11 forwarding.'
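What the scp step above achieves can be illustrated locally; this is only a sketch of the mechanism, with a temporary directory standing in for the remote ~/.ssh (no real host is contacted). In practice the ssh-copy-id utility performs the same append-and-fix-permissions work in a single command.

```shell
#!/bin/bash
# Local illustration of installing a public key into authorized_keys.
# A temp directory stands in for the remote <User_Home_Directory>/.ssh.
set -e
demo=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$demo/id_rsa" -q        # passphrase-less key pair
mkdir -p "$demo/remote_ssh"                         # stand-in for the remote ~/.ssh
cat "$demo/id_rsa.pub" >> "$demo/remote_ssh/authorized_keys"
chmod 700 "$demo/remote_ssh"                        # sshd insists on tight permissions
chmod 600 "$demo/remote_ssh/authorized_keys"
echo "authorized_keys now has $(wc -l < "$demo/remote_ssh/authorized_keys") entry"
```

Permissions matter here: sshd silently ignores an authorized_keys file (or .ssh directory) that is group- or world-writable, which is a common reason passwordless login "doesn't work" after a manual copy.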

Once this has been done for the relevant operating system account on all nodes - that is, once ssh can log in from the primary node to each secondary node without entering a password - you are ready to run adop on multiple application tier nodes. It must be run on at least the master (admin) node; from there, it will attempt to contact all the other application tier nodes that are part of the same Oracle E-Business Suite instance, and will run the required steps remotely on those nodes.

*Steps to setup SSH using txkRunSSHSetup.pl Script on EBS Application Nodes
Important: If you change the password for the relevant operating system account on one or more nodes, you must regenerate the SSH credentials either using the $AD_TOP/patch/115/bin/txkRunSSHSetup.pl script, or your own native solution if you prefer.

The txkRunSSHSetup.pl script has a -help option that shows relevant usage options. 

For example, a basic command to enable ssh would be:
$ perl $AD_TOP/patch/115/bin/txkRunSSHSetup.pl enablessh -contextfile=<CONTEXT_FILE> -hosts=h1,h2,h3

To verify ssh operation:
$ perl $AD_TOP/patch/115/bin/txkRunSSHSetup.pl verifyssh -contextfile=<CONTEXT_FILE> -hosts=h1,h2,h3 -invalidnodefile=<filename to report ssh verification failures>

 To disable ssh:
$ perl $AD_TOP/patch/115/bin/txkRunSSHSetup.pl disablessh -contextfile=<CONTEXT_FILE> -hosts=h1,h2,h3 -invalidnodefile=<filename to report ssh verification failures>

Set Up Secure Shell SSH on RAC Database Nodes

With Oracle Database 11g Release 2, Oracle provides an extremely useful script called sshUserSetup.sh to establish and exchange ssh keys between all the nodes of the cluster. The script is available on the installation media in the grid/sshsetup sub-directory.
This little shell script comes in handy if you are cloning Oracle RAC clusters and do not want to use the GUI tools. If you are trying to automate Oracle RAC build deployments, it is a must-have tool for DBAs.
Without ssh keys set up, the cluvfy script will fail and report the following error:
ERROR: User equivalence unavailable on all the nodes. Verification cannot proceed.

The following example demonstrates the sshUserSetup script executed on a 2-node RAC called rac1 and rac2:
[oracle@rac1 sshsetup]$ pwd
/nfs/software/12c/grid/sshsetup
[oracle@rac1 home]$ cd /nfs/software/12c/grid/sshsetup/
[oracle@rac1 sshsetup]$ ./sshUserSetup.sh -user oracle -hosts "rac1 rac2" -noPromptPassphrase -advanced -exverify
The output of this script is also logged into /tmp/sshUserSetup_2017-10-12-15-30-00.log

For any queries related to this article, please email me, samiappsdba@gmail.com

Thursday, September 28, 2017

Reset EBS Weblogic Password that is lost or forgotten

The EBS WebLogic domain uses Node Manager to control startup of the AdminServer and the Managed Servers. For the EBS WebLogic domain, the Node Manager and WebLogic AdminServer passwords must be the same. If the passwords are different, the AD control scripts will not work properly.

If the AdminServer password has been lost or forgotten, it can be reset by carrying out the following steps on the run file system. As described in the final step, an fs_clone operation should then be performed to synchronize the run and patch file systems.

Step 1: Shut down all running services. Since the AdminServer password is not known, the servers cannot be stopped from the console and so must be killed as follows.
i. Connect to the Oracle E-Business Suite instance and source the application tier environment file.
ii. Identify the PIDs of Node Manager, AdminServer, and all running Managed Servers:
$ ps -ef | grep "NodeManager"
$ ps -ef | grep "weblogic.Name=AdminServer"
$ ps -ef | grep "weblogic.Name=forms-c4ws_server"
$ ps -ef | grep "weblogic.Name=forms_server"
$ ps -ef | grep "weblogic.Name=oafm_server"
$ ps -ef | grep "weblogic.Name=oacore_server"

iii. Kill all these processes, starting with Node Manager and followed by the Managed Servers.
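The lookup can be scripted as a convenience; the patterns below are the same ones used in the ps commands above, and as a safety measure this hedged sketch only prints a kill command for each match, so the list can be reviewed before anything is actually killed.

```shell
#!/bin/bash
# Scan for Node Manager, the Managed Servers, and AdminServer, in the
# order they should be stopped, and print a kill command for each match.
for p in "NodeManager" \
         "weblogic.Name=forms-c4ws_server" \
         "weblogic.Name=forms_server" \
         "weblogic.Name=oafm_server" \
         "weblogic.Name=oacore_server" \
         "weblogic.Name=AdminServer"; do
    pids=$(pgrep -f "$p" || true)      # empty when no such process is running
    [ -n "$pids" ] && echo "kill -9 $pids    # $p"
done
echo "scan complete"
```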

Step 2: Back up the following files and directories, and then delete them:
$EBS_DOMAIN_HOME/security/DefaultAuthenticatorInit.ldift
$EBS_DOMAIN_HOME/servers/<server_name>/data/ldap
$EBS_DOMAIN_HOME/servers/<server_name>/security/boot.properties
$EBS_DOMAIN_HOME/servers/<server_name>/data/nodemanager/boot.properties

I have four Managed Servers (oacore_server1, oacore_server4, oacore_server6, oacore_server8), so I did all of the below,
#echo $EBS_DOMAIN_HOME
/u02/erpt/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp
# cd /u02/erpt/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp/security
# mv DefaultAuthenticatorInit.ldift DefaultAuthenticatorInit.ldift_BACKUP

# cd $EBS_DOMAIN_HOME/servers/oacore_server1/data
# mv ldap ldap_BACKUP
# mkdir ldap
# cd $EBS_DOMAIN_HOME/servers/oacore_server1/security
# mv boot.properties boot.properties_BACKUP
# cd $EBS_DOMAIN_HOME/servers/oacore_server1/data/nodemanager
# mv boot.properties boot.properties_BACKUP

# cd $EBS_DOMAIN_HOME/servers/oacore_server4/data
# mv ldap ldap_BACKUP
# mkdir ldap
# cd $EBS_DOMAIN_HOME/servers/oacore_server4/security
# mv boot.properties boot.properties_BACKUP
# cd $EBS_DOMAIN_HOME/servers/oacore_server4/data/nodemanager
# mv boot.properties boot.properties_BACKUP

# cd $EBS_DOMAIN_HOME/servers/oacore_server6/data
# mv ldap ldap_BACKUP
# mkdir ldap
# cd $EBS_DOMAIN_HOME/servers/oacore_server6/security
# mv boot.properties boot.properties_BACKUP
# cd $EBS_DOMAIN_HOME/servers/oacore_server6/data/nodemanager
# mv boot.properties boot.properties_BACKUP

# cd $EBS_DOMAIN_HOME/servers/oacore_server8/data
# mv ldap ldap_BACKUP
# mkdir ldap
# cd $EBS_DOMAIN_HOME/servers/oacore_server8/security
# mv boot.properties boot.properties_BACKUP
# cd $EBS_DOMAIN_HOME/servers/oacore_server8/data/nodemanager
# mv boot.properties boot.properties_BACKUP
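The repetitive mv commands above can also be expressed as one loop. This is a hedged sketch: the four server names are this article's examples, and EBS_DOMAIN_HOME defaults to a throwaway demo tree here so the loop can be tried safely; point it at the real domain home (and review the actions) before actual use.

```shell
#!/bin/bash
# Back up DefaultAuthenticatorInit.ldift plus each server's ldap
# directory and boot.properties files, as described in Step 2.
if [ -z "$EBS_DOMAIN_HOME" ]; then                  # build a demo tree for a dry run
    EBS_DOMAIN_HOME=$(mktemp -d)
    mkdir -p "$EBS_DOMAIN_HOME/security"
    touch "$EBS_DOMAIN_HOME/security/DefaultAuthenticatorInit.ldift"
    for s in oacore_server1 oacore_server4 oacore_server6 oacore_server8; do
        mkdir -p "$EBS_DOMAIN_HOME/servers/$s/data/ldap" \
                 "$EBS_DOMAIN_HOME/servers/$s/security" \
                 "$EBS_DOMAIN_HOME/servers/$s/data/nodemanager"
        touch "$EBS_DOMAIN_HOME/servers/$s/security/boot.properties" \
              "$EBS_DOMAIN_HOME/servers/$s/data/nodemanager/boot.properties"
    done
fi

mv "$EBS_DOMAIN_HOME/security/DefaultAuthenticatorInit.ldift" \
   "$EBS_DOMAIN_HOME/security/DefaultAuthenticatorInit.ldift_BACKUP"
for s in oacore_server1 oacore_server4 oacore_server6 oacore_server8; do
    d=$EBS_DOMAIN_HOME/servers/$s
    mv "$d/data/ldap" "$d/data/ldap_BACKUP" && mkdir "$d/data/ldap"
    for f in "$d/security/boot.properties" "$d/data/nodemanager/boot.properties"; do
        [ -f "$f" ] && mv "$f" "${f}_BACKUP"
    done
    echo "backed up $s"
done
```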

If the password is not reset correctly, the backed up files and folders can be restored.
Note: For certain servers, the boot.properties file may be present in only one location of the two specified above. In such a case, back it up and then delete it.

Step 3: Set up a new environment to change the WLS AdminServer password.
i. Start a new session and connect to the Oracle E-Business Suite instance.
ii. Do not source the application tier environment file.
iii. Run the following command to source the WebLogic Server domain environment:
$ cd <EBS_DOMAIN_HOME>/bin
# cd /u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp/bin
# . setDomainEnv.sh

iv. Run the following commands:
$ cd <EBS_DOMAIN_HOME>/security
$ java weblogic.security.utils.AdminAccount <wls_adminuser> <wls_admin_new_password> .
Where:
• <wls_adminuser> is the same as the value of context variable s_wls_admin_user
• <wls_admin_new_password> is the new WLS AdminServer password you wish to set.
Note: Do not omit the trailing period ('.') in the above command: it is needed to specify the current domain directory.
# more $CONTEXT_FILE | grep s_wls_admin_user
<wls_admin_user oa_var="s_wls_admin_user">weblogic</wls_admin_user>
# echo $EBS_DOMAIN_HOME
/u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp
# cd /u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp/security
# java weblogic.security.utils.AdminAccount weblogic welcome123 .


Step 4: Start AdminServer from the command line. You will be prompted for the WebLogic
Server username and password, so that the AdminServer boot.properties file can be generated.
i. Go to the EBS Domain Home:
$ cd <EBS_DOMAIN_HOME>
# cd /u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp
ii. Start AdminServer:
$ java <s_nm_jvm_startup_properties> -Dweblogic.system.StoreBootIdentity=true -Dweblogic.Name=AdminServer weblogic.Server
Where:
• <s_nm_jvm_startup_properties> is the same as the value of context variable s_nm_jvm_startup_properties in the CONTEXT_FILE
# more $CONTEXT_FILE | grep s_nm_jvm_startup_properties
         <nm_jvm_startup_properties oa_var="s_nm_jvm_startup_properties" osd="LINUX_X86-64">-XX:PermSize=512m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m -Djava.security.policy=/u02/erptmp/fs2/FMW_Home/wlserver_10.3/server/lib/weblogic.policy -Djava.security.egd=file:/dev/./urandom -Dweblogic.ProductionModeEnabled=true -da -Dplatform.home=/u02/erptmp/fs2/FMW_Home/wlserver_10.3 -Dwls.home=/u02/erptmp/fs2/FMW_Home/wlserver_10.3/server -Dweblogic.home=/u02/erptmp/fs2/FMW_Home/wlserver_10.3/server -Dcommon.components.home=/u02/erptmp/fs2/FMW_Home/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp -Djrockit.optfile=/u02/erptmp/fs2/FMW_Home/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp/config/fmwconfig/servers/AdminServer -Doracle.domain.config.dir=/u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp/config/fmwconfig -Digf.arisidbeans.carmlloc=/u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp/config/fmwconfig/carml -Digf.arisidstack.home=/u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u02/erptmp/fs2/FMW_Home/user_projects/domains/EBS_domain_erptmp/servers/AdminServer/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u02/erptmp/fs2/FMW_Home/oracle_common/modules/oracle.ossoiap_11.1.1,/u02/erptmp/fs2/FMW_Home/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.jdbc.remoteEnabled=false -Dportlet.oracle.home=/u02/erptmp/fs2/FMW_Home/oracle_common -Dem.oracle.home=/u02/erptmp/fs2/FMW_Home/oracle_common 
-Dweblogic.management.discover=true -Dwlw.iterativeDev=false -Dwlw.testConsole=false -Dwlw.logErrorsToConsole=false -Dweblogic.ext.dirs=/u02/erptmp/fs2/FMW_Home/patch_wls1036/profiles/default/sysext_manifest_classpath</nm_jvm_startup_properties>

#java -XX:PermSize=512m -XX:MaxPermSize=512m -Xms1024m -Xmx1024m -Djava.security.policy=/u02/erptst/fs2/FMW_Home/wlserver_10.3/server/lib/weblogic.policy -Djava.security.egd=file:/dev/./urandom -Dweblogic.ProductionModeEnabled=true -da -Dplatform.home=/u02/erptst/fs2/FMW_Home/wlserver_10.3 -Dwls.home=/u02/erptst/fs2/FMW_Home/wlserver_10.3/server -Dweblogic.home=/u02/erptst/fs2/FMW_Home/wlserver_10.3/server -Dcommon.components.home=/u02/erptst/fs2/FMW_Home/oracle_common -Djrf.version=11.1.1 -Dorg.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger -Ddomain.home=/u02/erptst/fs2/FMW_Home/user_projects/domains/EBS_domain_erptst -Djrockit.optfile=/u02/erptst/fs2/FMW_Home/oracle_common/modules/oracle.jrf_11.1.1/jrocket_optfile.txt -Doracle.server.config.dir=/u02/erptst/fs2/FMW_Home/user_projects/domains/EBS_domain_erptst/config/fmwconfig/servers/AdminServer -Doracle.domain.config.dir=/u02/erptst/fs2/FMW_Home/user_projects/domains/EBS_domain_erptst/config/fmwconfig -Digf.arisidbeans.carmlloc=/u02/erptst/fs2/FMW_Home/user_projects/domains/EBS_domain_erptst/config/fmwconfig/carml -Digf.arisidstack.home=/u02/erptst/fs2/FMW_Home/user_projects/domains/EBS_domain_erptst/config/fmwconfig/arisidprovider -Doracle.security.jps.config=/u02/erptst/fs2/FMW_Home/user_projects/domains/EBS_domain_erptst/config/fmwconfig/jps-config.xml -Doracle.deployed.app.dir=/u02/erptst/fs2/FMW_Home/user_projects/domains/EBS_domain_erptst/servers/AdminServer/tmp/_WL_user -Doracle.deployed.app.ext=/- -Dweblogic.alternateTypesDirectory=/u02/erptst/fs2/FMW_Home/oracle_common/modules/oracle.ossoiap_11.1.1,/u02/erptst/fs2/FMW_Home/oracle_common/modules/oracle.oamprovider_11.1.1 -Djava.protocol.handler.pkgs=oracle.mds.net.protocol -Dweblogic.jdbc.remoteEnabled=false -Dportlet.oracle.home=/u02/erptst/fs2/FMW_Home/oracle_common -Dem.oracle.home=/u02/erptst/fs2/FMW_Home/oracle_common -Dweblogic.management.discover=true -Dwlw.iterativeDev=false -Dwlw.testConsole=false 
-Dwlw.logErrorsToConsole=false -Dweblogic.ext.dirs=/u02/erptst/fs2/FMW_Home/patch_wls1036/profiles/default/sysext_manifest_classpath -Dweblogic.system.StoreBootIdentity=true -Dweblogic.Name=AdminServer weblogic.Server
The above command prompts for the WebLogic Server username and password:
Enter username to boot WebLogic server: weblogic
Enter password to boot WebLogic server: xxxx
Provide the same credentials as you provided in Step 3.

Step 5: Change Node Manager password
i. Log in to the WebLogic Administration console.
ii. Click the 'Lock & Edit' button.
iii. In the left panel, click on the EBS Domain link.
iv. Select the 'Security' tab.
v. Click on the 'Advanced' link.
vi. Edit the 'Node Manager password' field and set it to the new WebLogic Server password. It should be the same as the password set in Step 3.
vii. Edit the 'Confirm Node Manager Password' field and set it to the same value.
viii. Save and activate the changes.

Step 6: The first time, AdminServer must be stopped from the Administration Console. Follow these steps:
i. Log in to the WebLogic Administration console.
ii. Shut down AdminServer.

Step 7: Set up your environment to start AdminServer again. AdminServer should now be
started using the normal AD script, which will also start Node Manager using the
new password.
i. Launch a new session and connect to the Oracle E-Business Suite instance.
ii. Source the application tier environment file.
iii. Start AdminServer with the following command:
$ $ADMIN_SCRIPTS_HOME/adadminsrvctl.sh start

Step 8: Start the Managed Servers. The first time, all Managed Servers should be started from the WebLogic Server Administration Console. This step creates the boot.properties files for the respective Managed Servers. Follow these steps:
i. Log in to the WebLogic Server Administration Console.
ii. Start all Managed Servers, one at a time.

Step 9: Shut down all the Managed Servers, so that the new credentials are picked up at the next startup. Follow these steps:
i. Log in to the WebLogic Server Administration Console.
ii. Shut down all Managed Servers.
iii. Shut down AdminServer.

Step 10: Shut down Node Manager using the normal AD script.
$ $ADMIN_SCRIPTS_HOME/adnodemgrctl.sh stop

Step 11: Copy the boot.properties file for each Managed Server. WebLogic Server native scripts use the boot.properties file. The above steps have created the boot.properties file under <EBS_DOMAIN_HOME>/servers/<Managed Server name>/data/nodemanager, which is used by Node Manager.
For each Managed Server, copy the newly generated boot.properties file from <EBS_DOMAIN_HOME>/servers/<Managed Server name>/data/nodemanager to <EBS_DOMAIN_HOME>/servers/<Managed Server name>/security
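The per-server copy described here can be scripted. A minimal sketch, assuming you pass your own <EBS_DOMAIN_HOME> path and Managed Server names (the helper name and example server names are illustrative, not standard tooling):

```shell
#!/bin/sh
# Hypothetical helper: copy each Managed Server's regenerated
# boot.properties from the Node Manager directory to the security
# directory. Pass the domain home followed by the server names.
copy_boot_props() {
  domain_home="$1"; shift
  for srv in "$@"; do
    src="$domain_home/servers/$srv/data/nodemanager/boot.properties"
    dst="$domain_home/servers/$srv/security"
    mkdir -p "$dst"
    cp -p "$src" "$dst/boot.properties"
  done
}
# e.g. copy_boot_props "$EBS_DOMAIN_HOME" oacore_server1 oafm_server1 forms_server1
```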
The EBS WebLogic Server domain password has now been changed, and all servers can now be started using the normal AD scripts.
To start AdminServer:
$ $ADMIN_SCRIPTS_HOME/adadminsrvctl.sh start
To start the Managed Servers:
$ $ADMIN_SCRIPTS_HOME/admanagedsrvctl.sh start <managed_server_name>
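Starting several Managed Servers in sequence can be wrapped in a small loop. A sketch, assuming ADMIN_SCRIPTS_HOME is already set by the sourced environment file (the wrapper name and example server names are placeholders):

```shell
#!/bin/sh
# Hypothetical wrapper: start each Managed Server in turn using the
# normal AD control script. ADMIN_SCRIPTS_HOME must already be set.
start_managed_servers() {
  for srv in "$@"; do
    "$ADMIN_SCRIPTS_HOME/admanagedsrvctl.sh" start "$srv"
  done
}
# e.g. start_managed_servers oacore_server1 oafm_server1 forms_server1
```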

Step 12: The above steps have changed the Oracle WebLogic AdminServer password on the run file system. You now need to perform an fs_clone operation to change the WebLogic EBS Domain password on the patch file system:
i. Launch a new session and connect to the Oracle E-Business Suite instance.
ii. Source the application tier environment file.
iii. Run the command:
$ adop phase=fs_clone

For any questions, please email me samiappsdba@gmail.com

Wednesday, August 30, 2017

AccessGate Login Fails with HTTP 500 Error after ADOP Cycle

I was facing this problem after a successful ADOP cycle: Single Sign-On (SSO) stopped working and threw an HTTP 500 error at login.
The ADOP cycle itself completed successfully, but on attempting to access the application we got the following error:
EBS 12.2 AccessGate Login Fails with HTTP 500 Error 'Request Failed for : /accessgate/ssologin?, Resp Code : [500]' On Secondary Nodes After A Complete ADOP Patching Cycle As File oaea_wls.properties is Not Updated On Multiple Filesystems (fs2)

In some cases, fs_clone can fail to copy the file oaea_wls.properties to the other filesystem (fs2) on the additional apps nodes.
For example, on the secondary apps tiers running from the other filesystem (fs2), the file oaea_wls.properties will have null settings for some parameters:
# +======================================================================+
# |    Copyright (c) 2005, 2014 Oracle and/or its affiliates.           |
# |                         All rights reserved.                         |
# |                           Version 12.0.0                             |
# +======================================================================+
# $Header: oaea_wls_properties.tmp 120.0.12020000.1 2014/07/15 12:36:37 mmanku noship $
#
# This file lists all the  parameters required to start acessgate application
# /u03/eruat/fs1/EBSapps/appl/fnd/12.0.0
# ###############################################################
#
# Do not edit settings in this file manually. They are managed
# automatically and will be overwritten when AutoConfig runs.
# For more information about AutoConfig, refer to the Oracle
# E-Business Suite Setup Guide.
#
# ###############################################################
SSO_SERVER_RELEASE=
SSO_SERVER_URL=
SSO_SERVER_TYPE=
LOG_CONFIG_FILE=DEFAULT
APPL_SERVER_ID=
OAM_LOGOUT_URL=
CONNECTION_REF=
WEBGATE_LOGOUT=
oracle.apps.fnd.sso.WebEntries=DEFAULT
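A quick way to spot an affected node is to scan the file for required parameters that are empty. A hypothetical check (the function name is illustrative, and the parameter list is taken from the listing above; adjust as needed):

```shell
#!/bin/sh
# Hypothetical check: print any key oaea_wls.properties parameters
# that have empty values on this node.
check_oaea_props() {
  awk -F= '/^(SSO_SERVER_RELEASE|SSO_SERVER_URL|SSO_SERVER_TYPE|APPL_SERVER_ID|OAM_LOGOUT_URL|CONNECTION_REF|WEBGATE_LOGOUT)=/ && $2 == "" { print $1 " is empty" }' "$1"
}
# e.g. check_oaea_props $INST_TOP/appl/admin/oaea_wls.properties
```

If this prints anything on a node, that node's file needs the workaround below.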

In certain cases for an ONLINE patching cycle, the file $INST_TOP/appl/admin/oaea_wls.properties does not get copied from RUN to PATCH.
The issue is documented in the following unpublished defect:
Bug 25720619 - ERROR 500 INTERNAL SERVER ERROR TRYING TO LOGIN AFTER PATCH CYCLE
Workaround Fix:
To resolve the issue, test the following steps in a development instance and then migrate accordingly:
1. Ensure a valid backup exists for the instance where the solution is being tested.
2. Copy the file $INST_TOP/appl/admin/oaea_wls.properties from the original filesystem (fs1) to the additional filesystems (fs2), to migrate the proper node settings.
3. Restart the servers to ensure the file changes are picked up.
4. Retest the login and confirm the error is resolved.
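Workaround step 2 can be sketched as a small copy helper. A hypothetical example (the helper name is illustrative, and the RUN_INST_TOP/PATCH_INST_TOP variables in the usage line are placeholders for your own fs1 and fs2 INST_TOP paths):

```shell
#!/bin/sh
# Hypothetical helper: copy oaea_wls.properties from the run filesystem
# to the other filesystem if the two copies differ.
sync_oaea_props() {
  src="$1"; dst="$2"
  if ! diff -q "$src" "$dst" >/dev/null 2>&1; then
    cp -p "$src" "$dst"
  fi
}
# e.g. sync_oaea_props "$RUN_INST_TOP/appl/admin/oaea_wls.properties" \
#                      "$OTHER_INST_TOP/appl/admin/oaea_wls.properties"
```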
The file oaea_wls.properties should then contain valid (non-null) values for the parameters listed above on each filesystem.
As of 28-August-2017, the latest update on this issue is that a patch for defect 25720619 is planned for release in the next AD / TXK Delta patchset, with no release date information currently available.

For any queries on this article, kindly email me samiappsdba@gmail.com