Oracle Database 12c: Managing Parameters in a Container Database
Last week I attended the 'Oracle Database 12c New Features' course at Oracle University and found that the 12c new features are really good, especially the new concept of Container and Pluggable Databases, restoring tables from RMAN backup, and many other fascinating features.
Here is my first short article, on managing parameters in a container database.
In the container architecture, a Pluggable Database (PDB) inherits its parameter values from the root container. That means if statistics_level=ALL is set in the root, that value cascades to the PDBs.
You can override this with ALTER SYSTEM SET, provided the parameter is PDB-modifiable; there is a new column in V$SYSTEM_PARAMETER, ISPDB_MODIFIABLE, for exactly this purpose.
For some parameters, the inheritance property must remain true. For other parameters, you can change the inheritance property by running ALTER SYSTEM SET for the parameter while the current container is the PDB. If ISPDB_MODIFIABLE is TRUE for an initialization parameter in the V$SYSTEM_PARAMETER view, then the inheritance property can be false for that parameter.
SQL> select NAME, ISSES_MODIFIABLE, ISSYS_MODIFIABLE, ISINSTANCE_MODIFIABLE, ISPDB_MODIFIABLE from V$SYSTEM_PARAMETER where name='open_cursors';
NAME ISSES ISSYS_MOD ISINS ISPDB
----------------- ----- --------- ----- -------
open_cursors FALSE IMMEDIATE TRUE TRUE
In the example above, ISPDB_MODIFIABLE is TRUE for open_cursors, which means the value inherited from the root can be overridden inside a PDB.
Further, a parameter can be changed for all containers at once, or for a specific container (PDB), by using the CONTAINER clause of ALTER SYSTEM.
The following changes the open_cursors parameter to 200 in the root and all the PDBs:
ALTER SYSTEM SET OPEN_CURSORS = 200 CONTAINER = ALL;
If you are logged into the root, the following also changes open_cursors to 200 in the root and in all the PDBs, because of inheritance: a value set in the root applies to every PDB that has not overridden it.
ALTER SYSTEM SET OPEN_CURSORS = 200 CONTAINER = CURRENT;
If you are logged into a PDB, the following changes open_cursors to 200 in that PDB only:
ALTER SYSTEM SET OPEN_CURSORS = 200 CONTAINER = CURRENT;
Important note: PDB-level parameter changes cannot be persisted when the instance uses a PFILE; you must use an SPFILE.
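Putting it together, here is a minimal sketch, assuming a PDB named pdb1 (a hypothetical name), of checking whether a parameter is PDB-modifiable and then overriding it at the PDB level:
-- Is open_cursors allowed to be set per PDB?
SQL> select name, ispdb_modifiable from v$system_parameter where name = 'open_cursors';
-- From the root (CDB$ROOT): set the value for the root and all PDBs
SQL> alter system set open_cursors = 200 container = all;
-- Switch into the PDB and override the inherited value there only
SQL> alter session set container = pdb1;
SQL> alter system set open_cursors = 300 container = current;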
Friday, November 22, 2013
Changing OACORE JVMs in R12 without autoconfig
Following are the steps to increase the number of OACORE JVMs in R12 with minimal downtime (under one minute) and to preserve the change even after AutoConfig is run at a later time.
Steps
1. On the node where Apache is running, go to the $ORA_CONFIG_HOME/10.1.3/opmn/conf folder.
2. Take a backup of the opmn.xml file and edit the following section:
<process-type id="oacore" module-id="OC4J" status="enabled" working-dir="$ORACLE_HOME/j2ee/home">
<module-data>
<category id="start-parameters">
<data id="java-options" value="-server -verbose:gc -Xmx1024M -Xms512M -XX:MaxPermSize=256M -XX:NewRatio=2 -XX:+PrintGCTimeStamps -XX:+UseTLAB -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dcom.sun.management.jmxremote -Djava.security.policy=$ORACLE_HOME/j2ee/oacore/config/java2.policy -Djava.awt.headless=true -Dhttp.webdir.enable=false -Doracle.security.jazn.config=/u01/cell/inst/apps/cell_edolapplin1/ora/10.1.3/j2ee/oacore/config/jazn.xml -Dhttp.cookie.ignoreCommaInCookiesNamed=X_NoMatchingCookies"/>
<data id="java-bin" value="/u01/cell/inst/apps/cell_edolapplin1/admin/scripts/java.sh"/>
<data id="oc4j-options" value="-out /u01/cell/inst/apps/cell_edolapplin1/logs/ora/10.1.3/opmn/oacorestd.out -err /u01/cell/inst/apps/cell_edolapplin1/logs/ora/10.1.3/opmn/oacorestd.err"/>
</category>
<category id="stop-parameters">
<data id="java-options" value="-server -verbose:gc -Xmx512M -Xms128M -XX:MaxPermSize=160M -XX:NewRatio=2 -XX:+PrintGCTimeStamps -XX:+UseTLAB -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Djava.security.policy=$ORACLE_HOME/j2ee/oacore/config/java2.policy -Djava.awt.headless=true -Dhttp.webdir.enable=false"/>
</category>
<category id="security-parameters">
<data id="wallet-file" value="file:/u01/cell/inst/apps/cell_edolapplin1/certs/Apache"/>
</category>
</module-data>
<start timeout="600" retry="2"/>
<stop timeout="120"/>
<restart timeout="720" retry="2"/>
<port id="default-web-site" range="21670-21674" protocol="ajp"/>
<port id="rmi" range="20170-20174"/>
<port id="jms" range="23170-23174"/>
<process-set id="default_group" numprocs="5"/>
</process-type>
In the above opmn.xml block, change the last tag from
<process-set id="default_group" numprocs="3"/>
to
<process-set id="default_group" numprocs="5"/>
3. In the context file $INST_TOP/appl/admin/celldb_edolapplin1.xml, change the following block from
<oacore_nprocs oa_var="s_oacore_nprocs">3</oacore_nprocs>
to
<oacore_nprocs oa_var="s_oacore_nprocs">5</oacore_nprocs>
4. There is no need to run AutoConfig; just reload OPMN with the command below:
# adopmnctl.sh reload
5. Validate that 5 JVMs have indeed been started:
# adopmnctl.sh status
Risks and pointers:
1. Make sure the rmi and jms port ranges each cover 5 ports. Using netstat, check whether those ports are already in use (see the sketch after this list).
2. If a port range does not provide 5 ports, assign 5 ports as per the port pool value.
3. Make the corresponding change in the context file, in the tags below:
<port id="rmi" range="20170-20174"/>
<port id="jms" range="23170-23174"/>
4. Alternatively, you can customize the relevant AutoConfig template rather than editing the generated XML file directly.
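Before reloading OPMN, here is a quick sanity check, assuming the port numbers from this example opmn.xml (adjust them to your own port pool), that none of the five rmi or five jms ports are already taken:
for p in 20170 20171 20172 20173 20174 23170 23171 23172 23173 23174
do
  netstat -an | grep -q ":$p " && echo "port $p is IN USE" || echo "port $p is free"
done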
For any questions on this article please don't hesitate to email me on samiappsdba@gmail.com
Tuesday, October 29, 2013
Parallel Concurrent Processing in Oracle Apps R12
In Oracle Apps, by default, the concurrent managers are installed on one of the nodes. However, Oracle provides a feature called Parallel Concurrent Processing (PCP), whereby we can run concurrent managers on multiple nodes. The advantage of PCP is that it provides failover capability: if one of the nodes running a concurrent manager (CM) goes down, the CM runs on another node. Also, since we can distribute CMs over multiple nodes, they take advantage of the RAM and CPU of each node, so processing is faster.
Steps for Implementing PCP
• Make sure that the new node is added to the system. We will call the node that already has the CM the primary node and the new node the secondary node.
• Change the parameter APPLDCP to ON in the context file on both nodes. So that report logs/output can be viewed from either node, make sure $APPLCSF points to the same directory on both nodes; NFS can be used for this.
• On the secondary node, change the context variable s_isConc to YES and also s_isConcDev to YES.
• Now shut down the services and run AutoConfig on the primary node, then on the secondary node, and finally on the web tier.
•Ensure that tnsnames.ora on both CM nodes have correct entries.
• Now define the primary and secondary nodes for the ICM. Go to Concurrent > Manager > Define > Internal Manager. There should be 2 ICMs, one for each node (say node A and node B). For the ICM for node A, define A as the primary node and B as the secondary node. For the ICM for node B, define B as the primary node and A as the secondary node.
• Similarly, define primary and secondary nodes for the Internal Monitor processes. Go to Concurrent > Manager > Define and search for "Internal Monitor%". There should be 2 Internal Monitors, one for each node (say A and B). For the Internal Monitor for node A, define A as the primary node and B as the secondary node. For the Internal Monitor for node B, define B as the primary node and A as the secondary node. Also define a standard workshift for both Internal Monitors, and activate them.
• Now define primary and secondary nodes for the other concurrent managers according to how you want to distribute them.
• Start the services on all nodes. The CM has to be started on the primary node only, not on the secondary node. A quick verification query follows this list.
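As a quick sanity check after the restart, you can confirm the node assignments from the database. The sketch below is run as the APPS user and assumes the standard FND_CONCURRENT_QUEUES columns NODE_NAME (primary), NODE_NAME2 (secondary), and TARGET_NODE:
SQL> select concurrent_queue_name, node_name, node_name2, target_node
     from fnd_concurrent_queues
     where concurrent_queue_name = 'FNDICM' or concurrent_queue_name like 'FNDIM%';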
References-
Note: 388495.1 - How to Set Up Parallel Concurrent Processing (PCP) in Apps
Note: 602899.1 - Some More Facts On How to Activate Parallel Concurrent Processing
Note: 271090.1 - Parallel Concurrent Processing Failover/Failback Expectations
Monday, September 23, 2013
Cloning EBS r12 Applications & DB 11gR2
The following are the steps I performed while preparing a clone of an Oracle E-Business Suite Release 12.1.3 application with an 11.2.0.3 database on Oracle Enterprise Linux 5.7.
STEP 1: Prepare the target server on which the clone is to be created.
The target server configuration is as follows:
OS= Oracle Enterprise Linux 5.7 Server
ServerName= ed-olsrvlin1
On this target server, the 11.2.0.3 database software is already installed. If the software is not installed, you can follow my article below to clone the database home itself by copying it from the production server:
http://samiora.blogspot.ae/2012/10/cloning-oracle-database-home.html
Now source the environment,
[oracle@ed-olsrvlin1 ~]$ vi .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/u03/research/product/11.2.0/dbhome_1/bin
export PATH
[oracle@ed-olsrvlin1 ~]$ . /u03/research/product/11.2.0/dbhome_1/research_ebsdlxsrv1.env
export ORACLE_SID=research;
export ORACLE_HOME=/u03/research/product/11.2.0/dbhome_1;
[oracle@ed-olsrvlin1 ~]$ . .bash_profile
[oracle@ed-olsrvlin1 ~]. /u01/research/product/11.2.0/dbhome_1/research_ed-olsrvlin1.env
export ORACLE_SID=research
STEP 2: Create the initresearch.ora file and add the parameters below,
[oracle@ed-olsrvlin1 dbs]$ vi initresearch.ora
*.aq_tm_processes=1
*.cluster_database=FALSE
*.compatible='11.2.0.3.0'
*.control_files='/u01/research/oradata/control01.ctl'
*.cursor_sharing='EXACT'
*.db_block_checking='FALSE'
*.db_block_checksum='TRUE'
*.db_block_size=8192
*.db_create_file_dest='/u01/research/oradata'
*.db_files=512                  # Max. no. of database files
*.db_name='research'
*.db_recovery_file_dest_size=40960m
*.db_recovery_file_dest='/u01/research/archives'
*.diagnostic_dest='/u01/research/product/11.2.0/dbhome_1/admin'
*.dml_locks=10000
*.instance_number=1
*.job_queue_processes=0
#*.log_archive_dest_1='LOCATION=/u01/research/archives'
*.log_archive_format='%t_%s_%r.arc'
#*.log_archive_start=TRUE
*.log_buffer=10485760
*.log_checkpoint_interval=100000
*.log_checkpoint_timeout=1200   # Checkpoint at least every 20 mins.
*.log_checkpoints_to_alert=TRUE
*.max_dump_file_size='20480'    # Trace file size
*.nls_comp='binary'             # Required 11i setting
*.nls_date_format='DD-MON-RR'
*.nls_length_semantics='BYTE'   # Required 11i setting
*.nls_numeric_characters='.,'
*.nls_sort='binary'             # Required 11i setting
*.nls_territory='america'
*.o7_dictionary_accessibility=FALSE
*.olap_page_pool_size=4194304
*.open_cursors=2000             # Consumes process memory, unless using MTS.
*.optimizer_secure_view_merging=false
*.OS_AUTHENT_PREFIX=''
*.parallel_max_servers=8
*.parallel_min_servers=0
*.pga_aggregate_target=1G
*.plsql_code_type='INTERPRETED' # Default 11i setting
*.plsql_optimize_level=2        # Required 11i setting
*.processes=5000                # Max. no. of users x 2
*.sec_case_sensitive_logon=FALSE
*.session_cached_cursors=500
*.sessions=10000                # 2 x processes
*.sga_target=8G
*.shared_pool_reserved_size=500M
*.shared_pool_size=2000M
*.SQL92_SECURITY=TRUE
*.undo_management='AUTO'        # Required 11i setting
*.undo_tablespace='UNDOTS1'
*.utl_file_dir='/usr/tmp','/u01/research/product/11.2.0/dbhome_1/appsutil/outbound'
*.workarea_size_policy='AUTO'   # Required 11i setting
STEP 3: Create password file,
[oracle@ed-olsrvlin1 dbs]$ orapwd file=pwdresearch.ora password=oracle entries=5
STEP 4: Add the below entry,
[oracle@ed-olsrvlin1 dbs]$ vi /etc/oratab
research:/u01/research/product/11.2.0/dbhome_1:N
STEP 5: Start preparing the database,
[oracle@ed-olsrvlin1 u01]$ echo $ORACLE_HOME
/u01/research/product/11.2.0/dbhome_1
[oracle@ed-olsrvlin1 u01]$ echo $ORACLE_SID
research
[oracle@ed-olsrvlin1 u01]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.3.0 Production on Thu Jul 11 16:42:09 2013
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup nomount;
[oracle@ed-olsrvlin1 u01]$ rman target /
RMAN> restore controlfile from '/u01/research/backup_dump/s_820506531.7639.820506533';
RMAN> alter database mount;
RMAN> catalog start with '/u01/research/backup_dump' noprompt;
SQL> select file#,name from v$datafile;
file# NAME
--------------------------------------------------------
1 +DATA1/research/datafile/system.697.818755525
2 +DATA1/research/datafile/system.699.818755511
SQL> select 'set newname for datafile ' || file# || ' to ' || '''' || '/u01/research/oradata' || ''';' from v$datafile;
(Edit the generated lines to append each datafile's file name, as in the RUN block below.)
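If you would rather have the query emit complete file names instead of editing its output by hand, a variant like the one below works, because ASM file names use '/' separators; the target directory is the one used throughout this clone:
SQL> select 'set newname for datafile ' || file# || ' to ''/u01/research/oradata/'
     || substr(name, instr(name, '/', -1) + 1) || ''';'
     from v$datafile;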
RMAN> RUN
{
set newname for datafile 1 to '/u01/research/oradata/system.697.818755525';
set newname for datafile 2 to '/u01/research/oradata/system.699.818755511';
restore database;
switch datafile all;
recover database;
}
Starting recover at 11-JUL-13
using channel ORA_DISK_1
starting media recovery
Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/research/oradata/system.697.818755525'
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 07/11/2013 17:52:40
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of archived log for thread 2 with sequence 6203 and starting SCN of 8212807471834 found to restore
RMAN-06025: no backup of archived log for thread 2 with sequence 6202 and starting SCN of 8212807421416 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 6162 and starting SCN of 8212807589317 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 6161 and starting SCN of 8212807516248 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 6160 and starting SCN of 8212807466762 found to restore
RMAN-06025: no backup of archived log for thread 1 with sequence 6159 and starting SCN of 8212807415098 found to restore
STEP 6: Copy the few archived log files that are required from production. If you want to skip this step, include the archived logs in the database backup in the first place.
asmcmd>
cp thread_1_seq_6159.7633.820505169 '/tmp/xxaarcs/thread_1_seq_6159.7633.820505169'
cp thread_1_seq_6160.7635.820505485 '/tmp/xxaarcs/thread_1_seq_6160.7635.820505485'
cp thread_1_seq_6161.7636.820505729 '/tmp/xxaarcs/thread_1_seq_6161.7636.820505729'
cp thread_2_seq_6234.7514.820528047 '/tmp/xxaarcs/thread_2_seq_6234.7514.820528047'
Now copy the archives to the destination server where we are cloning, where the archived log files were reported missing during recovery:
[root@proddbsrv1 tmp]# scp * oracle@ed-olsrvlin1:/u01/research/archives/
STEP 7: Now, on the target server, resume the recovery:
RMAN> CATALOG START WITH '/u01/research/archives';
RMAN> RECOVER DATABASE;
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 07/11/2013 20:57:50
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 6252 and starting SCN of 8212809557297
This final RMAN-06054 is expected: recovery has applied all available archived logs and is asking for the next sequence, which was never generated. The database can now be opened with RESETLOGS after the redo log maintenance in the next step.
STEP 8: Once the RMAN restore/recovery finishes, rename the online redo log files before opening the database, since the production path of the redo log files is not available on the new host.
After renaming the redo log files, the database can be opened with RESETLOGS.
SQL> select member from v$logfile;
MEMBER
--------------------------------------------------------------------------------
+DATA1/research/onlinelog/group_1.671.818755739
+DATA1/research/onlinelog/group_2.670.818755739
+DATA1/research/onlinelog/group_3.669.818755739
+DATA1/research/onlinelog/group_4.668.818755739
+DATA1/research/onlinelog/group_5.667.818755739
+DATA1/research/onlinelog/group_6.676.818755743
+DATA1/research/onlinelog/group_7.675.818755743
+DATA1/research/onlinelog/group_8.674.818755743
+DATA1/research/onlinelog/group_9.673.818755743
+DATA1/research/onlinelog/group_10.672.818755743
alter database rename file '+DATA1/research/onlinelog/group_1.671.818755739' TO '/u01/research/onlinelog/group_1.671.818755739';
alter database rename file '+DATA1/research/onlinelog/group_2.670.818755739' TO '/u01/research/onlinelog/group_2.670.818755739';
alter database rename file '+DATA1/research/onlinelog/group_3.669.818755739' TO '/u01/research/onlinelog/group_3.669.818755739';
alter database rename file '+DATA1/research/onlinelog/group_4.668.818755739' TO '/u01/research/onlinelog/group_4.668.818755739';
alter database rename file '+DATA1/research/onlinelog/group_5.667.818755739' TO '/u01/research/onlinelog/group_5.667.818755739';
alter database rename file '+DATA1/research/onlinelog/group_6.676.818755743' TO '/u01/research/onlinelog/group_6.676.818755743';
alter database rename file '+DATA1/research/onlinelog/group_7.675.818755743' TO '/u01/research/onlinelog/group_7.675.818755743';
alter database rename file '+DATA1/research/onlinelog/group_8.674.818755743' TO '/u01/research/onlinelog/group_8.674.818755743';
alter database rename file '+DATA1/research/onlinelog/group_9.673.818755743' TO '/u01/research/onlinelog/group_9.673.818755743';
alter database rename file '+DATA1/research/onlinelog/group_10.672.818755743' TO '/u01/research/onlinelog/group_10.672.818755743';
SQL> select THREAD#, STATUS, ENABLED from v$thread;
THREAD# STATUS ENABLED
---------- ------ --------
1 OPEN PUBLIC
2 CLOSED PUBLIC
SQL> select group# from v$log where THREAD#=2;
GROUP#
----------
6
7
8
9
10
SQL> alter database disable thread 2;
SQL> alter database drop logfile group 6;
alter database drop logfile group 6
*
ERROR at line 1:
ORA-00350: log 6 of instance research2 (thread 2) needs to be archived
ORA-00312: online log 6 thread 2: '/u01/research/onlineloggroup_6.676.818755743'
SQL> alter database clear unarchived logfile group 6;
SQL> alter database drop logfile group 6;
SQL> alter database drop logfile group 7;
SQL> alter database drop logfile group 8;
SQL> alter database drop logfile group 9;
SQL> alter database drop logfile group 10;
SQL> select THREAD#, STATUS, ENABLED from v$thread;
THREAD# STATUS ENABLED
---------- ------ --------
1 OPEN PUBLIC
STEP 9: Now remove the undo tablespaces of the other instances and create a new temporary tablespace to complete the activity.
SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTS1
SQL> select tablespace_name from dba_tablespaces where contents='UNDO';
TABLESPACE_NAME
------------------------------
UNDOTS1
UNDOTBS2
SQL> drop tablespace UNDOTBS2 including contents and datafiles;
SQL> select name from v$tempfile;
NAME
--------------------------------------------------------------------------------
/u01/research/oradata/research/datafile/o1_mf_temp_8xxsm1bt_.tmp
/u01/research/oradata/research/datafile/o1_mf_temp_8xxsm19v_.tmp
/u01/research/oradata/research/datafile/o1_mf_temp_8xxsm18d_.tmp
/u01/research/oradata/research/datafile/o1_mf_temp_8xxsm179_.tmp
/u01/research/oradata/research/datafile/o1_mf_temp_8xxsm15x_.tmp
SQL> select tablespace_name from dba_tablespaces where contents='TEMPORARY';
TABLESPACE_NAME
------------------------------
TEMP
SQL> create temporary tablespace TEMP1 tempfile '/u01/research/oradata/temp01.dbf' size 4096m;
SQL> alter database default temporary tablespace TEMP1;
SQL> ALTER TABLESPACE TEMP1 ADD TEMPFILE '/u01/research/oradata/temp02.dbf' size 4096m;
SQL> ALTER TABLESPACE TEMP1 ADD TEMPFILE '/u01/research/oradata/temp03.dbf' size 4096m;
SQL> drop tablespace temp including contents and datafiles;
STEP 10: The database is now configured. Shut it down and start it up once, then clear the production node entries from this newly cloned database: connect as the APPS database user and execute,
conn apps/appspassword
select node_name from fnd_nodes;
select host_name from v$instance;
execute fnd_conc_clone.setup_clean;
commit;
@/tmp/cmclean.sql
commit;
STEP 11: Disable database archiving.
conn / as sysdba
shutdown immediate;
startup mount;
alter database noarchivelog;
alter database open;
archive log list;
STEP 12: Now fix the LISTENER.ORA, TNSNAMES.ORA files in /u01/research/product/11.2.0/dbhome_1/network/admin/research_ed-olsrvlin1 directory.
export TNS_ADMIN=/u01/research/product/11.2.0/dbhome_1/network/admin/research_ed-olsrvlin1
Then bounce the listener.
lsnrctl stop research
lsnrctl start research
[oracle@ed-olsrvlin1 admin]$ cd /u01/research/product/11.2.0/dbhome_1/appsutil/bin/
STEP 13: Now create the database context file,
[oracle@ed-olsrvlin1 bin]$ which perl
/usr/bin/perl
[oracle@ed-olsrvlin1 bin]$ perl adbldxml.pl
Starting context file generation for db tier..
Using JVM from /u01/research/product/11.2.0/dbhome_1/jdk/jre/bin/java to execute java programs..
APPS Password: appspassword
The log file for this adbldxml session is located at:
/u01/research/product/11.2.0/dbhome_1/appsutil/log/adbldxml_07032128.log
AC-20010: Error: File - listener.ora could not be found at the location:
/listener.ora
indicated by TNS_ADMIN. Context file can not be generated.
Could not Connect to the Database with the above parameters, Please answer the Questions below
Enter Hostname of Database server: ed-olsrvlin1
Enter Port of Database server: 1526
Enter SID of Database server: research
Enter the value for Display Variable: 1
The context file has been created at:
/u01/research/product/11.2.0/dbhome_1/appsutil/research_ed-olsrvlin1.xml
STEP 14: Now run AUTOCONFIG on database server.
[oracle@ed-olsrvlin1 bin]$ ./adconfig.sh
Enter the full path to the Context file: /u01/research/product/11.2.0/dbhome_1/appsutil/research_ed-olsrvlin1.xml
STEP 15: Once all above steps are executed successfully, then start cloning the application tier,
On the application tier server ed-olsrvlin1:
Copy all the application files from production to the ed-olsrvlin1 server. From the production application server, run the copy command below:
[root@prodappsrv1 research] pwd
/u01/research
[root@prodappsrv1 research] scp -r * applmgr@ed-olsrvlin1:/u02/research/
STEP 16: Once the application files are copied then with 'root' user, change the ownership of all the folders,
[root@ed-olsrvlin1 u02]# chown -R applmgr:dba research/
[applmgr@ed-olsrvlin1 bin]
export PATH=/u02/research/apps/tech_st/10.1.3/perl/bin:$PATH
export PERLBIN=/u02/research/apps/tech_st/10.1.3/perl/bin
export PERL5LIB=/u02/research/apps/tech_st/10.1.3/perl/lib/5.8.3:/u02/research/apps/tech_st/10.1.3/perl/lib/site_perl/5.8.3:/u02/research/apps/apps_st/appl/au/12.0.0/perl:/u02/research/apps/tech_st/10.1.3/Apache/Apache/mod_perl/lib/site_perl/5.8.3/i686-linux-thread-multi
STEP 17: Now run the config clone and complete the cloning procedure.
[applmgr@ed-olsrvlin1 bin]cd /u02/research/apps/apps_st/comn/clone/bin
[applmgr@ed-olsrvlin1 bin] ./adcfgclone.pl appsTier
Here, complete all the prompts that appear, for example:
basepath=/u01/research
display to null: n
display=ebsdxlsrv1:0.0
These steps will help you prepare a cloned instance of both the database and application tiers for Oracle E-Business Suite Release 12.1.3.
For any queries or assistance please email me on samiappsdba@gmail.com.
Tuesday, August 13, 2013
Limitation of 11g ASMCMD cp command
The script below can be used to automate copies and work around a limitation of the 11g ASMCMD cp command.
Limitation
The new cp (copy) command introduced in the 11g asmcmd tool has a limitation: it cannot copy a directory, nor multiple files at a time, from the asmcmd prompt.
Solution
I have written the shell script below, which can be scheduled to copy multiple files, for example from the FRA's autobackup directory, from ASM to local disk using the asmcmd utility.
You can schedule the script in crontab as the grid user (a sample entry follows the script).
#!/bin/bash
# multiplefilecopyfromasmtodisk.sh
#
# This script copies files from the FRA on ASM to local disk
#
PATH=/u01/oragrid/11.2.0.2/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH; export PATH
ORACLE_SID=+ASM1; export ORACLE_SID
ORACLE_HOME=/u01/oragrid/11.2.0.2; export ORACLE_HOME   # must match the Grid home used in PATH above
ASMLS=/u01/autobackupControlFile/asm_ls.txt; export ASMLS
FRA=+FRA1/ORCL/AUTOBACKUP/`date +%Y_%m_%d`; export FRA
LOCALBACKUPDIR=/u01/autobackupControlFile; export LOCALBACKUPDIR
LOG=/u01/autobackupControlFile/filecopyfromasm.log; export LOG
#
# Get the list of files in today's autobackup directory
#
cd /u01/oragrid/11.2.0.2/bin/
./asmcmd > $ASMLS <<EOF
ls $FRA
exit
EOF
#
# Clean the list by removing the "ASMCMD> " prompt text
#
sed -i 's/ASMCMD> //g' $ASMLS
cat $ASMLS
echo `date` > $LOG
#
# Copy the files one by one
#
for FILENAME in `cat $ASMLS`
do
./asmcmd >> $LOG <<EOF
cp $FRA/$FILENAME $LOCALBACKUPDIR
exit
EOF
done
cat $LOG
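For reference, here is a sample crontab entry (as the grid user) that runs the script daily at 02:00; the schedule and the script's location are assumptions to adapt:
0 2 * * * /u01/autobackupControlFile/multiplefilecopyfromasmtodisk.sh >> /u01/autobackupControlFile/cron.log 2>&1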
Reference
I recently attended an 'Advanced Shell Scripting' course in Hyderabad that helped me write the above script. I hope you find it useful.
For any further help on this topic please email me, samiora@gmail.com
Monday, July 15, 2013
Oracle 11g Active Database Duplication
Oracle 11g introduced a new feature called Active Database Duplication, with which we can create a duplicate of the target database without any backups.
Duplication is performed over the network.
Overview:
On the source server:
- Create a pfile from the source database
- Create an entry in tnsnames.ora for the duplicate database on the target host on port 1522
On the target server:
- Add a line in the file /etc/oratab to reflect the database instance you are going to copy
- Create folders
- Copy the initialization parameter file from the source database and edit it
- Copy the password file
- Create a listener in the database home on port 1522 and register the duplicate database statically with it
- Start up the duplicate (auxiliary) instance in NOMOUNT mode using the modified parameter file
- Using RMAN, connect to the source database (orcl) as target and the duplicate database (orclt) as the auxiliary instance
- Duplicate the target database
********************************
Source database: orcl
Duplicate database: orclt
***********************************
Implementation
– On source host
– Create a pfile from the source database
SQL> CREATE PFILE='/u01/app/oracle/oradata/orcl/initsource.ora' FROM SPFILE;
– On the source host, create a tnsnames.ora entry for the orclt service pointing at the target host on port 1522 (a minimal sketch follows).
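A minimal tnsnames.ora entry for this step; the host name below is a placeholder for your target server:
ORCLT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = targethost.mydomain.com)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orclt)
    )
  )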
The rest of the steps occur on the target host.
– Add a line in the file /etc/oratab to reflect the database instance you are going to copy
orclt:/u01/app/oracle/product/11.2.0/db1:N
– Now set the Oracle SID to the duplicate database SID: export ORACLE_SID=orclt
– create folders
mkdir -p /u01/app/oracle/oradata/orclt
mkdir -p /u01/app/oracle/flash_recovery_area/orclt
mkdir -p /u01/app/oracle/admin/orclt/adump
mkdir -p /u01/app/oracle/admin/orclt/dpdump
– Copy the initialization parameter file from the main database.
$ cp /u01/app/oracle/oradata/orcl/initsource.ora /u01/app/oracle/oradata/orclt/inittarget.ora
– Edit the initialization parameter file
$ vi /u01/app/oracle/oradata/orclt/inittarget.ora
– Change db_name = orclt
– Edit it to reflect the new locations as appropriate, such as control file locations, audit dump destinations, datafile locations, etc.
– Add these lines:
db_file_name_convert=('/u01/app/oracle/oradata/orcl','/u01/app/oracle/oradata/orclt')
log_file_name_convert=('/u01/app/oracle/oradata/orcl','/u01/app/oracle/oradata/orclt')
If the source and destination databases are on ASM, the following lines can be used instead:
db_file_name_convert=('+DATA/orcl','+DATA/orclt')
log_file_name_convert=('+DATA/orcl','+DATA/orclt','+FRA/orcl','+FRA/orclt')
– Copy the password file as well
$cp /u01/app/oracle/product/11.2.0/db_1/dbs/orapworcl /u01/app/oracle/product/11.2.0/db_1/dbs/orapworclt
– Start up the duplicate (auxiliary) instance in NOMOUNT mode using the modified parameter file
$ . oraenv
ORACLE_SID = [orclt]
$sqlplus sys/oracle as sysdba
SQL> startup nomount pfile='/u01/app/oracle/oradata/orclt/inittarget.ora';
SQL> create spfile from pfile='/u01/app/oracle/oradata/orclt/inittarget.ora';
– Create a listener on port 1522 in the database home on the target host and statically register the service orclt with it (a minimal listener.ora sketch follows).
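A minimal listener.ora sketch for this step; the host name is again a placeholder, and the static SID_DESC is what lets RMAN connect while orclt is still in NOMOUNT:
LISTENER_ORCLT =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = targethost.mydomain.com)(PORT = 1522))
    )
  )
SID_LIST_LISTENER_ORCLT =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = orclt)
      (ORACLE_HOME = /u01/app/oracle/product/11.2.0/db_1)
      (SID_NAME = orclt)
    )
  )
Start it with: lsnrctl start LISTENER_ORCLT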
– connect to the auxiliary instance
$. oraenv
orclt
$rman target sys/oracle@orcl auxiliary sys/oracle@orclt
– duplicate the database orcl to orclt from active database
– the command performs the following steps:
* Creates an SPFILE
* Shuts down the instance and restarts it with the new spfile
* Restores the controlfile from the backup
* Mounts the database
* Performs restore of the datafiles, creating the files with the converted names
* Recovers the datafiles up to the time specified and opens the database
RMAN> duplicate target database to orclt from active database;
– check that duplicate database is up
$sqlplus / as sysdba
SQL> conn hr/hr
SQL> select * from tab;
SQL> select dbid from v$database;
For any further queries email me samiora@gmail.com.
Sunday, June 16, 2013
Convert Single Instance to RAC using RCONFIG
The following Oracle-supported methods are available to convert a single-instance database to a RAC database, as long as the RAC and standalone environments are running on the same OS and using the same Oracle release:
1. RCONFIG
2. DBCA
3. Oracle Enterprise Manager (grid control)
4. Manual method
Here we will see how to convert a single-instance database to RAC using RCONFIG.
During the conversion, rconfig performs the following steps automatically:
• Migrating the database to ASM, if specified
• Creating RAC database instances on all specified nodes in the cluster
• Configuring the Listener and NetService entries
• Registering services with CRS
• Starting up the instances and listener on all nodes
In Oracle 11g R2, a single-instance database can either be converted to an administrator-managed cluster database or a policy-managed cluster database.
The difference between an administrator-managed and a policy-managed cluster database is explained below.
Server pools are logical divisions of a cluster into pools of servers, which are allocated to host databases or other applications. Server pools are managed using crsctl and srvctl commands.
Caution:
By default, any named user may create a server pool. To restrict the operating system users that have this privilege, Oracle strongly recommends that you add specific users to the CRS Administrators list.
Each server pool name must be unique within the cluster. Two server pools cannot use the same name.
There are two types of server pool management:
Administrator-managed: Database administrators define the servers on which database resources run, and place resources manually as needed. This is the management strategy used in previous releases.
Policy managed: Database administrators specify in which server pool (excluding generic or free) the database resource will run. Oracle Clusterware is responsible for placing the database resource on a server.
The server pool name is a required attribute. You can also provide values for the following attributes; if you do not specify them, then they are set to the default value:
MIN_SIZE: Minimum number of servers on which you want a resource to run. The default is 0.
MAX_SIZE: Maximum number of servers on which you want a resource to run. The default is -1, which indicates that resources can run on all available nodes in the cluster.
IMPORTANCE: The relative importance of the resource pool, used to determine how to reconfigure servers when a node joins or leaves the cluster. The default is 0.
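For illustration, this is roughly how a server pool with those attributes is created in 11.2; the pool name mypool and the attribute values are examples:
$ srvctl add srvpool -g mypool -l 1 -u 2 -i 10 -n egtodb01,egtodb02
$ srvctl config srvpool -g mypool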
Note: Clients using Oracle Database 11g release 2 and later databases using policy-managed server pools must access the database using the Single Client Access Name (SCAN). This is required because policy-managed databases can run on different servers at different times, so connecting to a particular node virtual IP address for a policy-managed database is not possible.
When you navigate to $ORACLE_HOME/assistants/rconfig/sampleXMLs, you will find two sample XML input files:
- ConvertToRAC_AdminManaged.xml
- ConvertToRAC_PolicyManaged.xml
While converting a single-instance database with filesystem storage to a RAC database with Automatic Storage Management (ASM), rconfig invokes RMAN internally to back up the database in order to move the non-ASM files into ASM.
CURRENT SCENARIO:-
2 node RAC setup
- Names of nodes : egtodb01, egtodb02
- Name of single instance database with filesystem storage : ebsuat
- Source Oracle home : /u01/app/oracle/product/11.2.0/dbhome_1
- Target Oracle home : /u01/app/oracle/product/11.2.0/dbhome_1
OBJECTIVE
- Convert ebsuat to an admin-managed RAC database running on the two nodes egtodb01 and egtodb02.
- change storage to ASM with
. Datafiles on +DATA diskgroup
. Flash recovery area on +FRA diskgroup
IMPLEMENTATION:
– Copy ConvertToRAC_AdminManaged.xml to another file, my.xml
egtodb01$ cd $ORACLE_HOME/assistants/rconfig/sampleXMLs
egtodb01$ cp ConvertToRAC_AdminManaged.xml my.xml
– Edit my.xml and make the following changes (an illustrative excerpt follows this list):
. Specify the current Oracle home of the non-RAC database for SourceDBHome
. Specify the Oracle home where the RAC database should be configured. It can be the same as SourceDBHome
. Specify the SID of the non-RAC database and its credentials. A user with the sysdba role is required to perform the conversion
. Specify the list of nodes that should have RAC instances running for the admin-managed cluster database
. LocalNode should be the first node in this node list
. The InstancePrefix tag is optional starting with 11.2; if left empty, the prefix is derived from db_unique_name
. Specify the type of storage to be used by the RAC database. Allowable values are CFS|ASM
. Specify the database area location to be configured for the RAC database
. Specify the flash recovery area to be configured for the RAC database
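For orientation, the edited file looks roughly like the excerpt below. This is reconstructed from memory of the shipped sample, so treat every tag name as illustrative and compare it against your own copy in sampleXMLs before use:
<n:RConfig xmlns:n="http://www.oracle.com/rconfig">
  <n:ConvertToRAC>
    <n:Convert verify="YES">
      <n:SourceDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:SourceDBHome>
      <n:TargetDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:TargetDBHome>
      <n:SourceDBInfo SID="ebsuat">
        <n:Credentials>
          <n:User>sys</n:User>
          <n:Password>***</n:Password>
          <n:Role>sysdba</n:Role>
        </n:Credentials>
      </n:SourceDBInfo>
      <n:NodeList>
        <n:Node name="egtodb01"/>
        <n:Node name="egtodb02"/>
      </n:NodeList>
      <n:InstancePrefix>ebsuat</n:InstancePrefix>
      <n:SharedStorage type="ASM">
        <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
        <n:TargetFlashRecoveryArea>+FRA</n:TargetFlashRecoveryArea>
      </n:SharedStorage>
    </n:Convert>
  </n:ConvertToRAC>
</n:RConfig>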
– Run rconfig to convert ebsuat from single instance database to 2 instance RAC database
egtodb01$ rconfig my.xml
– Check the log file for rconfig while conversion is going on
oracle@egtodb01$ ls -lrt $ORACLE_BASE/cfgtoollogs/rconfig/*.log
– check that the database has been converted successfully
egtodb01$srvctl status database -d ebsuat
Instance ebsuat1 is running on node egtodb01
Instance ebsuat2 is running on node egtodb02
– Note that rconfig adds password file to all the nodes but entry to tnsnames.ora needs to be modified (to reflect scan name instead of host-ip) on the local node and added to rest of the nodes.
– For all other nodes, copy the entry for the database ebsuat from tnsnames.ora on local node to tnsnames.ora on remote nodes.
– Following is the entry I modified on the local node and copied to rest of the nodes :
ebsuat =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ebsdb-scan.mydomain.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ebsuat)
)
)
– check that database can be connected remotely from remote node.
egtodb02$sqlplus system/manager@ebsuat
– check that datafiles have converted to ASM
SQL>select name from v$datafile;
NAME
——————————————————————————–
+DATA/ebsuat/datafile/system.326.79483827
+DATA/ebsuat/datafile/sysaux.325.79483834
+DATA/ebsuat/datafile/undotbs1.305.79483805
+DATA/ebsuat/datafile/users.342.79483841
+DATA/ebsuat/datafile/undotbs2.348.79483
For any further questions regarding the subject topic, please don't hesitate to email me samiappsdba@gmail.com
A single-instance database can be converted to RAC using one of the following methods:
1. RCONFIG
2. DBCA
3. Oracle Enterprise Manager (grid control)
4. Manual method
Here we will see how to convert a single-instance database to RAC using RCONFIG.
During the conversion, rconfig performs the following steps automatically:
• Migrating the database to ASM, if specified
• Creating RAC database instances on all specified nodes in the cluster
• Configuring the Listener and NetService entries
• Registering services with CRS
• Starting up the instances and listener on all nodes
In Oracle 11g R2, a single-instance database can be converted either to an administrator-managed cluster database or to a policy-managed cluster database.
The difference between an administrator-managed and a policy-managed cluster database is explained below.
Server pools are logical divisions of a cluster into pools of servers, which are allocated to host databases or other applications. Server pools are managed using crsctl and srvctl commands.
Caution:
By default, any named user may create a server pool. To restrict the operating system users that have this privilege, Oracle strongly recommends that you add specific users to the CRS Administrators list.
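For example, a specific user can be granted this privilege with the following command (the same command appears in the CRSCTL reference later in this post):
#crsctl add crs administrator -u scott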
Each server pool name must be unique within the cluster. Two server pools cannot use the same name.
There are two types of server pool management:
Administrator-managed: Database administrators define the servers on which database resources run, and place resources manually as needed. This is the management strategy used in previous releases.
Policy-managed: Database administrators specify in which server pool (excluding Generic or Free) the database resource will run. Oracle Clusterware is responsible for placing the database resource on a server.
The server pool name is a required attribute. You can also provide values for the following attributes; if you do not specify them, then they are set to the default value:
MIN_SIZE: Minimum number of servers on which you want a resource to run. The default is 0.
MAX_SIZE: Maximum number of servers on which you want a resource to run. The default is -1, which indicates that resources can run on all available nodes in the cluster.
IMPORTANCE: The relative importance of the resource pool, used to determine how to reconfigure servers when a node joins or leaves the cluster. The default is 0.
Note: Clients using Oracle Database 11g release 2 and later databases using policy-managed server pools must access the database using the Single Client Access Name (SCAN). This is required because policy-managed databases can run on different servers at different times, so connecting to a particular node virtual IP address for a policy-managed database is not possible.
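For illustration, a server pool with these attributes could be created with crsctl as follows (the pool name here is made up; the attribute names match those described above and in the CRSCTL reference later in this post):
crsctl add serverpool mypool -attr "MIN_SIZE=2,MAX_SIZE=4,IMPORTANCE=10"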
When you navigate through $ORACLE_HOME/assistants/rconfig/sampleXMLs, you will find two sample XML input files.
- ConvertToRAC_AdminManaged.xml
- ConvertToRAC_PolicyManaged.xml
While converting a single-instance database with filesystem storage to a RAC database with Automatic Storage Management (ASM), rconfig internally invokes RMAN to back up the database as part of the non-ASM to ASM conversion.
CURRENT SCENARIO:-
2 node RAC setup
- Names of nodes : egtodb01, egtodb02
- Name of single instance database with filesystem storage : ebsuat
- Source Oracle home : /u01/app/oracle/product/11.2.0/dbhome_1
- Target Oracle home : /u01/app/oracle/product/11.2.0/dbhome_1
OBJECTIVE
- Convert ebsuat to an Admin-managed RAC database running on the two nodes egtodb01 and egtodb02.
- Change storage to ASM with
. Datafiles on +DATA diskgroup
. Flash recovery area on +FRA diskgroup
IMPLEMENTATION:
– Copy ConvertToRAC_AdminManaged.xml to another file, my.xml
egtodb01$ cd $ORACLE_HOME/assistants/rconfig/sampleXMLs
egtodb01$ cp ConvertToRAC_AdminManaged.xml my.xml
– Edit my.xml and make the following changes (a sketch of the resulting fragment follows this list):
. Specify the current OracleHome of the non-RAC database for SourceDBHome
. Specify the OracleHome where the RAC database should be configured. It can be the same as SourceDBHome
. Specify the SID of the non-RAC database and its credentials. A user with the sysdba role is required to perform the conversion
. Specify the list of nodes that should have RAC instances running for the Admin-Managed Cluster Database
. LocalNode should be the first node in this node list.
. The InstancePrefix tag is optional starting with 11.2. If left empty, it is derived from db_unique_name
. Specify the type of storage to be used by the RAC database. Allowable values are CFS|ASM
. Specify the Database Area Location to be configured for the RAC database.
. Specify the Flash Recovery Area to be configured for the RAC database.
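For reference, here is a minimal sketch of how the Convert section of my.xml might look after these edits for this scenario (element names follow Oracle's sample file; check the comments in the sample file for the exact tags in your version, and note the sys password below is a placeholder):
<n:Convert verify="YES">
  <n:SourceDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:SourceDBHome>
  <n:TargetDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:TargetDBHome>
  <n:SourceDBInfo SID="ebsuat">
    <n:Credentials>
      <n:User>sys</n:User>
      <n:Password>password</n:Password>
      <n:Role>sysdba</n:Role>
    </n:Credentials>
  </n:SourceDBInfo>
  <n:NodeList>
    <n:Node name="egtodb01"/>
    <n:Node name="egtodb02"/>
  </n:NodeList>
  <n:InstancePrefix>ebsuat</n:InstancePrefix>
  <n:SharedStorage type="ASM">
    <n:TargetDatabaseArea>+DATA</n:TargetDatabaseArea>
    <n:TargetFlashRecoveryArea>+FRA</n:TargetFlashRecoveryArea>
  </n:SharedStorage>
</n:Convert>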
– Run rconfig to convert ebsuat from a single-instance database to a two-instance RAC database
egtodb01$ rconfig my.xml
– Check the rconfig log file while the conversion is in progress
oracle@egtodb01$ ls -lrt $ORACLE_BASE/cfgtoollogs/rconfig/*.log
– Check that the database has been converted successfully
egtodb01$ srvctl status database -d ebsuat
Instance ebsuat1 is running on node egtodb01
Instance ebsuat2 is running on node egtodb02
– Note that rconfig adds the password file to all the nodes, but the tnsnames.ora entry needs to be modified (to reflect the SCAN name instead of the host IP) on the local node and added to the rest of the nodes.
– For all other nodes, copy the entry for the database ebsuat from tnsnames.ora on the local node to tnsnames.ora on the remote nodes.
– Following is the entry I modified on the local node and copied to the rest of the nodes:
ebsuat =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = ebsdb-scan.mydomain.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ebsuat)
)
)
– Check that the database can be connected to remotely from a remote node.
egtodb02$ sqlplus system/manager@ebsuat
– Check that the datafiles have been converted to ASM
SQL> select name from v$datafile;
NAME
--------------------------------------------------------------------------------
+DATA/ebsuat/datafile/system.326.79483827
+DATA/ebsuat/datafile/sysaux.325.79483834
+DATA/ebsuat/datafile/undotbs1.305.79483805
+DATA/ebsuat/datafile/users.342.79483841
+DATA/ebsuat/datafile/undotbs2.348.79483
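– Optionally, also confirm that the redo log members and the control files are on ASM (standard dictionary views, shown here only as an extra sanity check):
SQL> select member from v$logfile;
SQL> select name from v$controlfile;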
For any further questions regarding the subject topic, please don't hesitate to email me samiappsdba@gmail.com
Thursday, May 23, 2013
Allow non-root user to run commands as root
This is a simple but useful security setting on the Linux operating system.
To allow a user to run commands which need root/sudo permission, we need to add them to the /etc/sudoers file.
Let's see an example of giving a user permission to start/stop/restart the CUPS (Common Unix Printing System) service.
Open the /etc/sudoers file with sudo/root permission and add the following line at the end of the file. It is highly recommended to edit the sudoers file using the visudo command, a special utility that locks the file and checks the syntax before saving.
testuser1 ALL=/etc/init.d/cups restart,/etc/init.d/cups stop,/etc/init.d/cups start
In the above line, 'testuser1' is the user name to which we are giving permission to start/stop/restart the cups service.
Now save and exit the /etc/sudoers file.
To test the above, log in as testuser1 and run any of the following commands; sudo will prompt for a password. Enter testuser1's own password, and the command will run with root privileges.
$ sudo /etc/init.d/cups restart
$ sudo /etc/init.d/cups stop
$ sudo /etc/init.d/cups start
Similarly, you can add other commands as per your requirements, as in the example below.
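For example, to let testuser1 restart CUPS without being prompted for a password, the NOPASSWD tag can be added (shown purely as an illustration):
testuser1 ALL=(root) NOPASSWD: /etc/init.d/cups restart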
For more syntax and references for the sudoers file, please visit the links below,
www.sudo.ws/sudoers.man.html
http://aplawrence.com/Basics/sudo.html
Labels:
Oracle Linux,
Oracle Unbreakable Linux,
Redhat Linux
Tuesday, April 9, 2013
Oracle RAC Commands
CLSCFG: -- Oracle cluster configuration tool, in Oracle Real Application Clusters (RAC)
clscfg -help or clscfg -h
clscfg -install -- creates a new configuration
clscfg -add -- adds a node to the configuration
clscfg -delete -- deletes a node from the configuration
clscfg -upgrade -- upgrades an existing configuration
clscfg -downgrade -- downgrades an existing configuration
clscfg -local -- creates a special single-node configuration for ASM
clscfg -concepts -- brief listing of terminology used in the other modes
clscfg -trace -- used in conjunction with any mode above for tracing
CLUVFY: Cluster Verification Utility
cluvfy [-help] or cluvfy -h
cluvfy stage {-pre|-post} stage_name stage_specific_options [-verbose]
Valid stage options and stage names are:
-post hwos : post-check for hardware and operating system
-pre cfs : pre-check for CFS setup
-post cfs : post-check for CFS setup
-pre crsinst : pre-check for CRS installation
-post crsinst : post-check for CRS installation
-pre hacfg : pre-check for HA configuration
-post hacfg : post-check for HA configuration
-pre dbinst : pre-check for database installation
-pre acfscfg : pre-check for ACFS Configuration.
-post acfscfg : post-check for ACFS Configuration.
-pre dbcfg : pre-check for database configuration
-pre nodeadd : pre-check for node addition.
-post nodeadd : post-check for node addition.
-post nodedel : post-check for node deletion.
cluvfy stage -post hwos -n node_list [-verbose]
./runcluvfy.sh stage -post hwos -n node1,node2 -verbose
-- Installation checks after hwos - Hardware and Operating system installation
cluvfy stage -pre cfs -n node_list [-verbose]
cluvfy stage -post cfs -n node_list [-verbose]
-- Installation checks before/after Cluster File System
cluvfy stage -pre crsinst -n node_list [-c ocr_location] [-r {10gR1|10gR2|11gR1|11gR2}] [-q voting_disk] [-osdba osdba_group] [-orainv orainventory_group] [-verbose]
cluvfy stage -pre crsinst -n node1,node2,node3
./runcluvfy.sh stage -pre crsinst -n all -verbose
cluvfy stage -post crsinst -n node_list [-verbose]
-- Installation checks before/after CRS installation
cluvfy stage -pre dbinst -n node_list [-r {10gR1|10gR2|11gR1|11gR2}] [-osdba osdba_group] [-orainv orainventory_group] [-verbose]
cluvfy stage -pre dbcfg -n node_list -d oracle_home [-verbose]
-- Installation checks before/after DB installation/configuration
cluvfy comp component_name component_specific_options [-verbose]
Valid components are:
nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements
clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
olr : checks OLR integrity
ha : checks HA integrity
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv : checks administrative privileges
peer : compares properties with peers
software : checks software distribution
asm : checks ASM integrity
acfs : checks ACFS integrity
gpnp : checks GPnP integrity
gns : checks GNS integrity
scan : checks SCAN configuration
ohasd : checks OHASD integrity
clocksync : checks Clock Synchronization
vdisk : check Voting Disk Udev settings
cluvfy comp nodereach -n node_list [-srcnode node] [-verbose]
cluvfy comp nodecon -n node_list [-i interface_list] [-verbose]
cluvfy comp nodecon -n node1,node2,node3 -i eth0 -verbose
cluvfy comp nodeapp [-n node_list] [-verbose]
cluvfy comp peer [-refnode node] -n node_list [-r {10gR1|10gR2|11gR1|11gR2}] [-orainv orainventory_group] [-osdba osdba_group] [-verbose]
cluvfy comp peer -n node1,node2 -r 10gR2 -verbose
cluvfy comp crs [-n node_list] [-verbose]
cluvfy comp cfs [-n node_list] -f file_system [-verbose]
cluvfy comp cfs -f /oradbshare -n all -verbose
cluvfy comp ocr [-n node_list] [-verbose]
cluvfy comp clu -n node_list -verbose
cluvfy comp clumgr [-n node_list] [-verbose]
cluvfy comp sys [-n node_list] -p {crs|database} [-r {10gR1|10gR2|11gR1|11gR2}] [-osdba osdba_group] [-orainv orainventory_group] [-verbose]
cluvfy comp sys -n node1,node2 -p crs -verbose
cluvfy comp admprv [-n node_list] [-verbose] |-o user_equiv [-sshonly] |-o crs_inst [-orainv orainventory_group] |-o db_inst [-orainv orainventory_group] [-osdba osdba_group] |-o db_config -d oracle_home
cluvfy comp ssa [-n node_list] [-s storageID_list] [-verbose]
cluvfy comp space [-n node_list] -l storage_location -z disk_space{B|K|M|G} [-verbose]
cluvfy comp space -n all -l /home/dbadmin/products -z 2G -verbose
cluvfy comp olr
CRSCTL - Cluster Ready Service Control
$crsctl -- to get help
$crsctl query crs activeversion
$crsctl query crs softwareversion [node_name]
#crsctl start crs
#crsctl stop crs
(or)
#/etc/init.d/init.crs start
#/etc/init.d/init.crs stop
#crsctl enable crs
#crsctl disable crs
(or)
#/etc/init.d/init.crs enable
#/etc/init.d/init.crs disable
$crsctl check crs
$crsctl check cluster [-node node_name] -- Oracle RAC 11g command, checks the viability of CSS across nodes
#crsctl start cluster -n HostName -- 11g R2
#crsctl stop cluster -n HostName -- 11g R2
#crsctl stop cluster -all -- 11g R2
$crsctl check cssd
$crsctl check crsd
$crsctl check evmd
$crsctl check oprocd
$crsctl check ctss
#/etc/init.d/init.cssd stop
#/etc/init.d/init.cssd start
#/etc/rc.d/init.d/init.evmd
#/etc/rc.d/init.d/init.cssd
#/etc/rc.d/init.d/init.crsd
#mv /etc/rc3.d/S96init.cssd /etc/rc3.d/_S96init.cssd -- to stop cssd from autostarting after reboot
#crsctl check css votedisk
#crsctl query css votedisk -- lists the voting disks used by CSS
#crsctl add css votedisk PATH
#crsctl add css votedisk PATH -force -- if Clusterware is not running
#crsctl delete css votedisk PATH
#crsctl delete css votedisk PATH -force -- if Clusterware is not running
#crsctl set css parameter_name value -- set parameters on OCR
#crsctl set css misscount 100
#crsctl unset css parameter_name -- sets CSS parameter to its default
#crsctl unset css misscount
#crsctl get css parameter_name -- gets the value of a CSS parameter
#crsctl get css disktimeout
#crsctl get css misscount
#crsctl get css reboottime
#crsctl start resources -- starts Clusterware resources
./crsctl start resource ora.DATA.dg
#crsctl stop resources -- stops Clusterware resources
$crsctl status resource
$crsctl status resource -t
$crsctl stat resource -t
$crsctl lsmodules crs -- lists CRS modules that can be used for debugging
CRSUI
CRSCOMM
CRSRTI
CRSMAIN
CRSPLACE
CRSAPP
CRSRES
CRSCOMM
CRSOCR
CRSTIMER
CRSEVT
CRSD
CLUCLS
CSSCLNT
COMMCRS
COMMNS
$crsctl lsmodules css -- lists CSS modules that can be used for debugging
CSSD
COMMCRS
COMMNS
$crsctl lsmodules evm -- lists EVM modules that can be used for debugging
EVMD
EVMDMAIN
EVMCOMM
EVMEVT
EVMAPP
EVMAGENT
CRSOCR
CLUCLS
CSSCLNT
COMMCRS
COMMNS
$crsctl start has (HAS - High Availability Services)
$crsctl stop has
$crsctl check has
OCR Modules -- cannot be listed with crsctl lsmodules command
OCRAPI
OCRCLI
OCRSRV
OCRMAS
OCRMSG
OCRCAC
OCRRAW
OCRUTL
OCROSD
#crsctl debug statedump crs -- dumps state info for crs objects
#crsctl debug statedump css -- dumps state info for css objects
#crsctl debug statedump evm -- dumps state info for evm objects
#crsctl debug log crs [module:level]{,module:level} ...
-- Turns on debugging for CRS
#crsctl debug log crs CRSEVT:5,CRSAPP:5,CRSTIMER:5,CRSRES:5,CRSRTI:1,CRSCOMM:2
#crsctl debug log css [module:level]{,module:level} ...
-- Turns on debugging for CSS
#crsctl debug log css CSSD:1
#crsctl debug log evm [module:level]{,module:level} ...
-- Turns on debugging for EVM
#crsctl debug log evm EVMCOMM:1
#crsctl debug trace crs -- dumps CRS in-memory tracing cache
#crsctl debug trace css -- dumps CSS in-memory tracing cache
#crsctl debug trace evm -- dumps EVM in-memory tracing cache
#crsctl debug log res resource_name:level -- turns on debugging for resources
#crsctl debug log res "ora.lnx04.vip:1"
#crsctl trace command -- tracing can be enabled by adding the "trace" keyword before any of the above commands
#crsctl trace check css
#crsctl backup -h
#crsctl backup css votedisk
Here is the list of the options for CRSCTL in 11gR2:
crsctl add - add a resource, type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service, resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource, type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource, type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource, server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource, server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource, server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset a entity value, restoring its default
crsctl add resource resource_name -type resource_type [-file file_path | -attr "attribute_name=attribute_value,attribute_name=attribute_value,..."] [-i] [-f]
crsctl add resource r1 -type test_type1 -attr "PATH_NAME=/tmp/r1.txt"
crsctl add resource app.appvip -type app.appvip.type -attr "RESTART_ATTEMPTS=2, START_TIMEOUT=100,STOP_TIMEOUT=100,CHECK_INTERVAL=10,USR_ORA_VIP=172.16.0.0, START_DEPENDENCIES=hard(ora.net1.network)pullup(ora.net1.network), STOP_DEPENDENCIES=hard(ora.net1.network)"
crsctl add type type_name -basetype base_type_name {-attr "ATTRIBUTE=attribute_name | -file file_path,TYPE={string | int} [,DEFAULT_VALUE=default_value][,FLAGS=[READONLY][|REQUIRED]]"}
crsctl add type test_type1 -basetype cluster_resource -attr "ATTRIBUTE=FOO,TYPE=integer,DEFAULT_VALUE=0"
crsctl add crs administrator -u user_name [-f]
crsctl add crs administrator -u scott
crsctl add css votedisk path_to_voting_disk [path_to_voting_disk ...] [-purge]
crsctl add css votedisk /stor/grid/ -purge
crsctl add serverpool server_pool_name {-file file_path | -attr "attr_name=attr_value[,attr_name=attr_value[,...]]"} [-i] [-f]
crsctl add serverpool testsp -attr "MAX_SIZE=5"
crsctl add serverpool sp1 -file /tmp/sp1_attr
crsctl check cluster [-all | [-n server_name [...]]
crsctl check cluster -all
crsctl check crs
crsctl check css
crsctl check ctss -- Cluster Time Synchronization services
crsctl check evm
crsctl check resource {resource_name [...] | -w "filter" } [-n node_name] [-k cardinality_id] [-d degree_id] }
crsctl check resource appsvip
crsctl config crs
crsctl delete crs administrator -u user_name [-f]
crsctl delete crs administrator -u scott
crsctl delete resource resource_name [-i] [-f]
crsctl delete resource myResource
crsctl delete type type_name [-i]
crsctl delete type app.appvip.type
crsctl delete css votedisk voting_disk_GUID [voting_disk_GUID [...]]
crsctl delete css votedisk 61f4273ca8b34fd0bfadc2531605581d
crsctl delete node -n node_name
crsctl delete node -n node06
crsctl delete serverpool server_pool_name [server_pool_name [...]] [-i]
crsctl delete serverpool sp1
crsctl disable crs
crsctl discover dhcp -clientid clientid [-port port]
crsctl discover dhcp -clientid dsmjk252clr-dtmk01-vip
crsctl enable crs
crsctl get hostname
crsctl get clientid dhcp -cluname cluster_name -viptype vip_type [-vip vip_res_name] [-n node_name]
crsctl get clientid dhcp -cluname dsmjk252clr -viptype HOSTVIP -n tmjk01
crsctl get css parameter
crsctl get css disktimeout
crsctl get css ipmiaddr
crsctl get nodename
crsctl getperm resource resource_name [ {-u user_name | -g group_name} ]
crsctl getperm resource app.appvip
crsctl getperm resource app.appvip -u oracle
crsctl getperm resource app.appvip -g dba
crsctl getperm type resource_type [-u user_name] | [-g group_name]
crsctl getperm type app.appvip.type
crsctl getperm serverpool server_pool_name [-u user_name | -g group_name]
crsctl getperm serverpool sp1
crsctl lsmodules {mdns | gpnp | css | crf | crs | ctss | evm | gipc}
crsctl lsmodules evm
mdns: Multicast domain name server
gpnp: Grid Plug and Play service
css: Cluster Synchronization Services
crf: Cluster Health Monitor
crs: Cluster Ready Services
ctss: Cluster Time Synchronization Service
evm: Event Manager
gipc: Grid Interprocess Communication
crsctl modify resource resource_name -attr "attribute_name=attribute_value" [-i] [-f] [-delete]
crsctl modify resource appsvip -attr USR_ORA_VIP=10.1.220.17 -i
crsctl modify type type_name -attr "ATTRIBUTE=attribute_name,TYPE={string | int} [,DEFAULT_VALUE=default_value [,FLAGS=[READONLY][| REQUIRED]]" [-i] [-f]]
crsctl modify type myType.type -attr "ATTRIBUTE=FOO,DEFAULT_VALUE=0 ATTRIBUTE=BAR,DEFAULT_VALUE=baz"
crsctl modify serverpool server_pool_name -attr "attr_name=attr_value [,attr_name=attr_value[, ...]]" [-i] [-f]
crsctl modify serverpool sp1 -attr "MAX_SIZE=7"
crsctl pin css -n node_name [ node_name [..]]
crsctl pin css -n node2
crsctl query crs administrator
crsctl query crs activeversion
crsctl query crs releaseversion
crsctl query crs softwareversion node_name
crsctl query css ipmiconfig
crsctl query css ipmidevice
crsctl query css votedisk
crsctl query dns {-servers | -name name [-dnsserver DNS_server_address] [-port port] [-attempts number_of_attempts] [-timeout timeout_in_seconds] [-v]}
crsctl release dhcp -clientid clientid [-port port]
crsctl release dhcp -clientid spmjk662clr-spmjk03-vip
crsctl relocate resource {resource_name | resource_name | -all -s source_server | -w "filter"} [-n destination_server] [-k cid] [-env "env1=val1,env2=val2,..."] [-i] [-f]
crsctl relocate resource myResource1 -s node1 -n node3
crsctl relocate server server_name [...] -c server_pool_name [-i] [-f]
crsctl relocate server node6 node7 -c sp1
crsctl replace discoverystring 'absolute_path[,...]'
crsctl replace discoverystring "/oracle/css1/*,/oracle/css2/*"
crsctl replace votedisk [+asm_disk_group | path_to_voting_disk [...]]
crsctl replace votedisk +diskgroup1
crsctl replace votedisk /mnt/nfs/disk1 /mnt/nfs/disk2
crsctl request dhcp -clientid clientid [-port port]
crsctl request dhcp -clientid tmj0462clr-tmjk01-vip
crsctl set css parameter value
crsctl set css ipmiaddr ip_address
crsctl set css ipmiaddr 192.0.2.244
crsctl set css ipmiadmin ipmi_administrator_name
crsctl set css ipmiadmin scott
crsctl set log {[crs | css | evm "component_name=log_level, [...]"] | [all=log_level]}
crsctl set log crs "CRSRTI=1,CRSCOMM=2"
crsctl set log evm all=2
crsctl set log res "myResource1=3"
crsctl set {log | trace} module_name "component:debugging_level [,component:debugging_level][,...]"
crsctl set log crs "CRSRTI:1,CRSCOMM:2"
crsctl set log crs "CRSRTI:1,CRSCOMM:2,OCRSRV:4"
crsctl set log evm "EVMCOMM:1"
crsctl set log res "resname:1"
crsctl set log res "resource_name=debugging_level"
crsctl set log res "ora.node1.vip:1"
crsctl set log crs "CRSRTI:1,CRSCOMM:2" -nodelist node1,node2
crsctl set trace "component_name=tracing_level,..."
crsctl set trace "css=3"
crsctl setperm resource resource_name {-u acl_string | -x acl_string | -o user_name | -g group_name}
crsctl setperm resource myResource -u user:scott:rwx
crsctl setperm type resource_type_name {-u acl_string | -x acl_string | -o user_name | -g group_name}
crsctl setperm type resType -u user:scott:rwx
crsctl setperm serverpool server_pool_name {-u acl_string | -x acl_string | -o user_name | -g group_name}
crsctl setperm serverpool sp3 -u user:scott.tiger:rwx
crsctl start cluster [-all | -n server_name [...]]
crsctl start cluster -n node1 node2
crsctl start crs
crsctl start ip -A {IP_name | IP_address}/netmask/interface_name
crsctl start ip -A 192.168.29.220/255.255.252.0/eth0
crsctl start resource {resource_name [...] | -w filter | -all} [-n server_name] [-k cid] [-d did] [-env "env1=val1,env2=val2,..."] [-i] [-f]
crsctl start resource myResource -n server1
crsctl start testdns [-address address [-port port]] [-once] [-v]
crsctl start testdns -address 192.168.29.218 -port 63 -v
crsctl status resource {resource_name [...] | -w "filter"} [-p | -v [-e]] | [-f | -l | -g] [[-k cid | -n server_name] [-d did]] | [-s -k cid [-d did]] [-t]
crsctl status resource ora.stai14.vip
crsctl stat res -w "TYPE = ora.scan_listener.type"
crsctl status type resource_type_name [...]] [-g] [-p] [-f]
crsctl status type ora.network.type
crsctl status ip -A {IP_name | IP_address}
crsctl status server [-p | -v | -f]
crsctl status server { server_name [...] | -w "filter"} [-g | -p | -v | -f]
crsctl status server node2 -f
crsctl status serverpool [-p | -v | -f]
crsctl status serverpool [server_pool_name [...]] [-w] [-g | -p | -v | -f]
crsctl status serverpool sp1 -f
crsctl status serverpool
crsctl status serverpool -p
crsctl status serverpool -w "MAX_SIZE > 1"
crsctl status testdns [-server DNS_server_address] [-port port] [-v]
crsctl stop cluster [-all | -n server_name [...]] [-f]
crsctl stop cluster -n node1
crsctl stop crs [-f]
crsctl stop crs
crsctl stop resource {resource_name [...] | -w "filter" | -all} [-n server_name] [-k cid] [-d did] [-env "env1=val1,env2=val2,..."] [-i] [-f]
crsctl stop resource -n node1 -k 2
crsctl stop ip -A {IP_name | IP_address}/interface_name
crsctl stop ip -A MyIP.domain.com/eth0
crsctl stop testdns [-address address [-port port]] [-domain GNS_domain] [-v]
crsctl unpin css -n node_name [node_name [...]]
crsctl unpin css -n node1 node4
crsctl unset css parameter
crsctl unset css reboottime
crsctl unset css ipmiconfig
HAS (High Availability Service)
crsctl check has
crsctl config has
crsctl disable has
crsctl enable has
crsctl query has releaseversion
crsctl query has softwareversion
crsctl start has
crsctl stop has [-f]
How do I identify the voting disk/file location?
#crsctl query css votedisk
How to take backup of voting file/disk?
crsctl backup css votedisk
dd -- a low-level copy utility (the name comes from the 'Data Definition' statement in IBM JCL), useful for taking backup of votedisks/Voting Disks
Arguments:
if -- input file
of -- output file
bs -- block size
dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/VDFile_0
dd if=/dev/hdb2 of=/dev/sda5 bs=8192
dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=notrunc,noerror
dd if=/dev/zero of=/abc bs=1 count=1 conv=notrunc,ucase
dd if=/dev/cdrom of=/home/satya/myCD.iso bs=2048 conv=sync,notrunc,lcase
dd if=/dev/hda3 skip=9200 of=/home/satya/backup_set_3.img bs=1M count=4600
dd if=/home/satya/1Gb.file bs=64k | dd of=/dev/null
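To restore a voting disk from such a backup, reverse the if/of arguments (do this only with Clusterware stopped; the paths below match the backup example above):
dd if=$ORACLE_BASE/bkp/VDFile_0 of=/u02/ocfs2/vote/VDFile_0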
DIAGCOLLECTION: -- Tool to collect diagnostic information from the Oracle Clusterware home, Oracle home, and Oracle base.
#$ORA_CRS_HOME/bin/diagcollection.pl
#$ORA_CRS_HOME/bin/diagcollection.pl --collect
#$ORA_CRS_HOME/bin/diagcollection.pl --collect --crs $ORA_CRS_HOME
#$ORA_CRS_HOME/bin/diagcollection.pl --collect --oh $ORACLE_HOME
#$ORA_CRS_HOME/bin/diagcollection.pl --collect --ob $ORACLE_BASE
#$ORA_CRS_HOME/bin/diagcollection.pl --collect --all
#$ORA_CRS_HOME/bin/diagcollection.pl --coreanalysis
--- analyzes all the trace & log files and summarizes them in a text file
#$ORA_CRS_HOME/bin/diagcollection.pl --clean --- cleans gathered info
GSDCTL: - Global Service Daemon Control
gsdctl start -- start the GSD service
gsdctl stop -- stop the GSD service
gsdctl stat -- To obtain the status of the GSD service
$ gsdctl start
$ gsdctl stop
$ gsdctl stat
Log file will be at $ORACLE_HOME/srvm/log/gsdaemon_node_name.log
Oracle RAC O2CB Cluster Service Commands
/etc/init.d/o2cb start
/etc/init.d/o2cb start ocfs2
/etc/init.d/o2cb stop
/etc/init.d/o2cb stop ocfs2
/etc/init.d/o2cb restart
/etc/init.d/o2cb restart ocfs2
/etc/init.d/o2cb status -- Check status
/etc/init.d/o2cb load -- Loads all OCFS2 modules
/etc/init.d/o2cb unload -- Unloads all OCFS2 modules
/etc/init.d/o2cb online ocfs2 -- Brings the cluster online
/etc/init.d/o2cb offline ocfs2 -- Takes the cluster offline
/etc/init.d/o2cb configure -- Configuring the O2CB driver
/etc/init.d/o2cb enable
#chkconfig --add o2cb
#chkconfig --list o2cb
o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off
#chkconfig --del o2cb
service o2cb status
OCRCHECK: -- Displays health of OCR (Oracle Cluster Registry).
$ocrcheck -help or ocrcheck -h
$ocrcheck
Version : 2
Total space (kbytes) : 262144
Used space (kbytes) : 16256
Available space (kbytes) : 245888
ID : 1918913332
Device/File Name : /dev/raw/raw1 Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2 Device/File integrity check succeeded
Cluster registry integrity check succeeded
#ocrcheck -local
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262132
Used space (kbytes) : 9200
Available space (kbytes) : 252932
ID : 604793089
Device/File Name : /u02/crs/cdata/localhost/lnx6.olr Device/File integrity check succeeded
Local OCR integrity check succeeded
$ocrcheck -local -config
Log file will be $ORACLE_HOME/log/node_name/client/ocrcheck_pid.log
Debugging can be controlled through $ORA_CRS_HOME/srvm/admin/ocrlog.ini
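The exact ocrlog.ini directives are version-specific and thinly documented; purely as an unverified illustration (the directive names here are assumptions to check against your version), entries take a form such as:
mesg_logging_level = 5
comploglvl = "OCRAPI:5;OCRSRV:5"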
How do I identify the OCR file location?
#ocrcheck
OCRCONFIG: -- OCR (Oracle Cluster Registry) CONFIGuration tool
#ocrconfig -help or ocrconfig -h
#ocrconfig -showbackup [auto|manual]
-- default location is $ORA_CRS_HOME/cdata/cluster_name
#ocrconfig -showbackup
#ocrconfig -backuploc dir_name -- change OCR autobackup directory location
#ocrconfig -backuploc /u02/backups
#ocrconfig -manualbackup -- Oracle RAC 11g command, to perform OCR backup manually
#ocrconfig -restore backup_file.ocr -- recovering from autobackup file
#ocrconfig -restore /u02/backups/backup00.ocr
#ocrconfig -export file_name.dmp [-s online]
-- exports OCR content to a file
#ocrconfig -export /tmp/ocr_exp
#ocrconfig -import file_name.dmp
-- recover OCR logically, must be done on all nodes
#ocrconfig -import /tmp/ocr_exp
#ocrconfig -replace ocr [file_name] -- adding/replacing an OCR file
#ocrconfig -replace ocrmirror [file_name]
#ocrconfig -repair ocr file_name
#ocrconfig -repair ocrmirror file_name
#ocrconfig -repair -replace current_OCR_location -replacement target_OCR_location
#ocrconfig -upgrade [user [group]] -- upgrades OCR
#ocrconfig -downgrade [-version version_string] -- downgrades OCR
#ocrconfig -overwrite
#ocrconfig -local -import file_name
#ocrconfig -local -manualbackup
#ocrconfig -local -backuploc new_olr_backup_path
#ocrconfig -local -restore file_name
#ocrconfig -add +new_disk_group
#ocrconfig -delete +unused_disk_group
#ocrconfig -add file_location
#ocrconfig -add /dev/sdd1
#ocrconfig -delete old_storage_location
#ocrconfig -delete /dev/raw/raw2
Log file will be $ORACLE_HOME/log/node_name/client/ocrconfig_pid.log
Debugging can be controlled through $ORA_CRS_HOME/srvm/admin/ocrlog.ini
How to take backup of OCR file?
#ocrconfig -manualbackup
#ocrconfig -export file_name.dmp
How to recover OCR file?
#ocrconfig -restore backup_file.ocr
#ocrconfig -import file_name.dmp
OCRDUMP: -- dumps OCR (Oracle Cluster Registry) contents to a file
#ocrdump -help or ocrdump -h
#ocrdump [file_name|-stdout] [-backupfile backup_filename] [-keyname key_name] [-xml] [-noheader]
#ocrdump -- default filename is OCRDUMPFILE
#ocrdump MYFILE
#ocrdump ${HOST}_OCRDUMP
#ocrdump -backupfile my_file
#ocrdump -stdout -keyname SYSTEM
#ocrdump -stdout -xml
$ocrdump -local olr.lst --> Normal Text Format
$ocrdump -local -xml olr_xml.lst --> XML format
$ocrdump -local -backupfile olr_backup_file_name
Log file will be $ORACLE_HOME/log/node_name/client/ocrdump_pid.log
Debugging can be controlled through $ORA_CRS_HOME/srvm/admin/ocrlog.ini
How to take backup of OCR file?
#ocrdump -backupfile my_file
OIFCFG: -- Oracle Interface Configuration tool
A command line tool for both single instance Oracle databases and RAC databases that enables us to allocate and deallocate network interfaces to components, direct components to use specific network interfaces, and retrieve component configuration information.
oifcfg [-help] -- will give help
$ ./oifcfg -help
$ ./oifcfg
oifcfg iflist [-p [-n]]
-p includes description of the subnet, -n includes subnet mask
oifcfg iflist -- display a list of current subnets
eth0 147.43.1.60
eth1 192.168.1.150
oifcfg iflist -p -n
eth0 147.43.1.60 UNKNOWN 255.255.255.0 (public interfaces are UNKNOWN)
eth1 192.168.1.150 PRIVATE 255.255.255.0
oifcfg getif [-node node_name|-global] [-if if_name[/subnet] [-type {cluster_interconnect|public|storage}]]
-- To display a list of networks
oifcfg getif
eth1 192.168.1.150 global cluster_interconnect
eth0 192.168.0.150 global public
oifcfg setif {-node node_name|-global} {if_name/subnet:{cluster_interconnect|public|storage}}...
oifcfg setif -global eth0/10.50.99.0:public
oifcfg setif -global eth0/172.19.141.0:cluster_interconnect
oifcfg delif [-node node_name|-global] [if_name[/subnet]]
oifcfg delif -global
oifcfg delif -global eth0
oifcfg delif -global eth1/172.21.65.0
OLSNODES: -- Provides the list of nodes and other information for all nodes participating in the cluster
#olsnodes [node_name] [-g] [-i] [-l] [-n] [-p] [-s] [-t] [-v]
node_name -- displays information for the particular node
-g -- more details
-i -- with VIP
-l -- local node name
-n -- with node number
-p -- private interconnect
-s -- status of the node (ACTIVE or INACTIVE)
-t -- type of the node (PINNED or UNPINNED)
-v -- verbose
How to find out the nodes in Oracle RAC cluster?
#olsnodes -- will list the nodes in the cluster
#olsnodes -n
#olsnodes node44 -v
#olsnodes -n -p -i
node1-pub 1 node1-prv node1-vip
node2-pub 2 node2-prv node2-vip
#olsnodes -i
node1 178.192.1.1
node2 178.192.2.1
node3 178.192.3.1
node4 178.192.4.1
ONSCTL: -- to manage ONS (Oracle Notification Service)
ONS - A publish and subscribe service for communicating information about all FAN events.
onsctl or onsctl help or onsctl -h
onsctl start -- to start ONS
onsctl stop -- to stop ONS
onsctl ping -- to find out the status of ONS
onsctl debug -- to display debug information for the ons daemon
onsctl reconfig -- to reload the ons configuration
onsctl detailed -- to print a verbose syntax description
From 11g Release 2:
onsctl command [options]
onsctl or onsctl help or onsctl -h
onsctl start -- to start ONS
onsctl shutdown -- to shutdown ONS
onsctl ping [max-retry] -- to ping local ons
onsctl debug [attr=val ...] -- to display ons server debug information
onsctl reload -- to trigger ons to reread its configuration file
onsctl set [attr=val ...] -- to set ons log parameters
onsctl query [attr=val] -- to query ons log parameters
onsctl usage [command] -- to print detailed usage description
SRVCONFIG:
srvconfig [options]
srvconfig or srvconfig -help or srvconfig -?
srvconfig -exp file_name
-- exports the contents of the configuration information/cluster registry
srvconfig -imp file_name
-- imports the configuration information/cluster registry
srvconfig -init -- initialize cluster registry (if not already initialized)
srvconfig -init -f -- force initialization of configuration even if initialized
srvconfig -upgrade -dbname db_name -orahome ORACLE_HOME
-- upgrade the database configuration
srvconfig -downgrade -dbname db_name -orahome ORACLE_HOME -version ver_str
-- downgrade the database configuration
SRVCTL: (Server Control utility)
srvctl command target [options]
commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
targets: database/db|instance/inst|service/serv|nodeapps|asm|listener
targets: database/db|instance/inst|service/serv|nodeapps|asm|listener|diskgroup|home|ons|eons|filesystem|gns|oc4j|scan|scan_listener|srvpool|server|VIP -- from Oracle 11g R2
srvctl -help or srvctl -v
srvctl -V -- prints version
srvctl version: 10.2.0.0.0 (or) srvctl version: 11.2.0.1.0
srvctl -h -- print usage
srvctl status service -h
Database:
--------------------------------------------------------------------------------
srvctl add database -d db_name -o ORACLE_HOME [-m domain_name][-p spfile] [-A name|ip/netmask]
[-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}]
[-s start_options] [-n db_name] [-y {AUTOMATIC|MANUAL}]
srvctl add database -d prod -o /u01/oracle/product/102/prod
srvctl remove database -d db_name [-f]
srvctl remove database -d prod
srvctl start database -d db_name [-o start_options] [-c connect_str|-q]
srvctl start database -d db_name [-o open]
srvctl start database -d db_name -o nomount
srvctl start database -d db_name -o mount
srvctl start db -d prod
srvctl start database -d apps -o open
srvctl stop database -d db_name [-o stop_options] [-c connect_str|-q]
srvctl stop database -d db_name [-o normal]
srvctl stop database -d db_name -o transactional
srvctl stop database -d db_name -o immediate
srvctl stop database -d db_name -o abort
srvctl stop db -d crm -o immediate
srvctl status database -d db_name [-f] [-v] [-S level]
srvctl status database -d db_name -v service_name
srvctl status database -d hrms
srvctl enable database -d db_name
srvctl enable database -d vis
srvctl disable database -d db_name
srvctl disable db -d vis
srvctl config database
srvctl config database -d db_name [-a] [-t]
srvctl config database
srvctl config database -d HYD -a
srvctl modify database -d db_name [-n db_name] [-o ORACLE_HOME] [-m domain_name] [-p spfile]
[-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}] [-s start_options] [-y {AUTOMATIC|MANUAL}]
srvctl modify database -d hrms -r physical_standby
srvctl modify db -d RAC -p /u03/oradata/RAC/spfileRAC.ora -- changes the spfile location
srvctl modify database -d HYD -o /u01/app/oracle/product/11.1/db -s open
srvctl getenv database -d db_name [-t name_list]
srvctl getenv database -d prod
srvctl setenv database -d db_name {-t name=val[,name=val,...]|-T name=val}
srvctl setenv database -d HYD -t "TNS_ADMIN=/u01/app/oracle/product/11.1/asm/network/admin"
srvctl setenv db -d prod -t LANG=en
srvctl unsetenv database -d db_name [-t name_list]
srvctl unsetenv database -d prod -t CLASSPATH
In 11g Release 2, some commands' syntax has changed:
srvctl add database -d db_unique_name -o ORACLE_HOME [-x node_name] [-m domain_name] [-p spfile] [-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}] [-s start_options] [-t stop_options] [-n db_name] [-y {AUTOMATIC|MANUAL}] [-g server_pool_list] [-a "diskgroup_list"]
srvctl add database -d prod -o /u01/oracle/product/112/prod -m foo.com -p +dg1/prod/spfileprod.ora -r PRIMARY -s open -t normal -n db2 -y AUTOMATIC -g svrpool1,svrpool2 -a "dg1,dg2"
srvctl remove database -d db_unique_name [-f] [-y] [-v]
srvctl remove database -d prod -y
srvctl stop database -d db_unique_name [-o stop_options] [-f]
srvctl stop database -d dev -f
srvctl status database -d db_unique_name [-f] [-v]
srvctl status db -d sat -v
srvctl enable database -d db_unique_name [-n node_name]
srvctl enable database -d vis -n lnx01
srvctl disable database -d db_unique_name [-n node_name]
srvctl disable db -d vis -n lnx03
srvctl config database [-d db_unique_name [-a]]
srvctl config db -d db_erp -a
srvctl modify database -d db_unique_name [-n db_name] [-o ORACLE_HOME] [-u oracle_user] [-m domain] [-p spfile] [-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}] [-s start_options] [-t stop_options] [-y {AUTOMATIC|MANUAL}] [-g "server_pool_list"] [-a "diskgroup_list"|-z]
srvctl modify db -d prod -r logical_standby
srvctl modify database -d racTest -a "SYSFILES,LOGS,OLTP"
srvctl modify database -d ronedb -e rac1,rac2
srvctl relocate database -d db_unique_name {[-n target_node] [-w timeout] | -a [-r]} [-v]
srvctl relocate database -d rontest -n node2
srvctl relocate database -d rone2db -n lnxrac2 -w 120 -v
srvctl convert database -d ....
srvctl convert database -d ronedb -c RAC -n rac1
srvctl convert database -d ronedb -c RACONENODE -i RoneDB
Instance:
-----------
srvctl add instance -d db_name -i inst_name -n node_name
srvctl add instance -d prod -i prod01 -n linux01
srvctl remove instance -d db_name -i inst_name [-f]
srvctl remove instance -d prod -i prod01
srvctl start instance -d db_name -i inst_names [-o start_options] [-c connect_str|-q]
srvctl start instance -d db_name -i inst_names [-o open]
srvctl start instance -d db_name -i inst_names -o nomount
srvctl start instance -d db_name -i inst_names -o mount
srvctl start instance -d dev -i dev2
srvctl stop instance -d db_name -i inst_names [-o stop_options] [-c connect_str|-q]
srvctl stop instance -d db_name -i inst_names [-o normal]
srvctl stop instance -d db_name -i inst_names -o transactional
srvctl stop instance -d db_name -i inst_names -o immediate
srvctl stop instance -d db_name -i inst_names -o abort
srvctl stop inst -d vis -i vis
srvctl status instance -d db_name -i inst_names [-f] [-v] [-S level]
srvctl status inst -d racdb -i racdb2
srvctl enable instance -d db_name -i inst_names
srvctl enable instance -d prod -i "prod1,prod2"
srvctl disable instance -d db_name -i inst_names
srvctl disable inst -d prod -i "prod1,prod3"
srvctl modify instance -d db_name -i inst_name {-s asm_inst_name|-r} -- set dependency of instance to ASM
srvctl modify instance -d db_name -i inst_name -n node_name -- move the instance
srvctl modify instance -d db_name -i inst_name -r -- remove the instance
srvctl getenv instance -d db_name -i inst_name [-t name_list]
srvctl setenv instance -d db_name [-i inst_name] {-t "name=val[,name=val,...]" | -T "name=val"}
srvctl unsetenv instance -d db_name [-i inst_name] [-t name_list]
In 11g Release 2, some commands' syntax has changed:
srvctl start instance -d db_unique_name {-n node_name -i "instance_name_list"} [-o start_options]
srvctl start instance -d prod -n node2
srvctl start inst -d prod -i "prod2,prod3"
srvctl stop instance -d db_unique_name {[-n node_name]|[-i "instance_name_list"]} [-o stop_options] [-f]
srvctl stop inst -d prod -n node1
srvctl stop instance -d prod -i prod1
srvctl status instance -d db_unique_name {-n node_name | -i "instance_name_list"} [-f] [-v]
srvctl status instance -d prod -i "prod1,prod2" -v
srvctl modify instance -d db_unique_name -i instance_name {-n node_name|-z}
srvctl modify instance -d prod -i prod1 -n mynode
srvctl modify inst -d prod -i prod1 -z
Service:
--------
srvctl add service -d db_name -s service_name -r pref_insts [-a avail_insts] [-P TAF_policy]
srvctl add service -d db_name -s service_name -u {-r "new_pref_inst" | -a "new_avail_inst"}
srvctl add service -d RAC -s PRD -r RAC01,RAC02 -a RAC03,RAC04
srvctl add serv -d CRM -s CRM -r CRM1 -a CRM3 -P basic
srvctl remove service -d db_name -s service_name [-i inst_name] [-f]
srvctl remove serv -d dev -s sales
srvctl remove service -d dev -s sales -i dev01,dev02
srvctl start service -d db_name [-s service_names [-i inst_name]] [-o start_options]
srvctl start service -d db_name -s service_names [-o open]
srvctl start service -d db_name -s service_names -o nomount
srvctl start service -d db_name -s service_names -o mount
srvctl start serv -d dwh -s dwh
srvctl stop service -d db_name [-s service_names [-i inst_name]] [-f]
srvctl stop serv -d dwh -s dwh
srvctl status service -d db_name [-s service_names] [-f] [-v] [-S level]
srvctl status service -d dev -s dev
srvctl enable service -d db_name -s service_names [-i inst_name]
srvctl enable service -d apps -s apps1
srvctl disable service -d db_name -s service_names [-i inst_name]
srvctl disable serv -d dev -s dev -i dev1
srvctl config service -d db_name [-s service_name] [-a] [-S level]
srvctl config service -d db_name -a -- -a shows TAF configuration
srvctl config service -d TEST -s test
test PREF: TST1 AVAIL: TST2
srvctl modify service -d db_name -s service_name -i old_inst_name -t new_inst_name [-f]
srvctl modify service -d db_name -s service_name -i avail_inst_name -r [-f]
srvctl modify service -d db_name -s service_name -n -i preferred_list [-a available_list] [-f]
srvctl modify service -d db_name -s service_name -i old_inst_name -a avail_inst -P TAF_policy
srvctl modify serv -d PROD -s DWH -n -i I1,I2,I3,I4 -a I5,I6
srvctl relocate service -d db_name -s service_name -i old_inst_name -t target_inst [-f]
srvctl getenv service -d db_name -s service_name -t name_list
srvctl setenv service -d db_name [-s service_name] {-t "name=val[,name=val,...]" | -T "name=val"}
srvctl unsetenv service -d db_name -s service_name -t name_list
In 11g Release 2, some commands' syntax has changed:
srvctl add service -d db_unique_name -s service_name [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC|MANUAL}] [-q {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z failover_retries] [-w failover_delay]
srvctl add service -d rac -s rac1 -q TRUE -m BASIC -e SELECT -z 180 -w 5 -j LONG
srvctl add service -d db_unique_name -s service_name -u {-r preferred_list | -a available_list}
srvctl add service -d db_unique_name -s service_name
-g server_pool [-c {UNIFORM|SINGLETON}] [-k network_number]
[-l [PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY]
[-y {AUTOMATIC|MANUAL}] [-q {TRUE|FALSE}] [-j {SHORT|LONG}]
[-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}]
[-m {NONE|BASIC}] [-P {BASIC|NONE|PRECONNECT}] [-x {TRUE|FALSE}]
[-z failover_retries] [-w failover_delay]
srvctl add service -d db_unique_name -s service_name -r preferred_list [-a available_list] [-P {BASIC|NONE|PRECONNECT}]
[-l [PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY]
[-y {AUTOMATIC|MANUAL}] [-q {TRUE|FALSE}] [-j {SHORT|LONG}]
[-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}]
[-m {NONE|BASIC}] [-x {TRUE|FALSE}] [-z failover_retries] [-w failover_delay]
srvctl add serv -d dev -s sales -r dev01,dev02 -a dev03 -P PRECONNECT
srvctl start service -d db_unique_name [-s "service_name_list" [-n node_name | -i instance_name]] [-o start_options]
srvctl start serv -d dev -s dev
srvctl start service -d dev -s dev -i dev2
srvctl stop service -d db_unique_name [-s "service_name_list"] [-n node_name | -i instance_name] [-f]
srvctl stop service -d dev -s dev
srvctl stop serv -d dev -s dev -i dev2
srvctl status service -d db_unique_name [-s "service_name_list"] [-f] [-v]
srvctl status service -d dev -s dev -v
srvctl enable service -d db_unique_name -s "service_name_list" [-i instance_name | -n node_name]
srvctl enable service -d dev -s dev
srvctl enable serv -d dev -s dev -i dev1
srvctl disable service -d db_unique_name -s "service_name_list" [-i instance_name | -n node_name]
srvctl disable service -d dev -s "dev,marketing"
srvctl disable serv -d dev -s dev -i dev1
srvctl config service -d db_unique_name [-s service_name] [-a]
srvctl config service -d dev -s dev
srvctl modify service -d db_unique_name -s service_name
[-c {UNIFORM|SINGLETON}] [-P {BASIC|PRECONNECT|NONE}]
[-l {[PRIMARY]|[PHYSICAL_STANDBY]|[LOGICAL_STANDBY]|[SNAPSHOT_STANDBY]} [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z failover_retries] [-w failover_delay] [-y {AUTOMATIC|MANUAL}]
srvctl modify service -d db_unique_name -s service_name -i old_instance_name -t new_instance_name [-f]
srvctl modify service -d db_unique_name -s service_name -i avail_inst_name -r [-f]
srvctl modify service -d db_unique_name -s service_name -n -i preferred_list [-a available_list] [-f]
srvctl modify service -d dev -s dev -i dev1 -t dev2
srvctl modify serv -d dev -s dev -i dev1 -r
srvctl modify service -d dev -s dev -n -i dev1 -a dev2
srvctl relocate service -d db_unique_name -s service_name {-c source_node -n target_node|-i old_instance_name -t new_instance_name} [-f]
srvctl relocate service -d dev -s dev -i dev1 -t dev3
Nodeapps:
------------
#srvctl add nodeapps -n node_name -o ORACLE_HOME -A name|ip/netmask[/if1[|if2|...]]
#srvctl add nodeapps -n lnx02 -o $ORACLE_HOME -A 192.168.0.151/255.255.0.0/eth0
#srvctl remove nodeapps -n node_names [-f]
#srvctl start nodeapps -n node_name -- Starts GSD, VIP, listener & ONS
#srvctl stop nodeapps -n node_name [-r] -- Stops GSD, VIP, listener & ONS
#srvctl status nodeapps -n node_name
#srvctl config nodeapps -n node_name [-a] [-g] [-o] [-s] [-l]
-a Display VIP configuration
-g Display GSD configuration
-s Display ONS daemon configuration
-l Display listener configuration
#srvctl modify nodeapps -n node_name [-A new_vip_address]
#srvctl modify nodeapps -n lnx06 -A 10.50.99.43/255.255.252.0/eth0
#srvctl getenv nodeapps -n node_name [-t name_list]
#srvctl setenv nodeapps -n node_name {-t "name=val[,name=val,...]"|-T "name=val"}
#srvctl setenv nodeapps -n adcracdbq3 -t "TNS_ADMIN=/u01/app/oracle/product/11.1/asm/network/admin"
#srvctl unsetenv nodeapps -n node_name [-t name_list]
In 11g Release 2, some commands' syntax has changed:
srvctl add nodeapps -n node_name -A {name|ip}/netmask[/if1[|if2|...]] [-m multicast_ip_address] [-p multicast_port_number] [-l ons_local_port] [-r ons_remote-port] [-t host[:port][,host[:port],...]] [-v]
srvctl add nodeapps -S subnet/netmask[/if1[|if2|...]] [-d dhcp_server_type] [-m multicast_ip_address] [-p multicast_port_number] [-l ons_local_port] [-r ons_remote-port] [-t host[:port][,host[:port],...]] [-v]
#srvctl add nodeapps -n devnode1 -A 1.2.3.4/255.255.255.0
srvctl remove nodeapps [-f] [-y] [-v]
srvctl remove nodeapps
srvctl start nodeapps [-n node_name] [-v]
srvctl start nodeapps
srvctl stop nodeapps [-n node_name] [-r] [-v]
srvctl stop nodeapps
srvctl status nodeapps
srvctl enable nodeapps [-g] [-v]
srvctl enable nodeapps -g -v
srvctl disable nodeapps [-g] [-v]
srvctl disable nodeapps -g -v
srvctl config nodeapps [-a] [-g] [-s] [-e]
srvctl config nodeapps -a -g -s -e
srvctl modify nodeapps [-n node_name -A new_vip_address] [-S subnet/netmask[/if1[|if2|...]] [-m multicast_ip_address] [-p multicast_port_number] [-e eons_listen_port] [-l ons_local_port] [-r ons_remote_port] [-t host[:port][,host:port,...]] [-v]
srvctl modify nodeapps -n mynode1 -A 100.200.30.40/255.255.255.0/eth0
srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "name_list"] [-v]
srvctl getenv nodeapps -a
srvctl setenv nodeapps {-t "name=val[,name=val][...]" | -T "name=val"} [-v]
srvctl setenv nodeapps -T "CLASSPATH=/usr/local/jdk/jre/rt.jar" -v
srvctl unsetenv nodeapps -t "name_list" [-v]
srvctl unsetenv nodeapps -t "test_var1,test_var2"
ASM:
------
srvctl add asm -n node_name -i asminstance -o ORACLE_HOME [-p spfile]
srvctl remove asm -n node_name [-i asminstance] [-f]
srvctl remove asm -n db6
srvctl start asm -n node_name [-i asminstance] [-o start_options] [-c connect_str|-q]
srvctl start asm -n node_name [-i asminstance] [-o open]
srvctl start asm -n node_name [-i asminstance] -o nomount
srvctl start asm -n node_name [-i asminstance] -o mount
srvctl start asm -n linux01
srvctl stop asm -n node_name [-i asminstance] [-o stop_options] [-c connect_str|-q]
srvctl stop asm -n node_name [-i asminstance] [-o normal]
srvctl stop asm -n node_name [-i asminstance] -o transactional
srvctl stop asm -n node_name [-i asminstance] -o immediate
srvctl stop asm -n node_name [-i asminstance]-o abort
srvctl stop asm -n racnode1
srvctl stop asm -n devnode1 -i +asm1
srvctl status asm -n node_name
srvctl status asm -n racnode1
srvctl enable asm -n node_name [-i asminstance]
srvctl enable asm -n lnx03 -i +asm3
srvctl disable asm -n node_name [-i asminstance]
srvctl disable asm -n lnx02 -i +asm2
srvctl config asm -n node_name
srvctl config asm -n lnx08
srvctl modify asm -n node_name -i asminstance [-o ORACLE_HOME] [-p spfile]
srvctl modify asm -n rac6 -i +asm6 -o /u01/app/oracle/product/11.1/asm
In 11g Release 2, some commands' syntax has changed:
srvctl add asm [-l lsnr_name] [-p spfile] [-d asm_diskstring]
srvctl add asm
srvctl add asm -l LISTENERASM -p +dg_data/spfile.ora
srvctl remove asm [-f]
srvctl remove asm -f
srvctl start asm [-n node_name] [-o start_options]
srvctl start asm -n devnode1
srvctl stop asm [-n node_name] [-o stop_options] [-f]
srvctl stop asm -n devnode1 -f
srvctl status asm [-n node_name] [-a]
srvctl status asm -n devnode1 -a
srvctl enable asm [-n node_name]
srvctl enable asm -n devnode1
srvctl disable asm [-n node_name]
srvctl disable asm -n devnode1
srvctl config asm [-a]
srvctl config asm -a
srvctl modify asm [-l lsnr_name] [-p spfile] [-d asm_diskstring]
srvctl modify asm [-n node_name] [-l listener_name] [-d asm_diskstring] [-p spfile_path_name]
srvctl modify asm -l lsnr1
srvctl getenv asm [-t name[, ...]]
srvctl getenv asm
srvctl setenv asm {-t "name=val [,...]" | -T "name=value"}
srvctl setenv asm -t LANG=en
srvctl unsetenv asm -t "name[, ...]"
srvctl unsetenv asm -t CLASSPATH
Listener:
--------------------------------------------------------------------------------
srvctl add listener -n node_name -o ORACLE_HOME [-l listener_name] -- 11g R1 command
srvctl remove listener -n node_name [-l listener_name] -- 11g R1 command
srvctl start listener -n node_name [-l listener_names]
srvctl start listener -n node1
srvctl stop listener -n node_name [-l listener_names]
srvctl stop listener -n node1
srvctl status listener [-n node_name] [-l listener_names] -- 11g R1 command
srvctl status listener -n node2
srvctl config listener -n node_name
srvctl modify listener -n node_name [-l listener_names] -o ORACLE_HOME -- 11g R1 command
srvctl modify listener -n racdb4 -o /u01/app/oracle/product/11.1/asm -l "LISTENER_RACDB4"
In 11g Release 2, some commands' syntax has changed:
srvctl add listener [-l lsnr_name] [-s] [-p "[TCP:]port[, ...][/IPC:key][/NMP:pipe_name][/TCPS:s_port] [/SDP:port]"] [-k network_number] [-o ORACLE_HOME]
srvctl add listener -l LISTENERASM -p "TCP:1522" -o $ORACLE_HOME
srvctl add listener -l listener112 -p 1341 -o /ora/ora112
srvctl remove listener [-l lsnr_name|-a] [-f]
srvctl remove listener -l lsnr01
srvctl stop listener [-n node_name] [-l lsnr_name] [-f]
srvctl enable listener [-l lsnr_name] [-n node_name]
srvctl enable listener -l listener_dev -n node5
srvctl disable listener [-l lsnr_name] [-n node_name]
srvctl disable listener -l listener_dev -n node5
srvctl config listener [-l lsnr_name] [-a]
srvctl config listener
srvctl modify listener [-l listener_name] [-o oracle_home] [-u user_name] [-p "[TCP:]port_list[/IPC:key][/NMP:pipe_name][/TCPS:s_port][/SDP:port]"] [-k network_number]
srvctl modify listener -n node1 -p "TCP:1521,1522"
srvctl getenv listener [-l lsnr_name] [-t name[, ...]]
srvctl getenv listener
srvctl setenv listener [-l lsnr_name] {-t "name=val [,...]" | -T "name=value"}
srvctl setenv listener -t LANG=en
srvctl unsetenv listener [-l lsnr_name] -t "name[, ...]"
srvctl unsetenv listener -t "TNS_ADMIN"
New srvctl commands in 11g Release 2
Diskgroup:
--------------------------------------------------------------------------------
srvctl remove diskgroup -g diskgroup_name [-n node_list] [-f]
srvctl remove diskgroup -g DG1 -f
srvctl start diskgroup -g diskgroup_name [-n node_list]
srvctl start diskgroup -g diskgroup1 -n node1,node2
srvctl stop diskgroup -g diskgroup_name [-n node_list] [-f]
srvctl stop diskgroup -g ASM_FRA_DG
srvctl stop diskgroup -g dg1 -n node1,node2 -f
srvctl status diskgroup -g diskgroup_name [-n node_list] [-a]
srvctl status diskgroup -g dg_data -n node1,node2 -a
srvctl enable diskgroup -g diskgroup_name [-n node_list]
srvctl enable diskgroup -g diskgroup1 -n node1,node2
srvctl disable diskgroup -g diskgroup_name [-n node_list]
srvctl disable diskgroup -g dg_fra -n node1,node2
Home:
-------
srvctl start home -o ORACLE_HOME -s state_file [-n node_name]
srvctl start home -o /u01/app/oracle/product/11.2.0/db_1 -s ~/state.txt
srvctl stop home -o ORACLE_HOME -s state_file [-t stop_options] [-n node_name] [-f]
srvctl stop home -o /u01/app/oracle/product/11.2.0/db_1 -s ~/state.txt
srvctl status home -o ORACLE_HOME -s state_file [-n node_name]
srvctl status home -o /u01/app/oracle/product/11.2.0/db_1 -s ~/state.txt
ONS (Oracle Notification Service):
-------------------------------------
srvctl add ons [-l ons-local-port] [-r ons-remote-port] [-t host[:port][,host[:port]...]] [-v]
srvctl add ons -l 6200
srvctl remove ons [-f] [-v]
srvctl remove ons -f
srvctl start ons [-v]
srvctl start ons -v
srvctl stop ons [-v]
srvctl stop ons -v
srvctl status ons
srvctl enable ons [-v]
srvctl enable ons
srvctl disable ons [-v]
srvctl disable ons
srvctl config ons
srvctl modify ons [-l ons-local-port] [-r ons-remote-port] [-t host[:port][,host[:port]...]] [-v]
srvctl modify ons
EONS (E Oracle Notification Service):
---------------------------------------
srvctl add eons [-p portnum] [-m multicast-ip-address] [-e eons-listen-port] [-v]
#srvctl add eons -p 2018
srvctl remove eons [-f] [-v]
srvctl remove eons -f
srvctl start eons [-v]
srvctl start eons
srvctl stop eons [-f] [-v]
srvctl stop eons -f
srvctl status eons
srvctl enable eons [-v]
srvctl enable eons
srvctl disable eons [-v]
srvctl disable eons
srvctl config eons
srvctl modify eons [-m multicast_ip_address] [-p multicast_port_number] [-e eons_listen_port] [-v]
srvctl modify eons -p 2018
FileSystem:
--------------------------------------------------------------------------------
srvctl add filesystem -d volume_device -v volume_name -g diskgroup_name [-m mountpoint_path] [-u user_name]
srvctl add filesystem -d /dev/asm/d1volume1 -v VOLUME1 -d RAC_DATA -m /oracle/cluster1/acfs1
srvctl remove filesystem -d volume_device_name [-f]
srvctl remove filesystem -d /dev/asm/racvol1
srvctl start filesystem -d volume_device_name [-n node_name]
srvctl start filesystem -d /dev/asm/racvol3
srvctl stop filesystem -d volume_device_name [-n node_name] [-f]
srvctl stop filesystem -d /dev/asm/racvol1 -f
srvctl status filesystem -d volume_device_name
srvctl status filesystem -d /dev/asm/racvol2
srvctl enable filesystem -d volume_device_name
srvctl enable filesystem -d /dev/asm/racvol9
srvctl disable filesystem -d volume_device_name
srvctl disable filesystem -d /dev/asm/racvol1
srvctl config filesystem -d volume_device_path
srvctl modify filesystem -d volume_device_name -u user_name
srvctl modify filesystem -d /dev/asm/racvol1 -u sysadmin
SrvPool (Server Pool):
--------------------------------------------------------------------------------
srvctl add srvpool -g server_pool [-i importance] [-l min_size] [-u max_size] [-n node_list] [-f]
srvctl add srvpool -g SP1 -i 1 -l 3 -u 7 -n node1,node2
srvctl remove srvpool -g server_pool
srvctl remove srvpool -g srvpool1
srvctl status srvpool [-g server_pool] [-a]
srvctl status srvpool -g srvpool2 -a
srvctl config srvpool [-g server_pool]
srvctl config srvpool -g dbpool
srvctl modify srvpool -g server_pool [-i importance] [-l min_size] [-u max_size] [-n node_name_list] [-f]
srvctl modify srvpool -g srvpool4 -i 0 -l 2 -u 4 -n node3, node4
Server:
--------------------------------------------------------------------------------
srvctl status server -n "server_name_list" [-a]
srvctl status server -n server11 -a
srvctl relocate server -n "server_name_list" -g server_pool_name [-f]
srvctl relocate server -n "linux1, linux2" -g sp2
Scan (Single Client Access Name):
----------------------------------
srvctl add scan -n scan_name [-k network_number] [-S subnet/netmask[/if1[|if2|...]]]
#srvctl add scan -n scan.mycluster.example.com
srvctl remove scan [-f]
srvctl remove scan
srvctl remove scan -f
srvctl start scan [-i ordinal_number] [-n node_name]
srvctl start scan
srvctl start scan -i 1 -n node1
srvctl stop scan [-i ordinal_number] [-f]
srvctl stop scan
srvctl stop scan -i 1
srvctl status scan [-i ordinal_number]
srvctl status scan
srvctl status scan -i 1
srvctl enable scan [-i ordinal_number]
srvctl enable scan
srvctl enable scan -i 1
srvctl disable scan [-i ordinal_number]
srvctl disable scan
srvctl disable scan -i 3
srvctl config scan [-i ordinal_number]
srvctl config scan
srvctl config scan -i 2
srvctl modify scan -n scan_name
srvctl modify scan
srvctl modify scan -n scan1
srvctl relocate scan -i ordinal_number [-n node_name]
srvctl relocate scan -i 2 -n node2
ordinal_number=1,2,3
Scan_listener:
--------------
srvctl add scan_listener [-l lsnr_name_prefix] [-s] [-p "[TCP:]port_list[/IPC:key][/NMP:pipe_name][/TCPS:s_port] [/SDP:port]"]
#srvctl add scan_listener -l myscanlistener
srvctl remove scan_listener [-f]
srvctl remove scan_listener
srvctl remove scan_listener -f
srvctl start scan_listener [-n node_name] [-i ordinal_number]
srvctl start scan_listener
srvctl start scan_listener -i 1
srvctl stop scan_listener [-i ordinal_number] [-f]
srvctl stop scan_listener -i 3
srvctl status scan_listener [-i ordinal_number]
srvctl status scan_listener
srvctl status scan_listener -i 1
srvctl enable scan_listener [-i ordinal_number]
srvctl enable scan_listener
srvctl enable scan_listener -i 2
srvctl disable scan_listener [-i ordinal_number]
srvctl disable scan_listener
srvctl disable scan_listener -i 1
srvctl config scan_listener [-i ordinal_number]
srvctl config scan_listener
srvctl config scan_listener -i 3
srvctl modify scan_listener {-p [TCP:]port[/IPC:key][/NMP:pipe_name] [/TCPS:s_port][/SDP:port] | -u }
srvctl modify scan_listener -u
srvctl relocate scan_listener -i ordinal_number [-n node_name]
srvctl relocate scan_listener -i 1
ordinal_number=1,2,3
GNS (Grid Naming Service):
------------------------------
srvctl add gns -i ip_address -d domain
srvctl add gns -i 192.124.16.96 -d cluster.mycompany.com
srvctl remove gns [-f]
srvctl remove gns
srvctl start gns [-l log_level] [-n node_name]
srvctl start gns
srvctl stop gns [-n node_name [-v] [-f]
srvctl stop gns
srvctl status gns [-n node_name]
srvctl status gns
srvctl enable gns [-n node_name]
srvctl enable gns
srvctl disable gns [-n node_name]
srvctl disable gns -n devnode2
srvctl config gns [-a] [-d] [-k] [-m] [-n node_name] [-p] [-s] [-V] [-q name] [-l] [-v]
srvctl config gns -n lnx03
srvctl modify gns [-i ip_address] [-d domain]
srvctl modify gns -i 192.000.000.007
srvctl relocate gns [-n node_name]
srvctl relocate gns -n node2
VIP (Virtual Internet Protocol):
--------------------------------
srvctl add vip -n node_name -A {name|ip}/netmask[/if1[if2|...]] [-k network_number] [-v]
#srvctl add vip -n node96 -A 192.124.16.96/255.255.255.0 -k 2
srvctl remove vip -i "vip_name_list" [-f] [-y] [-v]
srvctl remove vip -i "vip1,vip2,vip3" -f -y -v
srvctl start vip {-n node_name|-i vip_name} [-v]
srvctl start vip -i dev1-vip -v
srvctl stop vip {-n node_name|-i vip_name} [-r] [-v]
srvctl stop vip -n node1 -v
srvctl status vip {-n node_name|-i vip_name}
srvctl status vip -i node1-vip
srvctl enable vip -i vip_name [-v]
srvctl enable vip -i prod-vip -v
srvctl disable vip -i vip_name [-v]
srvctl disable vip -i vip3 -v
srvctl config vip {-n node_name|-i vip_name}
srvctl config vip -n devnode2
srvctl getenv vip -i vip_name [-t "name_list"] [-v]
srvctl getenv vip -i node1-vip
srvctl setenv vip -i vip_name {-t "name=val[,name=val,...]" | -T "name=val"}
srvctl setenv vip -i dev1-vip -t LANG=en
srvctl unsetenv vip -i vip_name -t "name_list" [-v]
srvctl unsetenv vip -i myvip -t CLASSPATH
OC4J (Oracle Container for Java):
-----------------------------------
srvctl add oc4j [-v]
srvctl add oc4j
srvctl remove oc4j [-f] [-v]
srvctl remove oc4j
srvctl start ocj4 [-v]
srvctl start ocj4 -v
srvctl stop oc4j [-f] [-v]
srvctl stop oc4j -f -v
srvctl status oc4j [-n node_name]
srvctl status oc4j -n lnx01
srvctl enable oc4j [-n node_name] [-v]
srvctl enable oc4j -n dev3
srvctl disable oc4j [-n node_name] [-v]
srvctl disable oc4j -n dev1
srvctl config oc4j
srvctl modify oc4j -p oc4j_rmi_port [-v]
srvctl modify oc4j -p 5385
srvctl relocate oc4j [-n node_name] [-v]
srvctl relocate oc4j -n lxn06 -v
CLSCFG: -- cluster configuration tool
clscfg -help or clscfg -h
clscfg -install -- creates a new configuration
clscfg -add -- adds a node to the configuration
clscfg -delete -- deletes a node from the configuration
clscfg -upgrade -- upgrades an existing configuration
clscfg -downgrade -- downgrades an existing configuration
clscfg -local -- creates a special single-node configuration for ASM
clscfg -concepts -- brief listing of terminology used in the other modes
clscfg -trace -- used in conjunction with any mode above for tracing
CLUVFY: Cluster Verification Utility
cluvfy [-help] or cluvfy -h
cluvfy stage {-pre|-post} stage_name stage_specific_options [-verbose]
Valid stage options and stage names are:
-post hwos : post-check for hardware and operating system
-pre cfs : pre-check for CFS setup
-post cfs : post-check for CFS setup
-pre crsinst : pre-check for CRS installation
-post crsinst : post-check for CRS installation
-pre hacfg : pre-check for HA configuration
-post hacfg : post-check for HA configuration
-pre dbinst : pre-check for database installation
-pre acfscfg : pre-check for ACFS Configuration.
-post acfscfg : post-check for ACFS Configuration.
-pre dbcfg : pre-check for database configuration
-pre nodeadd : pre-check for node addition.
-post nodeadd : post-check for node addition.
-post nodedel : post-check for node deletion.
cluvfy stage -post hwos -n node_list [-verbose]
./runcluvfy.sh stage -post hwos -n node1,node2 -verbose
-- Installation checks after hwos - Hardware and Operating system installation
cluvfy stage -pre cfs -n node_list [-verbose]
cluvfy stage -post cfs -n node_list [-verbose]
-- Installation checks before/after Cluster File System
cluvfy stage -pre crsinst -n node_list [-c ocr_location] [-r {10gR1|10gR2|11gR1|11gR2}] [-q voting_disk] [-osdba osdba_group] [-orainv orainventory_group] [-verbose]
cluvfy stage -pre crsinst -n node1,node2,node3
./runcluvfy.sh stage -pre crsinst -n all -verbose
cluvfy stage -post crsinst -n node_list [-verbose]
-- Installation checks before/after CRS installation
cluvfy stage -pre dbinst -n node_list [-r {10gR1|10gR2|11gR1|11gR2}] [-osdba osdba_group] [-orainv orainventory_group] [-verbose]
cluvfy stage -pre dbcfg -n node_list -d oracle_home [-verbose]
-- Installation checks before/after DB installation/configuration
cluvfy comp component_name component_specific_options [-verbose]
Valid components are:
nodereach : checks reachability between nodes
nodecon : checks node connectivity
cfs : checks CFS integrity
ssa : checks shared storage accessibility
space : checks space availability
sys : checks minimum system requirements
clu : checks cluster integrity
clumgr : checks cluster manager integrity
ocr : checks OCR integrity
olr : checks OLR integrity
ha : checks HA integrity
crs : checks CRS integrity
nodeapp : checks node applications existence
admprv : checks administrative privileges
peer : compares properties with peers
software : checks software distribution
asm : checks ASM integrity
acfs : checks ACFS integrity
gpnp : checks GPnP integrity
gns : checks GNS integrity
scan : checks SCAN configuration
ohasd : checks OHASD integrity
clocksync : checks Clock Synchronization
vdisk : check Voting Disk Udev settings
cluvfy comp nodereach -n node_list [-srcnode node] [-verbose]
cluvfy comp nodecon -n node_list [-i interface_list] [-verbose]
cluvfy comp nodecon -n node1,node2,node3 -i eth0 -verbose
cluvfy comp nodeapp [-n node_list] [-verbose]
cluvfy comp peer [-refnode node] -n node_list [-r {10gR1|10gR2|11gR1|11gR2}] [-orainv orainventory_group] [-osdba osdba_group] [-verbose]
cluvfy comp peer -n node1,node2 -r 10gR2 -verbose
cluvfy comp crs [-n node_list] [-verbose]
cluvfy comp cfs [-n node_list] -f file_system [-verbose]
cluvfy comp cfs -f /oradbshare -n all -verbose
cluvfy comp ocr [-n node_list] [-verbose]
cluvfy comp clu -n node_list -verbose
cluvfy comp clumgr [-n node_list] [-verbose]
cluvfy comp sys [-n node_list] -p {crs|database} [-r {10gR1|10gR2|11gR1|11gR2}] [-osdba osdba_group] [-orainv orainventory_group] [-verbose]
cluvfy comp sys -n node1,node2 -p crs -verbose
cluvfy comp admprv [-n node_list] [-verbose] |-o user_equiv [-sshonly] |-o crs_inst [-orainv orainventory_group] |-o db_inst [-orainv orainventory_group] [-osdba osdba_group] |-o db_config -d oracle_home
cluvfy comp ssa [-n node_list] [-s storageID_list] [-verbose]
cluvfy comp space [-n node_list] -l storage_location -z disk_space{B|K|M|G} [-verbose]
cluvfy comp space -n all -l /home/dbadmin/products -z 2G -verbose
cluvfy comp olr
CRSCTL - Cluster Ready Service Control
$crsctl -- to get help
$crsctl query crs activeversion
$crsctl query crs softwareversion [node_name]
#crsctl start crs
#crsctl stop crs
(or)
#/etc/init.d/init.crs start
#/etc/init.d/init.crs stop
#crsctl enable crs
#crsctl disable crs
(or)
#/etc/init.d/init.crs enable
#/etc/init.d/init.crs disable
$crsctl check crs
$crsctl check cluster [-node node_name] -- Oracle RAC 11g command, checks the viability of CSS across nodes
#crsctl start cluster -n HostName -- 11g R2
#crsctl stop cluster -n HostName -- 11g R2
#crsctl stop cluster -all -- 11g R2
$crsctl check cssd
$crsctl check crsd
$crsctl check evmd
$crsctl check oprocd
$crsctl check ctss
#/etc/init.d/init.cssd stop
#/etc/init.d/init.cssd start
#/etc/rc.d/init.d/init.evmd
#/etc/rc.d/init.d/init.cssd
#/etc/rc.d/init.d/init.crsd
#mv /etc/rc3.d/S96init.cssd /etc/rc3.d/_S96init.cssd -- to stop cssd from autostarting after reboot
#crsctl check css votedisk
#crsctl query css votedisk -- lists the voting disks used by CSS
#crsctl add css votedisk PATH
#crsctl add css votedisk PATH -force -- if Clusterware is not running
#crsctl delete css votedisk PATH
#crsctl delete css votedisk PATH -force -- if Clusterware is not running
#crsctl set css parameter_name value -- set parameters on OCR
#crsctl set css misscount 100
#crsctl unset css parameter_name -- sets CSS parameter to its default
#crsctl unset css misscount
#crsctl get css parameter_name -- gets the value of a CSS parameter
#crsctl get css disktimeout
#crsctl get css misscount
#crsctl get css reboottime
#crsctl start resources -- starts Clusterware resources
./crsctl start resource ora.DATA.dg
#crsctl stop resources -- stops Clusterware resources
$crsctl status resource
$crsctl status resource -t
$crsctl stat resource -t
$crsctl lsmodules crs -- lists CRS modules that can be used for debugging
CRSUI
CRSCOMM
CRSRTI
CRSMAIN
CRSPLACE
CRSAPP
CRSRES
CRSCOMM
CRSOCR
CRSTIMER
CRSEVT
CRSD
CLUCLS
CSSCLNT
COMMCRS
COMMNS
$crsctl lsmodules css -- lists CSS modules that can be used for debugging
CSSD
COMMCRS
COMMNS
$crsctl lsmodules evm -- lists EVM modules that can be used for debugging
EVMD
EVMDMAIN
EVMCOMM
EVMEVT
EVMAPP
EVMAGENT
CRSOCR
CLUCLS
CSSCLNT
COMMCRS
COMMNS
$crsctl start has (HAS - High Availability Services)
$crsctl stop has
$crsctl check has
OCR Modules -- cannot be listed with crsctl lsmodules command
OCRAPI
OCRCLI
OCRSRV
OCRMAS
OCRMSG
OCRCAC
OCRRAW
OCRUTL
OCROSD
#crsctl debug statedump crs -- dumps state info for crs objects
#crsctl debug statedump css -- dumps state info for css objects
#crsctl debug statedump evm -- dumps state info for evm objects
#crsctl debug log crs [module:level]{,module:level} ...
-- Turns on debugging for CRS
#crsctl debug log crs CRSEVT:5,CRSAPP:5,CRSTIMER:5,CRSRES:5,CRSRTI:1,CRSCOMM:2
#crsctl debug log css [module:level]{,module:level} ...
-- Turns on debugging for CSS
#crsctl debug log css CSSD:1
#crsctl debug log evm [module:level]{,module:level} ...
-- Turns on debugging for EVM
#crsctl debug log evm EVMCOMM:1
#crsctl debug trace crs -- dumps CRS in-memory tracing cache
#crsctl debug trace css -- dumps CSS in-memory tracing cache
#crsctl debug trace evm -- dumps EVM in-memory tracing cache
#crsctl debug log res resource_name:level -- turns on debugging for resources
#crsctl debug log res "ora.lnx04.vip:1"
#crsctl trace command -- trace any of the above commands by prefixing it with "trace"
#crsctl trace check css
#crsctl backup -h
#crsctl backup css votedisk
Here is the list of the options for CRSCTL in 11gR2:
crsctl add - add a resource, type or other entity
crsctl backup - back up voting disk for CSS
crsctl check - check a service, resource or other entity
crsctl config - output autostart configuration
crsctl debug - obtain or modify debug state
crsctl delete - delete a resource, type or other entity
crsctl disable - disable autostart
crsctl discover - discover DHCP server
crsctl enable - enable autostart
crsctl get - get an entity value
crsctl getperm - get entity permissions
crsctl lsmodules - list debug modules
crsctl modify - modify a resource, type or other entity
crsctl query - query service state
crsctl pin - Pin the nodes in the nodelist
crsctl relocate - relocate a resource, server or other entity
crsctl replace - replaces the location of voting files
crsctl release - release a DHCP lease
crsctl request - request a DHCP lease
crsctl setperm - set entity permissions
crsctl set - set an entity value
crsctl start - start a resource, server or other entity
crsctl status - get status of a resource or other entity
crsctl stop - stop a resource, server or other entity
crsctl unpin - unpin the nodes in the nodelist
crsctl unset - unset an entity value, restoring its default
crsctl add resource resource_name -type resource_type [-file file_path | -attr "attribute_name=attribute_value,attribute_name=attribute_value,..."] [-i] [-f]
crsctl add resource r1 -type test_type1 -attr "PATH_NAME=/tmp/r1.txt"
crsctl add resource app.appvip -type app.appvip.type -attr "RESTART_ATTEMPTS=2, START_TIMEOUT=100,STOP_TIMEOUT=100,CHECK_INTERVAL=10,USR_ORA_VIP=172.16.0.0, START_DEPENDENCIES=hard(ora.net1.network)pullup(ora.net1.network), STOP_DEPENDENCIES=hard(ora.net1.network)"
crsctl add type type_name -basetype base_type_name {-attr "ATTRIBUTE=attribute_name | -file file_path,TYPE={string | int} [,DEFAULT_VALUE=default_value][,FLAGS=[READONLY][|REQUIRED]]"}
crsctl add type test_type1 -basetype cluster_resource -attr "ATTRIBUTE=FOO,TYPE=integer,DEFAULT_VALUE=0"
crsctl add crs administrator -u user_name [-f]
crsctl add crs administrator -u scott
crsctl add css votedisk path_to_voting_disk [path_to_voting_disk ...] [-purge]
crsctl add css votedisk /stor/grid/ -purge
crsctl add serverpool server_pool_name {-file file_path | -attr "attr_name=attr_value[,attr_name=attr_value[,...]]"} [-i] [-f]
crsctl add serverpool testsp -attr "MAX_SIZE=5"
crsctl add serverpool sp1 -file /tmp/sp1_attr
crsctl check cluster [-all | -n server_name [...]]
crsctl check cluster -all
crsctl check crs
crsctl check css
crsctl check ctss -- Cluster Time Synchronization services
crsctl check evm
crsctl check resource {resource_name [...] | -w "filter"} [-n node_name] [-k cardinality_id] [-d degree_id]
crsctl check resource appsvip
crsctl config crs
crsctl delete crs administrator -u user_name [-f]
crsctl delete crs administrator -u scott
crsctl delete resource resource_name [-i] [-f]
crsctl delete resource myResource
crsctl delete type type_name [-i]
crsctl delete type app.appvip.type
crsctl delete css votedisk voting_disk_GUID [voting_disk_GUID [...]]
crsctl delete css votedisk 61f4273ca8b34fd0bfadc2531605581d
crsctl delete node -n node_name
crsctl delete node -n node06
crsctl delete serverpool server_pool_name [server_pool_name [...]] [-i]
crsctl delete serverpool sp1
crsctl disable crs
crsctl discover dhcp -clientid clientid [-port port]
crsctl discover dhcp -clientid dsmjk252clr-dtmk01-vip
crsctl enable crs
crsctl get hostname
crsctl get clientid dhcp -cluname cluster_name -viptype vip_type [-vip vip_res_name] [-n node_name]
crsctl get clientid dhcp -cluname dsmjk252clr -viptype HOSTVIP -n tmjk01
crsctl get css parameter
crsctl get css disktimeout
crsctl get css ipmiaddr
crsctl get nodename
crsctl getperm resource resource_name [ {-u user_name | -g group_name} ]
crsctl getperm resource app.appvip
crsctl getperm resource app.appvip -u oracle
crsctl getperm resource app.appvip -g dba
crsctl getperm type resource_type [-u user_name] | [-g group_name]
crsctl getperm type app.appvip.type
crsctl getperm serverpool server_pool_name [-u user_name | -g group_name]
crsctl getperm serverpool sp1
crsctl lsmodules {mdns | gpnp | css | crf | crs | ctss | evm | gipc}
crsctl lsmodules evm
mdns: Multicast domain name server
gpnp: Grid Plug and Play service
css: Cluster Synchronization Services
crf: Cluster Health Monitor
crs: Cluster Ready Services
ctss: Cluster Time Synchronization Service
evm: Event Manager
gipc: Grid Interprocess Communication
crsctl modify resource resource_name -attr "attribute_name=attribute_value" [-i] [-f] [-delete]
crsctl modify resource appsvip -attr USR_ORA_VIP=10.1.220.17 -i
crsctl modify type type_name -attr "ATTRIBUTE=attribute_name,TYPE={string | int} [,DEFAULT_VALUE=default_value [,FLAGS=[READONLY][| REQUIRED]]" [-i] [-f]]
crsctl modify type myType.type -attr "ATTRIBUTE=FOO,DEFAULT_VALUE=0 ATTRIBUTE=BAR,DEFAULT_VALUE=baz"
crsctl modify serverpool server_pool_name -attr "attr_name=attr_value [,attr_name=attr_value[, ...]]" [-i] [-f]
crsctl modify serverpool sp1 -attr "MAX_SIZE=7"
crsctl pin css -n node_name [ node_name [..]]
crsctl pin css -n node2
crsctl query crs administrator
crsctl query crs activeversion
crsctl query crs releaseversion
crsctl query crs softwareversion node_name
crsctl query css ipmiconfig
crsctl query css ipmidevice
crsctl query css votedisk
crsctl query dns {-servers | -name name [-dnsserver DNS_server_address] [-port port] [-attempts number_of_attempts] [-timeout timeout_in_seconds] [-v]}
crsctl release dhcp -clientid clientid [-port port]
crsctl release dhcp -clientid spmjk662clr-spmjk03-vip
crsctl relocate resource {resource_name [...] | -all -s source_server | -w "filter"} [-n destination_server] [-k cid] [-env "env1=val1,env2=val2,..."] [-i] [-f]
crsctl relocate resource myResource1 -s node1 -n node3
crsctl relocate server server_name [...] -c server_pool_name [-i] [-f]
crsctl relocate server node6 node7 -c sp1
crsctl replace discoverystring 'absolute_path[,...]'
crsctl replace discoverystring "/oracle/css1/*,/oracle/css2/*"
crsctl replace votedisk [+asm_disk_group | path_to_voting_disk [...]]
crsctl replace votedisk +diskgroup1
crsctl replace votedisk /mnt/nfs/disk1 /mnt/nfs/disk2
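For example, in 11g R2 the voting files can be migrated into ASM online; a minimal sketch, assuming a diskgroup named +DATA (hypothetical) with enough failure groups for the required redundancy:
$crsctl query css votedisk -- note the current voting file locations
#crsctl replace votedisk +DATA -- moves all voting files into the diskgroup
$crsctl query css votedisk -- confirm the new locations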
crsctl request dhcp -clientid clientid [-port port]
crsctl request dhcp -clientid tmj0462clr-tmjk01-vip
crsctl set css parameter value
crsctl set css ipmiaddr ip_address
crsctl set css ipmiaddr 192.0.2.244
crsctl set css ipmiadmin ipmi_administrator_name
crsctl set css ipmiadmin scott
crsctl set log {[crs | css | evm "component_name=log_level, [...]"] | [all=log_level]}
crsctl set log crs "CRSRTI=1,CRSCOMM=2"
crsctl set log evm all=2
crsctl set log res "myResource1=3"
crsctl set {log | trace} module_name "component:debugging_level [,component:debugging_level][,...]"
crsctl set log crs "CRSRTI:1,CRSCOMM:2"
crsctl set log crs "CRSRTI:1,CRSCOMM:2,OCRSRV:4"
crsctl set log evm "EVMCOMM:1"
crsctl set log res "resname:1"
crsctl set log res "resource_name=debugging_level"
crsctl set log res "ora.node1.vip:1"
crsctl set log crs "CRSRTI:1,CRSCOMM:2" -nodelist node1,node2
crsctl set trace "component_name=tracing_level,..."
crsctl set trace "css=3"
crsctl setperm resource resource_name {-u acl_string | -x acl_string | -o user_name | -g group_name}
crsctl setperm resource myResource -u user:scott:rwx
crsctl setperm type resource_type_name {-u acl_string | -x acl_string | -o user_name | -g group_name}
crsctl setperm type resType -u user:scott:rwx
crsctl setperm serverpool server_pool_name {-u acl_string | -x acl_string | -o user_name | -g group_name}
crsctl setperm serverpool sp3 -u user:scott.tiger:rwx
crsctl start cluster [-all | -n server_name [...]]
crsctl start cluster -n node1 node2
crsctl start crs
crsctl start ip -A {IP_name | IP_address}/netmask/interface_name
crsctl start ip -A 192.168.29.220/255.255.252.0/eth0
crsctl start resource {resource_name [...] | -w filter | -all} [-n server_name] [-k cid] [-d did] [-env "env1=val1,env2=val2,..."] [-i] [-f]
crsctl start resource myResource -n server1
crsctl start testdns [-address address [-port port]] [-once] [-v]
crsctl start testdns -address 192.168.29.218 -port 63 -v
crsctl status resource {resource_name [...] | -w "filter"} [-p | -v [-e]] | [-f | -l | -g] [[-k cid | -n server_name] [-d did]] | [-s -k cid [-d did]] [-t]
crsctl status resource ora.stai14.vip
crsctl stat res -w "TYPE = ora.scan_listener.type"
crsctl status type resource_type_name [...]] [-g] [-p] [-f]
crsctl status type ora.network.type
crsctl status ip -A {IP_name | IP_address}
crsctl status server [-p | -v | -f]
crsctl status server { server_name [...] | -w "filter"} [-g | -p | -v | -f]
crsctl status server node2 -f
crsctl status serverpool [-p | -v | -f]
crsctl status serverpool [server_pool_name [...]] [-w] [-g | -p | -v | -f]
crsctl status serverpool sp1 -f
crsctl status serverpool
crsctl status serverpool -p
crsctl status serverpool -w "MAX_SIZE > 1"
crsctl status testdns [-server DNS_server_address] [-port port] [-v]
crsctl stop cluster [-all | -n server_name [...]] [-f]
crsctl stop cluster -n node1
crsctl stop crs [-f]
crsctl stop crs
crsctl stop resource {resource_name [...] | -w "filter" | -all} [-n server_name] [-k cid] [-d did] [-env "env1=val1,env2=val2,..."] [-i] [-f]
crsctl stop resource -n node1 -k 2
crsctl stop ip -A {IP_name | IP_address}/interface_name
crsctl stop ip -A MyIP.domain.com/eth0
crsctl stop testdns [-address address [-port port]] [-domain GNS_domain] [-v]
crsctl unpin css -n node_name [node_name [...]]
crsctl unpin css -n node1 node4
crsctl unset css parameter
crsctl unset css reboottime
crsctl unset css ipmiconfig
HAS (High Availability Service)
crsctl check has
crsctl config has
crsctl disable has
crsctl enable has
crsctl query has releaseversion
crsctl query has softwareversion
crsctl start has
crsctl stop has [-f]
How do I identify the voting disk/file location?
#crsctl query css votedisk
How to take backup of voting file/disk?
crsctl backup css votedisk
dd -- low-level data copy utility, useful for taking backups of voting disks
Arguments:
if -- input file
of -- output file
bs -- block size
dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/VDFile_0
dd if=/dev/hdb2 of=/dev/sda5 bs=8192
dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=notrunc,noerror
dd if=/dev/zero of=/abc bs=1 count=1 conv=notrunc,ucase
dd if=/dev/cdrom of=/home/satya/myCD.iso bs=2048 conv=sync,notrunc,lcase
dd if=/dev/hda3 skip=9200 of=/home/satya/backup_set_3.img bs=1M count=4600
dd if=/home/satya/1Gb.file bs=64k | dd of=/dev/null
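Restoring a voting disk from a dd backup is the same copy in reverse (10g-style voting disks; Clusterware should be down first); a sketch assuming the backup taken above:
#crsctl stop crs
dd if=$ORACLE_BASE/bkp/VDFile_0 of=/u02/ocfs2/vote/VDFile_0
#crsctl start crs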
DIAGCOLLECTION: -- tool to collect diagnostic information from the Oracle Clusterware home, Oracle home and Oracle base.
#$ORA_CRS_HOME/bin/diagcollection.pl
#$ORA_CRS_HOME/bin/diagcollection.pl --collect
#$ORA_CRS_HOME/bin/diagcollection.pl --collect --crs $ORA_CRS_HOME
#$ORA_CRS_HOME/bin/diagcollection.pl --collect --oh $ORACLE_HOME
#$ORA_CRS_HOME/bin/diagcollection.pl --collect --ob $ORACLE_BASE
#$ORA_CRS_HOME/bin/diagcollection.pl --collect --all
#$ORA_CRS_HOME/bin/diagcollection.pl --coreanalysis
-- analyzes all the trace and log files and summarizes them in a text file
#$ORA_CRS_HOME/bin/diagcollection.pl --clean -- cleans up the gathered information
GSDCTL: - Global Service Daemon Control
gsdctl start -- start the GSD service
gsdctl stop -- stop the GSD service
gsdctl stat -- To obtain the status of the GSD service
$ gsdctl start
$ gsdctl stop
$ gsdctl stat
Log file will be at $ORACLE_HOME/srvm/log/gsdaemon_node_name.log
Oracle RAC O2CB Cluster Service Commands
/etc/init.d/o2cb start
/etc/init.d/o2cb start ocfs2
/etc/init.d/o2cb stop
/etc/init.d/o2cb stop ocfs2
/etc/init.d/o2cb restart
/etc/init.d/o2cb restart ocfs2
/etc/init.d/o2cb status -- Check status
/etc/init.d/o2cb load -- Loads all OCFS2 modules
/etc/init.d/o2cb unload -- Unloads all OCFS2 modules
/etc/init.d/o2cb online ocfs2 -- Brings the cluster online
/etc/init.d/o2cb offline ocfs2 -- Takes the cluster offline
/etc/init.d/o2cb configure -- Configuring the O2CB driver
/etc/init.d/o2cb enable
#chkconfig --add o2cb
#chkconfig --list o2cb
o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off
#chkconfig --del o2cb
service o2cb status
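A typical first-time OCFS2 bring-up strings these together; a sketch assuming a volume /dev/sdb1 already formatted with mkfs.ocfs2 and a mount point /u02 (both hypothetical):
/etc/init.d/o2cb configure -- answer the prompts; set the driver to load on boot
/etc/init.d/o2cb load
/etc/init.d/o2cb online ocfs2
mount -t ocfs2 -o datavolume,nointr /dev/sdb1 /u02 -- datavolume/nointr for volumes holding Oracle files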
OCRCHECK: -- Displays health of OCR (Oracle Cluster Registry).
$ocrcheck -help or ocrcheck -h
$ocrcheck
Version : 2
Total space (kbytes) : 262144
Used space (kbytes) : 16256
Available space (kbytes) : 245888
ID : 1918913332
Device/File Name : /dev/raw/raw1
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw2
Device/File integrity check succeeded
Cluster registry integrity check succeeded
#ocrcheck -local
Status of Oracle Cluster Registry is as follows :
Version : 3
Total space (kbytes) : 262132
Used space (kbytes) : 9200
Available space (kbytes) : 252932
ID : 604793089
Device/File Name : /u02/crs/cdata/localhost/lnx6.olr
Device/File integrity check succeeded
Local OCR integrity check succeeded
$ocrcheck -local -config
Log file will be $ORACLE_HOME/log/node_name/client/ocrcheck_pid.log
Debugging can be controlled through $ORA_CRS_HOME/srvm/admin/ocrlog.ini
How do I identify the OCR file location?
#ocrcheck
OCRCONFIG: -- OCR (Oracle Cluster Registry) CONFIGuration tool
#ocrconfig -help or ocrconfig -h
#ocrconfig -showbackup [auto|manual]
-- default location is $ORA_CRS_HOME/cdata/cluster_name
#ocrconfig -showbackup
#ocrconfig -backuploc dir_name -- change OCR autobackup directory location
#ocrconfig -backuploc /u02/backups
#ocrconfig -manualbackup -- Oracle RAC 11g command, to perform OCR backup manually
#ocrconfig -restore backup_file.ocr -- recovering from autobackup file
#ocrconfig -restore /u02/backups/backup00.ocr
#ocrconfig -export file_name.dmp [-s online]
-- exports OCR content to a file
#ocrconfig -export /tmp/ocr_exp
#ocrconfig -import file_name.dmp
-- recover OCR logically, must be done on all nodes
#ocrconfig -import /tmp/ocr_exp
#ocrconfig -replace ocr [file_name] -- adding/replacing an OCR file
#ocrconfig -replace ocrmirror [file_name]
#ocrconfig -repair ocr file_name
#ocrconfig -repair ocrmirror file_name
#ocrconfig -repair -replace current_OCR_location -replacement target_OCR_location
#ocrconfig -upgrade [user [group]] -- upgrades OCR
#ocrconfig -downgrade [-version version_string] -- downgrades OCR
#ocrconfig -overwrite
#ocrconfig -local -import file_name
#ocrconfig -local -manualbackup
#ocrconfig -local -backuploc new_olr_backup_path
#ocrconfig -local -restore file_name
#ocrconfig -add +new_disk_group
#ocrconfig -delete +unused_disk_group
#ocrconfig -add file_location
#ocrconfig -add /dev/sdd1
#ocrconfig -delete old_storage_location
#ocrconfig -delete /dev/raw/raw2
Log file will be $ORACLE_HOME/log/node_name/client/ocrconfig_pid.log
Debugging can be controlled through $ORA_CRS_HOME/srvm/admin/ocrlog.ini
How to take backup of OCR file?
#ocrconfig -manualbackup
#ocrconfig -export file_name.dmp
How to recover OCR file?
#ocrconfig -restore backup_file.ocr
#ocrconfig -import file_name.dmp
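Putting the physical restore together end to end; a sketch assuming the latest autobackup is /u02/backups/backup00.ocr (hypothetical):
#crsctl stop crs -- on every node; OCR cannot be restored while Clusterware is up
#ocrconfig -restore /u02/backups/backup00.ocr -- run on one node only
#crsctl start crs -- on every node
$cluvfy comp ocr -n all -verbose -- verify OCR integrity after the restore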
OCRDUMP: -- dumps OCR (Oracle Cluster Registry) contents to a file
#ocrdump -help or ocrdump -h
#ocrdump [file_name|-stdout] [-backupfile backup_filename] [-keyname key_name] [-xml] [-noheader]
#ocrdump -- default filename is OCRDUMPFILE
#ocrdump MYFILE
#ocrdump ${HOST}_OCRDUMP
#ocrdump -backupfile my_file
#ocrdump -stdout -keyname SYSTEM
#ocrdump -stdout -xml
$ocrdump -local olr.lst --> Normal Text Format
$ocrdump -local -xml olr_xml.lst --> XML format
$ocrdump -local -backupfile olr_backup_file_name
Log file will be $ORACLE_HOME/log/node_name/client/ocrdump_pid.log
Debugging can be controlled through $ORA_CRS_HOME/srvm/admin/ocrlog.ini
How to take backup of OCR file?
#ocrdump -backupfile my_file
OIFCFG: -- Oracle Interface Configuration tool
A command-line tool, for both single-instance Oracle databases and RAC databases, that allocates and deallocates network interfaces to components, directs components to use specific network interfaces, and retrieves component configuration information.
oifcfg [-help] -- will give help
$ ./oifcfg -help
$ ./oifcfg
oifcfg iflist [-p [-n]]
-p includes description of the subnet, -n includes subnet mask
oifcfg iflist -- display a list of current subnets
eth0 147.43.1.60
eth1 192.168.1.150
oifcfg iflist -p -n
eth0 147.43.1.60 UNKNOWN 255.255.255.0 (public interfaces are shown as UNKNOWN)
eth1 192.168.1.150 PRIVATE 255.255.255.0
oifcfg getif [-node node_name|-global] [-if if_name[/subnet] [-type {cluster_interconnect|public|storage}]]
-- To display a list of networks
oifcfg getif
eth1 192.168.1.150 global cluster_interconnect
eth0 192.168.0.150 global public
oifcfg setif {-node node_name|-global} {if_name/subnet:{cluster_interconnect|public|storage}}...
oifcfg setif -global eth0/10.50.99.0:public
oifcfg setif -global eth0/172.19.141.0:cluster_interconnect
oifcfg delif [-node node_name|-global] [if_name[/subnet]]
oifcfg delif -global
oifcfg delif -global eth0
oifcfg delif -global eth1/172.21.65.0
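The most common oifcfg task is moving the cluster interconnect to a new subnet; a sketch with hypothetical interfaces and subnets (register the new subnet before dropping the old one, then restart Clusterware on all nodes):
oifcfg getif -- record the current assignments
oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
oifcfg delif -global eth1/192.168.1.0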
OLSNODES: -- provides the list of nodes and other information for all nodes participating in the cluster
#olsnodes [node_name] [-g] [-i] [-l] [-n] [-p] [-s] [-t] [-v]
node_name -- displays information for the particular node
-g -- more details
-i -- with VIP
-l -- local node name
-n -- with node number
-p -- private interconnect
-s -- status of the node (ACTIVE or INACTIVE)
-t -- type of the node (PINNED or UNPINNED)
-v -- verbose
How to find out the nodes in Oracle RAC cluster?
#olsnodes -- will list the nodes in the cluster
#olsnodes -n
#olsnodes node44 -v
#olsnodes -n -p -i
node1-pub 1 node1-prv node1-vip
node2-pub 2 node2-prv node2-vip
#olsnodes -i
node1 178.192.1.1
node2 178.192.2.1
node3 178.192.3.1
node4 178.192.4.1
ONSCTL: -- to manage ONS (Oracle Notification Service)
ONS - A publish and subscribe service for communicating information about all FAN events.
onsctl or onsctl help or onsctl -h
onsctl start -- to start ONS
onsctl stop -- to stop ONS
onsctl ping -- to find out the status of ONS
onsctl debug -- to display debug information for the ons daemon
onsctl reconfig -- to reload the ons configuration
onsctl detailed -- to print a verbose syntax description
From 11g Release 2:
onsctl command [options]
onsctl or onsctl help or onsctl -h
onsctl start -- to start ONS
onsctl shutdown -- to shutdown ONS
onsctl ping [max-retry] -- to ping local ons
onsctl debug [attr=val ...] -- to display ons server debug information
onsctl reload -- to trigger ons to reread its configuration file
onsctl set [attr=val ...] -- to set ons log parameters
onsctl query [attr=val] -- to query ons log parameters
onsctl usage [command] -- to print detailed usage description
SRVCONFIG:
srvconfig [options]
srvconfig or srvconfig -help or srvconfig -?
srvconfig -exp file_name
-- exports the contents of the configuration information/cluster registry
srvconfig -imp file_name
-- imports the configuration information/cluster registry
srvconfig -init -- initialize cluster registry (if not already initialized)
srvconfig -init -f -- force initialization of configuration even if initialized
srvconfig -upgrade -dbname db_name -orahome ORACLE_HOME
-- upgrade the database configuration
srvconfig -downgrade -dbname db_name -orahome ORACLE_HOME -version ver_str
-- downgrade the database configuration
SRVCTL: (Server Control utility)
srvctl command target [options]
commands: enable|disable|start|stop|relocate|status|add|remove|modify|getenv|setenv|unsetenv|config
targets: database/db|instance/inst|service/serv|nodeapps|asm|listener
targets: database/db|instance/inst|service/serv|nodeapps|asm|listener |diskgroup|home|ons|eons|filesystem|gns|oc4j|scan|scan_listener |srvpool|server|VIP -- From Oracle 11g R2
srvctl -help or srvctl -h -- print usage
srvctl -V -- prints version
srvctl version: 10.2.0.0.0 (or) srvctl version: 11.2.0.1.0
srvctl status service -h -- help for a specific command
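Every srvctl invocation follows the same command-target-options pattern; for example, a rolling restart of one instance (hypothetical database and instance names):
srvctl status database -d prod -v
srvctl stop instance -d prod -i prod1 -o immediate
srvctl start instance -d prod -i prod1
srvctl status instance -d prod -i prod1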
Database:
--------------------------------------------------------------------------------
srvctl add database -d db_name -o ORACLE_HOME [-m domain_name][-p spfile] [-A name|ip/netmask]
[-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}]
[-s start_options] [-n db_name] [-y {AUTOMATIC|MANUAL}]
srvctl add database -d prod -o /u01/oracle/product/102/prod
srvctl remove database -d db_name [-f]
srvctl remove database -d prod
srvctl start database -d db_name [-o start_options] [-c connect_str|-q]
srvctl start database -d db_name [-o open]
srvctl start database -d db_name -o nomount
srvctl start database -d db_name -o mount
srvctl start db -d prod
srvctl start database -d apps -o open
srvctl stop database -d db_name [-o stop_options] [-c connect_str|-q]
srvctl stop database -d db_name [-o normal]
srvctl stop database -d db_name -o transactional
srvctl stop database -d db_name -o immediate
srvctl stop database -d db_name -o abort
srvctl stop db -d crm -o immediate
srvctl status database -d db_name [-f] [-v] [-S level]
srvctl status database -d db_name -v service_name
srvctl status database -d hrms
srvctl enable database -d db_name
srvctl enable database -d vis
srvctl disable database -d db_name
srvctl disable db -d vis
srvctl config database
srvctl config database -d db_name [-a] [-t]
srvctl config database
srvctl config database -d HYD -a
srvctl modify database -d db_name [-n db_name] [-o ORACLE_HOME] [-m domain_name] [-p spfile]
[-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}] [-s start_options] [-y {AUTOMATIC|MANUAL}]
srvctl modify database -d hrms -r physical_standby
srvctl modify db -d RAC -p /u03/oradata/RAC/spfileRAC.ora -- moves the spfile
srvctl modify database -d HYD -o /u01/app/oracle/product/11.1/db -s open
srvctl getenv database -d db_name [-t name_list]
srvctl getenv database -d prod
srvctl setenv database -d db_name {-t name=val[,name=val,...]|-T name=val}
srvctl setenv database -d HYD -t "TNS_ADMIN=/u01/app/oracle/product/11.1/asm/network/admin"
srvctl setenv db -d prod -t LANG=en
srvctl unsetenv database -d db_name [-t name_list]
srvctl unsetenv database -d prod -t CLASSPATH
In 11g Release 2, the syntax of some commands has changed:
srvctl add database -d db_unique_name -o ORACLE_HOME [-x node_name] [-m domain_name] [-p spfile] [-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}] [-s start_options] [-t stop_options] [-n db_name] [-y {AUTOMATIC|MANUAL}] [-g server_pool_list] [-a "diskgroup_list"]
srvctl add database -d prod -o /u01/oracle/product/112/prod -m foo.com -p +dg1/prod/spfileprod.ora -r PRIMARY -s open -t normal -n db2 -y AUTOMATIC -g svrpool1,svrpool2 -a "dg1,dg2"
srvctl remove database -d db_unique_name [-f] [-y] [-v]
srvctl remove database -d prod -y
srvctl stop database -d db_unique_name [-o stop_options] [-f]
srvctl stop database -d dev -f
srvctl status database -d db_unique_name [-f] [-v]
srvctl status db -d sat -v
srvctl enable database -d db_unique_name [-n node_name]
srvctl enable database -d vis -n lnx01
srvctl disable database -d db_unique_name [-n node_name]
srvctl disable db -d vis -n lnx03
srvctl config database [-d db_unique_name [-a]]
srvctl config db -d db_erp -a
srvctl modify database -d db_unique_name [-n db_name] [-o ORACLE_HOME] [-u oracle_user] [-m domain] [-p spfile] [-r {PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY}] [-s start_options] [-t stop_options] [-y {AUTOMATIC|MANUAL}] [-g "server_pool_list"] [-a "diskgroup_list"|-z]
srvctl modify db -d prod -r logical_standby
srvctl modify database -d racTest -a "SYSFILES,LOGS,OLTP"
srvctl modify database -d ronedb -e rac1,rac2
srvctl relocate database -d db_unique_name {[-n target_node] [-w timeout] | -a [-r]} [-v]
srvctl relocate database -d rontest -n node2
srvctl relocate database -d rone2db -n lnxrac2 -w 120 -v
srvctl convert database -d ....
srvctl convert database -d ronedb -c RAC -n rac1
srvctl convert database -d ronedb -c RACONENODE -i RoneDB
Instance:
-----------
srvctl add instance -d db_name -i inst_name -n node_name
srvctl add instance -d prod -i prod01 -n linux01
srvctl remove instance -d db_name -i inst_name [-f]
srvctl remove instance -d prod -i prod01
srvctl start instance -d db_name -i inst_names [-o start_options] [-c connect_str|-q]
srvctl start instance -d db_name -i inst_names [-o open]
srvctl start instance -d db_name -i inst_names -o nomount
srvctl start instance -d db_name -i inst_names -o mount
srvctl start instance -d dev -i dev2
srvctl stop instance -d db_name -i inst_names [-o stop_options] [-c connect_str|-q]
srvctl stop instance -d db_name -i inst_names [-o normal]
srvctl stop instance -d db_name -i inst_names -o transactional
srvctl stop instance -d db_name -i inst_names -o immediate
srvctl stop instance -d db_name -i inst_names -o abort
srvctl stop inst -d vis -i vis
srvctl status instance -d db_name -i inst_names [-f] [-v] [-S level]
srvctl status inst -d racdb -i racdb2
srvctl enable instance -d db_name -i inst_names
srvctl enable instance -d prod -i "prod1,prod2"
srvctl disable instance -d db_name -i inst_names
srvctl disable inst -d prod -i "prod1,prod3"
srvctl modify instance -d db_name -i inst_name {-s asm_inst_name|-r} -- set dependency of instance to ASM
srvctl modify instance -d db_name -i inst_name -n node_name -- move the instance
srvctl modify instance -d db_name -i inst_name -r -- remove the instance
srvctl getenv instance -d db_name -i inst_name [-t name_list]
srvctl setenv instance -d db_name [-i inst_name] {-t "name=val[,name=val,...]" | -T "name=val"}
srvctl unsetenv instance -d db_name [-i inst_name] [-t name_list]
In 11g Release 2, the syntax of some commands has changed:
srvctl start instance -d db_unique_name {-n node_name -i "instance_name_list"} [-o start_options]
srvctl start instance -d prod -n node2
srvctl start inst -d prod -i "prod2,prod3"
srvctl stop instance -d db_unique_name {[-n node_name]|[-i "instance_name_list"]} [-o stop_options] [-f]
srvctl stop inst -d prod -n node1
srvctl stop instance -d prod -i prod1
srvctl status instance -d db_unique_name {-n node_name | -i "instance_name_list"} [-f] [-v]
srvctl status instance -d prod -i "prod1,prod2" -v
srvctl modify instance -d db_unique_name -i instance_name {-n node_name|-z}
srvctl modify instance -d prod -i prod1 -n mynode
srvctl modify inst -d prod -i prod1 -z
Service:
--------
srvctl add service -d db_name -s service_name -r pref_insts [-a avail_insts] [-P TAF_policy]
srvctl add service -d db_name -s service_name -u {-r "new_pref_inst" | -a "new_avail_inst"}
srvctl add service -d RAC -s PRD -r RAC01,RAC02 -a RAC03,RAC04
srvctl add serv -d CRM -s CRM -r CRM1 -a CRM3 -P basic
srvctl remove service -d db_name -s service_name [-i inst_name] [-f]
srvctl remove serv -d dev -s sales
srvctl remove service -d dev -s sales -i dev01,dev02
srvctl start service -d db_name [-s service_names [-i inst_name]] [-o start_options]
srvctl start service -d db_name -s service_names [-o open]
srvctl start service -d db_name -s service_names -o nomount
srvctl start service -d db_name -s service_names -o mount
srvctl start serv -d dwh -s dwh
srvctl stop service -d db_name [-s service_names [-i inst_name]] [-f]
srvctl stop serv -d dwh -s dwh
srvctl status service -d db_name [-s service_names] [-f] [-v] [-S level]
srvctl status service -d dev -s dev
srvctl enable service -d db_name -s service_names [-i inst_name]
srvctl enable service -d apps -s apps1
srvctl disable service -d db_name -s service_names [-i inst_name]
srvctl disable serv -d dev -s dev -i dev1
srvctl config service -d db_name [-s service_name] [-a] [-S level]
srvctl config service -d db_name -a -- -a shows TAF configuration
srvctl config service -d TEST -s test
test PREF: TST1 AVAIL: TST2
srvctl modify service -d db_name -s service_name -i old_inst_name -t new_inst_name [-f]
srvctl modify service -d db_name -s service_name -i avail_inst_name -r [-f]
srvctl modify service -d db_name -s service_name -n -i preferred_list [-a available_list] [-f]
srvctl modify service -d db_name -s service_name -i old_inst_name -a avail_inst -P TAF_policy
srvctl modify serv -d PROD -s DWH -n -i I1,I2,I3,I4 -a I5,I6
srvctl relocate service -d db_name -s service_name -i old_inst_name -t target_inst [-f]
srvctl getenv service -d db_name -s service_name -t name_list
srvctl setenv service -d db_name [-s service_name] {-t "name=val[,name=val,...]" | -T "name=val"}
srvctl unsetenv service -d db_name -s service_name -t name_list
In 11g Release 2, the syntax of some commands has changed:
srvctl add service -d db_unique_name -s service_name [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC|MANUAL}] [-q {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z failover_retries] [-w failover_delay]
srvctl add service -d rac -s rac1 -q TRUE -m BASIC -e SELECT -z 180 -w 5 -j LONG
srvctl add service -d db_unique_name -s service_name -u {-r preferred_list | -a available_list}
srvctl add service -d db_unique_name -s service_name
-g server_pool [-c {UNIFORM|SINGLETON}] [-k network_number]
[-l [PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY]
[-y {AUTOMATIC|MANUAL}] [-q {TRUE|FALSE}] [-j {SHORT|LONG}]
[-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}]
[-m {NONE|BASIC}] [-P {BASIC|NONE|PRECONNECT}] [-x {TRUE|FALSE}]
[-z failover_retries] [-w failover_delay]
srvctl add service -d db_unique_name -s service_name -r preferred_list [-a available_list] [-P {BASIC|NONE|PRECONNECT}]
[-l [PRIMARY|PHYSICAL_STANDBY|LOGICAL_STANDBY|SNAPSHOT_STANDBY]
[-y {AUTOMATIC|MANUAL}] [-q {TRUE|FALSE}] [-j {SHORT|LONG}]
[-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}]
[-m {NONE|BASIC}] [-x {TRUE|FALSE}] [-z failover_retries] [-w failover_delay]
srvctl add serv -d dev -s sales -r dev01,dev02 -a dev03 -P PRECONNECT
srvctl start service -d db_unique_name [-s "service_name_list" [-n node_name | -i instance_name]] [-o start_options]
srvctl start serv -d dev -s dev
srvctl start service -d dev -s dev -i dev2
srvctl stop service -d db_unique_name [-s "service_name_list"] [-n node_name | -i instance_name] [-f]
srvctl stop service -d dev -s dev
srvctl stop serv -d dev -s dev -i dev2
srvctl status service -d db_unique_name [-s "service_name_list"] [-f] [-v]
srvctl status service -d dev -s dev -v
srvctl enable service -d db_unique_name -s "service_name_list" [-i instance_name | -n node_name]
srvctl enable service -d dev -s dev
srvctl enable serv -d dev -s dev -i dev1
srvctl disable service -d db_unique_name -s "service_name_list" [-i instance_name | -n node_name]
srvctl disable service -d dev -s "dev,marketing"
srvctl disable serv -d dev -s dev -i dev1
srvctl config service -d db_unique_name [-s service_name] [-a]
srvctl config service -d dev -s dev
srvctl modify service -d db_unique_name -s service_name
[-c {UNIFORM|SINGLETON}] [-P {BASIC|PRECONNECT|NONE}]
[-l {[PRIMARY]|[PHYSICAL_STANDBY]|[LOGICAL_STANDBY]|[SNAPSHOT_STANDBY]} [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z failover_retries] [-w failover_delay] [-y {AUTOMATIC|MANUAL}]
srvctl modify service -d db_unique_name -s service_name -i old_instance_name -t new_instance_name [-f]
srvctl modify service -d db_unique_name -s service_name -i avail_inst_name -r [-f]
srvctl modify service -d db_unique_name -s service_name -n -i preferred_list [-a available_list] [-f]
srvctl modify service -d dev -s dev -i dev1 -t dev2
srvctl modify serv -d dev -s dev -i dev1 -r
srvctl modify service -d dev -s dev -n -i dev1 -a dev2
srvctl relocate service -d db_unique_name -s service_name {-c source_node -n target_node|-i old_instance_name -t new_instance_name} [-f]
srvctl relocate service -d dev -s dev -i dev1 -t dev3
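Tying the 11g R2 service commands together, the full lifecycle of a TAF-enabled service might look like this (hypothetical database, service and instance names):
srvctl add service -d prod -s oltp -r prod1,prod2 -a prod3 -P BASIC -e SELECT -m BASIC
srvctl start service -d prod -s oltp
srvctl status service -d prod -s oltp -v
srvctl relocate service -d prod -s oltp -i prod1 -t prod3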
Nodeapps:
------------
#srvctl add nodeapps -n node_name -o ORACLE_HOME -A name|ip/netmask[/if1[|if2|...]]
#srvctl add nodeapps -n lnx02 -o $ORACLE_HOME -A 192.168.0.151/255.255.0.0/eth0
#srvctl remove nodeapps -n node_names [-f]
#srvctl start nodeapps -n node_name -- Starts GSD, VIP, listener & ONS
#srvctl stop nodeapps -n node_name [-r] -- Stops GSD, VIP, listener & ONS
#srvctl status nodeapps -n node_name
#srvctl config nodeapps -n node_name [-a] [-g] [-o] [-s] [-l]
-a Display VIP configuration
-g Display GSD configuration
-s Display ONS daemon configuration
-l Display listener configuration
#srvctl modify nodeapps -n node_name [-A new_vip_address]
#srvctl modify nodeapps -n lnx06 -A 10.50.99.43/255.255.252.0/eth0
#srvctl getenv nodeapps -n node_name [-t name_list]
#srvctl setenv nodeapps -n node_name {-t "name=val[,name=val,...]"|-T "name=val"}
#srvctl setenv nodeapps -n adcracdbq3 -t "TNS_ADMIN=/u01/app/oracle/product/11.1/asm/network/admin"
#srvctl unsetenv nodeapps -n node_name [-t name_list]
In 11g Release 2, the syntax of some commands has changed:
srvctl add nodeapps -n node_name -A {name|ip}/netmask[/if1[|if2|...]] [-m multicast_ip_address] [-p multicast_port_number] [-l ons_local_port] [-r ons_remote_port] [-t host[:port][,host[:port],...]] [-v]
srvctl add nodeapps -S subnet/netmask[/if1[|if2|...]] [-d dhcp_server_type] [-m multicast_ip_address] [-p multicast_port_number] [-l ons_local_port] [-r ons_remote-port] [-t host[:port][,host[:port],...]] [-v]
#srvctl add nodeapps -n devnode1 -A 1.2.3.4/255.255.255.0
srvctl remove nodeapps [-f] [-y] [-v]
srvctl remove nodeapps
srvctl start nodeapps [-n node_name] [-v]
srvctl start nodeapps
srvctl stop nodeapps [-n node_name] [-r] [-v]
srvctl stop nodeapps
srvctl status nodeapps
srvctl enable nodeapps [-g] [-v]
srvctl enable nodeapps -g -v
srvctl disable nodeapps [-g] [-v]
srvctl disable nodeapps -g -v
srvctl config nodeapps [-a] [-g] [-s] [-e]
srvctl config nodeapps -a -g -s -e
srvctl modify nodeapps [-n node_name -A new_vip_address] [-S subnet/netmask[/if1[|if2|...]] [-m multicast_ip_address] [-p multicast_port_number] [-e eons_listen_port] [-l ons_local_port] [-r ons_remote_port] [-t host[:port][,host:port,...]] [-v]
srvctl modify nodeapps -n mynode1 -A 100.200.300.40/255.255.255.0/eth0
srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "name_list"] [-v]
srvctl getenv nodeapps -a
srvctl setenv nodeapps {-t "name=val[,name=val][...]" | -T "name=val"} [-v]
srvctl setenv nodeapps -T "CLASSPATH=/usr/local/jdk/jre/rt.jar" -v
srvctl unsetenv nodeapps -t "name_list" [-v]
srvctl unsetenv nodeapps -t "test_var1,test_var2"
ASM:
------
srvctl add asm -n node_name -i asminstance -o ORACLE_HOME [-p spfile]
srvctl remove asm -n node_name [-i asminstance] [-f]
srvctl remove asm -n db6
srvctl start asm -n node_name [-i asminstance] [-o start_options] [-c connect_str|-q]
srvctl start asm -n node_name [-i asminstance] [-o open]
srvctl start asm -n node_name [-i asminstance] -o nomount
srvctl start asm -n node_name [-i asminstance] -o mount
srvctl start asm -n linux01
srvctl stop asm -n node_name [-i asminstance] [-o stop_options] [-c connect_str|-q]
srvctl stop asm -n node_name [-i asminstance] [-o normal]
srvctl stop asm -n node_name [-i asminstance] -o transactional
srvctl stop asm -n node_name [-i asminstance] -o immediate
srvctl stop asm -n node_name [-i asminstance] -o abort
srvctl stop asm -n racnode1
srvctl stop asm -n devnode1 -i +asm1
srvctl status asm -n node_name
srvctl status asm -n racnode1
srvctl enable asm -n node_name [-i asminstance]
srvctl enable asm -n lnx03 -i +asm3
srvctl disable asm -n node_name [-i asminstance]
srvctl disable asm -n lnx02 -i +asm2
srvctl config asm -n node_name
srvctl config asm -n lnx08
srvctl modify asm -n node_name -i asminstance [-o ORACLE_HOME] [-p spfile]
srvctl modify asm -n rac6 -i +asm6 -o /u01/app/oracle/product/11.1/asm
In 11g Release 2, the syntax of some commands has changed:
srvctl add asm [-l lsnr_name] [-p spfile] [-d asm_diskstring]
srvctl add asm
srvctl add asm -l LISTENERASM -p +dg_data/spfile.ora
srvctl remove asm [-f]
srvctl remove asm -f
srvctl start asm [-n node_name] [-o start_options]
srvctl start asm -n devnode1
srvctl stop asm [-n node_name] [-o stop_options] [-f]
srvctl stop asm -n devnode1 -f
srvctl status asm [-n node_name] [-a]
srvctl status asm -n devnode1 -a
srvctl enable asm [-n node_name]
srvctl enable asm -n devnode1
srvctl disable asm [-n node_name]
srvctl disable asm -n devnode1
srvctl config asm [-a]
srvctl config asm -a
srvctl modify asm [-l lsnr_name] [-p spfile] [-d asm_diskstring]
srvctl modify asm [-n node_name] [-l listener_name] [-d asm_diskstring] [-p spfile_path_name]
srvctl modify asm -l lsnr1
srvctl getenv asm [-t name[, ...]]
srvctl getenv asm
srvctl setenv asm {-t "name=val [,...]" | -T "name=value"}
srvctl setenv asm -t LANG=en
srvctl unsetenv asm -t "name[, ...]"
srvctl unsetenv asm -t CLASSPATH
Listener:
--------------------------------------------------------------------------------
srvctl add listener -n node_name -o ORACLE_HOME [-l listener_name] -- 11g R1 command
srvctl remove listener -n node_name [-l listener_name] -- 11g R1 command
srvctl start listener -n node_name [-l listener_names]
srvctl start listener -n node1
srvctl stop listener -n node_name [-l listener_names]
srvctl stop listener -n node1
srvctl status listener [-n node_name] [-l listener_names] -- 11g R1 command
srvctl status listener -n node2
srvctl config listener -n node_name
srvctl modify listener -n node_name [-l listener_names] -o ORACLE_HOME -- 11g R1 command
srvctl modify listener -n racdb4 -o /u01/app/oracle/product/11.1/asm -l "LISTENER_RACDB4"
In 11g Release 2, the syntax of some commands has changed:
srvctl add listener [-l lsnr_name] [-s] [-p "[TCP:]port[, ...][/IPC:key][/NMP:pipe_name][/TCPS:s_port] [/SDP:port]"] [-k network_number] [-o ORACLE_HOME]
srvctl add listener -l LISTENERASM -p "TCP:1522" -o $ORACLE_HOME
srvctl add listener -l listener112 -p 1341 -o /ora/ora112
srvctl remove listener [-l lsnr_name|-a] [-f]
srvctl remove listener -l lsnr01
srvctl stop listener [-n node_name] [-l lsnr_name] [-f]
srvctl enable listener [-l lsnr_name] [-n node_name]
srvctl enable listener -l listener_dev -n node5
srvctl disable listener [-l lsnr_name] [-n node_name]
srvctl disable listener -l listener_dev -n node5
srvctl config listener [-l lsnr_name] [-a]
srvctl config listener
srvctl modify listener [-l listener_name] [-o oracle_home] [-u user_name] [-p "[TCP:]port_list[/IPC:key][/NMP:pipe_name][/TCPS:s_port][/SDP:port]"] [-k network_number]
srvctl modify listener -n node1 -p "TCP:1521,1522"
srvctl getenv listener [-l lsnr_name] [-t name[, ...]]
srvctl getenv listener
srvctl setenv listener [-l lsnr_name] {-t "name=val [,...]" | -T "name=value"}
srvctl setenv listener -t LANG=en
srvctl unsetenv listener [-l lsnr_name] -t "name[, ...]"
srvctl unsetenv listener -t "TNS_ADMIN"
New srvctl commands in 11g Release 2
Diskgroup:
--------------------------------------------------------------------------------
srvctl remove diskgroup -g diskgroup_name [-n node_list] [-f]
srvctl remove diskgroup -g DG1 -f
srvctl start diskgroup -g diskgroup_name [-n node_list]
srvctl start diskgroup -g diskgroup1 -n node1,node2
srvctl stop diskgroup -g diskgroup_name [-n node_list] [-f]
srvctl stop diskgroup -g ASM_FRA_DG
srvctl stop diskgroup -g dg1 -n node1,node2 -f
srvctl status diskgroup -g diskgroup_name [-n node_list] [-a]
srvctl status diskgroup -g dg_data -n node1,node2 -a
srvctl enable diskgroup -g diskgroup_name [-n node_list]
srvctl enable diskgroup -g diskgroup1 -n node1,node2
srvctl disable diskgroup -g diskgroup_name [-n node_list]
srvctl disable diskgroup -g dg_fra -n node1,node2
Home:
-------
srvctl start home -o ORACLE_HOME -s state_file [-n node_name]
srvctl start home -o /u01/app/oracle/product/11.2.0/db_1 -s ~/state.txt
srvctl stop home -o ORACLE_HOME -s state_file [-t stop_options] [-n node_name] [-f]
srvctl stop home -o /u01/app/oracle/product/11.2.0/db_1 -s ~/state.txt
srvctl status home -o ORACLE_HOME -s state_file [-n node_name]
srvctl status home -o /u01/app/oracle/product/11.2.0/db_1 -s ~/state.txt
ONS (Oracle Notification Service):
-------------------------------------
srvctl add ons [-l ons-local-port] [-r ons-remote-port] [-t host[:port][,host[:port]...]] [-v]
srvctl add ons -l 6200
srvctl remove ons [-f] [-v]
srvctl remove ons -f
srvctl start ons [-v]
srvctl start ons -v
srvctl stop ons [-v]
srvctl stop ons -v
srvctl status ons
srvctl enable ons [-v]
srvctl enable ons
srvctl disable ons [-v]
srvctl disable ons
srvctl config ons
srvctl modify ons [-l ons-local-port] [-r ons-remote-port] [-t host[:port][,host[:port]...]] [-v]
srvctl modify ons
EONS (E Oracle Notification Service):
---------------------------------------
srvctl add eons [-p portnum] [-m multicast-ip-address] [-e eons-listen-port] [-v]
#srvctl add eons -p 2018
srvctl remove eons [-f] [-v]
srvctl remove eons -f
srvctl start eons [-v]
srvctl start eons
srvctl stop eons [-f] [-v]
srvctl stop eons -f
srvctl status eons
srvctl enable eons [-v]
srvctl enable eons
srvctl disable eons [-v]
srvctl disable eons
srvctl config eons
srvctl modify eons [-m multicast_ip_address] [-p multicast_port_number] [-e eons_listen_port] [-v]
srvctl modify eons -p 2018
FileSystem:
--------------------------------------------------------------------------------
srvctl add filesystem -d volume_device -v volume_name -g diskgroup_name [-m mountpoint_path] [-u user_name]
srvctl add filesystem -d /dev/asm/d1volume1 -v VOLUME1 -g RAC_DATA -m /oracle/cluster1/acfs1
srvctl remove filesystem -d volume_device_name [-f]
srvctl remove filesystem -d /dev/asm/racvol1
srvctl start filesystem -d volume_device_name [-n node_name]
srvctl start filesystem -d /dev/asm/racvol3
srvctl stop filesystem -d volume_device_name [-n node_name] [-f]
srvctl stop filesystem -d /dev/asm/racvol1 -f
srvctl status filesystem -d volume_device_name
srvctl status filesystem -d /dev/asm/racvol2
srvctl enable filesystem -d volume_device_name
srvctl enable filesystem -d /dev/asm/racvol9
srvctl disable filesystem -d volume_device_name
srvctl disable filesystem -d /dev/asm/racvol1
srvctl config filesystem -d volume_device_path
srvctl modify filesystem -d volume_device_name -u user_name
srvctl modify filesystem -d /dev/asm/racvol1 -u sysadmin
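Putting these together: registering an ACFS volume as a Clusterware resource means it gets mounted automatically when the stack starts. A small sketch, with made-up device, volume, diskgroup and mountpoint names:
# register the volume (hypothetical names), then mount and verify it
srvctl add filesystem -d /dev/asm/appvol-123 -v APPVOL -g DATA -m /u02/app_shared
srvctl start filesystem -d /dev/asm/appvol-123
srvctl status filesystem -d /dev/asm/appvol-123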
SrvPool (Server Pool):
--------------------------------------------------------------------------------
srvctl add srvpool -g server_pool [-i importance] [-l min_size] [-u max_size] [-n node_list] [-f]
srvctl add srvpool -g SP1 -i 1 -l 3 -u 7 -n node1,node2
srvctl remove srvpool -g server_pool
srvctl remove srvpool -g srvpool1
srvctl status srvpool [-g server_pool] [-a]
srvctl status srvpool -g srvpool2 -a
srvctl config srvpool [-g server_pool]
srvctl config srvpool -g dbpool
srvctl modify srvpool -g server_pool [-i importance] [-l min_size] [-u max_size] [-n node_name_list] [-f]
srvctl modify srvpool -g srvpool4 -i 0 -l 2 -u 4 -n node3,node4
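Server pools are the basis of policy-managed databases: Clusterware keeps each pool populated between its min (-l) and max (-u) sizes, using importance (-i) to decide which pool wins when servers are scarce. A quick sketch with hypothetical names, creating a pool and checking which servers landed in it:
# create a pool that holds between 2 and 4 servers (names are placeholders)
srvctl add srvpool -g apppool -i 5 -l 2 -u 4 -n node1,node2,node3,node4
srvctl status srvpool -g apppool -a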
Server:
--------------------------------------------------------------------------------
srvctl status server -n "server_name_list" [-a]
srvctl status server -n server11 -a
srvctl relocate server -n "server_name_list" -g server_pool_name [-f]
srvctl relocate server -n "linux1, linux2" -g sp2
Scan (Single Client Access Name):
----------------------------------
srvctl add scan -n scan_name [-k network_number] [-S subnet/netmask[/if1[|if2|...]]]
#srvctl add scan -n scan.mycluster.example.com
srvctl remove scan [-f]
srvctl remove scan
srvctl remove scan -f
srvctl start scan [-i ordinal_number] [-n node_name]
srvctl start scan
srvctl start scan -i 1 -n node1
srvctl stop scan [-i ordinal_number] [-f]
srvctl stop scan
srvctl stop scan -i 1
srvctl status scan [-i ordinal_number]
srvctl status scan
srvctl status scan -i 1
srvctl enable scan [-i ordinal_number]
srvctl enable scan
srvctl enable scan -i 1
srvctl disable scan [-i ordinal_number]
srvctl disable scan
srvctl disable scan -i 3
srvctl config scan [-i ordinal_number]
srvctl config scan
srvctl config scan -i 2
srvctl modify scan -n scan_name
srvctl modify scan
srvctl modify scan -n scan1
srvctl relocate scan -i ordinal_number [-n node_name]
srvctl relocate scan -i 2 -n node2
Note: ordinal_number identifies which of the three SCAN VIPs (1, 2 or 3) the command acts on.
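Because each SCAN VIP is addressed by its ordinal, draining a node before maintenance is just a matter of checking where the VIPs currently run and relocating them one at a time. A sketch with placeholder ordinal and node names:
# see which node hosts each SCAN VIP, then move VIP 2 off the node being drained
srvctl status scan
srvctl relocate scan -i 2 -n node2
srvctl status scan -i 2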
Scan_listener:
--------------
srvctl add scan_listener [-l lsnr_name_prefix] [-s] [-p "[TCP:]port_list[/IPC:key][/NMP:pipe_name][/TCPS:s_port][/SDP:port]"]
#srvctl add scan_listener -l myscanlistener
srvctl remove scan_listener [-f]
srvctl remove scan_listener
srvctl remove scan_listener -f
srvctl start scan_listener [-n node_name] [-i ordinal_number]
srvctl start scan_listener
srvctl start scan_listener -i 1
srvctl stop scan_listener [-i ordinal_number] [-f]
srvctl stop scan_listener -i 3
srvctl status scan_listener [-i ordinal_number]
srvctl status scan_listener
srvctl status scan_listener -i 1
srvctl enable scan_listener [-i ordinal_number]
srvctl enable scan_listener
srvctl enable scan_listener -i 2
srvctl disable scan_listener [-i ordinal_number]
srvctl disable scan_listener
srvctl disable scan_listener -i 1
srvctl config scan_listener [-i ordinal_number]
srvctl config scan_listener
srvctl config scan_listener -i 3
srvctl modify scan_listener {-p [TCP:]port[/IPC:key][/NMP:pipe_name][/TCPS:s_port][/SDP:port] | -u}
srvctl modify scan_listener -u
srvctl relocate scan_listener -i ordinal_number [-n node_name]
srvctl relocate scan_listener -i 1
Note: ordinal_number identifies which of the three SCAN listeners (1, 2 or 3) the command acts on.
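The same ordinal addressing applies here, and since each SCAN listener runs together with its SCAN VIP, relocating the listener should take the VIP along with it. For example (the ordinal and node name are placeholders):
# confirm all three SCAN listeners are running, then move listener 1
srvctl status scan_listener
srvctl relocate scan_listener -i 1 -n node2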
GNS (Grid Naming Service):
------------------------------
srvctl add gns -i ip_address -d domain
srvctl add gns -i 192.124.16.96 -d cluster.mycompany.com
srvctl remove gns [-f]
srvctl remove gns
srvctl start gns [-l log_level] [-n node_name]
srvctl start gns
srvctl stop gns [-n node_name] [-v] [-f]
srvctl stop gns
srvctl status gns [-n node_name]
srvctl status gns
srvctl enable gns [-n node_name]
srvctl enable gns
srvctl disable gns [-n node_name]
srvctl disable gns -n devnode2
srvctl config gns [-a] [-d] [-k] [-m] [-n node_name] [-p] [-s] [-V] [-q name] [-l] [-v]
srvctl config gns -n lnx03
srvctl modify gns [-i ip_address] [-d domain]
srvctl modify gns -i 192.124.16.97
srvctl relocate gns [-n node_name]
srvctl relocate gns -n node2
VIP (Virtual Internet Protocol):
--------------------------------
srvctl add vip -n node_name -A {name|ip}/netmask[/if1[|if2|...]] [-k network_number] [-v]
#srvctl add vip -n node96 -A 192.124.16.96/255.255.255.0 -k 2
srvctl remove vip -i "vip_name_list" [-f] [-y] [-v]
srvctl remove vip -i "vip1,vip2,vip3" -f -y -v
srvctl start vip {-n node_name|-i vip_name} [-v]
srvctl start vip -i dev1-vip -v
srvctl stop vip {-n node_name|-i vip_name} [-r] [-v]
srvctl stop vip -n node1 -v
srvctl status vip {-n node_name|-i vip_name}
srvctl status vip -i node1-vip
srvctl enable vip -i vip_name [-v]
srvctl enable vip -i prod-vip -v
srvctl disable vip -i vip_name [-v]
srvctl disable vip -i vip3 -v
srvctl config vip {-n node_name|-i vip_name}
srvctl config vip -n devnode2
srvctl getenv vip -i vip_name [-t "name_list"] [-v]
srvctl getenv vip -i node1-vip
srvctl setenv vip -i vip_name {-t "name=val[,name=val,...]" | -T "name=val"}
srvctl setenv vip -i dev1-vip -t LANG=en
srvctl unsetenv vip -i vip_name -t "name_list" [-v]
srvctl unsetenv vip -i myvip -t CLASSPATH
OC4J (Oracle Container for Java):
-----------------------------------
srvctl add oc4j [-v]
srvctl add oc4j
srvctl remove oc4j [-f] [-v]
srvctl remove oc4j
srvctl start oc4j [-v]
srvctl start oc4j -v
srvctl stop oc4j [-f] [-v]
srvctl stop oc4j -f -v
srvctl status oc4j [-n node_name]
srvctl status oc4j -n lnx01
srvctl enable oc4j [-n node_name] [-v]
srvctl enable oc4j -n dev3
srvctl disable oc4j [-n node_name] [-v]
srvctl disable oc4j -n dev1
srvctl config oc4j
srvctl modify oc4j -p oc4j_rmi_port [-v]
srvctl modify oc4j -p 5385
srvctl relocate oc4j [-n node_name] [-v]
srvctl relocate oc4j -n lxn06 -v
Labels: RAC Real Application Clusters