The following views are pertinent to Data Guard configurations, categorized by whether they apply to logical standby databases, physical standby databases, or both.
*Views for LOGICAL DATABASE ONLY
DBA_LOGSTDBY_EVENTS
Logical only
Contains information about the activity of a logical standby database. It can be used to determine the cause of failures that occur when SQL Apply is applying redo to a logical standby database.
DBA_LOGSTDBY_HISTORY
Logical only
Displays the history of switchovers and failovers for logical standby databases in a Data Guard configuration. It does this by showing the complete sequence of redo log streams processed or created on the local system, across all role transitions. (After a role transition, a new log stream is started and the log stream sequence number is incremented by the new primary database.)
DBA_LOGSTDBY_LOG
Logical only
Shows the log files registered for logical standby databases.
DBA_LOGSTDBY_NOT_UNIQUE
Logical only
Identifies tables that have no primary key and no non-null unique indexes.
DBA_LOGSTDBY_PARAMETERS
Logical only
Contains the list of parameters used by SQL Apply.
DBA_LOGSTDBY_SKIP
Logical only
Lists the tables that will be skipped by SQL Apply.
DBA_LOGSTDBY_SKIP_TRANSACTION
Logical only
Lists the skip settings chosen.
DBA_LOGSTDBY_UNSUPPORTED
Logical only
Identifies the schemas and tables (and columns in those tables) that contain unsupported data types. Use this view when you are preparing to create a logical standby database.
V$LOGSTDBY_PROCESS
Logical only
Provides dynamic information about what is happening with SQL Apply. This view is very helpful when you are diagnosing performance problems during SQL Apply on the logical standby database, and it can be helpful for other problems.
V$LOGSTDBY_PROGRESS
Logical only
Displays the progress of SQL Apply on the logical standby database.
V$LOGSTDBY_STATE
Logical only
Consolidates information from the V$LOGSTDBY_PROCESS and V$LOGSTDBY_STATS views about the running state of SQL Apply and the logical standby database.
V$LOGSTDBY_STATS
Logical only
Displays LogMiner statistics, current state, and status information for a logical standby database during SQL Apply. If SQL Apply is not running, the values for the statistics are cleared.
V$LOGSTDBY_TRANSACTION
Logical only
Displays information about all active transactions being processed by SQL Apply on the logical standby database.
*Views for PHYSICAL DATABASE ONLY
V$MANAGED_STANDBY
Physical only
Displays current status information for Oracle database processes related to physical standby databases.
Note: The information in this view does not persist across an instance shutdown.
*Views for PRIMARY, PHYSICAL AND LOGICAL DATABASES
V$ARCHIVE_DEST
Primary, physical, and logical
Describes all of the destinations in the Data Guard configuration, including each destination’s current value, mode, and status.
Note: The information in this view does not persist across an instance shutdown.
V$ARCHIVE_DEST_STATUS
Primary, physical, and logical
Displays runtime and configuration information for the archived redo log destinations.
Note: The information in this view does not persist across an instance shutdown.
V$ARCHIVE_GAP
Physical and logical
Displays information to help you identify a gap in the archived redo log files.
V$ARCHIVED_LOG
Primary, physical, and logical
Displays archived redo log information from the control file, including the names of the archived redo log files.
V$DATABASE
Primary, physical, and logical
Provides database information from the control file. Includes information about fast-start failover (available only with the Data Guard broker).
V$DATABASE_INCARNATION
Primary, physical, and logical
Displays information about all database incarnations. Oracle Database creates a new incarnation whenever a database is opened with the RESETLOGS option. Records about the current and the previous incarnation are also contained in the V$DATABASE view.
V$DATAFILE
Primary, physical, and logical
Provides datafile information from the control file.
V$DATAGUARD_CONFIG
Primary, physical, and logical
Lists the unique database names defined with the DB_UNIQUE_NAME and LOG_ARCHIVE_CONFIG initialization parameters.
V$DATAGUARD_STATS
Primary, physical, and logical
Displays how much redo data generated by the primary database is not yet available on the standby database, showing how much redo data could be lost if the primary database were to crash at the time you queried this view. You can query this view on any instance of a standby database in a Data Guard configuration. If you query this view on a primary database, then the column values are cleared.
V$DATAGUARD_STATUS
Primary, physical, and logical
Displays and records events that would typically be triggered by any message to the alert log or server process trace files.
V$LOG
Primary, physical, and logical
Contains log file information from the online redo log files.
V$LOGFILE
Primary, physical, and logical
Contains information about the online redo log files and standby redo log files.
V$LOG_HISTORY
Primary, physical, and logical
Contains log history information from the control file.
V$STANDBY_LOG
Physical and logical
Contains log file information from the standby redo log files.
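As a quick illustration of using these views (a sketch assuming a 10.2 standby), V$DATAGUARD_STATS can be queried on the standby for the apply and transport lag:

```sql
-- Run on the standby instance; on a primary database the values are cleared.
COLUMN name          FORMAT a20
COLUMN value         FORMAT a16
COLUMN time_computed FORMAT a24
SELECT name, value, time_computed
FROM   v$dataguard_stats
WHERE  name IN ('apply lag', 'transport lag');
```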
References: Oracle® Data Guard Concepts and Administration 10g Release 2 (10.2)
B14239-05
For any queries, please don't hesitate to contact me at samiora@gmail.com or call me on 0097433164959.
Oracle Core DBA, Oracle EBS Apps DBA, Microsoft SQL Server DBA, Postgresql DBA, IBM DB2 DBA.
Monday, December 6, 2010
Tuesday, November 23, 2010
Upgrade Oracle Database 9i to 10g (10.2.0.4)
Upgraded a JD Edwards Oracle database from 9i to 10g using the standard database upgrade procedure:
1. SRPSFTENT --> Microsoft Windows 2003 Server Enterprise Edition SP2. 8GB RAM.
2. On SRPSFTENT: stopped the OracleOraHome92HTTPServer and OracleMTSRecoveryService services and set their startup type to Manual; most importantly, stopped the JDE service 'JDE E812'.
On SRPSFTDEP: stopped the OracleMTSRecoveryService service.
On SRPSFTWEB: stopped Oracle-oc4jadminProcessManager.
3. Deleted the OracleOraHome92TNSListenerlistener_10g service using the add/remove services.exe tool.
4. Exported Oracle Registry as backup to flash\jdedbupgraderough directory.
5. Made startmode=manual for database in registry
6. Start database and check invalid objects
SQL> select object_name, object_type, owner from dba_objects where status='INVALID';
OBJECT_NAME                    OBJECT_TYPE        OWNER
------------------------------ ------------------ ------------------------------
ODCIARGDESC TYPE SYS
ODCICOLINFO TYPE SYS
ODCICOST TYPE SYS
ODCIINDEXINFO TYPE SYS
ODCIINDEXINFO TYPE SYS
CATALOG_TYP TYPE OE
CATEGORY_TYP TYPE OE
COMPOSITE_CATEGORY_TYP TYPE OE
LEAF_CATEGORY_TYP TYPE OE
BSREPORT PROCEDURE TESTDTA
PKG_MULTIRESULTSET PACKAGE BODY TESTDTA
11 rows selected.
7. Shutdown immediate;
8. Before installing 10g, note the PATH environment variable.
C:\>set path
Path=E:\oracle\ora92\bin;C:\Program Files\Microsoft Visual Studio 8\VC\bin;E:\JD
Edwards\E812\DDP\Verity\winx86\_nti40\bin;E:\oracle\ora92\jre\1.4.2\bin\client;E
:\oracle\ora92\jre\1.4.2\bin;C:\Program Files\Oracle\jre\1.3.1\bin;C:\Program Fi
les\Oracle\jre\1.1.8\bin;C:\Program Files\HP\NCU;C:\WINDOWS\system32;C:\WINDOWS;
C:\WINDOWS\System32\Wbem;C:\Program Files\System Center Operations Manager 2007\
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH
9. Take backup of Oracle inventory. Backup c:\program files\oracle folder which contains Inventory directory.
10. Install Oracle 10g Release 2 into a new Oracle home, i.e., install 10.2.0.1 plus the Companion CD (select Oracle Database 10g Products 10.2.0.1) and then the 10.2.0.4 patch set, in e:\oracle\ora10g.
11. Environment variable check:
C:\>set oracle
ORACLE_HOME=e:\oracle\ora92
C:\>set path
Path=e:\oracle\ora10g\bin;E:\oracle\ora92\bin;C:\Program Files\Microsoft Visual
Studio 8\VC\bin;E:\JDEdwards\E812\DDP\Verity\winx86\_nti40\bin;E:\oracle\ora92\j
re\1.4.2\bin\client;E:\oracle\ora92\jre\1.4.2\bin;C:\Program Files\Oracle\jre\1.
3.1\bin;C:\Program Files\Oracle\jre\1.1.8\bin;C:\Program Files\HP\NCU;C:\WINDOWS
\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\System Center Ope
rations Manager 2007\
Update the PATH entry and remove the 10g home from the beginning of the PATH variable so that we can connect to the 9i database using the 9i home.
After updating the PATH parameter, the new value is:
C:\Documents and Settings\jde.QIA>set path
Path=E:\oracle\ora92\bin;C:\Program Files\Microsoft Visual Studio 8\VC\bin;E:\JD
Edwards\E812\DDP\Verity\winx86\_nti40\bin;E:\oracle\ora92\jre\1.4.2\bin\client;E
:\oracle\ora92\jre\1.4.2\bin;C:\Program Files\Oracle\jre\1.3.1\bin;C:\Program Fi
les\Oracle\jre\1.1.8\bin;C:\Program Files\HP\NCU;C:\WINDOWS\system32;C:\WINDOWS;
C:\WINDOWS\System32\Wbem;C:\Program Files\System Center Operations Manager 2007\
12. Start the 9i database in normal mode (this is required) from the 9i sqlplus tool:
E:\>sqlplus
SQL*Plus: Release 9.2.0.8.0 - Production on Fri Oct 29 08:19:23 2010
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Enter user-name: / as sysdba
Connected to an idle instance.
SQL> exit
Disconnected
E:\>set oracle
ORACLE_HOME=e:\oracle\ora92
E:\>sqlplus
SQL*Plus: Release 9.2.0.8.0 - Production on Fri Oct 29 08:19:34 2010
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Enter user-name: / as sysdba
Connected to an idle instance.
SQL> startup;
ORACLE instance started.
Total System Global Area 1469131144 bytes
Fixed Size 457096 bytes
Variable Size 696254464 bytes
Database Buffers 771751936 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.
SQL>
13. Copy the following files from the 10gR2 ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as e:\upgradetemp on your system:
ORACLE_HOME/rdbms/admin/utlu102i.sql
ORACLE_HOME/rdbms/admin/utltzuv2.sql
Make a note of the new location of these files.
14. Change to the temporary directory to which you copied the files in the previous step.
Start SQL*Plus and connect to the database instance you intend to upgrade (running from the original old ORACLE_HOME) as a user with SYSDBA privileges. Then run and spool the utlu102i.sql file:
sqlplus '/as sysdba'
E:\upgradetemp>dir
Volume in drive E is New Volume
Volume Serial Number is 78AD-5ABC
Directory of E:\upgradetemp
10/29/2010 08:10 AM .
10/29/2010 08:10 AM ..
02/07/2007 01:23 PM 18,183 utltzuv2.sql
10/08/2007 12:53 PM 136,217 utlu102i.sql
2 File(s) 154,400 bytes
2 Dir(s) 224,538,529,792 bytes free
SQL> spool Database_Info.log
SQL> @utlu102i.sql
SQL> spool off
Please note: The instance can be running in normal mode and does not need to be running in a restricted (migrate / upgrade) mode to run the script. Also, the instance must not be running in read-only mode. A few registry$ tables may be created, if they do not already exist, and some rows may be inserted into existing Upgrade tables.
Then check the spool file and examine the output of the Upgrade Information Tool (utlu102i.sql). The sections below describe its output.
*************************
SPOOL OUTPUT FILE CONTENT
*************************
SQL> @utlu102i.sql
Oracle Database 10.2 Upgrade Information Utility 10-29-2010 08:23:58
.
**********************************************************************
Database:
**********************************************************************
--> name: JDEDB
--> version: 9.2.0.8.0
--> compatible: 9.2.0.0.0
--> blocksize: 8192
.
**********************************************************************
Logfiles: [make adjustments in the current environment]
**********************************************************************
--> The existing log files are adequate. No changes are required.
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 730 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 16 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 58 MB
--> CWMLITE tablespace is adequate for the upgrade.
.... minimum required size: 19 MB
--> DRSYS tablespace is adequate for the upgrade.
.... minimum required size: 33 MB
.... AUTOEXTEND additional space required: 13 MB
--> EXAMPLE tablespace is adequate for the upgrade.
.... minimum required size: 151 MB
--> ODM tablespace is adequate for the upgrade.
.... minimum required size: 10 MB
--> XDB tablespace is adequate for the upgrade.
.... minimum required size: 49 MB
.... AUTOEXTEND additional space required: 2 MB
.
**********************************************************************
Update Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
WARNING: --> "streams_pool_size" is not currently defined and needs a value of
at least 50331648
WARNING: --> "session_max_open_files" needs to be increased to at least 20
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
--> "hash_join_enabled"
--> "log_archive_start"
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
...The 'JServer JAVA Virtual Machine' JAccelerator (NCOMP)
...is required to be installed from the 10g Companion CD.
--> Oracle XDK for Java [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle Text [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Workspace Manager [upgrade] VALID
--> Oracle Data Mining [upgrade] VALID
--> OLAP Analytic Workspace [upgrade] UPGRADED
--> OLAP Catalog [upgrade] VALID
--> Oracle OLAP API [upgrade] UPGRADED
--> Oracle interMedia [upgrade] VALID
...The 'Oracle interMedia Image Accelerator' is
...required to be installed from the 10g Companion CD.
--> Spatial [upgrade] VALID
--> Oracle Ultra Search [upgrade] VALID
... To successfully upgrade Ultra Search, install it from
... the 10g Companion CD.
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Deprecated CONNECT role granted to some user/roles.
.... CONNECT role after upgrade has only CREATE SESSION privilege.
WARNING: --> Database contains stale optimizer statistics.
.... Refer to the 10g Upgrade Guide for instructions to update
.... statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
.... SYS
.... XDB
.... WMSYS
.... ODM
.... OLAPSYS
.... MDSYS
.... WKSYS
WARNING: --> Database contains INVALID objects prior to upgrade.
.... USER OE has 4 INVALID objects.
.... USER SYS has 5 INVALID objects.
.... USER TESTDTA has 2 INVALID objects.
.
**********************************************************************
SYSAUX Tablespace:
[Create tablespace in the Oracle Database 10.2 environment]
**********************************************************************
--> New "SYSAUX" tablespace
.... minimum required size for database upgrade: 500 MB
.
PL/SQL procedure successfully completed.
SQL> spool off;
***********************
***********************
13. Increased the size of the SYSTEM tablespace from 1000 MB to 2048 MB:
ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\JDEDB\SYSTEM01.DBF' RESIZE 2048M;
UNDOTBS1 is already 3635 MB and is almost entirely free (only 15 MB used).
Increased the size of the TEMP tablespace from 1500 MB to 2048 MB:
ALTER DATABASE TEMPFILE 'E:\ORACLE\ORADATA\JDEDB\TEMP01.DBF' RESIZE 2048M;
Increased the size of the CWMLITE tablespace from 20 MB to 128 MB:
ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\JDEDB\CWMLITE01.DBF' RESIZE 128M;
Increased the size of the DRSYS tablespace from 20 MB to 128 MB:
ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\JDEDB\DRSYS01.DBF' RESIZE 128M;
EXAMPLE is already 247 MB with free space remaining (only 148 MB used).
ODM is already 20 MB with 10 MB free, i.e., only 10 MB used.
Increased the size of the XDB tablespace from 47 MB to 128 MB:
ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\JDEDB\XDB01.DBF' RESIZE 128M;
14. Current initJDEDB.ora file content before any update.
*.aq_tm_processes=1
*.background_dump_dest='E:\oracle\admin\JDEDB\bdump'
*.compatible='9.2.0.0.0'
*.control_files='E:\oracle\oradata\JDEDB\CONTROL01.CTL','E:\oracle\oradata\JDEDB\CONTROL02.CTL','E:\oracle\oradata\JDEDB\CONTROL03.CTL'
*.core_dump_dest='E:\oracle\admin\JDEDB\cdump'
*.db_block_size=8192
*.db_cache_size=771658240
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='JDEDB'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=JDEDBXDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='JDEDB'
*.java_pool_size=67108864
*.job_queue_processes=10
*.large_pool_size=67108864
*.open_cursors=300
*.pga_aggregate_target=451658240
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=536870912
*.sort_area_size=2097152
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='E:\oracle\admin\JDEDB\udump'
*.log_archive_start=TRUE
*.log_archive_dest=E:\fra
Added *.streams_pool_size=50331648 (commented out, since it is a new 10g parameter) and *.session_max_open_files=20, as required.
Updated *.db_cache_size=771658240 to *.db_cache_size=371658240.
Updated *.java_pool_size=67108864 to *.java_pool_size=167108864.
Updated *.large_pool_size=67108864 to *.large_pool_size=157108864.
Updated *.pga_aggregate_target=451658240 to 251658240.
Updated *.shared_pool_size=536870912 to 236870912.
So the final initJDEDB.ora file content is:
*.aq_tm_processes=1
*.background_dump_dest='E:\oracle\admin\JDEDB\bdump'
*.compatible='9.2.0.0.0'
*.control_files='E:\oracle\oradata\JDEDB\CONTROL01.CTL','E:\oracle\oradata\JDEDB\CONTROL02.CTL','E:\oracle\oradata\JDEDB\CONTROL03.CTL'
*.core_dump_dest='E:\oracle\admin\JDEDB\cdump'
*.db_block_size=8192
*.db_cache_size=371658240
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='JDEDB'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=JDEDBXDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='JDEDB'
*.java_pool_size=167108864
*.job_queue_processes=10
*.large_pool_size=157108864
*.open_cursors=300
*.pga_aggregate_target=251658240
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=236870912
*.sort_area_size=2097152
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='E:\oracle\admin\JDEDB\udump'
*.log_archive_start=TRUE
*.log_archive_dest=E:\fra
*.session_max_open_files=20
#*.streams_pool_size=50331648
Shutdown immediate
Startup;
15. After upgrading to 10gR2, the CONNECT role will only have the CREATE SESSION privilege; the other privileges granted to the CONNECT role in earlier releases will be revoked during the upgrade. To identify which users and roles in your database are granted the CONNECT role, use the following query:
SQL> SELECT grantee FROM dba_role_privs
2 WHERE granted_role = 'CONNECT' and
3 grantee NOT IN (
4 'SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
5 'LOGSTDBY_ADMINISTRATOR', 'ORDSYS',
6 'ORDPLUGINS', 'OEM_MONITOR', 'WKSYS', 'WKPROXY',
7 'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS', 'DMSYS',
8 'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER',
9 'OLAPSYS', 'EXFSYS', 'SYSMAN', 'MDDATA',
10 'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');
GRANTEE
------------------------------
HR
OE
PM
QS
SH
PSFT
RMAN
DV810
QS_CB
QS_CS
QS_ES
QS_OS
QS_WS
SCOTT
CRPCTL
CRPDTA
JDEDBA
QS_ADM
AUDITOR
PRODCTL
PRODDTA
TESTCTL
TESTDTA
QS_CBADM
24 rows selected.
SQL> SELECT GRANTEE,PRIVILEGE FROM DBA_SYS_PRIVS
2 WHERE GRANTEE='CONNECT'
3 ;
GRANTEE PRIVILEGE
------------------------------ --------------------
CONNECT CREATE VIEW
CONNECT CREATE TABLE
CONNECT ALTER SESSION
CONNECT CREATE CLUSTER
CONNECT CREATE SESSION
CONNECT CREATE SYNONYM
CONNECT CREATE SEQUENCE
CONNECT CREATE DATABASE LINK
8 rows selected.
NOTE: AFTER THE UPGRADE, GRANT THE ABOVE SYSTEM PRIVILEGES TO ALL THE USERS LISTED BY THE PREVIOUS QUERY.
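Rather than typing every grant by hand, the two result sets above can be combined to generate the statements. This is my own sketch, not part of the Oracle note: run it on the 9i database before the upgrade (afterwards CONNECT no longer carries these privileges), spool the output, and replay the spooled script after the upgrade.

```sql
-- Generates GRANT statements for the privileges the old CONNECT role carried,
-- for every non-system grantee of CONNECT. Run before the upgrade and spool.
SET heading OFF pages 0 feedback OFF
SPOOL regrant_connect_privs.sql
SELECT 'GRANT ' || p.privilege || ' TO ' || r.grantee || ';'
FROM   dba_role_privs r,
       dba_sys_privs  p
WHERE  r.granted_role = 'CONNECT'
AND    p.grantee      = 'CONNECT'
AND    r.grantee NOT IN
       ('SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
        'LOGSTDBY_ADMINISTRATOR', 'ORDSYS', 'ORDPLUGINS',
        'OEM_MONITOR', 'WKSYS', 'WKPROXY', 'WK_TEST',
        'WKUSER', 'MDSYS', 'LBACSYS', 'DMSYS', 'WMSYS',
        'OLAPDBA', 'OLAPSVR', 'OLAP_USER', 'OLAPSYS',
        'EXFSYS', 'SYSMAN', 'MDDATA',
        'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');
SPOOL OFF
```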
16. Check for database links so that they can be recreated after the upgrade is done:
SQL> SELECT
2 'create '||DECODE(U.NAME,'PUBLIC','public ')||'database link '||CHR(10)
3 ||DECODE(U.NAME,'PUBLIC',Null, U.NAME||'.')|| L.NAME||chr(10)
4 ||'connect to ' || L.USERID || ' identified by '''
5 ||L.PASSWORD||''' using ''' || L.host || ''''
6 ||chr(10)||';' TEXT
7 FROM sys.link$ L,
8 sys.user$ U
9 WHERE L.OWNER# = U.USER# ;
no rows selected
17. If you are upgrading from Oracle9iR2 (9.2), verify that the view dba_registry contains data. If the view is empty, run the following scripts from the 9.2 home:
% sqlplus '/as sysdba'
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql
and verify that the dba_registry view now contains data.
18. I skipped this step.
However, if you want to follow the standard procedure and check for corruption in the dictionary, use the following commands in sqlplus connected as SYS:
Set verify off
Set space 0
Set line 120
Set heading off
Set feedback off
Set pages 1000
Spool analyze.sql
Select 'Analyze cluster "'||cluster_name||'" validate structure cascade;'
from dba_clusters
where owner='SYS'
union
Select 'Analyze table "'||table_name||'" validate structure cascade;'
from dba_tables
where owner='SYS' and partitioned='NO' and (iot_type='IOT' or iot_type is NULL)
union
Select 'Analyze table "'||table_name||'" validate structure cascade into invalid_rows;'
from dba_tables
where owner='SYS' and partitioned='YES';
spool off
This creates a script called analyze.sql.
Now execute the following steps.
$ sqlplus '/as sysdba'
SQL> @$ORACLE_HOME/rdbms/admin/utlvalid.sql
SQL> @analyze.sql
This script (analyze.sql) should not return any errors.
19. Since we don't have replication in the database, I skipped the following step:
Ensure that all materialized view/snapshot refreshes have completed successfully and that replication is stopped (i.e., quiesced).
$ sqlplus '/ as sysdba'
SQL> select distinct(trunc(last_refresh)) from dba_snapshot_refresh_times;
You can also use:
SQL> select distinct owner, name mview, master_owner master_owner, last_refresh from dba_mview_refresh_times;
20. Stop the listener for the database:
$ lsnrctl
LSNRCTL> stop
Ensure no files need media recovery:
SQL> select * from v$recover_file;
no rows selected
21. Ensure no files are in backup mode:
SQL> select * from v$backup where status!='NOT ACTIVE';
no rows selected
This should return no rows.
22. Resolve any outstanding distributed transactions:
SQL> select * from dba_2pc_pending;
If this returns rows, you should do the following, passing each local_tran_id to purge_lost_db_entry:
SQL> select local_tran_id from dba_2pc_pending;
SQL> execute dbms_transaction.purge_lost_db_entry('');
SQL> commit;
23. Disable all batch and cron jobs.
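One way to do this on the database side (my own sketch, not part of the original checklist) is to list the scheduled DBMS_JOB jobs and stop the job queue for the duration of the upgrade:

```sql
-- See which jobs are scheduled to run.
SELECT job, what, next_date FROM dba_jobs WHERE broken = 'N';
-- Stop the job queue so no job fires during the upgrade
-- (note the original value first and restore it afterwards).
ALTER SYSTEM SET job_queue_processes = 0;
```

OS-level batch jobs (Windows scheduled tasks, cron, the JDE scheduler) still need to be disabled separately.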
24. Ensure that the users SYS and SYSTEM have SYSTEM as their default tablespace:
SQL> select username, default_tablespace from dba_users where username in ('SYS','SYSTEM');
USERNAME DEFAULT_TABLESPACE
------------------------------ -------------------------
SYSTEM SYSTEM
SYS SYSTEM
To modify use:
SQL> alter user sys default tablespace SYSTEM;
SQL> alter user system default tablespace SYSTEM;
25. Stop the listener so that nobody can connect to your database.
26. Ensure that AUD$ is in the SYSTEM tablespace when auditing is enabled:
SQL> select tablespace_name from dba_tables where table_name='AUD$';
TABLESPACE_NAME
------------------------------
SYSTEM
If the AUD$ table exists and is in use, upgrade performance can be affected, depending on the number of records in the table.
Please refer to the following note for information on exporting and truncating the AUD$ table:
Note.979942.1 Ext/Pub Database upgrade appears to have halted at SYS.AUD$ Table
27. Note down where all control files are located.
SQL> select * from v$controlfile;
STATUS
-------
NAME
-------------------------------------
E:\ORACLE\ORADATA\JDEDB\CONTROL01.CTL
E:\ORACLE\ORADATA\JDEDB\CONTROL02.CTL
E:\ORACLE\ORADATA\JDEDB\CONTROL03.CTL
28. If the table XDB.MIGR9202STATUS exists in the database, drop it before upgrading (to avoid the issue described in Note:356082.1).
29. Shutdown the database
$ sqlplus '/as sysdba'
SQL> shutdown immediate;
30. Perform a full cold backup (or an online backup using RMAN)
You can do this either by manually copying the files or by signing on to RMAN:
$ rman "target / nocatalog"
31. Make a backup of the old init.ora file.
Copy it from the old (pre-10.2) ORACLE_HOME to the new (10.2) ORACLE_HOME, i.e., from E:\oracle\ora92\database to E:\oracle\ora10g\database.
Comment out any obsolete parameters and change all deprecated parameters.
I commented out #*.log_archive_start=TRUE and added the *.streams_pool_size=50331648 parameter.
- Set the COMPATIBLE initialization parameter to an appropriate value. If you are upgrading from 8.1.7.4, set COMPATIBLE to 9.2.0 until after the upgrade has completed successfully. If you are upgrading from 9.2.0 or 10.1.0, leave COMPATIBLE set to its current value until the upgrade has completed successfully. This avoids unnecessary ORA-942 errors being reported in SMON trace files during the upgrade (because the upgrade looks for 10.2 objects that have not yet been created).
SO LEAVE THE COMPATIBLE PARAMETER AS IT IS IN OUR CASE.
32. If you have the parameter NLS_LENGTH_SEMANTICS currently set to CHAR, change the value
to BYTE during the upgrade (to avoid the issue described in Note:4638550.8)
SQL> SHOW PARAMETER NLS_LENGTH
NAME TYPE VALUE
------------------------------------ ----------- -----
nls_length_semantics string BYTE
33. Deleted the Oracle 9i listener service Oraclelistner9i using the add/delete services tool.
34. Copy the tnsnames.ora, listener.ora, and sqlnet.ora files into the new ORACLE_HOME/network/admin directory and update them for the new 10g home.
35. Delete the old database service ---> oradim -delete -sid jdedb
36. Update environment variables to point to 10g:
SET PATH=E:\ORACLE\ORA10G\BIN;%PATH%
SET ORACLE_SID=JDEDB
SET TNS_ADMIN=E:\ORACLE\ORA10G\NETWORK\ADMIN
Now create the new database service ---> oradim -new -sid jdedb -startmode auto
37. sqlplus '/ as sysdba'
SQL> STARTUP UPGRADE;
SELECT STATUS FROM V$INSTANCE;
OPEN MIGRATE
38. CREATE TABLESPACE SYSAUX DATAFILE 'e:\oracle\oradata\JDEDB\sysaux01.dbf' SIZE 2048m REUSE EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ONLINE;
39. Now run the following script to upgrade.
spool upgrade10g.txt
@E:\oracle\ora10g\RDBMS\ADMIN\catupgrd.sql
spool off
shutdown immediate;
startup;
spool invalids.txt
@E:\oracle\ora10g\RDBMS\ADMIN\utlrp.sql
spool off;
40. Now grant the system privileges that were lost when the CONNECT role was stripped during the upgrade:
GRANT CREATE VIEW TO HR;
GRANT CREATE TABLE TO HR;
GRANT ALTER SESSION TO HR;
GRANT CREATE CLUSTER TO HR;
GRANT CREATE SESSION TO HR;
GRANT CREATE SYNONYM TO HR;
GRANT CREATE SEQUENCE TO HR;
GRANT CREATE DATABASE LINK TO HR;
GRANT CREATE VIEW TO OE;
GRANT CREATE TABLE TO OE;
GRANT ALTER SESSION TO OE;
GRANT CREATE CLUSTER TO OE;
GRANT CREATE SESSION TO OE;
GRANT CREATE SYNONYM TO OE;
GRANT CREATE SEQUENCE TO OE;
GRANT CREATE DATABASE LINK TO OE;
GRANT CREATE VIEW TO PM;
GRANT CREATE TABLE TO PM;
GRANT ALTER SESSION TO PM;
GRANT CREATE CLUSTER TO PM;
GRANT CREATE SESSION TO PM;
GRANT CREATE SYNONYM TO PM;
GRANT CREATE SEQUENCE TO PM;
GRANT CREATE DATABASE LINK TO PM;
GRANT CREATE VIEW TO QS;
GRANT CREATE TABLE TO QS;
GRANT ALTER SESSION TO QS;
GRANT CREATE CLUSTER TO QS;
GRANT CREATE SESSION TO QS;
GRANT CREATE SYNONYM TO QS;
GRANT CREATE SEQUENCE TO QS;
GRANT CREATE DATABASE LINK TO QS;
GRANT CREATE VIEW TO SH;
GRANT CREATE TABLE TO SH;
GRANT ALTER SESSION TO SH;
GRANT CREATE CLUSTER TO SH;
GRANT CREATE SESSION TO SH;
GRANT CREATE SYNONYM TO SH;
GRANT CREATE SEQUENCE TO SH;
GRANT CREATE DATABASE LINK TO SH;
GRANT CREATE VIEW TO PSFT;
GRANT CREATE TABLE TO PSFT;
GRANT ALTER SESSION TO PSFT;
GRANT CREATE CLUSTER TO PSFT;
GRANT CREATE SESSION TO PSFT;
GRANT CREATE SYNONYM TO PSFT;
GRANT CREATE SEQUENCE TO PSFT;
GRANT CREATE DATABASE LINK TO PSFT;
GRANT CREATE VIEW TO RMAN;
GRANT CREATE TABLE TO RMAN;
GRANT ALTER SESSION TO RMAN;
GRANT CREATE CLUSTER TO RMAN;
GRANT CREATE SESSION TO RMAN;
GRANT CREATE SYNONYM TO RMAN;
GRANT CREATE SEQUENCE TO RMAN;
GRANT CREATE DATABASE LINK TO RMAN;
GRANT CREATE VIEW TO DV810;
GRANT CREATE TABLE TO DV810;
GRANT ALTER SESSION TO DV810;
GRANT CREATE CLUSTER TO DV810;
GRANT CREATE SESSION TO DV810;
GRANT CREATE SYNONYM TO DV810;
GRANT CREATE SEQUENCE TO DV810;
GRANT CREATE DATABASE LINK TO DV810;
GRANT CREATE VIEW TO QS_CB;
GRANT CREATE TABLE TO QS_CB;
GRANT ALTER SESSION TO QS_CB;
GRANT CREATE CLUSTER TO QS_CB;
GRANT CREATE SESSION TO QS_CB;
GRANT CREATE SYNONYM TO QS_CB;
GRANT CREATE SEQUENCE TO QS_CB;
GRANT CREATE DATABASE LINK TO QS_CB;
GRANT CREATE VIEW TO QS_CS;
GRANT CREATE TABLE TO QS_CS;
GRANT ALTER SESSION TO QS_CS;
GRANT CREATE CLUSTER TO QS_CS;
GRANT CREATE SESSION TO QS_CS;
GRANT CREATE SYNONYM TO QS_CS;
GRANT CREATE SEQUENCE TO QS_CS;
GRANT CREATE DATABASE LINK TO QS_CS;
GRANT CREATE VIEW TO QS_ES;
GRANT CREATE TABLE TO QS_ES;
GRANT ALTER SESSION TO QS_ES;
GRANT CREATE CLUSTER TO QS_ES;
GRANT CREATE SESSION TO QS_ES;
GRANT CREATE SYNONYM TO QS_ES;
GRANT CREATE SEQUENCE TO QS_ES;
GRANT CREATE DATABASE LINK TO QS_ES;
GRANT CREATE VIEW TO QS_OS;
1. SRPSFTENT --> Microsoft Windows 2003 Server Enterprise Edition SP2. 8GB RAM.
2. On SRPSFTENT: stopped the OracleOraHome92HTTPServer and
OracleMTSRecoveryService services and set their startup type to Manual;
most importantly, stopped the JDE service 'JDE E812'.
On SRPSFTDEP: stopped the OracleMTSRecoveryService service.
On SRPSFTWEB: stopped the Oracle-oc4jadminProcessManager service.
3. Deleted the OracleOraHome92TNSListenerlistener_10g service using the services.exe add/remove tool.
4. Exported the Oracle registry keys as a backup to the flash\jdedbupgraderough directory.
5. Set startmode=manual for the database service in the registry.
6. Started the database and checked for invalid objects:
SQL> select object_name, object_type, owner from dba_objects where status='INVALID';
OBJECT_NAME                    OBJECT_TYPE    OWNER
------------------------------ -------------- ------------------------------
ODCIARGDESC                    TYPE           SYS
ODCICOLINFO                    TYPE           SYS
ODCICOST                       TYPE           SYS
ODCIINDEXINFO                  TYPE           SYS
ODCIINDEXINFO                  TYPE           SYS
CATALOG_TYP                    TYPE           OE
CATEGORY_TYP                   TYPE           OE
COMPOSITE_CATEGORY_TYP         TYPE           OE
LEAF_CATEGORY_TYP              TYPE           OE
BSREPORT                       PROCEDURE      TESTDTA
PKG_MULTIRESULTSET             PACKAGE BODY   TESTDTA
11 rows selected.
7. Shutdown immediate;
8. Before the 10g install, note the PATH environment variable:
C:\>set path
Path=E:\oracle\ora92\bin;C:\Program Files\Microsoft Visual Studio 8\VC\bin;E:\JDEdwards\E812\DDP\Verity\winx86\_nti40\bin;E:\oracle\ora92\jre\1.4.2\bin\client;E:\oracle\ora92\jre\1.4.2\bin;C:\Program Files\Oracle\jre\1.3.1\bin;C:\Program Files\Oracle\jre\1.1.8\bin;C:\Program Files\HP\NCU;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\System Center Operations Manager 2007\
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH
9. Took a backup of the Oracle inventory by copying the c:\program files\oracle folder, which contains the Inventory directory.
10. Installed Oracle 10g Release 2 into a new Oracle home, e:\oracle\ora10g: the 10.2.0.1 base release plus the Companion CD (select Oracle Database 10g Products 10.2.0.1), then the 10.2.0.4 patch set.
11. Environment variable check:
C:\>set oracle
ORACLE_HOME=e:\oracle\ora92
C:\>set path
Path=e:\oracle\ora10g\bin;E:\oracle\ora92\bin;C:\Program Files\Microsoft Visual Studio 8\VC\bin;E:\JDEdwards\E812\DDP\Verity\winx86\_nti40\bin;E:\oracle\ora92\jre\1.4.2\bin\client;E:\oracle\ora92\jre\1.4.2\bin;C:\Program Files\Oracle\jre\1.3.1\bin;C:\Program Files\Oracle\jre\1.1.8\bin;C:\Program Files\HP\NCU;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\System Center Operations Manager 2007\
Update the PATH entry and remove the 10g home from the beginning of the PATH variable so that we can still connect to the 9i database using the 9i home.
After updating the PATH variable, the new value is:
C:\Documents and Settings\jde.QIA>set path
Path=E:\oracle\ora92\bin;C:\Program Files\Microsoft Visual Studio 8\VC\bin;E:\JDEdwards\E812\DDP\Verity\winx86\_nti40\bin;E:\oracle\ora92\jre\1.4.2\bin\client;E:\oracle\ora92\jre\1.4.2\bin;C:\Program Files\Oracle\jre\1.3.1\bin;C:\Program Files\Oracle\jre\1.1.8\bin;C:\Program Files\HP\NCU;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\System Center Operations Manager 2007\
12. Start the 9i database in normal mode (this is a must) from the 9i sqlplus tool:
E:\>sqlplus
SQL*Plus: Release 9.2.0.8.0 - Production on Fri Oct 29 08:19:23 2010
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Enter user-name: / as sysdba
Connected to an idle instance.
SQL> exit
Disconnected
E:\>set oracle
ORACLE_HOME=e:\oracle\ora92
E:\>sqlplus
SQL*Plus: Release 9.2.0.8.0 - Production on Fri Oct 29 08:19:34 2010
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Enter user-name: / as sysdba
Connected to an idle instance.
SQL> startup;
ORACLE instance started.
Total System Global Area 1469131144 bytes
Fixed Size 457096 bytes
Variable Size 696254464 bytes
Database Buffers 771751936 bytes
Redo Buffers 667648 bytes
Database mounted.
Database opened.
SQL>
13. Copy the following files from the 10gR2 ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the e:\upgradetemp directory on your system:
ORACLE_HOME/rdbms/admin/utlu102i.sql
ORACLE_HOME/rdbms/admin/utltzuv2.sql
Make a note of the new location of these files.
14. Change to the temporary directory into which you copied the files in the previous step.
Start SQL*Plus from the original (old) ORACLE_HOME, connect to the database instance you intend to upgrade as a user with SYSDBA privileges, then run the utlu102i.sql file with spooling enabled.
sqlplus '/as sysdba'
E:\upgradetemp>dir
Volume in drive E is New Volume
Volume Serial Number is 78AD-5ABC
Directory of E:\upgradetemp
10/29/2010 08:10 AM .
10/29/2010 08:10 AM ..
02/07/2007 01:23 PM 18,183 utltzuv2.sql
10/08/2007 12:53 PM 136,217 utlu102i.sql
2 File(s) 154,400 bytes
2 Dir(s) 224,538,529,792 bytes free
SQL> spool Database_Info.log
SQL> @utlu102i.sql
SQL> spool off
Please note: The instance can be running in normal mode and does not need to be running in a restricted (migrate / upgrade) mode to run the script. Also, the instance must not be running in read-only mode. A few registry$ tables may be created, if they do not already exist, and some rows may be inserted into existing Upgrade tables.
Then check the spool file and examine the output of the Upgrade Information Tool (utlu102i.sql). The following sections describe its output.
*************************
SPOOL OUTPUT FILE CONTENT
*************************
SQL> @utlu102i.sql
Oracle Database 10.2 Upgrade Information Utility 10-29-2010 08:23:58
.
**********************************************************************
Database:
**********************************************************************
--> name: JDEDB
--> version: 9.2.0.8.0
--> compatible: 9.2.0.0.0
--> blocksize: 8192
.
**********************************************************************
Logfiles: [make adjustments in the current environment]
**********************************************************************
--> The existing log files are adequate. No changes are required.
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 730 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 16 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 58 MB
--> CWMLITE tablespace is adequate for the upgrade.
.... minimum required size: 19 MB
--> DRSYS tablespace is adequate for the upgrade.
.... minimum required size: 33 MB
.... AUTOEXTEND additional space required: 13 MB
--> EXAMPLE tablespace is adequate for the upgrade.
.... minimum required size: 151 MB
--> ODM tablespace is adequate for the upgrade.
.... minimum required size: 10 MB
--> XDB tablespace is adequate for the upgrade.
.... minimum required size: 49 MB
.... AUTOEXTEND additional space required: 2 MB
.
**********************************************************************
Update Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
WARNING: --> "streams_pool_size" is not currently defined and needs a value of
at least 50331648
WARNING: --> "session_max_open_files" needs to be increased to at least 20
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
--> "hash_join_enabled"
--> "log_archive_start"
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views [upgrade] VALID
--> Oracle Packages and Types [upgrade] VALID
--> JServer JAVA Virtual Machine [upgrade] VALID
...The 'JServer JAVA Virtual Machine' JAccelerator (NCOMP)
...is required to be installed from the 10g Companion CD.
--> Oracle XDK for Java [upgrade] VALID
--> Oracle Java Packages [upgrade] VALID
--> Oracle Text [upgrade] VALID
--> Oracle XML Database [upgrade] VALID
--> Oracle Workspace Manager [upgrade] VALID
--> Oracle Data Mining [upgrade] VALID
--> OLAP Analytic Workspace [upgrade] UPGRADED
--> OLAP Catalog [upgrade] VALID
--> Oracle OLAP API [upgrade] UPGRADED
--> Oracle interMedia [upgrade] VALID
...The 'Oracle interMedia Image Accelerator' is
...required to be installed from the 10g Companion CD.
--> Spatial [upgrade] VALID
--> Oracle Ultra Search [upgrade] VALID
... To successfully upgrade Ultra Search, install it from
... the 10g Companion CD.
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Deprecated CONNECT role granted to some user/roles.
.... CONNECT role after upgrade has only CREATE SESSION privilege.
WARNING: --> Database contains stale optimizer statistics.
.... Refer to the 10g Upgrade Guide for instructions to update
.... statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
.... SYS
.... XDB
.... WMSYS
.... ODM
.... OLAPSYS
.... MDSYS
.... WKSYS
WARNING: --> Database contains INVALID objects prior to upgrade.
.... USER OE has 4 INVALID objects.
.... USER SYS has 5 INVALID objects.
.... USER TESTDTA has 2 INVALID objects.
.
**********************************************************************
SYSAUX Tablespace:
[Create tablespace in the Oracle Database 10.2 environment]
**********************************************************************
--> New "SYSAUX" tablespace
.... minimum required size for database upgrade: 500 MB
.
PL/SQL procedure successfully completed.
SQL> spool off;
***********************
***********************
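The stale-statistics warning in the output above can be addressed before the upgrade by gathering dictionary statistics from the 9.2 home. A minimal sketch, one call per component schema flagged in the warning (the parameter choices are an assumption; adjust them to your environment):

```sql
-- Run as SYS from the 9.2 home before starting the upgrade.
-- Gathers fresh statistics for each schema the tool reported as stale.
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS',     cascade => TRUE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('XDB',     cascade => TRUE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('WMSYS',   cascade => TRUE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('ODM',     cascade => TRUE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('OLAPSYS', cascade => TRUE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('MDSYS',   cascade => TRUE);
EXEC DBMS_STATS.GATHER_SCHEMA_STATS('WKSYS',   cascade => TRUE);
```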
13. Increased the size of the SYSTEM tablespace from 1000 MB to 2048 MB:
ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\JDEDB\SYSTEM01.DBF' RESIZE 2048M;
UNDOTBS1 is already 3635 MB with only 15 MB used, so no change was needed.
Increased the size of the TEMP tablespace from 1500 MB to 2048 MB:
ALTER DATABASE TEMPFILE 'E:\ORACLE\ORADATA\JDEDB\TEMP01.DBF' RESIZE 2048M;
Increased the size of the CWMLITE tablespace from 20 MB to 128 MB:
ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\JDEDB\CWMLITE01.DBF' RESIZE 128M;
Increased the size of the DRSYS tablespace from 20 MB to 128 MB:
ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\JDEDB\DRSYS01.DBF' RESIZE 128M;
EXAMPLE is already 247 MB and has free space (roughly 148 MB used), so no change was needed.
ODM is already 20 MB with 10 MB free (only 10 MB used), so no change was needed.
Increased the size of the XDB tablespace from 47 MB to 128 MB:
ALTER DATABASE DATAFILE 'E:\ORACLE\ORADATA\JDEDB\XDB01.DBF' RESIZE 128M;
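To check the current size and free space of each tablespace before deciding on resizes like the ones above, a standard dictionary query along these lines can be used (a sketch; TEMP uses tempfiles, so check dba_temp_files for it separately):

```sql
-- Size and free space per tablespace, in MB (permanent tablespaces).
SELECT d.tablespace_name,
       ROUND(SUM(d.bytes)/1024/1024)        AS size_mb,
       ROUND(NVL(f.free_bytes,0)/1024/1024) AS free_mb
FROM   dba_data_files d,
       (SELECT tablespace_name, SUM(bytes) free_bytes
        FROM   dba_free_space
        GROUP BY tablespace_name) f
WHERE  f.tablespace_name(+) = d.tablespace_name
GROUP BY d.tablespace_name, f.free_bytes
ORDER BY d.tablespace_name;
```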
14. Current initJDEDB.ora file content before any update.
*.aq_tm_processes=1
*.background_dump_dest='E:\oracle\admin\JDEDB\bdump'
*.compatible='9.2.0.0.0'
*.control_files='E:\oracle\oradata\JDEDB\CONTROL01.CTL','E:\oracle\oradata\JDEDB\CONTROL02.CTL','E:\oracle\oradata\JDEDB\CONTROL03.CTL'
*.core_dump_dest='E:\oracle\admin\JDEDB\cdump'
*.db_block_size=8192
*.db_cache_size=771658240
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='JDEDB'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=JDEDBXDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='JDEDB'
*.java_pool_size=67108864
*.job_queue_processes=10
*.large_pool_size=67108864
*.open_cursors=300
*.pga_aggregate_target=451658240
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=536870912
*.sort_area_size=2097152
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='E:\oracle\admin\JDEDB\udump'
*.log_archive_start=TRUE
*.log_archive_dest=E:\fra
Added *.session_max_open_files=20 as required, and added *.streams_pool_size=50331648 (commented out for now, since it is a new 10g parameter).
Updated *.db_cache_size from 771658240 to 371658240.
Updated *.java_pool_size from 67108864 to 167108864.
Updated *.large_pool_size from 67108864 to 157108864.
Updated *.pga_aggregate_target from 451658240 to 251658240.
Updated *.shared_pool_size from 536870912 to 236870912.
The final initJDEDB.ora file content is:
*.aq_tm_processes=1
*.background_dump_dest='E:\oracle\admin\JDEDB\bdump'
*.compatible='9.2.0.0.0'
*.control_files='E:\oracle\oradata\JDEDB\CONTROL01.CTL','E:\oracle\oradata\JDEDB\CONTROL02.CTL','E:\oracle\oradata\JDEDB\CONTROL03.CTL'
*.core_dump_dest='E:\oracle\admin\JDEDB\cdump'
*.db_block_size=8192
*.db_cache_size=371658240
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='JDEDB'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=JDEDBXDB)'
*.fast_start_mttr_target=300
*.hash_join_enabled=TRUE
*.instance_name='JDEDB'
*.java_pool_size=167108864
*.job_queue_processes=10
*.large_pool_size=157108864
*.open_cursors=300
*.pga_aggregate_target=251658240
*.processes=150
*.query_rewrite_enabled='FALSE'
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=236870912
*.sort_area_size=2097152
*.star_transformation_enabled='FALSE'
*.timed_statistics=TRUE
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='E:\oracle\admin\JDEDB\udump'
*.log_archive_start=TRUE
*.log_archive_dest=E:\fra
*.session_max_open_files=20
#*.streams_pool_size=50331648
Shutdown immediate
Startup;
15. After upgrading to 10gR2, the CONNECT role will only have the CREATE SESSION privilege; the other privileges granted to the CONNECT role in earlier releases will be revoked during the upgrade. To identify which users and roles in your database are granted the CONNECT role, use the following query:
SQL> SELECT grantee FROM dba_role_privs
2 WHERE granted_role = 'CONNECT' and
3 grantee NOT IN (
4 'SYS', 'OUTLN', 'SYSTEM', 'CTXSYS', 'DBSNMP',
5 'LOGSTDBY_ADMINISTRATOR', 'ORDSYS',
6 'ORDPLUGINS', 'OEM_MONITOR', 'WKSYS', 'WKPROXY',
7 'WK_TEST', 'WKUSER', 'MDSYS', 'LBACSYS', 'DMSYS',
8 'WMSYS', 'OLAPDBA', 'OLAPSVR', 'OLAP_USER',
9 'OLAPSYS', 'EXFSYS', 'SYSMAN', 'MDDATA',
10 'SI_INFORMTN_SCHEMA', 'XDB', 'ODM');
GRANTEE
------------------------------
HR
OE
PM
QS
SH
PSFT
RMAN
DV810
QS_CB
QS_CS
QS_ES
QS_OS
QS_WS
SCOTT
CRPCTL
CRPDTA
JDEDBA
QS_ADM
AUDITOR
PRODCTL
PRODDTA
TESTCTL
TESTDTA
QS_CBADM
24 rows selected.
SQL> SELECT GRANTEE,PRIVILEGE FROM DBA_SYS_PRIVS
2 WHERE GRANTEE='CONNECT'
3 ;
GRANTEE PRIVILEGE
------------------------------ --------------------
CONNECT CREATE VIEW
CONNECT CREATE TABLE
CONNECT ALTER SESSION
CONNECT CREATE CLUSTER
CONNECT CREATE SESSION
CONNECT CREATE SYNONYM
CONNECT CREATE SEQUENCE
CONNECT CREATE DATABASE LINK
8 rows selected.
NOTE: AFTER THE UPGRADE, GRANT THE ABOVE SYSTEM PRIVILEGES TO ALL THE USERS LISTED BY THE PREVIOUS QUERY.
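Instead of hand-writing one set of grants per user, the GRANT statements can be generated and spooled from the same query; a sketch (the NOT IN exclusion list is abbreviated here and should match the full list used in the query above):

```sql
-- Generate the re-grant script for non-default grantees of CONNECT.
SET PAGESIZE 0 FEEDBACK OFF HEADING OFF
SPOOL regrant_connect_privs.sql
SELECT 'GRANT CREATE VIEW, CREATE TABLE, ALTER SESSION, CREATE CLUSTER, '
    || 'CREATE SESSION, CREATE SYNONYM, CREATE SEQUENCE, '
    || 'CREATE DATABASE LINK TO ' || grantee || ';'
FROM   dba_role_privs
WHERE  granted_role = 'CONNECT'
AND    grantee NOT IN ('SYS','OUTLN','SYSTEM','CTXSYS','DBSNMP','ORDSYS',
                       'ORDPLUGINS','MDSYS','WMSYS','OLAPSYS','XDB','ODM');
SPOOL OFF
```

Run the spooled regrant_connect_privs.sql after the upgrade completes.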
16. Check for database links so they can be recreated after the upgrade is done:
SQL> SELECT
2 'create '||DECODE(U.NAME,'PUBLIC','public ')||'database link '||CHR(10)
3 ||DECODE(U.NAME,'PUBLIC',Null, U.NAME||'.')|| L.NAME||chr(10)
4 ||'connect to ' || L.USERID || ' identified by '''
5 ||L.PASSWORD||''' using ''' || L.host || ''''
6 ||chr(10)||';' TEXT
7 FROM sys.link$ L,
8 sys.user$ U
9 WHERE L.OWNER# = U.USER# ;
no rows selected
17. If you are upgrading from Oracle9iR2 (9.2), verify that the view dba_registry contains data. If the view is empty, run the following scripts from the 9.2 home:
% sqlplus '/as sysdba'
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql
and verify that the dba_registry view now contains data.
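A quick way to perform that verification (a sketch):

```sql
-- Should list the installed components with their versions and status;
-- if it returns no rows, run catalog.sql/catproc.sql/utlrp.sql as above.
SELECT comp_id, comp_name, version, status
FROM   dba_registry
ORDER BY comp_id;
```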
18. I skipped this step, but if you want to follow the standard procedure, check for corruption in the dictionary using the following commands in sqlplus connected as SYS:
Set verify off
Set space 0
Set line 120
Set heading off
Set feedback off
Set pages 1000
Spool analyze.sql
Select 'Analyze cluster "'||cluster_name||'" validate structure cascade;'
from dba_clusters
where owner='SYS'
union
Select 'Analyze table "'||table_name||'" validate structure cascade;'
from dba_tables
where owner='SYS' and partitioned='NO' and (iot_type='IOT' or iot_type is NULL)
union
Select 'Analyze table "'||table_name||'" validate structure cascade into invalid_rows;'
from dba_tables
where owner='SYS' and partitioned='YES';
spool off
This creates a script called analyze.sql.
Now execute the following steps.
$ sqlplus '/as sysdba'
SQL> @$ORACLE_HOME/rdbms/admin/utlvalid.sql
SQL> @analyze.sql
This script (analyze.sql) should not return any errors.
19. Since we don't have replication in this database, I skipped the following step:
Ensure that all materialized view/snapshot refreshes have completed successfully, and that replication is stopped (i.e., quiesced).
$ sqlplus '/ as sysdba'
SQL> select distinct(trunc(last_refresh)) from dba_snapshot_refresh_times;
You can also use:
SQL> select distinct owner, name mview, master_owner master_owner, last_refresh from dba_mview_refresh_times;
20. Stop the listener for the database:
$ lsnrctl
LSNRCTL> stop
Ensure no files need media recovery:
SQL> select * from v$recover_file;
no rows selected
21. Ensure no files are in backup mode:
SQL> select * from v$backup where status!='NOT ACTIVE';
no rows selected
This should return no rows.
22. Resolve any outstanding distributed transactions:
SQL> select * from dba_2pc_pending;
If this returns rows you should do the following:
SQL> select local_tran_id from dba_2pc_pending;
SQL> execute dbms_transaction.purge_lost_db_entry('');
SQL> commit;
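purge_lost_db_entry takes the local transaction ID reported by dba_2pc_pending; a sketch that purges every pending entry in one pass:

```sql
-- Purge each lost distributed transaction reported by DBA_2PC_PENDING.
BEGIN
  FOR t IN (SELECT local_tran_id FROM dba_2pc_pending) LOOP
    DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY(t.local_tran_id);
    COMMIT;
  END LOOP;
END;
/
```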
23. Disable all batch and cron jobs.
24. Ensure the users sys and system have 'system' as their default tablespace.
SQL> select username, default_tablespace from dba_users where username in ('SYS','SYSTEM');
USERNAME DEFAULT_TABLESPACE
------------------------------ -------------------------
SYSTEM SYSTEM
SYS SYSTEM
To modify use:
SQL> alter user sys default tablespace SYSTEM;
SQL> alter user system default tablespace SYSTEM;
25. Stop the listener so that nobody can connect to the database.
26. Ensure that the aud$ is in the system tablespace when auditing is enabled.
SQL> select tablespace_name from dba_tables where table_name='AUD$';
TABLESPACE_NAME
------------------------------
SYSTEM
If the AUD$ table exists and is in use, upgrade performance can be affected, depending on the number of records in the table.
Please refer to the following note for information on exporting and truncating the AUD$ table:
Note.979942.1 Ext/Pub Database upgrade appears to have halted at SYS.AUD$ Table
27. Note down where all control files are located.
SQL> select * from v$controlfile;
STATUS
-------
NAME
-------------------------------------
E:\ORACLE\ORADATA\JDEDB\CONTROL01.CTL
E:\ORACLE\ORADATA\JDEDB\CONTROL02.CTL
E:\ORACLE\ORADATA\JDEDB\CONTROL03.CTL
28. If table XDB.MIGR9202STATUS exists in the database, drop it before upgrading the database (to avoid the issue described in Note:356082.1)
29. Shutdown the database
$ sqlplus '/as sysdba'
SQL> shutdown immediate;
30. Perform a full cold backup (or an online backup using RMAN)
You can either do this by manually copying the files or by signing on to RMAN:
$ rman "target / nocatalog"
31. Make a backup of the old init.ora file.
Copy it from the old (pre-10.2) ORACLE_HOME to the new (10.2) ORACLE_HOME,
i.e., from E:\oracle\ora92\database to E:\oracle\ora10g\database.
Comment out any obsolete parameters and change all deprecated parameters.
I commented out #*.log_archive_start=TRUE and enabled the *.streams_pool_size=50331648 parameter.
- Set the COMPATIBLE initialization parameter to an appropriate value. If you are
upgrading from 8.1.7.4, set COMPATIBLE to 9.2.0 until after the upgrade has
completed successfully. If you are upgrading from 9.2.0 or 10.1.0, leave
COMPATIBLE at its current value until the upgrade has completed successfully.
This avoids unnecessary ORA-942 errors being reported in SMON trace files
during the upgrade (because the upgrade looks for 10.2 objects that have not
yet been created).
SO LEAVE THE COMPATIBLE PARAMETER AS IT IS IN OUR CASE.
32. If you have the parameter NLS_LENGTH_SEMANTICS currently set to CHAR, change the value
to BYTE during the upgrade (to avoid the issue described in Note:4638550.8)
SQL> SHOW PARAMETER NLS_LENGTH
NAME TYPE VALUE
------------------------------------ ----------- -----
nls_length_semantics string BYTE
33. Deleted the Oracle listener service Oraclelistner9i using the add/delete services tool.
34. Copy the tnsnames.ora, listener.ora, and sqlnet.ora files into the new ORACLE_HOME\network\admin directory and update them for the new 10g home.
35. Delete the old database service ---> oradim -delete -sid jdedb
36. Update the environment variables to point to 10g:
SET PATH=E:\ORACLE\ORA10G\BIN;%PATH%
SET ORACLE_SID=JDEDB
SET TNS_ADMIN=E:\ORACLE\ORA10G\NETWORK\ADMIN
Now create the new database service ---> oradim -new -sid jdedb -startmode auto
37. sqlplus '/ as sysdba'
SQL> STARTUP UPGRADE;
SQL> SELECT STATUS FROM V$INSTANCE;
STATUS
------------
OPEN MIGRATE
38. CREATE TABLESPACE SYSAUX DATAFILE 'e:\oracle\oradata\JDEDB\sysaux01.dbf' SIZE 2048m REUSE EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ONLINE;
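After creating SYSAUX, its attributes can be verified; 10g requires it to be PERMANENT, locally managed, with AUTO segment space management, and ONLINE. A sketch:

```sql
-- Confirm the SYSAUX tablespace was created with the required attributes.
SELECT tablespace_name, contents, extent_management,
       segment_space_management, status
FROM   dba_tablespaces
WHERE  tablespace_name = 'SYSAUX';
```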
39. Now run the following script to upgrade.
spool upgrade10g.txt
@E:\oracle\ora10g\RDBMS\ADMIN\catupgrd.sql
spool off
shutdown immediate;
startup;
spool invalids.txt
@E:\oracle\ora10g\RDBMS\ADMIN\utlrp.sql
spool off;
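After utlrp.sql finishes, confirm that recompilation brought the invalid-object count back down; a sketch:

```sql
-- Compare against the pre-upgrade invalid-object list from step 6.
SELECT owner, COUNT(*) AS invalid_objects
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP BY owner;
```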
40. Now grant the system privileges that were lost due to the CONNECT role change.
GRANT CREATE VIEW TO HR;
GRANT CREATE TABLE TO HR;
GRANT ALTER SESSION TO HR;
GRANT CREATE CLUSTER TO HR;
GRANT CREATE SESSION TO HR;
GRANT CREATE SYNONYM TO HR;
GRANT CREATE SEQUENCE TO HR;
GRANT CREATE DATABASE LINK TO HR;
GRANT CREATE VIEW TO OE;
GRANT CREATE TABLE TO OE;
GRANT ALTER SESSION TO OE;
GRANT CREATE CLUSTER TO OE;
GRANT CREATE SESSION TO OE;
GRANT CREATE SYNONYM TO OE;
GRANT CREATE SEQUENCE TO OE;
GRANT CREATE DATABASE LINK TO OE;
GRANT CREATE VIEW TO PM;
GRANT CREATE TABLE TO PM;
GRANT ALTER SESSION TO PM;
GRANT CREATE CLUSTER TO PM;
GRANT CREATE SESSION TO PM;
GRANT CREATE SYNONYM TO PM;
GRANT CREATE SEQUENCE TO PM;
GRANT CREATE DATABASE LINK TO PM;
GRANT CREATE VIEW TO QS;
GRANT CREATE TABLE TO QS;
GRANT ALTER SESSION TO QS;
GRANT CREATE CLUSTER TO QS;
GRANT CREATE SESSION TO QS;
GRANT CREATE SYNONYM TO QS;
GRANT CREATE SEQUENCE TO QS;
GRANT CREATE DATABASE LINK TO QS;
GRANT CREATE VIEW TO SH;
GRANT CREATE TABLE TO SH;
GRANT ALTER SESSION TO SH;
GRANT CREATE CLUSTER TO SH;
GRANT CREATE SESSION TO SH;
GRANT CREATE SYNONYM TO SH;
GRANT CREATE SEQUENCE TO SH;
GRANT CREATE DATABASE LINK TO SH;
GRANT CREATE VIEW TO PSFT;
GRANT CREATE TABLE TO PSFT;
GRANT ALTER SESSION TO PSFT;
GRANT CREATE CLUSTER TO PSFT;
GRANT CREATE SESSION TO PSFT;
GRANT CREATE SYNONYM TO PSFT;
GRANT CREATE SEQUENCE TO PSFT;
GRANT CREATE DATABASE LINK TO PSFT;
GRANT CREATE VIEW TO RMAN;
GRANT CREATE TABLE TO RMAN;
GRANT ALTER SESSION TO RMAN;
GRANT CREATE CLUSTER TO RMAN;
GRANT CREATE SESSION TO RMAN;
GRANT CREATE SYNONYM TO RMAN;
GRANT CREATE SEQUENCE TO RMAN;
GRANT CREATE DATABASE LINK TO RMAN;
GRANT CREATE VIEW TO DV810;
GRANT CREATE TABLE TO DV810;
GRANT ALTER SESSION TO DV810;
GRANT CREATE CLUSTER TO DV810;
GRANT CREATE SESSION TO DV810;
GRANT CREATE SYNONYM TO DV810;
GRANT CREATE SEQUENCE TO DV810;
GRANT CREATE DATABASE LINK TO DV810;
GRANT CREATE VIEW TO QS_CB;
GRANT CREATE TABLE TO QS_CB;
GRANT ALTER SESSION TO QS_CB;
GRANT CREATE CLUSTER TO QS_CB;
GRANT CREATE SESSION TO QS_CB;
GRANT CREATE SYNONYM TO QS_CB;
GRANT CREATE SEQUENCE TO QS_CB;
GRANT CREATE DATABASE LINK TO QS_CB;
GRANT CREATE VIEW TO QS_CS;
GRANT CREATE TABLE TO QS_CS;
GRANT ALTER SESSION TO QS_CS;
GRANT CREATE CLUSTER TO QS_CS;
GRANT CREATE SESSION TO QS_CS;
GRANT CREATE SYNONYM TO QS_CS;
GRANT CREATE SEQUENCE TO QS_CS;
GRANT CREATE DATABASE LINK TO QS_CS;
GRANT CREATE VIEW TO QS_ES;
GRANT CREATE TABLE TO QS_ES;
GRANT ALTER SESSION TO QS_ES;
GRANT CREATE CLUSTER TO QS_ES;
GRANT CREATE SESSION TO QS_ES;
GRANT CREATE SYNONYM TO QS_ES;
GRANT CREATE SEQUENCE TO QS_ES;
GRANT CREATE DATABASE LINK TO QS_ES;
GRANT CREATE VIEW TO QS_OS;
GRANT CREATE TABLE TO QS_OS;
GRANT ALTER SESSION TO QS_OS;
GRANT CREATE CLUSTER TO QS_OS;
GRANT CREATE SESSION TO QS_OS;
GRANT CREATE SYNONYM TO QS_OS;
GRANT CREATE SEQUENCE TO QS_OS;
GRANT CREATE DATABASE LINK TO QS_OS;
GRANT CREATE VIEW TO QS_WS;
GRANT CREATE TABLE TO QS_WS;
GRANT ALTER SESSION TO QS_WS;
GRANT CREATE CLUSTER TO QS_WS;
GRANT CREATE SESSION TO QS_WS;
GRANT CREATE SYNONYM TO QS_WS;
GRANT CREATE SEQUENCE TO QS_WS;
GRANT CREATE DATABASE LINK TO QS_WS;
GRANT CREATE VIEW TO SCOTT;
GRANT CREATE TABLE TO SCOTT;
GRANT ALTER SESSION TO SCOTT;
GRANT CREATE CLUSTER TO SCOTT;
GRANT CREATE SESSION TO SCOTT;
GRANT CREATE SYNONYM TO SCOTT;
GRANT CREATE SEQUENCE TO SCOTT;
GRANT CREATE DATABASE LINK TO SCOTT;
GRANT CREATE VIEW TO CRPCTL;
GRANT CREATE TABLE TO CRPCTL;
GRANT ALTER SESSION TO CRPCTL;
GRANT CREATE CLUSTER TO CRPCTL;
GRANT CREATE SESSION TO CRPCTL;
GRANT CREATE SYNONYM TO CRPCTL;
GRANT CREATE SEQUENCE TO CRPCTL;
GRANT CREATE DATABASE LINK TO CRPCTL;
GRANT CREATE VIEW TO CRPDTA;
GRANT CREATE TABLE TO CRPDTA;
GRANT ALTER SESSION TO CRPDTA;
GRANT CREATE CLUSTER TO CRPDTA;
GRANT CREATE SESSION TO CRPDTA;
GRANT CREATE SYNONYM TO CRPDTA;
GRANT CREATE SEQUENCE TO CRPDTA;
GRANT CREATE DATABASE LINK TO CRPDTA;
GRANT CREATE VIEW TO JDEDBA;
GRANT CREATE TABLE TO JDEDBA;
GRANT ALTER SESSION TO JDEDBA;
GRANT CREATE CLUSTER TO JDEDBA;
GRANT CREATE SESSION TO JDEDBA;
GRANT CREATE SYNONYM TO JDEDBA;
GRANT CREATE SEQUENCE TO JDEDBA;
GRANT CREATE DATABASE LINK TO JDEDBA;
GRANT CREATE VIEW TO QS_ADM;
GRANT CREATE TABLE TO QS_ADM;
GRANT ALTER SESSION TO QS_ADM;
GRANT CREATE CLUSTER TO QS_ADM;
GRANT CREATE SESSION TO QS_ADM;
GRANT CREATE SYNONYM TO QS_ADM;
GRANT CREATE SEQUENCE TO QS_ADM;
GRANT CREATE DATABASE LINK TO QS_ADM;
GRANT CREATE VIEW TO AUDITOR;
GRANT CREATE TABLE TO AUDITOR;
GRANT ALTER SESSION TO AUDITOR;
GRANT CREATE CLUSTER TO AUDITOR;
GRANT CREATE SESSION TO AUDITOR;
GRANT CREATE SYNONYM TO AUDITOR;
GRANT CREATE SEQUENCE TO AUDITOR;
GRANT CREATE DATABASE LINK TO AUDITOR;
GRANT CREATE VIEW TO PRODCTL;
GRANT CREATE TABLE TO PRODCTL;
GRANT ALTER SESSION TO PRODCTL;
GRANT CREATE CLUSTER TO PRODCTL;
GRANT CREATE SESSION TO PRODCTL;
GRANT CREATE SYNONYM TO PRODCTL;
GRANT CREATE SEQUENCE TO PRODCTL;
GRANT CREATE DATABASE LINK TO PRODCTL;
GRANT CREATE VIEW TO PRODDTA;
GRANT CREATE TABLE TO PRODDTA;
GRANT ALTER SESSION TO PRODDTA;
GRANT CREATE CLUSTER TO PRODDTA;
GRANT CREATE SESSION TO PRODDTA;
GRANT CREATE SYNONYM TO PRODDTA;
GRANT CREATE SEQUENCE TO PRODDTA;
GRANT CREATE DATABASE LINK TO PRODDTA;
GRANT CREATE VIEW TO TESTCTL;
GRANT CREATE TABLE TO TESTCTL;
GRANT ALTER SESSION TO TESTCTL;
GRANT CREATE CLUSTER TO TESTCTL;
GRANT CREATE SESSION TO TESTCTL;
GRANT CREATE SYNONYM TO TESTCTL;
GRANT CREATE SEQUENCE TO TESTCTL;
GRANT CREATE DATABASE LINK TO TESTCTL;
GRANT CREATE VIEW TO TESTDTA;
GRANT CREATE TABLE TO TESTDTA;
GRANT ALTER SESSION TO TESTDTA;
GRANT CREATE CLUSTER TO TESTDTA;
GRANT CREATE SESSION TO TESTDTA;
GRANT CREATE SYNONYM TO TESTDTA;
GRANT CREATE SEQUENCE TO TESTDTA;
GRANT CREATE DATABASE LINK TO TESTDTA;
GRANT CREATE VIEW TO QS_CBADM;
GRANT CREATE TABLE TO QS_CBADM;
GRANT ALTER SESSION TO QS_CBADM;
GRANT CREATE CLUSTER TO QS_CBADM;
GRANT CREATE SESSION TO QS_CBADM;
GRANT CREATE SYNONYM TO QS_CBADM;
GRANT CREATE SEQUENCE TO QS_CBADM;
GRANT CREATE DATABASE LINK TO QS_CBADM;
41. ADD A FLASH RECOVERY AREA.
42. UPDATE ALL THE SCHEDULED BACKUPS FOR THE NEWLY UPGRADED ORACLE DATABASE.
You can refer to the following Metalink document if you need further assistance, or email me at samiora@gmail.com:
Note 316889.1 - Complete Checklist for Manual Upgrades to 10gR2
Monday, October 11, 2010
Physical Standby database using Dataguard
Oracle 10g Data Guard is a great tool to ensure high availability, data protection, and disaster recovery for enterprise data. I have been working with Data Guard/standby databases, using both Grid Control and the SQL command line, since 2005, when I was with Oracle Corporation, and I continue to use Data Guard technology today as a Senior Applications DBA in a government sector.
My latest experience with Data Guard was manually creating a Physical Standby Database in a Disaster Recovery DR project. I am maintaining it daily and it works well. I would like to share my experience with the other DBAs.
In this example the database version is 10.2.0.1. The primary and standby databases are located on different machines at different sites. The primary database is called CELL and the standby database is called STAN. I use a Flash Recovery Area and OMF (Oracle Managed Files).
I. Before you get started:
1. Make sure the operating system and platform architecture on the primary and standby systems are the same.
2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.
3. Test the Standby Database creation on a test environment first before working on the Production database.
II. On the Primary Database Side:
1. Enable forced logging on your primary database:
SQL> ALTER DATABASE FORCE LOGGING;
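You can confirm the setting took effect (a sketch; the expected value is YES):

```sql
-- Verify forced logging is enabled on the primary.
SELECT force_logging FROM v$database;
```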
2. Create a password file if it doesn’t exist.
i) To check if a password file already exists, run the following command:
SQL> SELECT * FROM V$PWFILE_USERS;
ii) If it doesn’t exist, use the following command to create one:
- On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdCELL.ora password=***** force=y
(Note: Replace %ORACLE_HOME% with the actual Oracle home path, and ***** with the password for the SYS user.)
- On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdCELL.ora password=***** force=y
(Note: Replace ***** with your actual password for the SYS user.)
3. Configure a Standby Redo log.
i) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:
SQL> select bytes/1024/1024 MB from v$log;
MB
----
50
50
50
ii) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;
iii) Create standby Redo log groups.
My primary database had 3 log file groups originally and I created 3 standby redo log groups using the following commands:
IF OMF (Oracle Managed Files)
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
If non OMF (Manually managed files)
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('C:\oracle\product\10.2.0\oradata\cell\REDO04.LOG') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('C:\oracle\product\10.2.0\oradata\cell\REDO05.LOG') SIZE 50M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('C:\oracle\product\10.2.0\oradata\cell\REDO06.LOG') SIZE 50M;
iv) To verify the results of the standby redo log group creation, run the following query:
SQL>select * from v$standby_log; (or)
SQL> SELECT GROUP#,DBID,BYTES/1024/1024 MB FROM V$STANDBY_LOG;
GROUP#     DBID         MB
---------- ----------- ----
         4 UNASSIGNED    50
         5 UNASSIGNED    50
         6 UNASSIGNED    50
4. Enable Archiving on Primary.
If your primary database is not already in Archive Log mode, enable the archive log mode:
SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination c:\oracle\product\10.2.0\flash_recovery_area\CELL\ARCHIVELOG
Oldest online log sequence 92
Next log sequence to archive 94
Current log sequence 94
5. Set Primary Database Initialization Parameters
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.
i) Create pfile from spfile for the primary database:
- On Windows:
SQL>create pfile=’c:\pfileCELL.ora’ from spfile;
- On UNIX:
SQL>create pfile='/u01/app/oracle/pfileCELL.ora' from spfile;
ii) Edit this pfile to add the new primary and standby role parameters (here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly):
cell.__db_cache_size=490733568
cell.__java_pool_size=4194304
cell.__large_pool_size=4194304
cell.__shared_pool_size=104857600
cell.__streams_pool_size=0
*.audit_file_dest='c:\oracle\product\10.2.0\admin\cell\adump'
*.background_dump_dest='c:\oracle\product\10.2.0\admin\cell\bdump'
*.compatible='10.2.0.1.0'
*.control_files='c:\oracle\product\10.2.0\oradata\cell\control01.ctl','c:\oracle\product\10.2.0\oradata\cell\control02.ctl','c:\oracle\product\10.2.0\oradata\cell\control03.ctl'
*.core_dump_dest='c:\oracle\product\10.2.0\admin\cell\cdump'
*.db_block_size=8192
*.db_cache_size=490733568
*.db_domain=''
*.db_file_multiblock_read_count=16
*.DB_FILE_NAME_CONVERT='c:\oracle\product\10.2.0\oradata\stan','c:\oracle\product\10.2.0\oradata\cell'
*.db_name='cell'
*.db_recovery_file_dest='c:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='cell'
*.FAL_CLIENT='cell'
*.FAL_SERVER='stan'
*.java_pool_size=4194304
*.job_queue_processes=10
*.large_pool_size=4194304
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(cell,stan)'
*.LOG_ARCHIVE_DEST_1='LOCATION=c:\oracle\product\10.2.0\flash_recovery_area\CELL\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=cell'
*.LOG_ARCHIVE_DEST_2='SERVICE=stan LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stan'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.LOG_FILE_NAME_CONVERT='c:\oracle\product\10.2.0\oradata\stan','c:\oracle\product\10.2.0\oradata\cell'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.shared_pool_size=104857600
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.streams_pool_size=0
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='c:\oracle\product\10.2.0\admin\cell\udump'
(Note: For DB_FILE_NAME_CONVERT, specify the location of the standby database datafiles followed by the primary location; for LOG_FILE_NAME_CONVERT, specify the location of the standby database online redo log files followed by the primary location.)
6. Create an SPFILE from the edited PFILE, and restart the primary database using the new SPFILE.
Data Guard requires an SPFILE. Create the SPFILE and restart the database.
- On windows:
SQL> shutdown immediate;
SQL> startup nomount pfile=’c:\pfileCELL.ora’;
SQL>create spfile from pfile=’c:\pfileCELL.ora’;
-- Restart the Primary database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>Startup;
(Note- specify your Oracle home path to replace ‘’).
- On Unix:
SQL> shutdown immediate;
SQL> startup nomount pfile='/u01/app/oracle/pfileCELL.ora';
SQL>create spfile from pfile=’/u01/app/oracle/pfileCELL.ora’;
-- Restart the Primary database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>Startup;
(Note- specify your Oracle home path to replace ‘’).
III. On the Standby Database Site:
1. Create a copy of Primary database data files on the Standby Server:
On Primary DB:
SQL>shutdown immediate;
On Standby Server (While the Primary database is shut down):
i) Create a directory for the data files, for example, on Windows:
c:\oracle\product\10.2.0\oradata\stan
ii) Copy the data files and temp files over.
iii) Create a directory for the online logs, for example, on Windows: c:\oracle\product\10.2.0\oradata\stan
iv) Copy the online logs over.
2. Create a Control File for the standby database:
On Primary DB, create a control file for the standby to use:
SQL>startup mount;
SQL>alter database create standby controlfile as 'STAN.ctl';
File will be created in %ORACLE_HOME%\database directory.
SQL>ALTER DATABASE OPEN;
3. Copy the Primary DB pfile to Standby server and rename/edit the file.
i) Copy pfileCELL.ora from Primary server to Standby server, to database folder on Windows or dbs folder on UNIX under the Oracle home path.
ii) Rename it to pfileSTAN.ora, and modify the file as follows (here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly):
stan.__db_cache_size=490733568
stan.__java_pool_size=4194304
stan.__large_pool_size=4194304
stan.__shared_pool_size=104857600
stan.__streams_pool_size=0
*.audit_file_dest='c:\oracle\product\10.2.0\admin\stan\adump'
*.background_dump_dest='c:\oracle\product\10.2.0\admin\stan\bdump'
*.compatible='10.2.0.1.0'
*.control_files='c:\oracle\product\10.2.0\oradata\stan\stan.ctl','c:\oracle\stan.ctl'
*.core_dump_dest='c:\oracle\product\10.2.0\admin\stan\cdump'
*.db_block_size=8192
*.db_cache_size=490733568
*.db_domain=''
*.db_file_multiblock_read_count=16
*.DB_FILE_NAME_CONVERT='c:\oracle\product\10.2.0\oradata\cell','c:\oracle\product\10.2.0\oradata\stan'
*.db_name='cell'
*.db_recovery_file_dest='c:\oracle\product\10.2.0\flash_recovery_area'
*.db_recovery_file_dest_size=2147483648
*.db_unique_name='stan'
*.FAL_CLIENT='stan'
*.FAL_SERVER='cell'
*.java_pool_size=4194304
*.job_queue_processes=10
*.large_pool_size=4194304
*.LOG_ARCHIVE_CONFIG='DG_CONFIG=(cell,stan)'
*.LOG_ARCHIVE_DEST_1='LOCATION=c:\oracle\product\10.2.0\flash_recovery_area\stan\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=stan'
*.LOG_ARCHIVE_DEST_2='SERVICE=cell LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=cell'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.LOG_ARCHIVE_FORMAT='%t_%s_%r.arc'
*.LOG_ARCHIVE_MAX_PROCESSES=30
*.LOG_FILE_NAME_CONVERT='c:\oracle\product\10.2.0\oradata\cell','c:\oracle\product\10.2.0\oradata\stan'
*.open_cursors=300
*.pga_aggregate_target=203423744
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=612368384
*.shared_pool_size=104857600
*.STANDBY_FILE_MANAGEMENT='AUTO'
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='c:\oracle\product\10.2.0\admin\stan\udump'
4. On the Standby server, create all required directories for dump and archived log destinations:
Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.
'c:\oracle\product\10.2.0\admin\stan\adump'
'c:\oracle\product\10.2.0\admin\stan\bdump'
'c:\oracle\product\10.2.0\admin\stan\cdump'
'c:\oracle\product\10.2.0\admin\stan\udump'
c:\oracle\product\10.2.0\flash_recovery_area\stan\ARCHIVELOG
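On UNIX, the same layout can be created in one pass. This is a sketch only; the ORACLE_BASE path is an assumption to adjust for your environment:

```shell
# Illustrative UNIX equivalents of the Windows directories above.
# ORACLE_BASE is an assumed path here -- substitute your own.
ORACLE_BASE=${ORACLE_BASE:-/tmp/oracle_stan_demo}
for d in adump bdump cdump udump; do
  mkdir -p "$ORACLE_BASE/admin/stan/$d"
done
mkdir -p "$ORACLE_BASE/flash_recovery_area/stan/ARCHIVELOG"
```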
5. Copy the standby control file ‘STAN.ctl’ from primary to standby destinations
c:\oracle\STAN.CTL
c:\oracle\product\10.2.0\oradata\stan\STAN.CTL
6. Copy the Primary password file to the standby server and rename it to pwdSTAN.ora.
On Windows copy it to the %ORACLE_HOME%\database folder; on UNIX copy it to the $ORACLE_HOME/dbs directory, and then rename the password file.
7. On Windows, create a Windows service for the standby instance (optional):
$oradim -NEW -SID STAN -STARTMODE manual
8. Configure listeners for the primary and standby databases.
i) On Primary system: use Oracle Net Manager to configure a listener for the CELL and STAN services.
Then restart the listener.
$lsnrctl stop
$lsnrctl start
Content of the LISTENER.ORA file on the PRIMARY server is:
# listener.ora Network Configuration File: c:\oracle\product\10.2.0\db_1\network\admin\listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = c:\oracle\product\10.2.0\db_1)
(PROGRAM = extproc)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
(ADDRESS = (PROTOCOL = TCP)(HOST = AbdulsamiPC.qic.local)(PORT = 1521))
)
)
ii) On Standby system: use Oracle Net Manager to configure a listener for the CELL and STAN services. Then restart the listener.
$lsnrctl stop
$lsnrctl start
9. Create Oracle Net service names.
i) On Primary system: use Oracle Net Manager to create network service names for CELL and STAN.
Check tnsping to both services:
$tnsping CELL
$tnsping STAN
ii) On Standby system: use Oracle Net Manager to create network service names for CELL and STAN.
Check tnsping to both services:
$tnsping CELL
$tnsping STAN
The content of the TNSNAMES.ORA file on both the PRIMARY and STANDBY servers is:
# tnsnames.ora Network Configuration File: C:\oracle\product\10.2.0\db_1\NETWORK\ADMIN\tnsnames.ora
# Generated by Oracle configuration tools.
CELL =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = abdulsamipc)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = cell)
)
)
STAN =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = asamipc)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = stan)
)
)
10. On the Standby server, set up the environment variables to point to the Standby database.
Set ORACLE_HOME and ORACLE_SID.
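A minimal sketch of that environment setup on UNIX — the ORACLE_HOME path below is illustrative; use the home you installed in step I.2:

```shell
# Point the session at the standby instance before starting SQL*Plus.
export ORACLE_SID=STAN
export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1   # assumed path
export PATH=$ORACLE_HOME/bin:$PATH
echo "ORACLE_SID=$ORACLE_SID" > /tmp/standby_env_check    # quick sanity check
```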
11. Start the standby database in NOMOUNT mode and generate an SPFILE.
- On Windows:
SQL>startup nomount pfile=’c:\pfileSTAN.ora’;
SQL>create spfile from pfile=’c:\pfileSTAN.ora’;
-- Restart the Standby database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>startup mount;
- On Unix:
SQL>startup nomount pfile=’$ORACLE_HOME/dbs/pfileSTAN.ora’;
SQL>create spfile from pfile=’$ORACLE_HOME/dbs/pfileSTAN.ora’;
-- Restart the Standby database using the newly created SPFILE.
SQL>shutdown immediate;
SQL>startup mount;
(Note- specify your Oracle home path to replace ‘’).
12. Start Redo apply
i) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
13. Verify the standby database is performing properly:
i) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;
ii) On Primary, force a logfile switch:
SQL>alter system switch logfile;
iii) On Standby, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;
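Besides v$archived_log, the apply processes themselves can be checked on the standby. The V$MANAGED_STANDBY view exists in 10g for exactly this; the query below is a sketch to run on the mounted standby:

```sql
-- On the standby: an MRP0 row with STATUS of APPLYING_LOG (or WAIT_FOR_LOG)
-- shows managed recovery is running; RFS rows show redo being received.
SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK#
FROM V$MANAGED_STANDBY;
```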
14. If you want the redo data to be applied as it is received, without waiting for the current standby redo log file to be archived, enable real-time apply.
To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;
15. To create multiple standby databases, repeat this procedure.
Maintenance:
1. Check the alert log files of Primary and Standby databases frequently to monitor the database operations in a Data Guard environment.
2. Clean up the archived logs on the Primary and Standby servers.
I scheduled a weekly hot whole-database backup of my primary database that also backs up and deletes the archived logs on the Primary.
For the standby database, I run RMAN to back up and delete the archived logs once a week:
$rman target sys/*****@STAN
RMAN>backup archivelog all delete input;
3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on Primary database, you will have to update the password file for Standby database accordingly, otherwise the logs won’t be shipped to the standby server. If the password files are not in sync then update/recreate password file for the Standby database.
Reference:
Oracle Data Guard Concepts and Administration 10g Release 2 (10.2), B14239-04
For any queries please don't hesitate to contact me, Sami Malik (samiora@gmail.com)
[Senior Oracle Applications DBA at Government Sector in Middle East]
Cell: 0097433164959
Wednesday, September 29, 2010
Disaster Recovery Project Guidelines
Kickoff for the Disaster Recovery project is tomorrow (30 September 2010). I will be very busy during this project, as we are preparing DR for all the business-critical applications at the local DR site in the same country-Qa. From January 2011 we will be implementing another DR site in Europe. I will keep this blog updated with progress on the DR project that we are starting tomorrow. Please wish me good luck, and thanks for all your support and for visiting this blog. For any queries please don't hesitate to contact me on my email masamimalik@yahoo.com, samiora@gmail.com.
Disaster recovery is becoming increasingly important for businesses aware of the threat of both man-made and natural disasters. Having a disaster recovery plan will not only protect an organization's essential data from destruction; it will also help refine business processes and enable the business to recover its operations in the event of a disaster. Though each organization has unique knowledge and assets to maintain, general principles can be applied to disaster recovery. This set of planning guidelines can assist an organization in moving forward with an IT disaster recovery project.
Accountability and endorsement
A key factor in the success of your disaster recovery plan will be holding someone on the executive management team accountable. It could be the CIO, CTO, COO, or, if your company is small, the IT director. Whether this person manages the disaster recovery preparations or delegates them to someone else, it will be necessary for the entire organization to know that the disaster recovery preparations are deemed essential by executive management in order to obtain the cooperation you’ll need from all the staff involved. Without endorsement from higher management, collecting all the information you’ll need to make the disaster recovery project a success will be much more difficult. Even if the disaster recovery project is managed by someone who has had the task delegated, upper management needs to convey to the entire organization the importance and essentiality of the project.
Identify and organize your data
One of the first steps in putting together a disaster recovery plan is to identify your mission-critical data. Doing so will help you understand what data you need to back up for off-site storage. It will also prompt you to document why you need this data and plan how to put it back where it belongs in the event of a recovery operation.
Next, instruct your users to assist you in organizing the data in intuitive directories on a central server. Even if you plan to just back up everything, knowing which files are where is a key part of the recovery process. If, for example, when disaster strikes you have customer data spread all over your network on different users' hard drives, finding the data will not be easy. Restoring all the data from backup media is only half the battle. Once the data is restored, you don't want to be walking around the office saying, "Does anyone know where we keep the XYZ contract?" The data must be organized before you back it up.
Some data types that you should take into consideration for organization on a central repository are as follows:
Key customer files: contracts, agreements, contact information, proposals
User login data: profiles, UNIX .dot files, Config.sys files, Autoexec.bat files
Network infrastructure files: DNS, WINS, DHCP, router tables
User directories
Application data: databases, Web site files
Security configuration files: ACLs, firewall rules, IDS configuration files, UNIX password/shadow files, Microsoft Windows SAM database, VPN configuration files, RADIUS configuration files
Messaging files: key configuration files, user mailboxes, system accounts
Engineer files: source code, release engineering code
Financial and company files: general ledger, insurance policies, accounts payable and accounts receivable files, incorporation registration, enterprise resource planning (ERP) data
License files for applications
Asset inventory
Aside from the data itself, your company needs to have an up-to-date hardware and software asset inventory list on hand at all times. The hardware list should include the equipment make, model, and serial number, and a description of what each particular piece of equipment is being used for. The software inventory should be similar, with the vendor name, version number, patch number, license information, and what the software is being used for. The information for each piece of equipment and software on the list should be mapped to the corresponding devices on the company network map. Be sure to include all cables and connectors, as well as peripheral devices such as printers, fax machines, and scanners. You might want to submit the asset inventory list to your insurance company once a year.
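As a sketch of what a machine-readable inventory might look like (the vendors, models, and serial numbers below are invented for illustration):

```shell
# Build a small asset inventory as CSV; a real list would add patch levels,
# license details, and a mapping to the network map.
cat > /tmp/asset_inventory.csv <<'EOF'
type,vendor,model_or_version,serial_or_license,role
hardware,ExampleCorp,Server-9000,SN-0001,Primary DB server (CELL)
hardware,ExampleCorp,Server-9000,SN-0002,Standby DB server (STAN)
software,Oracle,Database 10.2.0.1,site-license,Primary and standby RDBMS
EOF
# Count assets by type as a quick completeness check.
cut -d, -f1 /tmp/asset_inventory.csv | tail -n +2 | sort | uniq -c
```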
Restoration and recovery procedures
Imagine that a disaster has occurred. You have the data, now what should you do with it? If you don’t have any restoration and recovery procedures, your data won’t be nearly as useful to you. With the data in hand, you need to be able to re-create your entire business from brand-new systems. You’re going to need procedures for rebuilding systems and networks. System recovery and restoration procedures are typically best written by the people that currently administer and maintain the systems. Each system should have recovery procedures that indicate which versions of software and patches should be installed on which types of hardware platforms. It's also important to indicate which configuration files should be restored into which directories. A good procedure will include low-level file execution instructions, such as what commands to type and in what order to type them.
Document decision-making processes
Recovering your data, systems, and networks is one thing, but when you lose staff, recovering the knowledge they held is quite different. You will never be able to recover that knowledge completely. However, you can mitigate this loss by documenting decision-making processes in flowcharts. To do this, have each of your staff identify decisions that they make and then create flowcharts for their thought processes. Sample decisions could be:
How much do you charge for a new service?
How do you know if taking on a particular new project is worth the return?
How do you evaluate new business?
How do you decide whom you should partner with?
How do you decide who your sales prospects are?
How do you decide who your suppliers are?
When a call comes in to your help desk, how does it get routed?
What are your QA procedures for your product?
It's impossible to document every decision your staff is capable of making. To get started, don't ask your staff to document every possible decision-making scenario. Ask them to document the three most important decision-making processes that they use on a consistent basis. You can add new processes to your disaster recovery plan in the future, and you may want to have employees write three new decision-making flowcharts each year at the time of their annual reviews.
Backups are key
As an IT or network administrator, you need to bring all your key data, processes, and procedures together through a backup system that is reliable and easy to replicate. Your IT director's most important job is to ensure that all systems are being backed up on a reliable schedule. This process, though it seems obvious, is often not realized. Assigning backup responsibilities to an administrator is not enough. The IT department needs to have a written schedule that describes which systems get backed up when and whether the backups are full or incremental. You also need to have the backup process fully documented. Finally, test your backup process to make sure it works. Can you restore lost databases? Can you restore lost source code? Can you restore key system files?Finally, you need to store your backup media off-site, preferably in a location at least 50 miles from your present office. Numerous off-site storage vendors offer safe media storage.
Disaster strikes
Let’s say for a moment that the worst occurs and your business is devastated by a disaster, to the point where you need to rebuild your business from scratch. Here are some of the key steps you should take to recover your operations:
1. Notify your insurance company immediately.
2. Identify a recovery site where you will bring your business back up.
3. Obtain your asset inventory list and reorder all lost items.
4. Distribute a network map and asset inventory list to your recovery team.
5. As the new hardware comes in, have your recovery team connect the pieces.
6. Restore your network infrastructure servers first (DNS, routers, etc.).
7. Restore your application servers second.
8. Restore your user data third.
9. Perform any necessary configuration tweaks according to your guidelines.
10. Test all applications for functionality.
11. Test all user logins.
12. Put a notice on your Web site stating that your business was affected by a disaster.
Summary recommendations
It’s likely that in the event of a real disaster, not everything will be recoverable. Your goal should be to recover enough data, processes, and procedures so that your business can be up and running as quickly as possible, once you’re in a new office.Testing your plan is key to ensuring its success. A good way to test your plan is in a lab setting. With uninstalled systems that aren’t connected to the network, see how fast you can install your systems, configure them, and restore essential data. The best test is to use a recovery staff other than the everyday staff that uses and administers the systems. By using staff that aren’t familiar with everyday usage of your systems and applications, you’ll uncover deficiencies in the processes and procedures you’ve documented. Time your recovery scenario and see if you can improve the time it takes for recovery each time you hold a practice drill.
Conclusion
A disaster recovery plan is essential to your company’s long-term success. Even if you never have to use the plan, the process of putting it together will by its very nature increase the security of your assets and improve your overall business efficiency. The preparation of a disaster recovery plan will teach you what data is important and will necessitate that you understand how your business works from a decision-making standpoint. Disaster recovery can be more easily achieved if you follow this simple outline:
>Hold someone accountable for disaster recovery.
>Identify mission-critical data.
>Organize data on a central repository.
>Create procedures for recovering mission-critical servers.
>Create knowledge-based decision-making flowcharts.
>Back up your data on a regular schedule.
>Store your data off-site.
>Test your recovery plan.
For 'Download these guidelines for developing a disaster recovery plan' and more go to the following link,
http://articles.techrepublic.com.com/5100-10878_11-1060886.html?tag=rbxccnbtr1
Disaster recovery is becoming increasingly important for businesses aware of the threat of both man-made and natural disasters. Having a disaster recovery plan will not only protect an organization's essential data from destruction, it will also help refine business processes and enable the business to recover its operations in the event of a disaster. Though each organization has unique knowledge and assets to maintain, general principles can be applied to disaster recovery. This set of planning guidelines can help an organization move forward with an IT disaster recovery project.
Accountability and endorsement
A key factor in the success of your disaster recovery plan will be holding someone on the executive management team accountable. It could be the CIO, CTO, COO, or, if your company is small, the IT director. Whether this person manages the disaster recovery preparations or delegates them to someone else, the entire organization needs to know that executive management deems the preparations essential; otherwise you will not get the cooperation you need from all the staff involved. Without endorsement from upper management, collecting the information you need to make the disaster recovery project a success will be much more difficult.
Identify and organize your data
One of the first steps in putting together a disaster recovery plan is to identify your mission-critical data. Doing so will help you understand what data you need to back up for off-site storage. It will also prompt you to document why you need this data and plan how to put it back where it belongs in the event of a recovery operation.
Next, instruct your users to assist you in organizing the data in intuitive directories on a central server. Even if you plan to just back up everything, knowing which files are where is a key part of the recovery process. If, for example, when disaster strikes you have customer data spread all over your network on different users' hard drives, finding the data will not be easy. Restoring all the data from backup media is only half the battle. Once the data is restored, you don't want to be walking around the office saying, "Does anyone know where we keep the XYZ contract?" The data must be organized before you back it up.
Some data types that you should take into consideration for organization on a central repository are as follows:
Key customer files: contracts, agreements, contact information, proposals
User login data: profiles, UNIX .dot files, Config.sys files, Autoexec.bat files
Network infrastructure files: DNS, WINS, DHCP, router tables
User directories
Application data: databases, Web site files
Security configuration files: ACLs, firewall rules, IDS configuration files, UNIX password/shadow files, Microsoft Windows SAM database, VPN configuration files, RADIUS configuration files
Messaging files: key configuration files, user mailboxes, system accounts
Engineer files: source code, release engineering code
Financial and company files: general ledger, insurance policies, accounts payable and accounts receivable files, incorporation registration, employee resource planning (ERP) data
License files for applications
Asset inventory
Aside from the data itself, your company needs to have an up-to-date hardware and software asset inventory list on hand at all times. The hardware list should include each item's make, model, and serial number, and a description of what it is used for. The software inventory should be similar: vendor name, version number, patch level, license information, and what the software is used for. Each entry on the list should be mapped to the corresponding device on the company network map. Be sure to include all cables and connectors, as well as peripheral devices such as printers, fax machines, and scanners. You might want to submit the asset inventory list to your insurance company once a year.
Restoration and recovery procedures
Imagine that a disaster has occurred. You have the data; now what should you do with it? Without restoration and recovery procedures, your data won't be nearly as useful to you. With the data in hand, you need to be able to re-create your entire business on brand-new systems, so you're going to need procedures for rebuilding systems and networks. System recovery and restoration procedures are typically best written by the people who currently administer and maintain the systems. Each system should have recovery procedures that indicate which versions of software and patches should be installed on which hardware platforms, and which configuration files should be restored into which directories. A good procedure will include low-level instructions, such as which commands to type and in what order to type them.
Document decision-making processes
Recovering your data, systems, and networks is one thing, but when you lose staff, recovering the knowledge they held is quite different. You will never be able to recover that knowledge completely. However, you can mitigate this loss by documenting decision-making processes in flowcharts. To do this, have each of your staff identify decisions that they make and then create flowcharts for their thought processes. Sample decisions could be:
How much do you charge for a new service?
How do you know if taking on a particular new project is worth the return?
How do you evaluate new business?
How do you decide whom you should partner with?
How do you decide who your sales prospects are?
How do you decide who your suppliers are?
When a call comes in to your help desk, how does it get routed?
What are your QA procedures for your product?
It's impossible to document every decision your staff is capable of making. To get started, don't ask your staff to document every possible decision-making scenario. Ask them to document the three most important decision-making processes that they use on a consistent basis. You can add new processes to your disaster recovery plan in the future, and you may want to have employees write three new decision-making flowcharts each year at the time of their annual reviews.
Backups are key
As an IT or network administrator, you need to bring all your key data, processes, and procedures together through a backup system that is reliable and easy to replicate. Your IT director's most important job is to ensure that all systems are backed up on a reliable schedule. Though this seems obvious, it is often not done. Assigning backup responsibilities to an administrator is not enough: the IT department needs a written schedule that describes which systems get backed up when, and whether each backup is full or incremental, and the backup process itself should be fully documented. Test your backup process to make sure it works. Can you restore lost databases? Lost source code? Key system files? Finally, store your backup media off-site, preferably in a location at least 50 miles from your present office. Numerous off-site storage vendors offer safe media storage.
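As an illustration of such a written schedule, the entries below are a minimal sketch in crontab form, assuming a weekly full / daily incremental rotation and hypothetical script paths; your actual systems, paths, and frequencies would come from your own documented schedule.

```
# Hypothetical backup rotation (sketch only):
# Full backup every Sunday at 01:00
0 1 * * 0 /usr/local/bin/full-backup > /var/log/backup-full.log 2>&1
# Incremental backup Monday through Saturday at 01:00
0 1 * * 1-6 /usr/local/bin/incremental-backup > /var/log/backup-incr.log 2>&1
```

Writing the schedule down in this form also makes it easy to review which systems are covered and when.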
Disaster strikes
Let’s say for a moment that the worst occurs and your business is devastated by a disaster, to the point where you need to rebuild your business from scratch. Here are some of the key steps you should take to recover your operations:
1. Notify your insurance company immediately.
2. Identify a recovery site where you will bring your business back up.
3. Obtain your asset inventory list and reorder all lost items.
4. Distribute a network map and asset inventory list to your recovery team.
5. As the new hardware comes in, have your recovery team connect the pieces.
6. Restore your network infrastructure servers first (DNS, routers, etc.).
7. Restore your application servers second.
8. Restore your user data third.
9. Perform any necessary configuration tweaks according to your guidelines.
10. Test all applications for functionality.
11. Test all user logins.
12. Put a notice on your Web site stating that your business was affected by a disaster.
Summary recommendations
It’s likely that in the event of a real disaster, not everything will be recoverable. Your goal should be to recover enough data, processes, and procedures that your business can be up and running as quickly as possible once you’re in a new office. Testing your plan is key to ensuring its success. A good way to test your plan is in a lab setting: with fresh systems that aren’t connected to the network, see how fast you can install your systems, configure them, and restore essential data. The best test uses a recovery staff other than the everyday staff who use and administer the systems. By using staff who aren’t familiar with everyday usage of your systems and applications, you’ll uncover deficiencies in the processes and procedures you’ve documented. Time your recovery scenario and see if you can improve the recovery time each time you hold a practice drill.
Conclusion
A disaster recovery plan is essential to your company’s long-term success. Even if you never have to use the plan, the process of putting it together will by its very nature increase the security of your assets and improve your overall business efficiency. The preparation of a disaster recovery plan will teach you what data is important and will necessitate that you understand how your business works from a decision-making standpoint. Disaster recovery can be more easily achieved if you follow this simple outline:
>Hold someone accountable for disaster recovery.
>Identify mission-critical data.
>Organize data on a central repository.
>Create procedures for recovering mission-critical servers.
>Create knowledge-based decision-making flowcharts.
>Back up your data on a regular schedule.
>Store your data off-site.
>Test your recovery plan.
To download these guidelines for developing a disaster recovery plan, and for more, go to the following link:
http://articles.techrepublic.com.com/5100-10878_11-1060886.html?tag=rbxccnbtr1
Monday, August 30, 2010
Discoverer 11g certified on new platforms
Oracle Business Intelligence Discoverer is an ad-hoc query, reporting, analysis, and Web-publishing tool that allows end-users to work directly with Oracle E-Business Suite OLTP data.
Oracle has certified Discoverer 11g version 11.1.1.3 (Patchset 2) with the E-Business Suite on six platforms. Those certifications included Oracle Enterprise Linux, Red Hat Enterprise Linux, and SUSE Linux Enterprise Server.
As a follow-on to that original certification announcement, Oracle Fusion Middleware 11g Release 1 Discoverer version 11.1.1.3 is now certified with Oracle E-Business Suite Release 11i and R12 (12.0.x, 12.1.x) on a dozen additional platforms.
New platforms certified for EBS 11i
* HP-UX PA-RISC (64-bit) (11.23, 11.31)
* IBM AIX on Power Systems (64-bit) (5.3, 6.1)
* Microsoft Windows Server (32-bit) (2003)
* Oracle Solaris on SPARC (64-bit) (9, 10)
New platforms certified for EBS 12.0.x and 12.1.x
* Linux x86-64 (Oracle Enterprise Linux 4, 5)
* Linux x86-64 (RHEL 4, 5)
* Linux x86-64 (SLES 10)
* HP-UX Itanium (11.23, 11.31)
* HP-UX PA-RISC (64-bit) (11.23, 11.31)
* IBM AIX on Power Systems (64-bit) (5.3, 6.1)
* Microsoft Windows Server (32-bit) (2003, 2008)
* Oracle Solaris on SPARC (64-bit) (9, 10)
References
* Using Discoverer 11.1.1 with Oracle E-Business Suite Release 12 (Note 1074326.1)
* Using Discoverer 11.1.1 with Oracle E-Business Suite Release 11i (Note 1073963.1)
Wednesday, July 7, 2010
Linux Cron job examples
MIN HOUR DOM MON DOW CMD > /tmp/output.log
Crontab Fields and Allowed Ranges (Linux Crontab Syntax)
Field   Description      Allowed Values
MIN     Minute field     0-59
HOUR    Hour field       0-23
DOM     Day of month     1-31
MON     Month field      1-12
DOW     Day of week      0-6 (0 = Sunday)
CMD     Command          Any command to be executed
/tmp/output.log is the command output log file.
1. Scheduling a Job For a Specific Time Every Day
The basic usage of cron is to execute a job at a specific time, as shown below. This will execute the full backup shell script (full-backup) on June 10th at 08:30 AM.
Please note that the time field uses 24-hour format. So, for 8 AM use 8, and for 8 PM use 20.
30 08 10 06 * /home/samimalik/full-backup
30 – 30th Minute
08 – 08 AM
10 – 10th Day
06 – 6th Month (June)
* – Every day of the week
2. Schedule a Job For More Than One Instance (e.g. Twice a Day)
The following entry takes an incremental backup twice a day, every day.
This example executes the specified incremental backup shell script (incremental-backup) at 11:00 and 16:00 every day. A comma-separated value in a field specifies that the command should be executed at each of the listed times.
00 11,16 * * * /home/samimalik/bin/incremental-backup
00 – 0th Minute (Top of the hour)
11,16 – 11 AM and 4 PM
* – Every day
* – Every month
* – Every day of the week
3. Schedule a Job for Specific Range of Time (e.g. Only on Weekdays)
If you want a job to be scheduled for every hour within a specific range of time, use the following.
Cron Job every day during working hours
This example checks the status of the database every day (including weekends) during working hours, 9 a.m. – 6 p.m.
00 09-18 * * * /home/samimalik/bin/check-db-status
00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am, 11 am, 12 pm, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
* – Every day of the week
Cron Job every weekday during working hours
This example checks the status of the database every weekday (i.e. excluding Sat and Sun) during working hours, 9 a.m. – 6 p.m.
00 09-18 * * 1-5 /home/samimalik/bin/check-db-status
00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am, 11 am, 12 pm, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
1-5 – Mon, Tue, Wed, Thu and Fri (every weekday)
4. How to View Crontab Entries?
View Current Logged-In User’s Crontab entries
To view your crontab entries, type crontab -l from your Unix account as shown below.
samimalik@dev-db$ crontab -l
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
[Note: This displays crontab of the current logged in user]
View Root Crontab entries
Log in as the root user (su - root) and run crontab -l as shown below.
root@dev-db# crontab -l
no crontab for root
View Other Linux Users' Crontab entries
To view the crontab entries of other Linux users, log in as root and use crontab -u {username} -l as shown below.
root@dev-db# crontab -u sathiya -l
@monthly /home/sathiya/monthly-backup
00 09-18 * * * /home/sathiya/check-db-status
5. How to Edit Crontab Entries?
Edit Current Logged-In User’s Crontab entries
To edit crontab entries, use crontab -e as shown below. By default this will edit the crontab of the currently logged-in user.
samimalik@dev-db$ crontab -e
@yearly /home/samimalik/centos/bin/annual-maintenance
*/10 * * * * /home/samimalik/debian/bin/check-disk-space
~
"/tmp/crontab.XXXXyjWkHw" 2L, 83C
[Note: This will open the crontab file in Vim editor for editing.
Please note cron created a temporary /tmp/crontab.XX... ]
When you save the above temporary file with :wq, it will save the crontab and display the following message indicating the crontab was successfully modified.
~
"crontab.XXXXyjWkHw" 2L, 83C written
crontab: installing new crontab
Edit Root Crontab entries
Log in as the root user (su - root) and run crontab -e as shown below.
root@dev-db# crontab -e
Edit Other Linux Users' Crontab entries
To edit the crontab entries of other Linux users, log in as root and use crontab -u {username} -e as shown below.
root@dev-db# crontab -u sathiya -e
@monthly /home/sathiya/fedora/bin/monthly-backup
00 09-18 * * * /home/sathiya/ubuntu/bin/check-db-status
~
~
~
"/tmp/crontab.XXXXyjWkHw" 2L, 83C
6. Schedule a Job for Every Minute Using Cron.
Ideally you may not have a requirement to schedule a job every minute, but understanding this example will help you understand the other examples mentioned below in this article.
* * * * * CMD
The * means every possible value, i.e. every minute of every hour throughout the year. Beyond using * directly, you will find it very useful in the following cases.
Specifying */5 in the minute field means every 5 minutes.
Specifying 0-10/2 in the minute field means every 2 minutes within the first 10 minutes.
The same convention can be used in the other four fields.
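To make the step-value semantics concrete, this short shell sketch enumerates the minutes that 0-10/2 and */5 match, assuming standard Vixie-cron step behavior (start at the low end of the range, advance by the step):

```shell
# Minutes matched by "0-10/2": every 2 minutes within 0-10.
echo "0-10/2 -> $(seq -s ' ' 0 2 10)"
# Minutes matched by "*/5" in the minute field (equivalent to 0-59/5).
echo "*/5    -> $(seq -s ' ' 0 5 59)"
```

The first line prints 0 2 4 6 8 10; the second prints the twelve values 0, 5, 10, ... 55.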
7. Schedule a Background Cron Job For Every 10 Minutes.
Use the following, if you want to check the disk space every 10 minutes.
*/10 * * * * /home/samimalik/check-disk-space
This executes the specified command (check-disk-space) every 10 minutes throughout the year. If you need to run the command only during office hours, or only outside them, the examples above show how to restrict the schedule.
Instead of specifying values in the five fields, in special cases you can use @ followed by a keyword, such as reboot, midnight, yearly, or hourly, as shown below.
Table: Cron special keywords and their meanings
Keyword    Equivalent
@yearly    0 0 1 1 *
@monthly   0 0 1 * *
@daily     0 0 * * *
@midnight  0 0 * * *
@hourly    0 * * * *
@reboot    Run at startup
8. Schedule a Job For First Minute of Every Year using @yearly
If you want a job to be executed in the first minute of every year, you can use the @yearly cron keyword as shown below.
This will execute the system annual maintenance using the annual-maintenance shell script at 00:00 on January 1st every year.
@yearly /home/samimalik/red-hat/bin/annual-maintenance
9. Schedule a Cron Job Beginning of Every Month using @monthly
This is similar to @yearly above, but executes the command once a month, using the @monthly cron keyword.
This will execute the tape-backup shell script at 00:00 on the 1st of every month.
@monthly /home/samimalik/suse/bin/tape-backup
10. Schedule a Background Job Every Day using @daily
Using the @daily cron keyword, this will do a daily log file cleanup using the cleanup-logs shell script at 00:00 every day.
@daily /home/samimalik/arch-linux/bin/cleanup-logs "day started"
11. How to Execute a Linux Command After Every Reboot using @reboot?
Using the @reboot cron keyword, this will execute the specified command once each time the machine boots.
@reboot CMD
12. How to Disable/Redirect the Crontab Mail Output using MAIL keyword?
By default crontab sends the job output to the user who scheduled the job. If you want to redirect the output to a specific user, add or update the MAIL variable in the crontab as shown below.
samimalik@dev-db$ crontab -l
MAIL="samimalik"
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
[Note: Crontab of the current logged in user with MAIL variable]
If you want no mail to be sent at all, i.e. to stop the crontab output from being emailed, add or update the MAIL variable in the crontab as shown below.
MAIL=""
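A per-job alternative to the crontab-wide MAIL="" setting is to redirect the job's output yourself, since cron only mails output the job actually produces. A sketch, reusing the check-disk-space example from above:

```
# Discard this job's stdout and stderr so cron has nothing to mail:
*/10 * * * * /home/samimalik/check-disk-space > /dev/null 2>&1
```

This lets you silence noisy jobs individually while still receiving mail from the others.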
13. How to Execute a Linux Cron Job Every Second Using Crontab
You cannot schedule an every-second cron job, because the minimum unit you can specify in cron is the minute. In a typical scenario, there is no reason for most of us to run any job every second.
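If you genuinely need sub-minute execution, a common workaround (sketched below; not part of the original article, and the runner function is hypothetical) is to schedule a script once a minute and have the script itself loop with sleep:

```shell
# Run a command once per second for a given number of seconds.
# Scheduled from cron as, e.g.:  * * * * * /home/samimalik/every-second.sh
run_every_second() {
    n=$1; shift          # number of iterations
    i=0
    while [ "$i" -lt "$n" ]; do
        "$@"             # the task to run each second
        i=$((i + 1))
        sleep 1
    done
}
# Demo: print a tick 3 times, one second apart.
run_every_second 3 echo tick
```

In a real once-a-minute job you would loop roughly 60 times; note that the loop body's own runtime adds to each interval.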
14. Specify PATH Variable in the Crontab
In all the above examples, we specified the absolute path of the Linux command or shell script to be executed.
For example, instead of specifying /home/samimalik/tape-backup, if you want to specify just tape-backup, then add the path /home/samimalik to the PATH variable in the crontab as shown below.
samimalik@dev-db$ crontab -l
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/home/samimalik
@yearly annual-maintenance
*/10 * * * * check-disk-space
[Note: Crontab of the current logged-in user with PATH variable]
15. Installing Crontab From a Cron File
Instead of directly editing the crontab file, you can add all the entries to a cron file first. Once you have all those entries in the file, you can load them into the crontab as shown below.
samimalik@dev-db$ crontab -l
no crontab for samimalik
$ cat cron-file.txt
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
samimalik@dev-db$ crontab cron-file.txt
samimalik@dev-db$ crontab -l
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
Note: This will install cron-file.txt as your crontab, which will also remove your old cron entries. So please be careful when loading cron entries from a cron-file.txt.
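Because crontab cron-file.txt replaces the entire crontab, a safer pattern when you only want to add entries is to merge the existing crontab with the new file before loading it. A sketch (left commented, since running it modifies your live crontab):

```
# Append new entries instead of overwriting (merge-then-install sketch):
# (crontab -l 2>/dev/null; cat cron-file.txt) | crontab -
```

The 2>/dev/null silences the "no crontab for ..." message when the user has no existing crontab.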
Crontab Fields and Allowed Ranges (Linux Crontab Syntax)
Field Description AllowedValue
MIN Minute field 0 to 59
HOUR Hour field 0 to 23
DOM Day of Month 1-31
MON Month field 1-12
DOW Day Of Week 0-6
CMD Command Any command to be executed
/tmp/output.log is the command output log file.
1. Scheduling a Job For a Specific Time Every Day
The basic usage of cron is to execute a job in a specific time as shown below. This will execute the Full backup shell script (full-backup) on 10th June 08:30 AM.
Please note that the time field uses 24 hours format. So, for 8 AM use 8, and for 8 PM use 20.
30 08 10 06 * /home/samimalik/full-backup30 – 30th Minute
08 – 08 AM
10 – 10th Day
06 – 6th Month (June)
* – Every day of the week
2. Schedule a Job For More Than One Instance (e.g. Twice a Day)The following script take a incremental backup twice a day every day.
This example executes the specified incremental backup shell script (incremental-backup) at 11:00 and 16:00 on every day. The comma separated value in a field specifies that the command needs to be executed in all the mentioned time.
00 11,16 * * * /home/samimalik/bin/incremental-backup00 – 0th Minute (Top of the hour)
11,16 – 11 AM and 4 PM
* – Every day
* – Every month
* – Every day of the week
3. Schedule a Job for Specific Range of Time (e.g. Only on Weekdays)
If you wanted a job to be scheduled for every hour with in a specific range of time then use the following.
Cron Job everyday during working hours
This example checks the status of the database everyday (including weekends) during the working hours 9 a.m – 6 p.m
00 09-18 * * * /home/samimalik/bin/check-db-status00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am,11 am, 12 am, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
* – Every day of the week
Cron Job every weekday during working hours
This example checks the status of the database every weekday (i.e excluding Sat and Sun) during the working hours 9 a.m – 6 p.m.
00 09-18 * * 1-5 /home/samimalik/bin/check-db-status00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am,11 am, 12 am, 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
1-5 -Mon, Tue, Wed, Thu and Fri (Every Weekday)
4. How to View Crontab Entries?
View Current Logged-In User’s Crontab entries
To view your crontab entries type crontab -l from your unix account as shown below.
samimalik@dev-db$ crontab -l
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
[Note: This displays crontab of the current logged in user]
View Root Crontab entries
Login as root user (su – root) and do crontab -l as shown below.
root@dev-db# crontab -l
no crontab for rootCrontab HowTo: View Other Linux User’s Crontabs entries
To view crontab entries of other Linux users, login to root and use -u {username} -l as shown below.
root@dev-db# crontab -u sathiya -l
@monthly /home/sathiya/monthly-backup
00 09-18 * * * /home/sathiya/check-db-status
5. How to Edit Crontab Entries?
Edit Current Logged-In User’s Crontab entries
To edit a crontab entries, use crontab -e as shown below. By default this will edit the current logged-in users crontab.
samimalik@dev-db$ crontab -e
@yearly /home/samimalik/centos/bin/annual-maintenance
*/10 * * * * /home/samimalik/debian/bin/check-disk-space
~
"/tmp/crontab.XXXXyjWkHw" 2L, 83C
[Note: This will open the crontab file in Vim editor for editing.
Please note cron created a temporary /tmp/crontab.XX... ]When you save the above temporary file with :wq, it will save the crontab and display the following message indicating the crontab is successfully modified.
~
"crontab.XXXXyjWkHw" 2L, 83C written
crontab: installing new crontabEdit Root Crontab entries
Login as root user (su – root) and do crontab -e as shown below.
root@dev-db# crontab -e
Edit Other Linux User's Crontab File entries
To edit the crontab entries of other Linux users, log in as root and use crontab -u {username} -e as shown below.
root@dev-db# crontab -u sathiya -e
@monthly /home/sathiya/fedora/bin/monthly-backup
00 09-18 * * * /home/sathiya/ubuntu/bin/check-db-status
~
~
~
"/tmp/crontab.XXXXyjWkHw" 2L, 83C
6. Schedule a Job for Every Minute Using Cron.
Ideally, you may not have a requirement to schedule a job every minute. But understanding this example will help you understand the other examples mentioned below in this article.
* * * * * CMD
The * means all possible values – i.e., every minute of every hour throughout the year. Rather than using this * directly, you will find it useful in the following cases.
When you specify */5 in the minute field, it means every 5 minutes.
When you specify 0-10/2 in the minute field, it means every 2 minutes within the first 10 minutes.
The same convention can be used in the other 4 fields as well.
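Putting the step syntax together, a few illustrative crontab entries (the script names other than check-disk-space are hypothetical):

```
# Every 5 minutes:
*/10 * * * * /home/samimalik/check-disk-space
# Minutes 0, 2, 4, 6, 8 and 10 of every hour:
0-10/2 * * * * /home/samimalik/quick-check
# Every 4 hours, on the hour (step value in the hour field):
0 */4 * * * /home/samimalik/periodic-sync
```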
7. Schedule a Background Cron Job For Every 10 Minutes.
Use the following if you want to check the disk space every 10 minutes.
*/10 * * * * /home/samimalik/check-disk-space
This executes the specified command check-disk-space every 10 minutes throughout the year. But you may have a requirement to execute the command only during office hours, or vice versa. The earlier examples show how to do that.
Instead of specifying values in the 5 fields, we can specify them using a single keyword as mentioned below.
There are special cases in which, instead of the above 5 fields, you can use @ followed by a keyword – such as reboot, midnight, yearly, hourly.
Table: Cron special keywords and their meaning
Keyword Equivalent
@yearly 0 0 1 1 *
@monthly 0 0 1 * *
@daily 0 0 * * *
@hourly 0 * * * *
@reboot Run at startup.
8. Schedule a Job For First Minute of Every Year using @yearly
If you want a job to be executed on the first minute of every year, then you can use the @yearly cron keyword as shown below.
This will execute the system annual maintenance using the annual-maintenance shell script at 00:00 on January 1st every year.
@yearly /home/samimalik/red-hat/bin/annual-maintenance
9. Schedule a Cron Job Beginning of Every Month using @monthly
This is similar to @yearly above, but it executes the command once a month, using the @monthly cron keyword.
This will execute the shell script tape-backup at 00:00 on 1st of every month.
@monthly /home/samimalik/suse/bin/tape-backup
10. Schedule a Background Job Every Day using @daily
Using the @daily cron keyword, this will do a daily log file cleanup using the cleanup-logs shell script at 00:00 every day.
@daily /home/samimalik/arch-linux/bin/cleanup-logs "day started"
11. How to Execute a Linux Command After Every Reboot using @reboot?
Using the @reboot cron keyword, this will execute the specified command once every time the machine boots.
@reboot CMD
12. How to Disable/Redirect the Crontab Mail Output using the MAILTO Variable?
By default, crontab mails the job output to the user who scheduled the job. If you want to redirect the output to a specific user, add or update the MAILTO variable in the crontab as shown below.
samimalik@dev-db$ crontab -l
MAILTO="samimalik"
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
[Note: Crontab of the current logged-in user with the MAILTO variable]
If you do not want the mail to be sent anywhere, i.e., to stop the crontab output from being emailed, set the MAILTO variable to an empty string as shown below.
MAILTO=""
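Another common pattern is to silence or capture output per job with shell redirection, instead of turning off mail for the whole crontab. The log path below is illustrative:

```
# Discard all output from this job:
*/10 * * * * /home/samimalik/check-disk-space > /dev/null 2>&1
# Or append it to a log file instead:
*/10 * * * * /home/samimalik/check-disk-space >> /tmp/check-disk-space.log 2>&1
```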
13. How to Execute a Linux Cron Job Every Second Using Crontab.
You cannot schedule an every-second cron job, because the minimum unit you can specify in cron is a minute. In a typical scenario, there is no reason for most of us to run any job every second on the system.
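If you genuinely need sub-minute runs, a common workaround (a sketch, not a cron feature) is to schedule a wrapper script once a minute and loop inside it with sleep. The task function below is a stand-in echo; replace it with the real command:

```shell
#!/bin/sh
# Sketch: cron fires at most once a minute, so this wrapper runs a task
# several times within that minute. Crontab entry for it would be:
#   * * * * * /path/to/wrapper.sh
task() {
    echo "task run $1"    # stand-in; replace with the real command
}

run_every() {
    # $1 = seconds between runs, $2 = number of runs
    n=1
    while [ "$n" -le "$2" ]; do
        task "$n"
        [ "$n" -lt "$2" ] && sleep "$1"
        n=$((n + 1))
    done
}

# In the real wrapper you would call: run_every 10 6
# (every 10 seconds, 6 times, covering the minute).
# Zero-second interval here just to demonstrate:
run_every 0 3
```
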
14. Specify PATH Variable in the Crontab
In all the above examples, we specified the absolute path of the Linux command or the shell script that needs to be executed.
For example, instead of specifying /home/samimalik/tape-backup, if you want to specify just tape-backup, then add the path /home/samimalik to the PATH variable in the crontab as shown below.
samimalik@dev-db$ crontab -l
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/home/samimalik
@yearly annual-maintenance
*/10 * * * * check-disk-space
[Note: Crontab of the current logged-in user with the PATH variable]
15. Installing Crontab From a Cron File
Instead of directly editing the crontab file, you can also add all the entries to a cron file first. Once you have all those entries in the file, you can upload or install them to the cron as shown below.
samimalik@dev-db$ crontab -l
no crontab for samimalik
$ cat cron-file.txt
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
samimalik@dev-db$ crontab cron-file.txt
samimalik@dev-db$ crontab -l
@yearly /home/samimalik/annual-maintenance
*/10 * * * * /home/samimalik/check-disk-space
Note: This will install cron-file.txt as your crontab, which will also remove your old cron entries. So, please be careful while uploading cron entries from a cron-file.txt.
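Because installing from a file replaces the whole crontab, a safer pattern is to rebuild the file from the current entries and append to it. This is a sketch; the @daily entry is illustrative and the final install step is commented out:

```shell
#!/bin/sh
# Build the new crontab from the existing one, then append new entries,
# so installing the file does not silently drop old jobs.
TMP=$(mktemp)
crontab -l 2>/dev/null > "$TMP" || true   # start from current entries, if any
echo '@daily /home/samimalik/cleanup-logs' >> "$TMP"
echo '*/10 * * * * /home/samimalik/check-disk-space' >> "$TMP"
# crontab "$TMP"    # install the merged file (uncomment to apply)
cat "$TMP"
rm -f "$TMP"
```
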
Labels:
Redhat Linux
Monday, June 21, 2010
Upgrade steps from 11i to r12 with db 11g - rough
Hi Folks, last month I was busy with an upgrade project: upgrading EBS 11i (11.5.10.2) to Release 12 (12.1.1) with a 64-bit Oracle 11g (11.1.0.7) database. The following are the steps that were performed. The steps are just a rough run, but if you want to see the exact phases and tasks, please refer to my earlier post 'Tasks and Phases to Upgrade Oracle Applications 11i to r12 (12.1.1)'. For any queries, please email me at masamimalik@yahoo.com or call me on my cell 0097433164959.
Also refer to the Oracle material at the following link, which gives the details on how to upgrade.
http://download.oracle.com/docs/cd/B53825_05/current/acrobat/121upgrade.pdf
1) clone the apps tier and the db tier from PROD to TEST (perform this on TEST DB)
2) Apply the ATG RUP 6
Apply these patches on 11.5.10.2 using 9i database
4252319 - 06:48 PM - 06:50 (total 2 min)
5903765 - 06:52 PM - 08:30 PM
starts with 315 jobs
there will be around 14000 invalid objects
while running autoconfig, there were file locks, so press Ctrl+C, reboot the server, and then restart the adpatch session
5903765_AR
4583125
7705743
before the upgrade, and before running utlu111i.sql, copy the timezone files from the patch or from the oacore directory of any existing 9.2.0 Oracle home under f:
3) Upgrade the database to 11gR1 on the same TEST DB server
4) Convert to multi-org
5) Install the filesystem pointing to the same driver
6) Apply these patches
6767273
7461070
7) The upgrade American English driver patch starts with 119322 jobs
workers will fail at
Time is: Sat May 15 2010 02:36:41
1) There are now 119259 jobs remaining (current phase=A0):
4 running, 60 ready to run and 119195 waiting.
FAILED: file funmochk.sql on worker 2 for product fun username FUN
Reason:
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\fun\12.0.0\patch\115\sql\funmochk.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
DECLARE
*
ERROR at line 1:
ORA-20000: Multi-Org architecture is not enabled. Please convert the target
instance to enable Multi-Org before upgrading to Release 12
ORA-06512: at line 19
Solution - enable multiorg
2) Time is: Mon May 17 2010 12:08:34
There are now 38169 jobs remaining (current phase=A144):
0 running, 0 ready to run and 38169 waiting.
ATTENTION: All workers either have failed or are waiting:
FAILED: file MSDODPCODE.sql on worker 1.
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\msd\12.0.0\patch\115\sql\MSDODPCODE.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
begin
*
ERROR at line 1:
ORA-33292: Insufficient permissions to access analytic workspace APPS.ODPCODE
using the specified access mode.
ORA-06512: at "SYS.DBMS_AW", line 67
ORA-06512: at "SYS.DBMS_AW", line 131
ORA-06512: at "SYS.DBMS_AW", line 984
ORA-06512: at "SYS.DBMS_AW", line 937
ORA-33262: Analytic workspace ODPCODE does not exist.
ORA-06512: at "SYS.DBMS_AW", line 908
ORA-06512: at "APPS.MSD_AW_LOADER", line 10
ORA-06512: at line 3
FAILED: file pqbgtasd.sql on worker 4.
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\pqh\12.0.0\patch\115\sql\pqbgtasd.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
PL/SQL procedure successfully completed.
Commit complete.
API User Hook Pre-Processor One Module Report
=============================================
Created on 2010/05/17 12:08:31 (YYYY/MM/DD HH:MM:SS)
CREATE_BUDGET(Business Process API) successful.
Module Name: UPDATE_BUDGET
Module Type: Business Process API
Hook Package: PQH_BUDGETS_BK2
-----------------------------
Hook: Before Process
--------------------
(XX_RETURN_FROM_LEAVE_PKG.LEAVE_DEL) There is a parameter to the call procedure
which is not available at this hook. Check your call procedure parameters. Code
to carry out this hook call has not been created.
DELETE_BUDGET(Business Process API) successful.
BEGIN hr_api_user_hooks_utility.clear_hook_report; END;
Reason: a custom API user hook call (XX_RETURN_FROM_LEAVE_PKG.LEAVE_DEL)
Solution: delete it.
3) Time is: Tue May 18 2010 01:20:39
ATTENTION: All workers either have failed or are waiting:
FAILED: file czhist.sql on worker 1.
Start time for file is: Tue May 18 2010 01:11:38
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\cz\12.0.0\patch\115\sql\czhist.sql &un_cz
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
PL/SQL procedure successfully completed.
BEGIN FND_STATS.GATHER_COLUMN_STATS('CZ','CZ_DEVL_PROJECTS','DELETED_FLAG'); END;
*
ERROR at line 1:
ORA-20002: Version of statistics table APPLSYS.FND_STATTAB is too old. Please
try upgrading it with dbms_stats.upgrade_stat_table
ORA-06512: at "APPS.FND_STATS", line 664
ORA-06512: at "APPS.FND_STATS", line 2497
ORA-06512: at line 1
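The ORA-20002 message itself names the fix. A sketch of the command, run from SQL*Plus as a user with access to the APPLSYS.FND_STATTAB statistics table, after which the failed worker can be restarted:

```
SQL> exec dbms_stats.upgrade_stat_table('APPLSYS', 'FND_STATTAB');
```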
4)
Time is: Sun May 16 2010 03:04:48
ATTENTION: All workers either have failed or are waiting:
FAILED: file csppl.odf on worker 1.
FAILED: file csprecm.odf on worker 2.
FAILED: file csdcsdv.odf on worker 3.
FAILED: file csmcsma.odf on worker 4.
Time is: Sun May 16 2010 09:24:04
There are now 81692 jobs remaining (current phase=A70):
3 running, 115 ready to run and 81574 waiting.
ATTENTION: All workers either have failed or are waiting:
FAILED: file csppl.odf on worker 1.
FAILED: file csprecm.odf on worker 2.
FAILED: file csdcsdv.odf on worker 3.
FAILED: file csmcsma.odf on worker 4.
ATTENTION: All workers either have failed or are waiting:
FAILED: file bivdelmenu.sql on worker 1.
FAILED: file BIVMEDE71.sql on worker 2.
FAILED: file amwmndel.sql on worker 3.
FAILED: file arhdelmn.sql on worker 4.
Reason:
Start time for file is: Sun May 16 2010 15:52:28
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\amw\12.0.0\patch\115\sql\amwmndel.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
BEGIN
*
ERROR at line 1:
ORA-04063: package body "APPS.FND_FUNCTION_SECURITY" has errors
ORA-06508: PL/SQL: could not find program unit being called:
"APPS.FND_FUNCTION_SECURITY"
ORA-06512: at line 2
ATTENTION: All workers either have failed or are waiting:
FAILED: file bivdelmenu.sql on worker 1.
FAILED: file BIVMEDE71.sql on worker 2.
FAILED: file amwmndel.sql on worker 3.
FAILED: file arhdelmn.sql on worker 4.
Time is: Sun May 16 2010 22:52:18
ATTENTION: All workers either have failed or are waiting:
FAILED: file IGSPR4700925.sql on worker 1.
Reason :
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\igs\12.0.0\patch\115\sql\IGSPR4700925.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
WHERE CURRENT OF c_pr_inst_sta_ref;
*
ERROR at line 29:
ORA-06550: line 29, column 26:
PLS-00404: cursor 'C_PR_INST_STA_REF' must be declared with FOR UPDATE to use
with CURRENT OF
Solution: compile the invalid objects, and disable and re-enable the events parameter in the init.ora file
Time is: Sat May 15 2010 21:34:01
FAILED: file jtf_diagnostic_report_s.xdf on worker 3 for product fnd username APPS.
Time is: Sun May 16 2010 00:52:53
FAILED: file PO_VENDORS_OBS_F1.xdf on worker 2 for product po username APPS.
Time is: Sun May 16 2010 00:52:54
FAILED: file PO_VENDOR_CONTACTS_OBS_N4.xdf on worker 3 for product po username APPS.
Time is: Sun May 16 2010 00:53:40
FAILED: file jtfiaitn.sql on worker 4 for product jtf username JTF.
Reason :
Start time for file is: Sun May 16 2010 00:53:31
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\jtf\12.0.0\patch\115\sql\jtfiaitn.sql &un_jtf &pw_jtf
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
Connected.
DECLARE
*
ERROR at line 1:
ORA-00955: name is already used by an existing object
ORA-06512: at "SYS.DBMS_SQL", line 1053
ORA-06512: at line 35
Time is: Sun May 16 2010 02:03:29
FAILED: file eamasso.odf on worker 3 for product eam username EAM.
Time is: Sun May 16 2010 02:03:48
FAILED: file dppobjects.odf on worker 4 for product dpp username DPP.
Time is: Sun May 16 2010 02:05:22
FAILED: file egov083.odf on worker 3 for product ego username EGO.
Start time for statement below is: Sun May 16 2010 02:02:04
D Worker error:
The following ORACLE error:
ORA-00600: internal error code, arguments: [kksfbc-reparse-infinite-loop], [0x549E78EC], [], [], [], [], [], [], [], [], [], []
occurred while executing the SQL statement:
CREATE OR REPLACE FORCE VIEW APPS.CSM_REQ_LINES_V ("ACCESS_ID",
"REQUIREMENT_LINE_ID", "REQUIREMENT_HEADER_ID", "INVENTORY_ITEM_ID",
"ITEM_NAME", "ITEM_DESCRIPTION", "UOM_CODE", "REQUIRED_QUANTITY",
"SHIP_COMPLETE_FLAG", "REVISION", "SOURCE_ORGANIZATION_ID",
"SOURCE_SUBINVENTORY", "ORDERED_QUANTITY", "ORDER_LINE_ID",
"RESERVATION_ID", "LOCAL_RESERVATION_ID", "ORDER_BY_DATE", "ARRIVAL_DATE",
"SHIPPING_METHOD_CODE", "LAST_UPDATE_DATE", "ORDER_NUMBER",
"SOURCED_FROM", "PO_REQUISITION_NUMBER", "ORDER_STATUS",
"ATTRIBUTE_CATEGORY", "ATTRIBUTE1", "ATTRIBUTE2", "ATTRIBUTE3",
"ATTRIBUTE4", "ATTRIBUTE5", "ATTRIBUTE6", "ATTRIBUTE7", "ATTRIBUTE8",
"ATTRIBUTE9", "ATTRIBUTE10", "ATTRIBUTE11", "ATTRIBUTE12", "ATTRIBUTE13",
"ATTRIBUTE14", "ATTRIBUTE15", "LIKELIHOOD", "ITEM_SCRATCHPAD",
"SCHEDULE_ARRIVAL_DATE", "WAYBILL_NUMBER", "ACTUAL_SHIPMENT_DATE",
"SHIPPING_METHOD_MEANING", "SOURCED_FROM_DISP", "SOURCE_ID", "CREATED_BY",
"LAST_UPDATED_BY") AS SELECT acc.access_id , lines.requirement_line_id ,
lines.requirement_header_id , lines.inventory_item_id ,
csm_util_pkg.item_name(msi.concatenated_segments) AS item_name,
msi.description AS item_description, lines.uom_code ,
lines.required_quantity , lines.ship_complete_flag , lines.revision ,
lines.source_organization_id , lines.source_subinventory ,
lines.ordered_quantity , lines.order_line_id , lines.reservation_id ,
lines.local_reservation_id , lines.order_by_date ,
nvl(oe_lines.actual_arrival_date,oe_lines.schedule_arrival_date)
arrival_date, oe_lines.shipping_method_code, lines.last_update_date,
oe_headers.order_number, lines.sourced_from, null po_requisition_number,
CSP_PICK_UTILS.get_order_status (oe_lines.LINE_ID,
oe_lines.FLOW_STATUS_CODE) order_status, lines.attribute_category,
lines.attribute1, lines.attribute2, lines.attribute3, lines.attribute4,
lines.attribute5, lines.attribute6, lines.attribute7, lines.attribute8,
lines.attribute9, lines.attribute10, lines.attribute11, lines.attribute12,
lines.attribute13, lines.attribute14, lines.attribute15 ,
lines.LIKELIHOOD, lines.ITEM_SCRATCHPAD,oe_lines.schedule_arrival_date,
csp_pick_utils.get_attribute_value('WAYBILL') waybill_number,
oe_lines.actual_shipment_date, flv.meaning shipping_method_meaning,
fl.meaning SOURCED_FROM_DISP, rld.source_id,
CSM_UTIL_PKG.GET_NUMBER(lines.created_by) AS CREATED_BY,
CSM_UTIL_PKG.GET_NUMBER(lines.last_updated_by) AS LAST_UPDATED_BY FROM
csm_req_lines_acc acc, csp_requirement_lines lines, oe_order_lines_all
oe_lines, oe_order_headers_all oe_headers, mtl_system_items_b_kfv msi,
csp_req_line_details rld , fnd_lookup_values flv, fnd_lookups fl WHERE
acc.user_id = asg_base.get_user_id AND acc.requirement_line_id =
lines.requirement_line_id AND (lines.sourced_from = 'INVENTORY' OR
lines.sourced_from is null) AND lines.requirement_line_id =
rld.requirement_line_id(+) AND rld.source_id = oe_lines.line_id(+) AND
oe_lines.header_id = oe_headers.header_id(+) AND msi.inventory_item_id =
lines.inventory_item_id AND msi.organization_id =
nvl(lines.source_organization_id,
csm_profile_pkg.get_organization_id(acc.user_id)) and flv.LOOKUP_TYPE(+) =
'SHIP_METHOD' and flv.LANGUAGE(+) = asg_base.get_language and
Resetting events to create views with compilation ...
AD Worker error:
The following ORACLE error:
ORA-01012: not logged on
occurred while executing the SQL statement:
alter session set events '10520 trace name context off'
Solution: execute the view creation manually
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\igs\12.0.0\patch\115\sql\IGSAD4287260A.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
DECLARE
*
ERROR at line 1:
ORA-20100: Error due to ORA-06502: PL/SQL: numeric or value error: character to
number conversion error
ORA-06512: at line 146
After resolving the above issues, the database upgrade to 11g was done, along with the database migration to 64-bit.
============
Custom 6i forms and reports files should be migrated to the new 10g forms/reports format using the following commands,
frmcmp userid=apps/password@prodsid module=XXPER_ENTITLEMENT.fmb batch=yes module_type=form
::compile_forms.bat
cls
Echo compiling Forms....
for %%f IN (*.fmb) do frmcmp userid=apps/password@testerp module=%%f batch=yes module_type=form compile_all=yes window_state=minimize
ECHO FINISHED COMPILING
REM WINDOWS COMPILE REPORT
::compile_report.bat
cls
Echo compiling Report .....
for %%f IN (*.rdf) do rwconverter userid=apps/password@testerp batch=yes source=%%f stype=rdffile DTYPE=rdffile compile_all=yes OVERWRITE=yes logfile=replog.txt
ECHO FINISHED COMPILING
=============
STEPS FOR DATABASE UPGRADE--ROUGH
Oracle home installation for 11.1.0.7:
Database binary installation –
started at 07:15 PM, finished at 07:32 PM
Examples installation –
started at 07:34 PM, finished at 07:38 PM
Oracle Database 11.1 Post-Upgrade Status Tool 05-12-2010 05:32:04
.
Component Status Version HH:MM:SS
.
Oracle Server
. VALID 11.1.0.7.0 01:04:22
JServer JAVA Virtual Machine
. VALID 11.1.0.7.0 00:15:13
Oracle Real Application Clusters
. INVALID 11.1.0.7.0 00:00:04
OLAP Analytic Workspace
. VALID 11.1.0.7.0 00:00:41
OLAP Catalog
. VALID 11.1.0.7.0 00:01:59
Oracle OLAP API
. VALID 11.1.0.7.0 00:00:44
Oracle XDK
. VALID 11.1.0.7.0 00:03:19
Oracle Text
. VALID 11.1.0.7.0 00:02:40
Oracle XML Database
. VALID 11.1.0.7.0 00:04:36
Oracle Database Java Packages
. VALID 11.1.0.7.0 00:00:46
Oracle Multimedia
. VALID 11.1.0.7.0 00:05:41
Spatial
. VALID 11.1.0.7.0 00:08:17
Gathering Statistics
. 00:21:59
Total Upgrade Time: 02:10:29
The job started at COMP_TIMESTAMP UPGRD__BGN 2010-05-12 03:14:13 and completed at 05:32 AM,
so the total time is about 2 1/2 hours.
The 2nd job script ran from 05:32 to 08:15,
again about 2 1/2 hours.
So, about 5 hours in total for this job.
====================
IMPORTANT FOR PROD ATG UPGRADE ---> ROUGH
6502082 / AR
5989593 / AR post step adgrants
4252319
3218526 / AR
3854951 / AR
3761838 / AR
4206794 / AR
./txkprepatchcheck.pl -script=ValidateRollup -outfile=$APPLTMP/txkValidateRollup.html -appspass=apps
5903765 /AR
4619025
4583125
Run Autoconfig
CREATE TABLESPACE APPS_TS_TOOLS DATAFILE
'datafile_name.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
Copy the following font files from $FND_TOP/resource to the /lib/fonts directory (under OA_JRE_TOP and JRE_TOP) on all Web and concurrent nodes.
If $FND_TOP/resource does not contain the font files, apply patch 3639533: Albany (Display) Font 4.02 Release.
The font files are:
ALBANYWT.ttf - "single-byte" characters only
ALBANWTJ.ttf - Unicode font with Japanese Han Ideographs
ALBANWTK.ttf - Unicode font with Korean Han Ideographs
ALBANWTS.ttf - Unicode font with Simplified Chinese Han Ideographs
ALBANWTT.ttf - Unicode font with Traditional Chinese Han Ideographs
select bug_number,CREATION_DATE from ad_bugs where bug_number in ('6502082','5989593','4252319','3218526','3854951','3761838','4206794','5903765','4619025','4583125');
BUG_NUMBER CREATION_
------------------------------ ---------
4252319 10-MAY-10
4583125 11-MAY-10
5903765 10-MAY-10
select bug_number,CREATION_DATE from ad_bugs where bug_number in ('3639533');
===========********===========
The post-upgrade tasks have many steps; one of them is as follows.
Human Resources
Important: Customers using Oracle iRecruitment and running database
11gR1 version 11.1.0.7 must add the following line to their database
initialization file:
event="31151 trace name context forever, level 0x100"
Setting this event provides a fix for the issue reported in bug 6952870.
===============================
If you face any issues in the upgrade, please don't hesitate to contact me at masamimalik@yahoo.com or call me on my cell 009743164959
Also refer to the oracle material from the following link that gives the details on how to upgrade.
http://download.oracle.com/docs/cd/B53825_05/current/acrobat/121upgrade.pdf
1) clone the apps tier and the db tier from PROD to TEST (perform this on TEST DB)
2) Apply the ATG RUP 6
Apply these patches on 11.5.10.2 using 9i database
4252319 - 06:48 PM - 06:50 (total 2 min)
5903765 - 06:52 PM - 08:30 PM
starts with 315 jobs
there will be 14000 invali objects
while running autoconfig , there were file locks .. so press ctrl +c and then reboot the server + then restart the adpatch session
5903765_AR
4583125
7705743
before upgrade and running the utlu111i.sql copy the timezone files from the patch or from the oacore directory of any existing 9.2.0 oracle home under f:
3)Upgrade the database to 11gR1 on the same TEST DB server
4)convert to multiorg
5) Install the filesystem pointing to the same driver
6) Apply these patches
6767273
7461070
7) The upgrade American english driver patch starts with 119322 jobs
workers will fail at
Time is: Sat May 15 2010 02:36:41
1) There are now 119259 jobs remaining (current phase=A0):
4 running, 60 ready to run and 119195 waiting.
FAILED: file funmochk.sql on worker 2 for product fun username FUN
Reason:
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\fun\12.0.0\patch\115\sql\funmochk.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.DECLARE
*
ERROR at line 1:
ORA-20000: Multi-Org architecture is not enabled. Please convert the target
instance to enable Multi-Org before upgrading to Release 12
ORA-06512: at line 19
Solution - enable multiorg
2) Time is: Mon May 17 2010 12:08:34
There are now 38169 jobs remaining (current phase=A144):
0 running, 0 ready to run and 38169 waiting.
ATTENTION: All workers either have failed or are waiting:
FAILED: file MSDODPCODE.sql on worker 1.
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\msd\12.0.0\patch\115\sql\MSDODPCODE.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.begin
*
ERROR at line 1:
ORA-33292: Insufficient permissions to access analytic workspace APPS.ODPCODE
using the specified access mode.
ORA-06512: at "SYS.DBMS_AW", line 67
ORA-06512: at "SYS.DBMS_AW", line 131
ORA-06512: at "SYS.DBMS_AW", line 984
ORA-06512: at "SYS.DBMS_AW", line 937
ORA-33262: Analytic workspace ODPCODE does not exist.
ORA-06512: at "SYS.DBMS_AW", line 908
ORA-06512: at "APPS.MSD_AW_LOADER", line 10
ORA-06512: at line 3
FAILED: file pqbgtasd.sql on worker 4.
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\pqh\12.0.0\patch\115\sql\pqbgtasd.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
PL/SQL procedure successfully completed.
Commit complete.
API User Hook Pre-Processor One Module Report
=============================================
Created on 2010/05/17 12:08:31 (YYYY/MM/DD HH:MM:SS)
CREATE_BUDGET(Business Process API) successful.
Module Name: UPDATE_BUDGET
Module Type: Business Process API
Hook Package: PQH_BUDGETS_BK2
-----------------------------
Hook: Before Process
--------------------
(XX_RETURN_FROM_LEAVE_PKG.LEAVE_DEL) There is a parameter to the call procedure
which is not available at this hook. Check your call procedure parameters. Code
to carry out this hook call has not been created.
DELETE_BUDGET(Business Process API) successful.
BEGIN hr_api_user_hooks_utility.clear_hook_report; END;
Reason : hookup
solution delete it .
3) Time is: Tue May 18 2010 01:20:39
ATTENTION: All workers either have failed or are waiting:
FAILED: file czhist.sql on worker 1.
Start time for file is: Tue May 18 2010 01:11:38
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\cz\12.0.0\patch\115\sql\czhist.sql &un_cz
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
PL/SQL procedure successfully completed.
BEGIN FND_STATS.GATHER_COLUMN_STATS('CZ','CZ_DEVL_PROJECTS','DELETED_FLAG'); END;
*
ERROR at line 1:
ORA-20002: Version of statistics table APPLSYS.FND_STATTAB is too old. Please
try upgrading it with dbms_stats.upgrade_stat_table
ORA-06512: at "APPS.FND_STATS", line 664
ORA-06512: at "APPS.FND_STATS", line 2497
ORA-06512: at line 1
4)
Time is: Sun May 16 2010 03:04:48
ATTENTION: All workers either have failed or are waiting:
FAILED: file csppl.odf on worker 1.
FAILED: file csprecm.odf on worker 2.
FAILED: file csdcsdv.odf on worker 3.
FAILED: file csmcsma.odf on worker 4.
Time is: Sun May 16 2010 09:24:04
There are now 81692 jobs remaining (current phase=A70):
3 running, 115 ready to run and 81574 waiting.
ATTENTION: All workers either have failed or are waiting:
FAILED: file csppl.odf on worker 1.
FAILED: file csprecm.odf on worker 2.
FAILED: file csdcsdv.odf on worker 3.
FAILED: file csmcsma.odf on worker 4.
ATTENTION: All workers either have failed or are waiting:
FAILED: file bivdelmenu.sql on worker 1.
FAILED: file BIVMEDE71.sql on worker 2.
FAILED: file amwmndel.sql on worker 3.
FAILED: file arhdelmn.sql on worker 4.
Reason :Start time for file is: Sun May 16 2010 15:52:28
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\amw\12.0.0\patch\115\sql\amwmndel.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.BEGIN
*
ERROR at line 1:
ORA-04063: package body "APPS.FND_FUNCTION_SECURITY" has errors
ORA-06508: PL/SQL: could not find program unit being called:
"APPS.FND_FUNCTION_SECURITY"
ORA-06512: at line 2
ATTENTION: All workers either have failed or are waiting:
FAILED: file bivdelmenu.sql on worker 1.
FAILED: file BIVMEDE71.sql on worker 2.
FAILED: file amwmndel.sql on worker 3.
FAILED: file arhdelmn.sql on worker 4.
Time is: Sun May 16 2010 22:52:18
ATTENTION: All workers either have failed or are waiting:
FAILED: file IGSPR4700925.sql on worker 1.
Reason :
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\igs\12.0.0\patch\115\sql\IGSPR4700925.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system. WHERE CURRENT OF c_pr_inst_sta_ref;
*
ERROR at line 29:
ORA-06550: line 29, column 26:
PLS-00404: cursor 'C_PR_INST_STA_REF' must be declared with FOR UPDATE to use
with CURRENT OF
solution : invalid objects and disable and enable the events parameter in the init.ora file
Time is: Sat May 15 2010 21:34:01
FAILED: file jtf_diagnostic_report_s.xdf on worker 3 for product fnd username APPS.
Time is: Sun May 16 2010 00:52:53
FAILED: file PO_VENDORS_OBS_F1.xdf on worker 2 for product po username APPS.
Time is: Sun May 16 2010 00:52:54
FAILED: file PO_VENDOR_CONTACTS_OBS_N4.xdf on worker 3 for product po username APPS.
Time is: Sun May 16 2010 00:53:40
FAILED: file jtfiaitn.sql on worker 4 for product jtf username JTF.
Reason :
Start time for file is: Sun May 16 2010 00:53:31
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\jtf\12.0.0\patch\115\sql\jtfiaitn.sql &un_jtf &pw_jtf
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.Connected.
DECLARE
*
ERROR at line 1:
ORA-00955: name is already used by an existing object
ORA-06512: at "SYS.DBMS_SQL", line 1053
ORA-06512: at line 35
Time is: Sun May 16 2010 02:03:29
FAILED: file eamasso.odf on worker 3 for product eam username EAM.
Time is: Sun May 16 2010 02:03:48
FAILED: file dppobjects.odf on worker 4 for product dpp username DPP.
Time is: Sun May 16 2010 02:05:22
FAILED: file egov083.odf on worker 3 for product ego username EGO.
Start time for statement below is: Sun May 16 2010 02:02:04
D Worker error:
The following ORACLE error:
ORA-00600: internal error code, arguments: [kksfbc-reparse-infinite-loop], [0x549E78EC], [], [], [], [], [], [], [], [], [], []
occurred while executing the SQL statement:
CREATE OR REPLACE FORCE VIEW APPS.CSM_REQ_LINES_V ("ACCESS_ID",
"REQUIREMENT_LINE_ID", "REQUIREMENT_HEADER_ID", "INVENTORY_ITEM_ID",
"ITEM_NAME", "ITEM_DESCRIPTION", "UOM_CODE", "REQUIRED_QUANTITY",
"SHIP_COMPLETE_FLAG", "REVISION", "SOURCE_ORGANIZATION_ID",
"SOURCE_SUBINVENTORY", "ORDERED_QUANTITY", "ORDER_LINE_ID",
"RESERVATION_ID", "LOCAL_RESERVATION_ID", "ORDER_BY_DATE", "ARRIVAL_DATE",
"SHIPPING_METHOD_CODE", "LAST_UPDATE_DATE", "ORDER_NUMBER",
"SOURCED_FROM", "PO_REQUISITION_NUMBER", "ORDER_STATUS",
"ATTRIBUTE_CATEGORY", "ATTRIBUTE1", "ATTRIBUTE2", "ATTRIBUTE3",
"ATTRIBUTE4", "ATTRIBUTE5", "ATTRIBUTE6", "ATTRIBUTE7", "ATTRIBUTE8",
"ATTRIBUTE9", "ATTRIBUTE10", "ATTRIBUTE11", "ATTRIBUTE12", "ATTRIBUTE13",
"ATTRIBUTE14", "ATTRIBUTE15", "LIKELIHOOD", "ITEM_SCRATCHPAD",
"SCHEDULE_ARRIVAL_DATE", "WAYBILL_NUMBER", "ACTUAL_SHIPMENT_DATE",
"SHIPPING_METHOD_MEANING", "SOURCED_FROM_DISP", "SOURCE_ID", "CREATED_BY",
"LAST_UPDATED_BY") AS SELECT acc.access_id , lines.requirement_line_id ,
lines.requirement_header_id , lines.inventory_item_id ,
csm_util_pkg.item_name(msi.concatenated_segments) AS item_name,
msi.description AS item_description, lines.uom_code ,
lines.required_quantity , lines.ship_complete_flag , lines.revision ,
lines.source_organization_id , lines.source_subinventory ,
lines.ordered_quantity , lines.order_line_id , lines.reservation_id ,
lines.local_reservation_id , lines.order_by_date ,
nvl(oe_lines.actual_arrival_date,oe_lines.schedule_arrival_date)
arrival_date, oe_lines.shipping_method_code, lines.last_update_date,
oe_headers.order_number, lines.sourced_from, null po_requisition_number,
CSP_PICK_UTILS.get_order_status (oe_lines.LINE_ID,
oe_lines.FLOW_STATUS_CODE) order_status, lines.attribute_category,
lines.attribute1, lines.attribute2, lines.attribute3, lines.attribute4,
lines.attribute5, lines.attribute6, lines.attribute7, lines.attribute8,
lines.attribute9, lines.attribute10, lines.attribute11, lines.attribute12,
lines.attribute13, lines.attribute14, lines.attribute15 ,
lines.LIKELIHOOD, lines.ITEM_SCRATCHPAD,oe_lines.schedule_arrival_date,
csp_pick_utils.get_attribute_value('WAYBILL') waybill_number,
oe_lines.actual_shipment_date, flv.meaning shipping_method_meaning,
fl.meaning SOURCED_FROM_DISP, rld.source_id,
CSM_UTIL_PKG.GET_NUMBER(lines.created_by) AS CREATED_BY,
CSM_UTIL_PKG.GET_NUMBER(lines.last_updated_by) AS LAST_UPDATED_BY FROM
csm_req_lines_acc acc, csp_requirement_lines lines, oe_order_lines_all
oe_lines, oe_order_headers_all oe_headers, mtl_system_items_b_kfv msi,
csp_req_line_details rld , fnd_lookup_values flv, fnd_lookups fl WHERE
acc.user_id = asg_base.get_user_id AND acc.requirement_line_id =
lines.requirement_line_id AND (lines.sourced_from = 'INVENTORY' OR
lines.sourced_from is null) AND lines.requirement_line_id =
rld.requirement_line_id(+) AND rld.source_id = oe_lines.line_id(+) AND
oe_lines.header_id = oe_headers.header_id(+) AND msi.inventory_item_id =
lines.inventory_item_id AND msi.organization_id =
nvl(lines.source_organization_id,
csm_profile_pkg.get_organization_id(acc.user_id)) and flv.LOOKUP_TYPE(+) =
'SHIP_METHOD' and flv.LANGUAGE(+) = asg_base.get_language and
(The CREATE VIEW statement is truncated at this point in the worker log.)
Resetting events to create views with compilation ...
AD Worker error:
The following ORACLE error:
ORA-01012: not logged on
occurred while executing the SQL statement:
alter session set events '10520 trace name context off'
Solution: execute the view creation manually:
sqlplus -s APPS/***** @F:\R12\apps\apps_st\appl\igs\12.0.0\patch\115\sql\IGSAD4287260A.sql
Error:
Program exited with status 1
Cause: The program terminated, returning status code 1.
Action: Check your installation manual for the meaning of this code on this operating system.
DECLARE
*
ERROR at line 1:
ORA-20100: Error due to ORA-06502: PL/SQL: numeric or value error: character to
number conversion error
ORA-06512: at line 146
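An ORA-06502 "character to number conversion error" is raised when PL/SQL implicitly or explicitly converts a non-numeric string to NUMBER; "line 146" refers to the anonymous block inside the patch script, which is not shown in the log. A minimal illustration of the error class (not the actual patch code):

```sql
DECLARE
  n NUMBER;
BEGIN
  n := TO_NUMBER('ABC');  -- raises ORA-06502: character to number conversion error
END;
/
```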
After resolving the above issue, we did the database upgrade to 11g along with the migration to 64-bit.
============
Custom 6i Forms and Reports files should be migrated to the new 10g Forms/Reports format using the following scripts:
frmcmp userid=apps/password@prodsid module=XXPER_ENTITLEMENT.fmb batch=yes module_type=form
::compile_forms.bat
cls
Echo compiling Forms....
for %%f IN (*.fmb) do frmcmp userid=apps/password@testerp module=%%f batch=yes module_type=form compile_all=yes window_state=minimize
ECHO FINISHED COMPILING
REM WINDOWS COMPILE REPORT
::compile_report.bat
cls
Echo compiling Report .....
for %%f IN (*.rdf) do rwconverter userid=apps/password@testerp batch=yes source=%%f stype=rdffile DTYPE=rdffile compile_all=yes OVERWRITE=yes logfile=replog.txt
ECHO FINISHED COMPILING
=============
STEPS FOR DATABASE UPGRADE -- ROUGH
Oracle home installation for 11.1.0.7:
Database binary installation -
started at 07:15 PM, finished at 07:32 PM
Examples installation -
started at 07:34 PM, finished at 07:38 PM
Oracle Database 11.1 Post-Upgrade Status Tool 05-12-2010 05:32:04
Component                          Status    Version      HH:MM:SS
Oracle Server                      VALID     11.1.0.7.0   01:04:22
JServer JAVA Virtual Machine       VALID     11.1.0.7.0   00:15:13
Oracle Real Application Clusters   INVALID   11.1.0.7.0   00:00:04
OLAP Analytic Workspace            VALID     11.1.0.7.0   00:00:41
OLAP Catalog                       VALID     11.1.0.7.0   00:01:59
Oracle OLAP API                    VALID     11.1.0.7.0   00:00:44
Oracle XDK                         VALID     11.1.0.7.0   00:03:19
Oracle Text                        VALID     11.1.0.7.0   00:02:40
Oracle XML Database                VALID     11.1.0.7.0   00:04:36
Oracle Database Java Packages      VALID     11.1.0.7.0   00:00:46
Oracle Multimedia                  VALID     11.1.0.7.0   00:05:41
Spatial                            VALID     11.1.0.7.0   00:08:17
Gathering Statistics                                      00:21:59
Total Upgrade Time: 02:10:29
The job started at COMP_TIMESTAMP UPGRD__BGN 2010-05-12 03:14:13 and completed at 05:32 AM,
so the total time was about 2 1/2 hours.
The second job script ran from 05:32 to 08:15,
another 2 1/2 hours or so,
so this job took about 5 hours in total.
====================
IMPORTANT FOR PROD ATG UPGRADE ---> ROUGH
6502082 / AR
5989593 / AR post step adgrants
4252319
3218526 / AR
3854951 / AR
3761838 / AR
4206794 / AR
./txkprepatchcheck.pl -script=ValidateRollup -outfile=$APPLTMP/txkValidateRollup.html -appspass=apps
5903765 /AR
4619025
4583125
Run AutoConfig
CREATE TABLESPACE APPS_TS_TOOLS DATAFILE
'datafile_name.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
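A quick way to confirm the new tablespace was created with local, system-managed extents (a verification query added here, not part of the original notes):

```sql
SELECT tablespace_name, extent_management, allocation_type
  FROM dba_tablespaces
 WHERE tablespace_name = 'APPS_TS_TOOLS';
```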
Copy the following font files from $FND_TOP/resource to the /lib/fonts directory (under OA_JRE_TOP and JRE_TOP) on all Web and concurrent nodes.
If $FND_TOP/resource does not contain the font files, apply patch 3639533: Albany (Display) Font 4.02 Release.
The font files are:
ALBANYWT.ttf - "single-byte" characters only
ALBANWTJ.ttf - Unicode font with Japanese Han Ideographs
ALBANWTK.ttf - Unicode font with Korean Han Ideographs
ALBANWTS.ttf - Unicode font with Simplified Chinese Han Ideographs
ALBANWTT.ttf - Unicode font with Traditional Chinese Han Ideographs
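The copy step above can be scripted. The sketch below is an assumption about your layout: $FND_TOP, $OA_JRE_TOP, and $JRE_TOP must already be set by sourcing the APPS environment file on each node, and fonts missing from $FND_TOP/resource are simply skipped (apply patch 3639533 first in that case).

```shell
# copy_albany_fonts DESTDIR
# Copies the five Albany font files from $FND_TOP/resource into DESTDIR,
# skipping any font file that is not present in $FND_TOP/resource.
copy_albany_fonts() {
  dest="$1"
  for f in ALBANYWT.ttf ALBANWTJ.ttf ALBANWTK.ttf ALBANWTS.ttf ALBANWTT.ttf; do
    if [ -f "$FND_TOP/resource/$f" ]; then
      cp "$FND_TOP/resource/$f" "$dest/"
    fi
  done
}

# On a real Web or concurrent node you would run, for example:
#   copy_albany_fonts "$OA_JRE_TOP/lib/fonts"
#   copy_albany_fonts "$JRE_TOP/lib/fonts"
```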
select bug_number,CREATION_DATE from ad_bugs where bug_number in ('6502082','5989593','4252319','3218526','3854951','3761838','4206794','5903765','4619025','4583125');
BUG_NUMBER CREATION_
------------------------------ ---------
4252319 10-MAY-10
4583125 11-MAY-10
5903765 10-MAY-10
Only three of the ten patches appear in ad_bugs, so the remaining ones still need to be applied.
select bug_number,CREATION_DATE from ad_bugs where bug_number in ('3639533');
===========********===========
The post-upgrade tasks have many steps; one of them is as follows.
Human Resources
Important: Customers using Oracle iRecruitment and running database
11gR1 version 11.1.0.7 must add the following line to their database
initialization file:
event="31151 trace name context forever, level 0x100"
Setting this event provides a fix for the issue reported in bug 6952870.
===============================
If you face any issues during the upgrade, please don't hesitate to contact me at masamimalik@yahoo.com or call me on my cell at 009743164959.