Thursday, December 31, 2020

Oracle Autonomous Health Framework (AHF)

Autonomous Health Framework (AHF) provides a number of diagnostic tools in a single bundle, making it easy to gather diagnostic information about the Oracle database and clusterware, which in turn helps with problem resolution when dealing with Oracle Support. AHF combines the existing Trace File Analyzer (TFA) and ORAchk/EXAchk, along with several other support tools, into a single download and setup. For any scenario where you would previously have installed one of these individual tools, you should now install AHF.

Autonomous Health Framework (AHF) ships with new releases of the database, but you should always download the latest version. It is a single installer for Trace File Analyzer (TFA) and ORAchk/EXAchk:
Autonomous Health Framework (AHF) - Including TFA and ORAchk/EXAChk (Doc ID 2550798.1)
If possible you should install as root, as this gives you the richest capabilities. Unzip the software and run the ahf_setup command, answering the questions when prompted.
Download the appropriate installer from the note above, copy it to the target machine and unzip it. Then run the TFA & ORAchk/EXAchk install command:

./ahf_setup [-ahf_loc install_dir] [-data_dir data_dir]

# cd /u01/software
# unzip -oq AHF-LINUX_v20.4.0.zip
#
# ./ahf_setup

AHF Installation Log : /tmp/ahf_install_7860_2020_12_31-11_11_11.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 204000 Build Date: 202012317860

Default AHF Location : /opt/oracle.ahf

Do you want to change AHF Location (/opt/oracle.ahf) ? Y|[N] :

AHF Location : /opt/oracle.ahf

AHF Data Directory stores diagnostic collections and metadata.
AHF Data Directory requires at least 5GB (Recommended 10GB) of free space.

Choose Data Directory from below options :

1. /opt/oracle.ahf [Free Space : 1007860 MB]
2. Enter a different Location

Choose Option [1 - 2] : 1

AHF Data Directory : /opt/oracle.ahf/data

Do you want to add AHF Notification Email IDs ? [Y]|N :

Enter Email IDs separated by space : sami.malik@oracle.com

Extracting AHF to /opt/oracle.ahf

Configuring TFA Services

Discovering Nodes and Oracle Resources

Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

.------------------------------------------------------------------------------.
| Host      | Status of TFA | PID  | Port  | Version    | Build ID             |
+-----------+---------------+------+-------+------------+----------------------+
| localhost | RUNNING       | 7860 | 24035 | 20.4.0.0.0 | 204000202012311007860|
'-----------+---------------+------+-------+------------+----------------------'

Running TFA Inventory...

Adding default users to TFA Access list...

.-------------------------------------------------------.
|              Summary of AHF Configuration             |
+-----------------+-------------------------------------+
| Parameter       | Value                               |
+-----------------+-------------------------------------+
| AHF Location    | /opt/oracle.ahf                     |
| TFA Location    | /opt/oracle.ahf/tfa                 |
| Orachk Location | /opt/oracle.ahf/orachk              |
| Data Directory  | /opt/oracle.ahf/data                |
| Repository      | /opt/oracle.ahf/data/repository     |
| Diag Directory  | /opt/oracle.ahf/data/localhost/diag |
'-----------------+-------------------------------------'

AHF binaries are available in /opt/oracle.ahf/bin

AHF is successfully installed

Moving /tmp/ahf_install_7860_2020_12_31-11_11_11.log to /opt/oracle.ahf/data/localhost/diag/ahf/

As shown above, the tfactl and orachk commands are available from the following directory.

/opt/oracle.ahf/bin

[root@ed-ol-rac-lin01::~]$ exachk -v

EXACHK  VERSION: 20.4.0_20201214

As part of the installation, the 'oracle-tfa.service' service is enabled and started.

# systemctl status oracle-tfa.service

● oracle-tfa.service - Oracle Trace File Analyzer

   Loaded: loaded (/etc/systemd/system/oracle-tfa.service; enabled; vendor preset: disabled)

   Active: active (running) since Wed 2020-12-22 11:11:11 NZDT; 1 weeks 2 days ago

 Main PID: 1007860 (init.tfa)

   CGroup: /system.slice/oracle-tfa.service

           ├─ 1007860 /bin/sh /etc/init.d/init.tfa run >/dev/null 2>&1 </dev/null

           ├─ 12345 /opt/oracle.ahf/jre/bin/java -server -Xms256m -Xmx512m -Djava.awt.headless=true -Ddisable.checkForUpdate=true -XX:HeapDump...

           └─284747 /bin/sleep 30

Dec 31 09:39:54 ed-ol-raclin01 su[195700]: (to oracle) root on none

Dec 31 09:39:59 ed-ol-raclin01 su[196070]: (to root) root on none

Dec 31 09:39:59 ed-ol-raclin01 su[196075]: (to root) root on none

Dec 31 09:39:59 ed-ol-raclin01 su[196079]: (to oracle) root on none

Dec 31 09:39:59 ed-ol-raclin01 su[196086]: (to oracle) root on none

Dec 31 09:39:59 ed-ol-raclin01 su[196091]: (to oracle) root on none

Dec 31 09:40:00 ed-ol-raclin01 su[196265]: (to oracle) root on none

Dec 31 09:40:00 ed-ol-raclin01 su[196268]: (to oracle) root on none

Dec 31 09:40:00 ed-ol-raclin01 su[196271]: (to oracle) root on none

Dec 31 09:40:00 ed-ol-raclin01 su[196276]: (to oracle) root on none

Warning: oracle-tfa.service changed on disk. Run 'systemctl daemon-reload' to reload units.

Use the 'systemctl daemon-reload' command to reload the units.

# systemctl daemon-reload

  • For non-root installations, the tfactl and orachk commands are available from the following directory.
$ORACLE_HOME/ahf/oracle.ahf/bin

With the installation complete we can use the orachk command to check the services running on the server. This can be done on-demand, or run as a background task, which will send an email to the notification email addresses.

# as root

cd /opt/oracle.ahf/bin

# as non-root

cd $ORACLE_HOME/ahf/oracle.ahf/bin

# on-demand

./orachk

# background

./orachk -autostart

Once this is complete, the oracle-orachkscheduler.service service will be enabled and running.

# systemctl status oracle-orachkscheduler.service

Run a TFA Collection

With the installation complete we can use the tfactl command to perform a number of collections, including TFA Service Request Data Collections (SRDC). There are a large number of SRDC collection types, each gathering different information for a specific type of problem.

Here are a few examples.

# root
cd /opt/oracle.ahf/bin
# non-root
cd $ORACLE_HOME/ahf/oracle.ahf/bin


# Gather information about errors. You are prompted to select a specific incident.
./tfactl diagcollect -srdc ORA-00600
./tfactl diagcollect -srdc ORA-07445

# Collect data for all components for a specific time period.
./tfactl diagcollect -from "2020-12-15 11:11:11" -to "2020-12-31 11:11:11"

# Collect data for all components for the last 12 hours.
./tfactl diagcollect

Each TFA collection produces a single zip file that can be uploaded to My Oracle Support (MOS), as described below.

Upload Files to My Oracle Support (MOS)

You can manually upload a zip file produced by TFA or ORAchk by attaching it to your SR through the MOS website in the normal way. An alternative is to get TFA to upload it and attach it to your Service Request (SR) for you. To do this you must provide your MOS credentials, either on the command line or stored in a secure wallet by running the following command as the "root" user.

# root
cd /opt/oracle.ahf/bin
# non-root
cd $ORACLE_HOME/ahf/oracle.ahf/bin

# ./tfactl setupmos
Enter User Id: sami.malik@oracle.com
Enter Password:
SUCCESS - CERTIMPORT - Successfully imported certificate
#

The result of a TFA collection can be uploaded directly to MOS by including the "-sr" option in the command to specify the target SR number.

# MOS credentials supplied by wallet.
./tfactl diagcollect -srdc ORA-00600 -sr 1-12345678910

# MOS credentials supplied on command line. You are prompted for the password.
./tfactl diagcollect -srdc ORA-00600 -sr 1-12345678910 -user sami.malik@oracle.com

You can upload one or more files (both TFA generated and other files) as a separate action using the following commands.

# MOS credentials supplied by wallet.
./tfactl upload -sr 1-12345678910 -wallet file1.zip file2.zip file3.zip

# MOS credentials supplied on command line. You are prompted for the password.
./tfactl upload -sr 1-12345678910 -user sami.malik@oracle.com file1.zip file2.zip file3.zip

The Autonomous Health Framework (AHF) installer includes:

  • TFA - the primary diagnostic collection tool, shipped with the Database Support Tools Bundle. Many of the tools bundled with it are run through tfactl; see the example invocations after the list below.

TFA's memory usage can be limited at either the system memory level, using:

tfactl setresourcelimit -resource kmem

or combined system and swap memory, using:

tfactl setresourcelimit -resource swmem

For example:

To limit the memory usage to only 1GB of system memory run:

tfactl setresourcelimit -resource kmem -value 1024

Alternatively, to limit the combined total of system memory and swap memory to 2GB, use:

tfactl setresourcelimit -resource swmem -value 2048

  • ORAchk / EXAchk
Oracle Stack Compliance Checks

  • procwatcher
Automates capture of database performance diagnostics and session-level hangs
See 459694.1 for more details.

  • events
Reports warnings and errors seen in the logs

  • managelogs
Shows disk space usage and purges ADR log and trace files

  • alertsummary
Provides summary of events for one or more database or ASM alert files from all nodes

  • ls / dir
Lists all files TFA knows about for a given file name pattern across all nodes

  • summary
High level summary of the configuration

  • vi / notepad
Opens alert or trace files, for a given database and file name pattern, in the vi editor

  • tail
Runs a tail on alert or trace files for a given database and file name pattern

  • param
Show all database and OS parameters that match a specified pattern

  • oswatcher
Collect and archive OS metrics, useful for instance/node evictions and performance issues.
See 301137.1 for more details.

  • oratop
Near real-time database monitoring
See 1500864.1 for more details.

  • pstack
Generate process stack for specified processes across all nodes

  • grep / findstr
Search alert or trace files with a given database and file name pattern, for a search string

  • dbglevel
Set and unset multiple CRS trace levels with one command

  • history
Show the shell history for the tfactl shell

  • changes
Report any noted changes in the system setup over a given time period. This includes database parameters, OS parameters, patches applied, etc.

  • calog
Reports major events from the Cluster Event log

  • ps / tasklist
Finds processes

  • triage
Summarize oswatcher/exawatcher data
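
Many of the tools above are run through tfactl. Here are a few illustrative invocations, assuming the default root installation in /opt/oracle.ahf; exact options vary by version, so check tfactl <command> -help:

# cd /opt/oracle.ahf/bin

# High-level configuration summary across nodes
./tfactl summary

# Show ADR log and trace disk usage
./tfactl managelogs -show usage

# Analyze errors and warnings in the logs for the last day
./tfactl analyze -last 1d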

References: 

Oracle Exadata Database Machine exachk or HealthCheck (Doc ID 1070954.1)

Autonomous Health Framework (AHF) - Including Trace File Analyzer (TFA) and ORAchk/EXAchk (Doc ID 2550798.1)


Monday, November 30, 2020

Switch Database to Copy

Oracle's 'switch database to copy' feature can be used to avoid lengthy restore times, and therefore minimize downtime, in the event of a database disaster, or to move a database from slow storage to faster storage.

A quite common scenario is that we have three disk groups, +DATA, +FRA and +REDO, with different performance characteristics, like the following:

+DATA diskgroup is on fast storage (10k rpm)

+FRA diskgroup is on the slowest storage (7200 rpm)

+REDO diskgroup is on very fast storage (15k rpm)

In this example an even faster disk group, +PURE_STORAGE_FLASH, is also mounted, so we will switch the database to use it.



If +DATA is slow and you want to move to faster storage, you can use the 'switch database to copy' feature to do so with near-zero downtime and without any lengthy restore/recover procedure.

Check RMAN backup status

$ rman target=/ catalog rman/pwd_here@mycatdb

Recovery Manager: Release 19.0.0.0.0 - Production on Mon Nov 30 10:27:26 2020

Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

connected to target database: MYTESTDB (DBID=64278080088)

RMAN> list copy of database;

specification does not match any datafile copy in the repository

RMAN> list backup of database;

specification does not match any backup in the repository

Take RMAN backup copy

RMAN> run

{

  allocate channel c1 device type disk format '+PURE_STORAGE_FLASH/%U';

  recover copy of database with tag 'FLASHBLADE';

  backup incremental level 1 for recover of copy with tag 'FLASHBLADE' database;

}
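
Each time this block runs, the RECOVER COPY command rolls the image copy forward by applying the previous level 1 incremental, and a fresh level 1 incremental is then taken; this is the classic incrementally updated image copy pattern. Here is a minimal sketch of a daily wrapper script you could schedule in cron, with hypothetical paths and credentials:

#!/bin/bash
# roll_copy.sh - roll the +PURE_STORAGE_FLASH image copy forward (hypothetical wrapper)
export ORACLE_SID=MYTESTDB
rman target / catalog rman/pwd_here@mycatdb <<'EOF'
run
{
  allocate channel c1 device type disk format '+PURE_STORAGE_FLASH/%U';
  recover copy of database with tag 'FLASHBLADE';
  backup incremental level 1 for recover of copy with tag 'FLASHBLADE' database;
}
EOF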

Now let's have a look at our backup using the RMAN tag 'FLASHBLADE'.

RMAN> list copy of database tag 'FLASHBLADE';

List of Datafile Copies
Key     File S Completion Time Ckp SCN    Ckp Time        Sparse
967     1    A 30-NOV-2020       12760708   29-NOV-2020      NO    
        Name: +PURE_STORAGE_FLASH/MYTESTDB/data_D-MYTESTDB_I-3539279042_TS-SYSTEM_FNO-1_onut4h02
        Tag: FLASHBLADE
961     2    A 30-NOV-2020       12760708   29-NOV-2020      NO    
        Name: +PURE_STORAGE_FLASH/MYTESTDB/data_D-MYTESTDB_I-3539279042_TS-SOE_FNO-2_ohut4gkn
        Tag: FLASHBLADE
968     3    A 30-NOV-2020       12760708   29-NOV-2020      NO    
        Name: +PURE_STORAGE_FLASH/MYTESTDB/data_D-MYTESTDB_I-3539279042_TS-SYSAUX_FNO-3_omut4gvr
        Tag: FLASHBLADE
966     4    A 30-NOV-2020       12760708   29-NOV-2020      NO    
        Name: +PURE_STORAGE_FLASH/MYTESTDB/data_D-MYTESTDB_I-3539279042_TS-UNDOTBS1_FNO-4_olut4gtq
        Tag: FLASHBLADE
962     5    A 30-NOV-2020       12760708   29-NOV-2020      NO    
        Name: +PURE_STORAGE_FLASH/MYTESTDB/data_D-MYTESTDB_I-3539279042_TS-SOE_FNO-5_oiut4gmp
        Tag: FLASHBLADE
965     7    A 30-NOV-2020       12760708   29-NOV-2020      NO    
        Name: +PURE_STORAGE_FLASH/MYTESTDB/data_D-MYTESTDB_I-3539279042_TS-USERS_FNO-7_oout4h09
        Tag: FLASHBLADE
963     8    A 30-NOV-2020       12760708   29-NOV-2020      NO    
        Name: +PURE_STORAGE_FLASH/MYTESTDB/data_D-MYTESTDB_I-3539279042_TS-SOE_FNO-8_ojut4gp4
        Tag: FLASHBLADE
964     9    A 30-NOV-2020       12760708   29-NOV-2020      NO    
        Name: +PURE_STORAGE_FLASH/MYTESTDB/data_D-MYTESTDB_I-3539279042_TS-SOE_FNO-9_okut4gr5
        Tag: FLASHBLADE

Now let’s create some data after our last incremental backup.

SQL> create table t1 as select * from SCOTT.EMP;

SQL> select count(*) from t1;
  COUNT(*)
      22

Before we perform our ‘switch database to copy’ let’s check our datafile locations.

RMAN> REPORT SCHEMA;

Database Details:
Hostname       : ed-ol-raclin01
Database Name  : mytestdb
Date Created   : 22/02/20 11:11:11
Date Started   : 29/11/20 11:11:11
DB Status      : OPEN
Space Allocated:    144.40 GB
Space Used     :     81.15 GB

Data File Name                                Tbsp      Data MB
--------------------------------------------- --------- -------
+DATA/MYTESTDB/DATAFILE/system.286.1025441287     SYSTEM        940
+DATA/MYTESTDB/DATAFILE/sysaux.287.1025441321     SYSAUX      1,310
+DATA/MYTESTDB/DATAFILE/soe.297.1025448855        SOE        30,720
+DATA/MYTESTDB/DATAFILE/users.289.1025441337      USERS          54
+DATA/MYTESTDB/DATAFILE/soe.298.1025449357        SOE        30,720
+DATA/MYTESTDB/DATAFILE/undotbs1.288.1025441337   UNDOTBS1   22,685
+DATA/MYTESTDB/DATAFILE/soe.299.1025452299        SOE        30,720
+DATA/MYTESTDB/DATAFILE/soe.300.1025456873        SOE        30,720
Temp File Name                                Temp MB
--------------------------------------------- ----------
+DATA/MYTESTDB/TEMPFILE/temp.294.1025441421       5,825
BCT File Name
------------------------------------------------------------
+DATA/MYTESTDB/CHANGETRACKING/ctf.296.1025442733
Control File Name
------------------------------------------------------------
+DATA/MYTESTDB/CONTROLFILE/current.290.1025441411
+FRA/MYTESTDB/CONTROLFILE/current.266.1025441411
REDO Member
------------------------------------------------------------
+DATA/MYTESTDB/ONLINELOG/group_3.293.1025441413
+FRA/MYTESTDB/ONLINELOG/group_3.269.1025441415
+DATA/MYTESTDB/ONLINELOG/group_2.292.1025441413
+FRA/MYTESTDB/ONLINELOG/group_2.268.1025441415
+DATA/MYTESTDB/ONLINELOG/group_1.291.1025441413
+FRA/MYTESTDB/ONLINELOG/group_1.267.1025441415

Simulate Disaster

$ sqlplus / as sysdba

SQL> shutdown abort;

SQL> startup mount;

ORACLE instance started.
Total System Global Area 1.6267E+11 bytes
Fixed Size   30140696 bytes
Variable Size 1.9864E+10 bytes
Database Buffers 1.4227E+11 bytes
Redo Buffers   506728448 bytes
Database mounted.

Switch Database to Copy
Now let’s switch the database to use our RMAN image copy backup.
RMAN> switch database to copy;

Recover Database
Now recover the database to apply all changes since the last Incremental backup.

RMAN> recover database;
media recovery complete, elapsed time: 00:00:05
Finished recover at 30-Nov-2020

Open Database
RMAN> alter database open;
Statement processed

Check the previously created test table 't1'.
SQL> select count(*) from t1;
  COUNT(*)
      22

Check Database Details
SELECT * FROM V$DATAFILE;

RMAN> REPORT SCHEMA;

Summary
By using 'switch database to copy' you can minimize downtime by avoiding a 'restore database', and the same feature can be used to switch a database to faster storage, such as flash or SSD, from slower SAS or SATA disks.

Friday, October 30, 2020

dba_flashback_archive Tips

DBA_FLASHBACK_ARCHIVE can be used to monitor Oracle Flashback Data Archive, as it contains the columns flashback_archive_name, flashback_archive#, retention_in_days, create_time, last_purge_time and status to help the DBA administer this flashback feature of Oracle.

Here is the query to show the dba_flashback_archive contents:

select
   flashback_archive_name,
   status
from
   dba_flashback_archive;

The dba_flashback_archive view describes all flashback archives in the database, and its status column indicates whether a flashback archive is the default archive for the database.

Any DBA who enables a flashback data archive can then monitor it through dba_flashback_archive.

Note these limitations on using DDL with 11g flashback data archive.  

With a flashback data archive it is possible to view data at any point in time since the flashback data archive was created. However, attempting to view data as of a timestamp before the data archive was created causes the following ORA-01466 error:

ORA-01466: unable to read data - table definition has changed    

If you try to remove a tablespace that is in use by a flashback data archive, you get the ORA-55641 error:

SQL> alter flashback archive flashback_archive_area_one remove tablespace flashback_tablespace_one;

ORA-55641: Cannot drop tablespace used by Flashback Data Archive.
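
For reference, here is a minimal sketch of the other end of the lifecycle - creating a flashback data archive and enabling it on a table. The archive, tablespace and table names are hypothetical:

SQL> create flashback archive fda_one tablespace fda_ts quota 10g retention 1 year;

SQL> alter table scott.emp flashback archive fda_one;

SQL> select * from scott.emp as of timestamp (systimestamp - interval '1' hour);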

Wednesday, September 30, 2020

Database Migration Assistant for Unicode DMU 19.1

Migrating databases to the Unicode character set can be a frightening task, even for the most experienced DBAs. Without careful planning, accurate data analysis and the right execution strategy, it may result in prolonged downtime, loss of data, or jeopardized system integrity.

Oracle Database Migration Assistant for Unicode (DMU) is a next-generation migration tool that streamlines the entire migration process with an intuitive GUI that minimizes the DBA's manual workload and decision-making. It helps ensure all migration issues are addressed beforehand and the conversion of data is carried out correctly and efficiently. 

The latest version of Oracle DMU 19.1 can be downloaded from this link: https://www.oracle.com/database/technologies/appdev/migration-assistant-unicode.html
(or)
from MOS, download patch 30149496 --> use this one for production databases.

Oracle DMU is a free Oracle Database tool. It is distributed via OTN and via My Oracle Support (MOS). Both download packages are identical, but the OTN download is made available under the OTN Developer License, which allows you to evaluate the tool, while the MOS download is a Program Update under the database support contract and permits you to migrate production databases covered by a valid support contract at any level that entitles you to Program Updates.

Important Pointers for Migrating a Native Character Set to Unicode

If you are converting an EBS database to Unicode, follow this MOS document, which also contains the post-conversion steps for configuring the application tier files:
Globalization Guide for Oracle Applications Release 12 (Doc ID 393861.1)

Note: EBS does not allow column expansion, data truncation, or CHAR semantics. Any approaches relying on these are not applicable to EBS.

After conversion to the target character set, the cells classified as Over Column Limit (Scheduled) will no longer fit into their columns unless the columns are lengthened. The cells classified as Over Type Limit (Scheduled) will become longer than the limit of their data types.

The migration happens in-place, which reduces the effort and downtime required for data conversion compared to conventional export and import migration methods.
Under one of the following conditions, the Import and Export utilities should be used instead of DMU:
• The character set migration process is combined with a database upgrade to a higher release and/or migration to another platform.
• A lot of tables in the database are fragmented, and the character set migration process is an opportunity to perform de-fragmentation and clean-up.

Tips For and Known Issues With The Database Migration Assistant for Unicode (DMU) Tool (Doc ID 2018250.1) - review this first to head off most of the issues you might face.

1. The zip file downloaded from MOS (patch 30149496) contains the Oracle Database Migration Assistant for Unicode (DMU) release 19.1 client package.
Download and install the latest 64-bit JDK 1.8.0 on a machine with a good configuration.

2. Launch the tool: cd D:\DMU19.1\dmu then run dmuW64.exe.
The interface between the DMU client and the DMU functionality in the database kernel is the PL/SQL package SYS.DBMS_DUMA_INTERNAL. This package is not installed by default. After any required database patches have been installed, start a SQL*Plus session in the Oracle Home of your database, log in with SYSDBA credentials, and run the script ?/rdbms/admin/prvtdumi.plb to create the SYS.DBMS_DUMA_INTERNAL package, as shown below.
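
For example:

$ sqlplus / as sysdba
SQL> @?/rdbms/admin/prvtdumi.plb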

To avoid "DMU Database Scan Report Failing With OutOfMemory Error" (Doc ID 1928231.1), always use a 64-bit JDK, and increase the JVM maximum memory to something like 10-20G from Tools > Preferences > Memory Settings > Max JVM Memory.

3. When you connect for the first time to a database, you will be prompted to install a DMU migration repository; the DMU does not function without one. It is a good idea to create a separate tablespace for the repository, rather than using the SYSAUX tablespace, and install the repository in that new tablespace (see the sketch below).
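
A minimal sketch of creating such a tablespace; the tablespace name, disk group and size here are hypothetical:

SQL> create tablespace dmu_repos datafile '+DATA' size 2g autoextend on;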

4. The process is as below:

• Enumeration - auto-identification of database objects containing textual data that requires conversion
• Scanning - comprehensive assessment of migration feasibility and discovery of potential data issues
• Cleansing - sophisticated tool set for iterative data analysis and cleansing to ensure data safety
• Conversion - automated in-place data conversion to minimize time and space requirements

5. Important pointers during SCANNING phase:

The Number of Scanning Processes is the number of threads that the DMU spawns to perform the scanning process. Each thread opens a separate database connection. The DMU assigns a set of tables to each thread to scan independently. Larger tables might be split into chunks and scanned by multiple threads. Smaller tables are scanned entirely by one thread. The default value for this parameter is the value of the CPU_COUNT initialization parameter of the database. You can increase the parameter to see if the scanning time decreases, leveraging the parallel processing of CPUs and disks of the database server, or you can decrease the parameter to limit the negative impact that the scanning process might have on the performance of a busy production database.
You can monitor the scanning progress in the "Scan Progress" tab in the DMU. It displays a progress bar and a percentage that indicate how much of the corresponding table or table chunk has already been scanned.
You can monitor the table-level conversion progress in the "View Table Conversion Progress" link on the Conversion Progress tab in DMU. It will display the completion percentage for each table based on the status from the V$SESSION_LONGOPS view along with the execution status of individual SQL statements.

What scan option should I choose for the first scan ever of a database?
For the first scan of a big database you might choose "Do not collect rowids" as the scan option in the second screen of the scan wizard.
This reduces the time needed for the scan and limits the amount of space needed for the DMU repository, while still giving an overall overview of the dataset's condition.
If you then notice a lot of "Invalid Representation" scan results, for example, you know there is some work to do and data to check.

If you have already converted databases with similar datasets (for example a QA database based on the production database, and you are now converting production), or the database is not really huge, use the default "Collect rowids only according to 'Rowids to Collect' table property" option.
During scanning the DMU may collect the rowid addresses of column cells that contain convertible values, or values with expansion or invalid binary representation issues.
CLOB columns are not scanned, or even listed in the Navigator pane, in a multibyte database. All multibyte databases, including databases in the AL32UTF8 and UTF8 character sets, store CLOB values in the same internal storage character set (Unicode UTF-16), so CLOB values need no conversion when the database character set is migrated from one multibyte character set to another.

https://docs.oracle.com/en/database/oracle/dmu/19.1/dumag/ch4_basic_tasks.html#GUID-8B5BA47D-7BB1-401C-A2AA-7CF34F0D7229

The link above describes the recommended values for 'Scan Buffer Size' and 'Number of Scanning Processes'.
The Scan Buffer Size parameter controls the size in bytes of a buffer that the DMU allocates in each database server session to scan a table. The default value is 1000 kilobytes. The total buffer space used in a single scan is the number of scanning processes times the value of this property; for example, 8 scanning processes with the default buffer size allocate roughly 8 MB on the database server. Increasing the value of the property can speed up scanning, but only for as long as the allocated buffer memory fits into the available RAM on the database server.

Ignore the following external table errors while scanning.

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-04001: error opening file /tmp/inbound/exttab1.log
ORA-06512: at "SYS.DBMS_DUMA_INTERNAL", line 8
ORA-06512: at "SYS.DBMS_DUMA_INTERNAL", line 43
ORA-06512: at line 1

Refer to this document for any scan and cleaning data related errors and respective action to be taken

https://docs.oracle.com/cd/E89575_01/DUMAG/ch4_basic_tasks.htm#DUMAG-GUID-C9DCB0E2-01DD-4B05-9EBE-9B0A2C91232E

Note that cleansing is not mentioned in section 3.1 of the MOS document
Globalization Guide for Oracle Applications Release 12 (Doc ID 393861.1).
Hence the data should be fixed first and the conversion done afterwards, as cleansing of data is not supported for Oracle EBS.

You must resolve data issues identified during scanning before you can convert the database. Cleansing actions are actions you can take to resolve convertibility issues.
The DMU does not allow the conversion process to start until all issues have been resolved or explicitly marked as ignorable.
The link below gives details on cleansing the data.
https://docs.oracle.com/cd/E89575_01/DUMAG/ch6_cleansing_data.htm#DUMAG1465

6. Important pointers during CONVERSION phase:
After scanning, open the Database Properties: Scanning tab and click the "Estimate Tablespace Extension" button to check the estimated data expansion, and confirm there is enough space in the tablespaces and at OS level to do the conversion.
We suggest adding enough space to the tablespaces upfront (before conversion); this is faster than letting them autoextend.

Identify any (local) batch jobs, and check the database jobs to see whether any loads are running; if so, stop them at least for the duration of the conversion phase.
If you do not stop the listener, make sure everyone is aware NOT to connect during the conversion phase.

Also stop the OEM agent during the conversion phase.
The Migration Status tab must show that there are no outstanding issues; only then should you proceed with conversion.

STOP ALL APPLICATION SERVICES

RESTART THE DATABASE ONCE

The SQL statement ALTER DATABASE CHARACTER SET, which the DMU uses in the conversion phase, succeeds only if the session executing the statement is the only user session logged into the database. Therefore, before starting the conversion, the DMU warns you about any user sessions logged into the database that are not opened by the DMU itself. You may use the following SQL statement in SQL*Plus or SQL Developer to find out the details of the offending sessions:

SELECT sid, serial#, username, status,
       osuser, machine, process, program
FROM v$session
WHERE username IS NOT NULL
AND program <> 'Database Migration Assistant for Unicode';

Note:
After the database is successfully converted to Unicode, you must restart the database instance to synchronize any in-memory character set metadata information with the new Unicode database character set.
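
For example:

$ sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup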

In the monitoring mode, the Convert button is replaced with a Stop button. If you click Stop, a warning dialog is displayed. In this dialog, you have the options of aborting the conversion process immediately (Yes), stopping the conversion process after the current running process finishes (No), or returning to the conversion operation (Cancel).

If the database reports an error during the conversion process, the currently executed step and the statement that caused the error are marked with the red X error icon.

7. Verification
select NLS_LANGUAGE from apps.FND_LANGUAGES where INSTALLED_FLAG in ('B','I');

NLS_LANGUAGE
AMERICAN

SELECT value FROM NLS_DATABASE_PARAMETERS WHERE parameter='NLS_CHARACTERSET';
select distinct(nls_charset_name(charsetid)) CHARACTERSET,
decode(type#, 1, decode(charsetform, 1, 'VARCHAR2', 2, 'NVARCHAR2','UNKNOWN'),
9, decode(charsetform, 1, 'VARCHAR', 2, 'NCHAR VARYING', 'UNKNOWN'),
96, decode(charsetform, 1, 'CHAR', 2, 'NCHAR', 'UNKNOWN'),
8, decode(charsetform, 1, 'LONG', 'UNKNOWN'),
112, decode(charsetform, 1, 'CLOB', 2, 'NCLOB', 'UNKNOWN')) TYPES_USED_IN
from sys.col$ where charsetform in (1,2) and type# in (1, 8, 9, 96, 112)
order by CHARACTERSET, TYPES_USED_IN;

CHARACTERSET    TYPES_USED_IN
AL16UTF16          NCHAR
AL16UTF16          NCLOB
AL16UTF16          NVARCHAR2
AL32UTF8           CHAR
AL32UTF8           CLOB
AL32UTF8           LONG
AL32UTF8           VARCHAR2

If you are converting an EBS database to Unicode, follow the MOS document below, which also contains the post-conversion steps for configuring the application tier files:
Globalization Guide for Oracle Applications Release 12 (Doc ID 393861.1)

Friday, August 28, 2020

Diagnose DB performance issue from OS Process

The following steps are very useful when diagnosing performance issues related to Oracle processes running on the server. When you run multiple databases on one server and are experiencing server performance issues, it can be difficult to identify which database and session are consuming the most system resources. In these situations, use the top utility or the ps command to identify the highest-consuming processes. The top utility provides a dynamic interface that periodically refreshes to give you the current snapshot of resource consumption, whereas the ps command provides a quick one-time snapshot of top resource usage. You may be wondering whether it is better to use top or ps to identify resource-consuming processes.
If your server has top installed, top will probably be your first choice; its ease of use and interactive output are hard to beat. Having said that, the ps command is universally available, so you may have to use ps if the top command is not available. Also, the ps command may show you a more descriptive program name associated with the process.
Once you have a process identified, if it is an Oracle process, use the SQL query given below to further identify the type of Oracle process. You then have the option of trying to tune the operation (whether it be SQL, RMAN, and so on), or you might want to terminate the process.

1. Run the topcpu alias below to get the list of processes that are consuming the most CPU.

# alias topcpu='ps -e -o pcpu,pid,user,tty,args | sort -n -k 1 -r | head'

# topcpu

[oracle@ed-ol-raclin1:MYDB1:~]$ topcpu
99.3 140618 oracle   ?        oracleMYDB1 (LOCAL=NO)
16.0   2223 oracle   ?        ora_j061_mydb1
 4.4  48053 root     ?        /u01/app/19.0.0.0/grid/bin/osysmond.bin
 2.3 319689 grid     ?        oracle+ASM1 (LOCAL=NO)
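
If memory rather than CPU is the concern, a similar alias sorts by %MEM instead; the topmem name is just an illustration:

# alias topmem='ps -e -o pmem,pid,user,tty,args | sort -n -k 1 -r | head'
# topmem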

2. Now you can use the PID of a top consumer from this output as input to the following query, which shows information about the Oracle session responsible for the high consumption of CPU resources:

SET LINES 200 PAGES 0 HEAD OFF LONG 100000
COL dummy_value NOPRINT
--
SELECT 'dummy_value' dummy_value,
'USERNAME : ' || s.username || CHR(10) ||
'SCHEMA : ' || s.schemaname || CHR(10) ||
'OSUSER : ' || s.osuser || CHR(10) ||
'MODULE : ' || s.module || CHR(10) ||
'ACTION : ' || s.action || CHR(10) ||
'CLIENT INFO : ' || s.client_info || CHR(10) ||
'PROGRAM : ' || s.program || CHR(10) ||
'SPID : ' || p.spid || CHR(10) ||
'SID : ' || s.sid || CHR(10) ||
'SERIAL# : ' || s.serial# || CHR(10) ||
'KILL STRING : ' || '''' || s.sid || ',' || s.serial# || '''' || CHR(10) ||
'MACHINE : ' || s.machine || CHR(10) ||
'TYPE : ' || s.type || CHR(10) ||
'TERMINAL : ' || s.terminal || CHR(10) ||
'CPU : ' || q.cpu_time/1000000 || CHR(10) ||
'ELAPSED_TIME: ' || q.elapsed_time/1000000 || CHR(10) ||
'BUFFER_GETS : ' || q.buffer_gets || CHR(10) ||
'SQL_ID : ' || q.sql_id || CHR(10) ||
'CHILD_NUM : ' || q.child_number || CHR(10) ||
'START_TIME : ' || TO_CHAR(s.sql_exec_start,'dd-mon-yy hh24:mi') || CHR(10) ||
'STATUS : ' || s.status || CHR(10) ||
'SQL_TEXT : ' || q.sql_fulltext
FROM v$session s
JOIN v$process p ON (s.paddr = p.addr)
LEFT OUTER JOIN v$sql q ON (s.sql_id = q.sql_id)
WHERE s.username IS NOT NULL -- eliminates background procs
AND NVL(q.sql_text,'x') NOT LIKE '%dummy_value%' -- eliminates this query from output
AND p.spid = '&PID_FROM_OS'
ORDER BY q.cpu_time;


Enter value for pid_from_os: 161918
For this example, when you run the prior query and supply it the high-CPU-consuming PID of 161918, you get the following output:

USERNAME : APPS
SCHEMA : APPS
OSUSER : applprod
MODULE : STANDARD@test-app01 (TNS V1-V3)
ACTION : APPS
CLIENT INFO : applprod
PROGRAM : STANDARD@test-app01 (TNS V1-V3)
SPID : 161918
SID : 645
SERIAL# : 57255
KILL STRING : '645,57255'
MACHINE : test-app01
TYPE : USER
TERMINAL :
CPU : 3150.605024
ELAPSED_TIME: 3227.807927
BUFFER_GETS : 693186617
SQL_ID : 86mp84saarg09
CHILD_NUM : 0
START_TIME : 28-aug-20 17:21
STATUS : ACTIVE
SQL_TEXT : SELECT PVS.VENDOR_SITE_CODE FROM WIP_DISCRETE_JOBS WDJ , BOM_DEPARTMENTS BD , PO_VENDOR_SITES_ALL PVS , PO_VENDORS PV WHERE WDJ.WIP_ENTITY_ID = :B1 AND WDJ.OWNING_DEPARTMENT = BD.DEPARTMENT_ID AND PVS.VENDOR_SITE_CODE = BD.DEPARTMENT_CODE AND PV.VENDOR_ID = PVS.VENDOR_ID

From the prior output, you can see that an application session is consuming a great deal of CPU time. You can
also determine when the query started, the username, the kill string, and the SQL identifier from the output.

Once you identify information regarding the process, you can drill down further to display the execution plan and the resources the process is waiting for. For example, you can view the SQL execution plan with the SQL ID and child number:

SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id',&child_num));

Here’s another useful query that uses the SID and serial number to display the state of a session and
whether it is working or waiting for a resource:

SELECT sid,
DECODE(state, 'WAITING','Waiting', 'Working') state,
DECODE(state, 'WAITING', 'So far '||seconds_in_wait,
'Last waited '|| wait_time/100)|| ' seconds for '||event
FROM v$session
WHERE sid = '&&session_id'
AND serial# = '&&serial';

Enter value for session_id: 645
Enter value for serial: 29301

SID STATE   DECODE(STATE,'WAITING','SOFAR'||SECONDS_IN_WAIT,'LASTWAITED'||WAIT_TIME/100)||'S
      1766 Working
Last waited -12345.67 seconds for gc current block 2-way

This information is very useful when diagnosing performance issues related to Oracle processes
running on the server.

Monday, July 27, 2020

Oracle Database 19c Silent Mode Installation

Silent mode installation allows you to configure the necessary Oracle components without using the graphical interface we usually drive through ./runInstaller.
In a silent mode installation a response file can be used to provide all the required information for the installation, so no additional user input is required.

Pre-requisites Checks
# grep MemTotal /proc/meminfo -->We need at least 8192 MB of physical RAM.

# grep SwapTotal /proc/meminfo -->We need at least 8192 MB of swap.
If RAM is up to 1024MB, then swap = 2 times the size of RAM
If RAM is between 2049MB and 8192MB, then swap = equal to the size of RAM
If RAM is more than 8192MB, then swap = 0.75 times the size of RAM

# df -h /tmp -->You need to have at least 2048 MB of space in the /tmp directory.

# df -h /u01 --> minimum 10GB space required for 19c database software

# grep "model name" /proc/cpuinfo -->Verify that the processor architecture matches the Oracle software release that you want to install. If you do not see the expected output, then you cannot install the software on this system.

[oracle@rac1 ~]$ cat /etc/redhat-release -->Verify the OS version: Oracle Linux Server release 7.5

[root@rac1 ~]# yum install -y oracle-database-preinstall-19c -->Install the Oracle prerequisites
Download the 19c database software from the OTN site: https://www.oracle.com/database/technologies/oracle19c-linux-downloads.html

NOTE: You cannot edit the Oracle home location during installation with the OUI. It automatically picks up the ORACLE_HOME location where you unzipped the database binaries, so unzip directly into the ORACLE_HOME location and then start ./runInstaller, as shown below.
After unzipping, the files will NOT all be kept in a single directory as they were in 10g, 11g and 12c.
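
For example, assuming the standard 19.3 zip file name and that the zip was staged in /u01/software:

$ mkdir -p /u01/app/oracle/product/19.0.0/db_1
$ cd /u01/app/oracle/product/19.0.0/db_1
$ unzip -oq /u01/software/LINUX.X64_193000_db_home.zip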

Back up the response file (db_install.rsp)
[oracle@rac1 response]$ ls -ltr /u01/app/oracle/product/19.0.0/db_1/install/response/db_install.rsp
[oracle@rac1 response]$ cd /u01/app/oracle/product/19.0.0/db_1/install/response
[oracle@rac1 response]$ cp db_install.rsp db_install.rsp.bkp

Modify the below parameters in the response file db_install.rsp.

[oracle@rac1 response]$ vi db_install.rsp

oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
ORACLE_HOME=/u01/app/oracle/product/19.0.0/db_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=oinstall
oracle.install.db.OSOPER_GROUP=oinstall
oracle.install.db.OSBACKUPDBA_GROUP=oinstall
oracle.install.db.OSDGDBA_GROUP=oinstall
oracle.install.db.OSKMDBA_GROUP=oinstall
oracle.install.db.OSRACDBA_GROUP=oinstall
oracle.install.db.rootconfig.executeRootScript=false
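
To double-check the values you have set, list the non-comment entries in the response file:

$ grep -Ev '^#|^$' /u01/app/oracle/product/19.0.0/db_1/install/response/db_install.rsp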

[oracle@rac1 ~]$ cd /u01/app/oracle/product/19.0.0/db_1/
[oracle@rac1 db_1]$ ls -ltr runInstaller
-rwxr-x---. 1 oracle oinstall 1783 Jul 26 2020 runInstaller

[oracle@rac1 db_1]$ ./runInstaller -executePrereqs -silent -responseFile /u01/app/oracle/product/19.0.0/db_1/install/response/db_install.rsp

Install oracle software in Silent mode
[oracle@rac1 db_1]$ ./runInstaller -silent -responseFile /u01/app/oracle/product/19.0.0/db_1/install/response/db_install.rsp

As a root user, execute the following script(s):
        /u01/app/oracle/product/19.0.0/db_1/root.sh

Verification
Verify the database software setup by setting the environment variables below and launching the sqlplus utility.
[oracle@rac1 db_1]$ export ORACLE_HOME=/u01/app/oracle/product/19.0.0/db_1
[oracle@rac1 db_1]$ export PATH=/u01/app/oracle/product/19.0.0/db_1/bin:$PATH;
[oracle@rac1 db_1]$ which sqlplus
/u01/app/oracle/product/19.0.0/db_1/bin/sqlplus
[oracle@rac1 db_1]$ sqlplus
SQL*Plus: Release 19.0.0.0.0 - Production on Sun Jan 26 13:06:03 2020
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
Enter user-name:

So here your Oracle Database 19c software is successfully set up. To create a database you can use the Database Configuration Assistant (DBCA), which can also run in silent mode, as sketched below.
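
A minimal sketch of a silent DBCA run; the database name, passwords and file destination are hypothetical, and you should confirm the exact options for your release with dbca -help:

$ dbca -silent -createDatabase \
    -templateName General_Purpose.dbc \
    -gdbName testdb -sid testdb \
    -sysPassword MyPassword1 -systemPassword MyPassword1 \
    -storageType FS -datafileDestination /u01/oradata \
    -characterSet AL32UTF8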

Monday, June 29, 2020

Deploy root certificate on Oracle EBS 12.2

Oracle Fusion Middleware provides two types of keystores for keys and certificates:

JKS Keystore and Truststore
Oracle Wallet

JKS Keystore and Truststore
A JKS keystore is the default JDK implementation of Java keystores provided by Sun Microsystems. In 11g all Java components and Java EE applications use the JKS-based keystore and truststore.
You use a JKS-based keystore for the following:
Oracle Virtual Directory
Applications deployed on Oracle WebLogic Server, including:
Oracle SOA Suite
Oracle WebCenter
In Oracle Fusion Middleware, you can use GUI or command-line tools to create, import, export, and delete a Java keystore and the certificates contained in the keystore.

Oracle Wallet
An Oracle wallet is a container that stores your credentials, such as certificates, trusted certificates, certificate requests, and private keys. You can store Oracle wallets on the file system or in LDAP directories such as Oracle Internet Directory. Oracle wallets can be auto-login or password-protected wallets.
You use an Oracle Wallet for the following components:
Oracle HTTP Server
Oracle Web Cache
Oracle Internet Directory
In Oracle Fusion Middleware, you can use GUI or command-line tools to create, import, export and delete a wallet and the certificates contained in the wallet.
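
For example, a wallet can be created and inspected from the command line with orapki; the wallet directory here is hypothetical:

$ orapki wallet create -wallet /u01/wallets/mywallet -auto_login
$ orapki wallet display -wallet /u01/wallets/mywallet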

Commands to verify the certificates:

#cd $OA_JRE_TOP/lib/security
#pwd
/t01/erptest/fs2/EBSapps/comn/util/jdk32/jre/lib/security
# keytool -list -keystore $OA_JRE_TOP/lib/security/cacerts

# cd $APPL_TOP_NE/ad/admin
#pwd
/t01/erptest/fs_ne/EBSapps/appl/ad/admin
# keytool -list -v -keystore adkeystore.dat

Below are the steps to deploy the Java certificate into Oracle E-Business Suite 12.2.x.


<JRI_DATA_LOC> = $<fs_ne>/EBSapps/appl/ad/admin


Generate a new keypair (private key and public key)

PATH:/t01/testerp/fs_ne/EBSapps/ad/admin

[applerp@ed-olaplin1 admin]$ adjkey -initialize -keysize 2048 -alias SAMICORP

 Copyright (c) 2002, 2012 Oracle Corporation

                        Redwood Shores, California, USA

                             AD Java Key Generation

                              Version 12.2.0

NOTE: You may not use this utility for custom development

      unless you have written permission from Oracle Corporation.

Reading product information from file...

Reading language and territory information from file...

Reading language information from applUS.txt ...

Enter the APPS username: apps

Enter the APPS password:

Successfully created javaVersionFile.

adjkey will now create a signing entity for you.

Enter the COMMON NAME [ ] : SAMICORP

Enter the ORGANIZATION NAME [SAMICORP] : SAMICORP

Enter the ORGANIZATION UNIT [ ] : IT

Enter the LOCALITY (or City) [ ] : London

Enter the STATE (or Province or County) [ ] : London

Enter the COUNTRY (two-letter ISO abbreviation) [ ] : UK

Enter keystore password:  Re-enter new password: Enter key password for <SAMICORP>

        (RETURN if same as keystore password):  Re-enter new password:

/t01/erptr11225/fs2/EBSapps/comn/util/jdk32/jre/bin/java -Djava.security.egd=file:/dev/urandom sun.security.tools.KeyTool -genkey -alias SAMICORP -keyalg RSA -keysize 2048 -keystore /t01/erptr11225/fs_ne/EBSapps/appl/ad/admin/adkeystore.dat -validity 14600 -dname "CN=SAMICORP, O=SAMICORP, OU=IT, L=London, S=London, C=AE"

The above Java program completed successfully.

Your digital signature has been created successfully and

imported into the keystore database. This signature

will now be used to sign Applications JAR files whenever

they are patched.

  IMPORTANT: If you have multiple web servers, you must copy

  files to each of the remaining web servers on your site.

  See the documentation reference for more information.

adjkey is complete.

Additional Information

If required, you can view the contents of adkeystore.dat by running the following command:

$ keytool -list -v -keystore adkeystore.dat

Create a Certificate Signing Request

Create a "Certificate Signing Request" (named adkeystore.csr in this example) to send to your CA provider for signing. This will be created using the same alias name from previous step

Command to Create a Certificate Signing Request

$ keytool -sigalg SHA256withRSA -certreq -keystore /t01/testerp/fs_ne/EBSapps/appl/ad/admin/adkeystore.dat -file /t01/testerp/fs_ne/EBSapps/appl/ad/admin/adkeystore.csr -alias erpprd_erplxprdap1

Enter keystore password:  --> puneet

Enter key password for <SAMICORP>  --> myxuan

Now Submit your Certificate Signing Request


Submit your certificate signing request 'adkeystore.csr' to your official certificate authority, for example, Verisign, Thawte etc. or to your own in-house certificate authority as applicable. 


Now Import the Root Certificate to the Java Keystore Certificate Store 'cacerts' (if required)

/t01/testerp/fs1/EBSapps/comn/util/jdk32/jre/lib/security

[user@ed-olapplin security]$ keytool -import -alias Root -file Root.cer -trustcacerts -v -keystore cacerts

Trust this certificate? [no]:  yes

Certificate was added to keystore

[Storing cacerts]



Keystore Password: changeit (the default)


Import the Code Signing Certificate into the Keystore

/t01/testerp/fs_ne/EBSapps/appl/ad/admin

[appluat@erplxprdap1 admin]$ keytool -import -file SAMICORPEBS.cer -trustcacerts -alias erpprd_erplxprdap1 -keystore adkeystore.dat

Enter keystore password:  --> puneet

Enter key password for <SAMICORP>  --> myxuan

Certificate reply was installed in keystore


Regenerate the Jar File

Shutdown the application services.

Source your APPS env file and shutdown your application tier services.


Source your APPS env File

On the Application tier as the file system owner source your APPS env file.

Shutdown the Application Tier

Shut down the application tier services:

$ adstpall.sh <apps_user>/<apps_pwd>

 Regenerate the jar files through adadmin

Regenerate all JAR Files using the force option through adadmin:

Run ADADMIN, and select the following from the AD Administration Main Menu:

Choose Generate Applications Files menu
From this menu choose Generate product JAR files


Enter yes when prompted with: Do you wish to force regeneration of all jar files? [No] ? yes

Once your jar files have been successfully generated, restart the application tier.

Restart the Application Tier

Restart the application tier services:

$ adstrtal.sh <apps_user>/<apps_pwd>

Clear the browser cache and remove the old JAR files from the Java Control Panel: Control Panel -> Java -> General -> click View -> select all files -> Delete -> Apply and OK.



The first time you open the Java forms you will get a security prompt. Click 'Do not show' and Run.

The Java forms should now open.