Saturday, September 9, 2017

Weblogic 10.3.6 Patching Error Using BSU - "java.lang.OutOfMemoryError: Java heap space"

When using BSU (Smart Update) to apply a patch to WebLogic Server 10.3.6, the following error is seen:

[applmgr@ip-10-0-0-161 bsu]$ ./bsu.sh -install -patch_download_dir=/u01/oracle/PROD/fs1/FMW_Home/utils/bsu/cache_dir -patchlist=EQDE -prod_dir=/u01/oracle/PROD/fs1/FMW_Home/wlserver_10.3
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
        at java.util.HashMap.createEntry(HashMap.java:897)
        at java.util.HashMap.addEntry(HashMap.java:884)
        at java.util.HashMap.put(HashMap.java:505)
        at com.bea.cie.common.dao.xbean.XBeanDataHandler.loadPropertyMap(XBeanDataHandler.java:778)
        at com.bea.cie.common.dao.xbean.XBeanDataHandler.<init>(XBeanDataHandler.java:99)
        at com.bea.cie.common.dao.xbean.XBeanDataHandler.createDataHandler(XBeanDataHandler.java:559)
        at com.bea.cie.common.dao.xbean.XBeanDataHandler.getComplexValue(XBeanDataHandler.java:455)
        at com.bea.plateng.patch.dao.cat.PatchCatalogHelper.getPatchDependencies(PatchCatalogHelper.java:442)
        at com.bea.plateng.patch.dao.cat.PatchCatalogHelper.getPatchDependencies(PatchCatalogHelper.java:464)
        at com.bea.plateng.patch.dao.cat.PatchCatalog.getPatchDependencies(PatchCatalog.java:56)
        at com.bea.plateng.patch.dao.cat.PatchCatalogHelper.getInvalidatedPatchMap(PatchCatalogHelper.java:1621)
        at com.bea.plateng.patch.PatchSystem.updatePatchCatalog(PatchSystem.java:436)
        at com.bea.plateng.patch.PatchSystem.refresh(PatchSystem.java:130)
        at com.bea.plateng.patch.PatchSystem.setCacheDir(PatchSystem.java:201)
        at com.bea.plateng.patch.Patch.main(Patch.java:281)
[applmgr@ip-10-0-0-161 bsu]$

CAUSE

The default Java heap settings in bsu.sh are not high enough when applying larger patches such as a Patch Set Update (PSU).

SOLUTION

Increase the -Xms and -Xmx values in the bsu.sh script to resolve the out-of-memory error.

Example:

bsu.sh (UNIX):
"$JAVA_HOME/bin/java" -Xms2048m -Xmx2048m -jar patch-client.jar $*

Wednesday, September 6, 2017

R12.2 rapidwiz fails for missing libXi

R12.2 rapidwiz fails with the following error:

Missing - libXi

Install the 32-bit libXi package as root:

[root@ip-10-0-0-126 Downloads]# yum install libXi.i686
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Resolving Dependencies
--> Running transaction check
---> Package libXi.i686 0:1.7.9-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================================================
 Package                              Arch                                Version                                      Repository                                                       Size
=============================================================================================================================================================================================
Installing:
 libXi                                i686                                1.7.9-1.el7                                  rhui-REGION-rhel-server-releases                                 40 k

Transaction Summary
=============================================================================================================================================================================================
Install  1 Package

Total download size: 40 k
Installed size: 68 k
Is this ok [y/d/N]: y
Downloading packages:
libXi-1.7.9-1.el7.i686.rpm                                                                                                                                            |  40 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : libXi-1.7.9-1.el7.i686                                                                                                                                                    1/1
  Verifying  : libXi-1.7.9-1.el7.i686                                                                                                                                                    1/1

Installed:
  libXi.i686 0:1.7.9-1.el7

Complete!
[root@ip-10-0-0-126 Downloads]#
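Once the install completes, it can be worth confirming that the 32-bit library is in place before re-running rapidwiz; a quick check (a sketch, assuming an RPM-based system):

$ rpm -q libXi.i686
libXi-1.7.9-1.el7.i686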

12.2 buildstage.sh fails with /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

buildstage.sh fails with the following error:

-bash: /u01/download/startCD/Disk1/rapidwiz/bin/../jre/Linux_x64/1.6.0/bin/java: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

The error is due to a missing RPM (the 32-bit glibc).

Install the following as root:

[root@ip-10-0-0-126 download]# yum install glibc.i686
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Resolving Dependencies
--> Running transaction check
---> Package glibc.i686 0:2.17-196.el7 will be installed
--> Processing Dependency: libfreebl3.so for package: glibc-2.17-196.el7.i686
--> Processing Dependency: libfreebl3.so(NSSRAWHASH_3.12.3) for package: glibc-2.17-196.el7.i686
--> Running transaction check
---> Package nss-softokn-freebl.i686 0:3.28.3-8.el7_4 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================================================================================================================
 Package                                       Arch                            Version                                     Repository                                                   Size
=============================================================================================================================================================================================
Installing:
 glibc                                         i686                            2.17-196.el7                                rhui-REGION-rhel-server-releases                            4.2 M
Installing for dependencies:
 nss-softokn-freebl                            i686                            3.28.3-8.el7_4                              rhui-REGION-rhel-server-releases                            199 k

Transaction Summary
=============================================================================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 4.4 M
Installed size: 15 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): nss-softokn-freebl-3.28.3-8.el7_4.i686.rpm                                                                                                                     | 199 kB  00:00:00
(2/2): glibc-2.17-196.el7.i686.rpm                                                                                                                                    | 4.2 MB  00:00:00
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                                        9.9 MB/s | 4.4 MB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : glibc-2.17-196.el7.i686                                                                                                                                                   1/2
  Installing : nss-softokn-freebl-3.28.3-8.el7_4.i686                                                                                                                                    2/2
  Verifying  : nss-softokn-freebl-3.28.3-8.el7_4.i686                                                                                                                                    1/2
  Verifying  : glibc-2.17-196.el7.i686                                                                                                                                                   2/2

Installed:
  glibc.i686 0:2.17-196.el7

Dependency Installed:
  nss-softokn-freebl.i686 0:3.28.3-8.el7_4

Complete!
[root@ip-10-0-0-126 download]# /u01/download/startCD/Disk1/rapidwiz/bin/../jre/Linux_x64/1.6.0/bin/java -version
java version "1.6.0_31"
Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
Java HotSpot(TM) Client VM (build 20.6-b01, mixed mode)
[root@ip-10-0-0-126 download]#
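The underlying issue is that the Rapid Install JRE is a 32-bit binary and therefore needs the 32-bit loader /lib/ld-linux.so.2, which is provided by glibc.i686. If a similar error appears for another binary, the file command can confirm that it is a 32-bit executable and which interpreter it expects (a sketch, using the JRE path from this install):

$ file /u01/download/startCD/Disk1/rapidwiz/bin/../jre/Linux_x64/1.6.0/bin/java

This should report an ELF 32-bit LSB executable with interpreter /lib/ld-linux.so.2.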

Monday, July 17, 2017

Automatic Tuning of Undo Retention and Optimization

Oracle Database automatically tunes the undo retention period based on how the undo tablespace is configured.
If the undo tablespace is fixed size, the database tunes the retention period for the best possible undo retention for that tablespace size and the current system load. This tuned retention period can be significantly greater than the specified minimum retention period.
If the undo tablespace is configured with the AUTOEXTEND option, the database tunes the undo retention period to be somewhat longer than the longest-running query on the system at that time. Again, this tuned retention period can be greater than the specified minimum retention period.
Determine the current retention period by querying the TUNED_UNDORETENTION column of the V$UNDOSTAT view. This view contains one row for each 10-minute statistics collection interval over the last 4 days. (Beyond 4 days, the data is available in the DBA_HIST_UNDOSTAT view.) TUNED_UNDORETENTION is given in seconds.
select to_char(begin_time, 'DD-MON-RR HH24:MI') begin_time,
 to_char(end_time, 'DD-MON-RR HH24:MI') end_time, tuned_undoretention
 from v$undostat order by end_time;
BEGIN_TIME END_TIME TUNED_UNDORETENTION
 --------------- --------------- -------------------
 04-FEB-05 00:01 04-FEB-05 00:11 12100
 ...
 07-FEB-05 23:21 07-FEB-05 23:31 86700
 07-FEB-05 23:31 07-FEB-05 23:41 86700
 07-FEB-05 23:41 07-FEB-05 23:51 86700
 07-FEB-05 23:51 07-FEB-05 23:52 86700
576 rows selected.
Calculating UNDO_RETENTION for a given UNDO Tablespace
The following queries help to determine an optimal value for the UNDO_RETENTION parameter:

Optimal Undo Retention = Actual Undo Size / (DB_BLOCK_SIZE x UNDO_BLOCK_PER_SEC)

Because the following queries use the V$UNDOSTAT statistics, run them only after the database has been running with a significant and representative workload for some time!
Actual Undo Size
SELECT SUM(a.bytes) "UNDO_SIZE"
 FROM v$datafile a,
 v$tablespace b,
 dba_tablespaces c
 WHERE c.contents = 'UNDO'
 AND c.status = 'ONLINE'
 AND b.name = c.tablespace_name
 AND a.ts# = b.ts#;
UNDO_SIZE
----------
 1572864000
Undo Blocks per Second
SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
 "UNDO_BLOCK_PER_SEC"
 FROM v$undostat;
UNDO_BLOCK_PER_SEC
------------------
 249.398333333333333333333333333333333333
DB Block Size
SELECT TO_NUMBER(value) "DB_BLOCK_SIZE [Byte]"
 FROM v$parameter
WHERE name = 'db_block_size';
DB_BLOCK_SIZE [Byte]
--------------------
 8192
Optimal Undo Retention = 1572864000 / (8192 x 249.4) ≈ 770 [Sec]
Using Inline Views:
SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
 SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
 ROUND((d.undo_size / (to_number(f.value) *
 g.undo_block_per_sec))) "OPTIMAL UNDO RETENTION [Sec]"
 FROM (
 SELECT SUM(a.bytes) undo_size
 FROM v$datafile a,
 v$tablespace b,
 dba_tablespaces c
 WHERE c.contents = 'UNDO'
 AND c.status = 'ONLINE'
 AND b.name = c.tablespace_name
 AND a.ts# = b.ts#
 ) d,
 v$parameter e,
 v$parameter f,
 (
 SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
 undo_block_per_sec
 FROM v$undostat
 ) g
WHERE e.name = 'undo_retention'
 AND f.name = 'db_block_size'
/
ACTUAL UNDO SIZE [MByte]
------------------------
1500
UNDO RETENTION [Sec]
--------------------
900
OPTIMAL UNDO RETENTION [Sec]
----------------------------
770
Calculating the required UNDO Size for a given Database Activity
If you are not limited by disk space, then it is better to choose the UNDO_RETENTION time that is best for you (for FLASHBACK, etc.) and to size the UNDO tablespace according to the database activity:

Needed Undo Size = UNDO_RETENTION x DB_BLOCK_SIZE x UNDO_BLOCK_PER_SEC

Again, all in one query:
SELECT d.undo_size/(1024*1024) "ACTUAL UNDO SIZE [MByte]",
 SUBSTR(e.value,1,25) "UNDO RETENTION [Sec]",
 (TO_NUMBER(e.value) * TO_NUMBER(f.value) *
 g.undo_block_per_sec) / (1024*1024) 
 "NEEDED UNDO SIZE [MByte]"
 FROM (
 SELECT SUM(a.bytes) undo_size
 FROM v$datafile a,
 v$tablespace b,
 dba_tablespaces c
 WHERE c.contents = 'UNDO'
 AND c.status = 'ONLINE'
 AND b.name = c.tablespace_name
 AND a.ts# = b.ts#
 ) d,
 v$parameter e,
 v$parameter f,
 (
 SELECT MAX(undoblks/((end_time-begin_time)*3600*24))
 undo_block_per_sec
 FROM v$undostat
 ) g
 WHERE e.name = 'undo_retention'
 AND f.name = 'db_block_size'
/

ACTUAL UNDO SIZE [MByte]
------------------------
1500
UNDO RETENTION [Sec] 
--------------------
900
NEEDED UNDO SIZE [MByte]
------------------------
1753.582031249999999999999999999999999998
The previous query may return a “NEEDED UNDO SIZE” that is less than the “ACTUAL UNDO SIZE”. If this is the case, you may be wasting space. You can choose to resize your UNDO tablespace to a lesser value or increase your UNDO_RETENTION parameter to use the additional space.

Monday, June 19, 2017

How To diagnose the "Root Cause" of OPP (java) consuming High CPU

Steps To Collect The Required Details

1. Use the top/prstat OS command (or the equivalent for your platform) to identify the PID of the OPP (java) process. This is the <pid> of the OPP Java process which consumes high CPU.

2. Generate the Java thread dump using the <pid> from step 1. This writes additional details into the OPP log, which helps narrow down the potential cause.
$ kill -3 <pid>

3. Identify the relevant OPP log file (get the absolute path and file name using the command below).
$ ps -ef | grep <pid>

4. Get the per-thread CPU details using the <pid> from step 1.
$ ps -eLo pid,ppid,tid,pcpu,comm | grep <pid>

5. Once we know the thread id from step 4, convert it to hexadecimal and search the thread dump in the OPP log for the matching nid value; the report it is processing is the potential root cause of OPP (java) consuming high CPU.
Note: Collect all the details in one go to get the complete picture.


$ top
top - 00:52:29 up 149 days, 22:08, 4 users, load average: 4.57, 4.42, 3.79
Tasks: 1633 total,  5 running, 1628 sleeping,  0 stopped,  0 zombie
Cpu(s): 51.7%us, 7.0%sy, 0.0%ni, 29.5%id, 10.7%wa, 0.0%hi, 1.1%si, 0.0%st
Mem: 65932544k total, 65201636k used,  730908k free,  94028k buffers
Swap: 16777208k total, 11123908k used, 5653300k free, 6651120k cached

 PID USER   PR NI VIRT RES SHR S %CPU %MEM  TIME+ COMMAND
38144 appl  20  0 2360m 790m 1940 S 100.0 1.2 97217:33 java

$ ps -ef| grep 38144
appl 30965 30856 0 00:52 pts/0  00:00:00 grep 38144
appl 38144 37993 71 Mar17 ?    67-12:17:44 /u01/app/appl/PROD/apps/tech_st/10.1.3/appsutil/jdk/bin/java -DCLIENT_PROCESSID=38144 -server -Xmx384m -XX:NewRatio=2 -XX:+UseSerialGC -Doracle.apps.fnd.common.Pool.leak.mode=stderr:off -verbose:gc -mx2048m -Ddbcfile=/u01/app/appl/PROD/inst/apps/context_name/appl/fnd/12.0.0/secure/PROD.dbc -Dcpid=1300518 -Dconc_queue_id=1132 -Dqueue_appl_id=0 -Dlogfile=/u01/app/appl/PROD/inst/apps/CONTEXT_NAME/logs/appl/conc/log/FNDOPP1300518.txt -DLONG_RUNNING_JVM=true -DOVERRIDE_DBC=true -DFND_JDBC_BUFFER_MIN=1 -DFND_JDBC_BUFFER_MAX=2 oracle.apps.fnd.cp.gsf.GSMServiceController
[applprod@psmlferro01 ~]$

OPP file name : /u01/app/appl/PROD/inst/apps/$CONTEXT_NAME/logs/appl/conc/log/FNDOPP1300518.txt

$ kill -3 38144
$

$ ps -eLo pid,ppid,tid,pcpu,comm | grep 38144
38144 37993 38144 0.0 java
38144 37993 38145 0.0 java
38144 37993 38146 0.0 java
38144 37993 38147 0.0 java
38144 37993 38148 0.0 java
38144 37993 38155 0.0 java
38144 37993 38156 0.0 java
38144 37993 38157 0.0 java
38144 37993 38158 0.0 java
38144 37993 38159 0.0 java
38144 37993 38307 0.0 java
38144 37993 38313 0.0 java
38144 37993 38480 0.0 java
38144 37993 38494 0.0 java
38144 37993 38495 0.0 java
38144 37993 38594 0.0 java
38144 37993 38596 0.0 java
38144 37993 38597 0.0 java
38144 37993 38604 0.0 java
38144 37993 38614 0.0 java
38144 37993 38615 0.0 java
38144 37993 38620 0.0 java
38144 37993 38622 0.0 java
38144 37993 38625 0.0 java
38144 37993 38640 0.0 java
38144 37993 38641 0.0 java
38144 37993 40083 0.0 java
38144 37993 40087 0.0 java
38144 37993 40150 0.0 java
38144 37993 40152 0.0 java
38144 37993 40211 0.0 java
38144 37993 40214 0.0 java
38144 37993 40282 0.0 java
38144 37993 40288 0.0 java
38144 37993 40577 0.0 java
38144 37993 40582 0.0 java
38144 37993 7785 104 java
38144 37993 36170 0.0 java
38144 37993 36171 0.0 java
38144 37993 36172 0.0 java
38144 37993 36175 0.0 java
38144 37993 36176 0.0 java
38144 37993 36181 0.0 java
38144 37993 11152 0.0 java
38144 37993 20426 0.0 java
38144 37993 64706 0.0 java
38144 37993 47140 0.0 java
$

Thread id : 7785

Hex value : 1E69
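The nid value in the Java thread dump is the thread id in hexadecimal, so the busy TID from step 4 can be converted and used to search the OPP log for the offending stack; a small sketch using the values above:

$ printf "%x\n" 7785
1e69
$ grep -i "nid=0x1e69" /u01/app/appl/PROD/inst/apps/$CONTEXT_NAME/logs/appl/conc/log/FNDOPP1300518.txt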

From logfile :

"1300518:RT50811221" daemon prio=10 tid=0xf76f3400 nid=0x1e69 runnable [0x6d883000]
  java.lang.Thread.State: RUNNABLE
    at oracle.xdo.parser.v2.XSLTContext.reset(XSLTContext.java:346)
    at oracle.xdo.parser.v2.XSLProcessor.processXSL(XSLProcessor.java:285)
    at oracle.xdo.parser.v2.XSLProcessor.processXSL(XSLProcessor.java:155)
    at oracle.xdo.parser.v2.XSLProcessor.processXSL(XSLProcessor.java:192)
    at sun.reflect.GeneratedMethodAccessor389.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at oracle.apps.xdo.common.xml.XSLT10gR1.invokeProcessXSL(XSLT10gR1.java:677)
    at oracle.apps.xdo.common.xml.XSLT10gR1.transform(XSLT10gR1.java:425)
    at oracle.apps.xdo.common.xml.XSLT10gR1.transform(XSLT10gR1.java:244)
    at oracle.apps.xdo.common.xml.XSLTWrapper.transform(XSLTWrapper.java:182)
    at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:1044)
    at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:997)
    at oracle.apps.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:212)
    at oracle.apps.xdo.template.FOProcessor.createFO(FOProcessor.java:1665)
    at oracle.apps.xdo.template.FOProcessor.generate(FOProcessor.java:975)
    at oracle.apps.xdo.oa.schema.server.TemplateHelper.runProcessTemplate(TemplateHelper.java:5936)
    at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3459)
    at oracle.apps.xdo.oa.schema.server.TemplateHelper.processTemplate(TemplateHelper.java:3548)
    at oracle.apps.fnd.cp.opp.XMLPublisherProcessor.process(XMLPublisherProcessor.java:311)
    at oracle.apps.fnd.cp.opp.OPPRequestThread.run(OPPRequestThread.java:184)

Request id : 50811221

From logfile :

[4/13/17 11:45:37 AM] [OPPServiceThread1] Post-processing request 50811221.
[4/13/17 11:45:37 AM] [1300518:RT50811221] Executing post-processing actions for request 50811221.
[4/13/17 11:45:37 AM] [1300518:RT50811221] Starting XML Publisher post-processing action.
[4/13/17 11:45:37 AM] [1300518:RT50811221]
Template code: INVARAAS_XML
Template app: INV
Language:   en
Territory:   US
Output type:  EXCEL

Template Name : INVARAAS_XML

Sunday, June 11, 2017

ORA-01078 ORA-29701: unable to connect to Cluster Synchronization Service

SQL> startup
ORA-01078: failure in processing system parameters
ORA-29701: unable to connect to Cluster Synchronization Service
SQL>

$ cd $ORACLE_HOME
$ cd bin
$ ./crsctl status resource ora.cssd
NAME=ora.cssd
TYPE=ora.cssd.type
TARGET=ONLINE
STATE=OFFLINE

$ ./crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....DATA.dg ora....up.type ONLINE    OFFLINE
ora....ER.lsnr ora....er.type ONLINE    ONLINE    nsml...db01
ora....DATA.dg ora....up.type ONLINE    OFFLINE
ora....DATA.dg ora....up.type ONLINE    OFFLINE
ora.asm        ora.asm.type   OFFLINE   OFFLINE
ora.cssd       ora.cssd.type  ONLINE    OFFLINE
ora.diskmon    ora....on.type OFFLINE   OFFLINE
ora.evmd       ora.evm.type   ONLINE    ONLINE    nsml...db01
ora.ons        ora.ons.type   OFFLINE   OFFLINE
$ ./crsctl start resource ora.asm
CRS-2672: Attempting to start 'ora.cssd' on '*****'
CRS-2672: Attempting to start 'ora.diskmon' on '*****'
CRS-2676: Start of 'ora.diskmon' on '*****' succeeded
CRS-2676: Start of 'ora.cssd' on '*****' succeeded
CRS-2672: Attempting to start 'ora.asm' on '*****'
CRS-2676: Start of 'ora.asm' on '*****' succeeded
CRS-2672: Attempting to start 'ora.*****.dg' on '*****'
CRS-2672: Attempting to start 'ora.*****.dg' on '*****'
CRS-2672: Attempting to start 'ora.*****.dg' on '*****'
CRS-2676: Start of 'ora.*****.dg' on '*****' succeeded
CRS-2676: Start of 'ora.*****.dg' on '*****' succeeded
CRS-2676: Start of 'ora.*****.dg' on '*****' succeeded
$ sqlplus '/as sysasm'

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jun 11 08:58:46 2017

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Automatic Storage Management option

SQL>
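At this point it can be worth re-checking the resource status with the same crsctl command as above to confirm that ora.cssd is now ONLINE before retrying the database startup; a hedged sketch of the expected state:

$ ./crsctl status resource ora.cssd
NAME=ora.cssd
TYPE=ora.cssd.type
TARGET=ONLINE
STATE=ONLINE on <hostname>

With Cluster Synchronization Services running, the earlier "SQL> startup" should no longer fail with ORA-29701.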

Saturday, February 4, 2017

ADOP (Application DBA Online Patching) Tool

How does ADOP work?

The online patching cycle consists of five phases which are executed in order. Example of a typical online patching cycle:
source <EBS_ROOT>/EBSapps.env run
adop phase=prepare
adop phase=apply patches=123456
adop phase=finalize
adop phase=cutover
source <EBS_ROOT>/EBSapps.env run
adop phase=cleanup
Note that after cutover the command line environment should be re-loaded as the run edition file system has changed.
In a multi-node deployment, adop commands are only executed from the primary node. The primary adop session uses remote execution to automatically perform required actions on any secondary node.
Multiple adop phases can be executed in a single command. Example of combined finalize/cutover/cleanup:
adop phase=finalize,cutover,cleanup
Prior to cutover, it is possible to execute additional “apply” and “finalize” phases as needed. Example of applying multiple patches using separate apply commands:
source <EBS_ROOT>/EBSapps.env run
adop phase=prepare
adop phase=apply patches=123456
adop phase=apply patches=223456
adop phase=finalize
adop phase=apply patches=323456
adop phase=finalize
adop phase=cutover
source <EBS_ROOT>/EBSapps.env run
adop phase=cleanup
Note that it is possible to apply additional patches after running the finalize phase, but if you do so then you will need to run the finalize phase again. Finalize must always be run immediately prior to cutover.

ADOP Common Parameters

workers=<number> [default: computed]
Number of parallel workers used to execute tasks. Default value is computed principally according to number of available CPU cores.
input_file=<file_name>
adop parameters can be specified in a text file, with one <parameter>=<value> pair on each line of the file. Command line parameters override input file parameters.
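As a hedged illustration of the input file format (the file name and values here are hypothetical), a file such as my_adop_params.txt could contain one parameter per line and be referenced from the command line:

patches=123456
workers=8
prompt=no

adop phase=apply input_file=my_adop_params.txt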
loglevel=(statement|procedure|event|warning|error|unexpected) [default: event]
Controls the level of diagnostic log detail displayed on the console output. Each log message is tagged with a level:
1) statement – is only used for debugging.
2) procedure – is only used for debugging high level procedures.
3) event – is used to display informational messages in normal processing. This is the default value.
4) warning – is used to indicate an internal error that is handled by the system and does not affect processing.
5) error – indicates an action failed and will need to be reviewed by the user, but the system was able to continue processing.
6) unexpected – indicates an unrecoverable error that halts processing and requires user intervention before processing can continue.
Setting loglevel will display messages at that level and higher.
prompt=(yes|no) [default: yes]
Specifies whether adop should prompt for user input on warnings. By default adop will ask user whether to continue or exit on some warning messages. If this parameter is set to “no” adop will remain fully non-interactive, and will continue past any warning messages without user confirmation.
Below is the list of diagnostic parameters. Normally these parameters are not used unless directed by Oracle Support:
allowcoredump=(yes|no) [default: no]
Specifies whether adop should create a core dump if it crashes. This option should only be used if directed by support.
analytics=(yes|no) [default: no]
Controls whether adop writes additional reports with information that might be helpful in some diagnostic situations. This option should not be used unless directed by Support.
defaultsfile=<file_name> [default: adalldefaults.txt]
Name of the response file providing default parameter values for non-interactive execution of adadmin and adop. The file must be in the $APPL_TOP/admin/$TWO_TASK directory in both run edition and patch edition file systems. The default file “adalldefaults.txt” is maintained by AutoConfig and normally you should not need to change any values.

ADOP Prepare Phase

The prepare phase creates a new online patching cycle ID and starts by synchronizing the run edition file system into the patch edition file system. This is followed by the creation of a patch edition in the database.
This phase has the following specific parameter:
skipsyncerror=(yes|no) [default: no]
It specifies whether to ignore errors that may occur during incremental file system synchronization. This might happen if you applied a patch in the previous patching cycle that had errors but decided to continue with the cutover. When the patch is synchronized on the next patching cycle, the apply errors may occur again, but can be ignored.
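For example, if the previous cycle was cut over despite known apply errors, the next prepare could be run while ignoring the re-played synchronization errors (a sketch; use only when the errors are understood):

adop phase=prepare skipsyncerror=yes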
After completion of the prepare phase you can start migrating customizations to the patch edition file system, and you can also apply application technology stack patches, i.e. Oracle Home (10.1.2) patches and WebLogic patches. This can be done at any time up to completion of the cutover phase.

ADOP Apply Phase

This is the phase in which patches are actually applied.
This phase has the following specific parameters:
apply=(yes|no) [default: yes]
Controls whether adop actually applies the patch. You can specify “apply=no” to run adop in test mode, where the patch will not actually be applied, and adop will record what it would have done in the log.
patches=<patch1>[,<patch2>,...]
patches=<patch1_dir>:<patch1_driver>[,<patch2_dir>:<patch2_driver>,...]
This parameter specifies a comma-separated list of patches to be applied. Patches can be specified either by patch number or by patch directory and driver file. All patches are expected to be in the $PATCH_TOP directory on all tiers. Patches are applied serially unless the merge=yes parameter is specified.
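As a hedged illustration of the two formats (patch numbers and driver name are examples only):

adop phase=apply patches=123456,223456
adop phase=apply patches=123456:u123456.drv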
patchtop=<path> [default: $PATCH_TOP]
Path to a user-specified directory where patches are unzipped. The default and recommended location is the $PATCH_TOP directory automatically created by the install. When using an alternate patchtop you must ensure that the location is not within the editioned file systems (fs1, fs2) and is accessible by the same path from all nodes of a multi-node deployment.
apply_mode=(online|downtime|hotpatch) [default: online]
Specifies how the patch will be applied. The three options are:
online – apply a patch to the patch edition during an online patching cycle.
downtime – apply a patch to the run edition when application services are down. When using this mode, you only run the apply phase.
hotpatch – apply a patch to the run edition when application services are up. When using this mode, you only run the apply phase.
In downtime mode, adop will validate that application services are shut down before applying the patch. The patch will be applied to the run edition of the system. Downtime mode patching does not use an online patching cycle and therefore cannot be used if there is an online patching cycle in progress. The process of applying a patch in downtime mode completes more quickly than in online mode, but at the cost of increased system downtime.
In hotpatch mode, adop will apply the patch to the run edition of the system while application services are still running. Patches that can be safely applied in hotpatch mode (such as NLS and Online Help patches) will document this in the patch readme. Hotpatch mode cannot be used if there is an online patching cycle in progress.
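Hedged examples of the three modes, using an illustrative patch number:

adop phase=apply patches=123456                        (online mode, inside a prepare/apply/finalize/cutover/cleanup cycle)
adop phase=apply patches=123456 apply_mode=downtime    (application services down, no patching cycle)
adop phase=apply patches=123456 apply_mode=hotpatch    (application services up, no patching cycle)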
merge=(yes|no) [default: no]
Indicates whether adop should merge a list of patches before applying. By default, adop will apply a list of patches serially in the order specified. You can also use AD Merge Patch to merge multiple patches ahead of the apply command.
restart=(yes|no) [default: no]
Use restart=yes to resume the previous failed apply command from where processing terminated. If an apply command fails, check the log files for further information. If the problem can be corrected, you can then restart the apply command where it left off using the restart parameter.
When restarting a failed apply it is important to use the same parameters as the failed command, with only the addition of the restart=yes parameter.
abandon=(yes|no) [default: no]
Use abandon=yes to abandon the previous failed apply command and start a new apply command. Note that any changes made to the system by the failed command will remain in effect. The abandon flag is most useful when applying a replacement patch for the failing patch. If a patch fails to apply and there is no replacement patch, you may also abort the online patching cycle. See abort phase later in this blog.
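For example, if an apply of patch 123456 failed, it could either be resumed with the same parameters plus restart=yes, or abandoned in favour of a replacement patch (patch numbers illustrative):

adop phase=apply patches=123456 restart=yes
adop phase=apply patches=223456 abandon=yes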
options=<option1>[,<option2>,...]
Options can be specified in a comma-separated list to control advanced features when a patch is applied. These options are normally not needed unless specified by documentation or support. Note that these options can be prefixed with “no”, e.g. “nocheckfile”, to disable the behavior, and for some options “no” is the default.
checkfile [default: checkfile] – Skip running exec, SQL, and exectier commands if they are recorded as already run.
compiledb [default: compiledb] – Compile invalid objects in the database after running actions in the database driver.
compilejsp [default: compilejsp] – Compile out-of-date JSP files, if the patch has copy actions for at least one JSP file.
copyportion [default: copyportion] – Run commands found in a copy driver.
databaseportion [default: databaseportion] – Run commands found in a database driver.
generateportion [default: generateportion] – Run commands found in a generate driver.
integrity [default: nointegrity] – Perform patch integrity checking.
autoconfig [default: autoconfig] – Run AutoConfig.
actiondetails [default: actiondetails] – Display details of each action (use noactiondetails to turn off the display).
parallel [default: parallel] – Run actions that update the database or actions that generate files in parallel.
prereq [default: noprereq] – Perform prerequisite patch checking prior to running patch driver files.
validate [default: novalidate] – Connect to all registered Oracle E-Business Suite schemas at the start of patch application.
phtofile [default: nophtofile] – Save patch history to file.
forceapply [default: noforceapply] – Reapply a patch that has already been applied. Useful in combination with “nocheckfile” option to rerun files that have already been executed.
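A hedged example combining options as described above, for instance to force re-execution of a patch that has already been applied (patch number illustrative):

adop phase=apply patches=123456 options=nocheckfile,forceapply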
flags=<flag1>[,<flag2>,...]
Flags can be specified in a comma-separated list to control advanced features when applying a patch. Note that these flags can be prefixed with “no”, e.g. “nologging”, to disable the behavior and for some flags “no” is the default.
hidepw [default: hidepw] – Omit the “HIDEPW:” comments in the log file.
trace [default: notrace] – Log all database operations to a trace file.
logging [default: nologging] – Create indexes in LOGGING or NOLOGGING mode.
autoskip [default: noautoskip] – Proceed with patch execution even if some driver actions fail. Failed actions are recorded in a log file.
preinstall=(yes|no) [default: no]
Allows a patch to be applied to the file system without connecting to the database. Do not use this parameter unless directed by Oracle.
wait_on_failed_job=(yes|no) [default: no]
Controls whether the adop apply command exits when all workers have failed. Instead of exiting, you can force adop to wait and use the "adctrl" utility to retry the failed jobs.
printdebug=(yes|no) [default: no]
Controls whether to display additional debugging information.
uploadph=(yes|no) [default: yes]
Controls whether to upload patch history information to database after applying the patch.

ADOP Finalize Phase

The finalize phase is performed to get the system ready for the cutover phase. It performs various activities such as:
1. Compiling invalid objects
2. Generating derived objects
3. Pre-computing DDL to be run during cutover
The finalize phase has the following specific parameter:
finalize_mode=(full|quick) [default: quick]
Quick mode will provide the shortest execution time, by skipping non-essential actions. Full mode performs additional actions such as gathering statistics that may improve performance after cutover.
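For example, to spend the extra time gathering statistics before cutover:

adop phase=finalize finalize_mode=full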

ADOP Cutover Phase

The cutover phase performs the following activities:
1. Bring down application services.
2. Promote the patch file system to the run file system.
3. Promote the patch database edition to the run database edition.
4. Perform maintenance tasks.
5. Bring up application services.
The cutover phase has the following specific parameters:
mtrestart=(yes|no) [default: yes]
Specifies whether to restart application tier servers after cutover. Leave at default unless you need to perform any manual steps during downtime.
cm_wait=<minutes> [default: forever]
Specifies the number of minutes to wait for the Concurrent Manager to shut down. Adop cutover starts by requesting a Concurrent Manager shutdown and then waits for in-progress requests to complete. If the Concurrent Manager does not shut down within the specified time limit, the remaining concurrent requests will be killed and cutover will proceed.
Note that any concurrent requests killed during forced shutdown may need to be manually re-submitted after cutover. To avoid killing concurrent requests, schedule cutover at a time of minimal user activity or manually shutdown Concurrent Manager in advance of cutover.
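A hedged example that limits the Concurrent Manager wait to 15 minutes and skips the automatic restart of application tier services (values illustrative):

adop phase=cutover cm_wait=15 mtrestart=no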

ADOP Cleanup Phase

This phase cleans up the application and database in preparation for the next patching cycle.
The cleanup phase has the following specific parameter:
cleanup_mode=(full|standard|quick) [default: standard]
Quick mode provides the shortest execution time, by skipping non-essential actions. Standard mode performs additional processing to drop obsolete code objects from old editions. Full mode performs additional processing to drop empty database editions and unused table columns.
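For example, when the next patching cycle needs to start as soon as possible, the quicker mode can be used and a full cleanup deferred to a later cycle:

adop phase=cleanup cleanup_mode=quick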

Cloning the Patch Edition File System

The patch edition file system is normally synchronized with the run edition file system during the prepare phase. There are some cases where it is helpful or required to manually re-clone the patch edition file system from the run edition.
1) After aborting an online patching cycle.
2) After manually changing the run edition file system.
3) After patching middle-tier technology components.
4) After applying an EBS RUP.
By re-cloning the patch edition file system, you can be certain that it is correctly synchronized, and also minimize any synchronization delay that would normally occur on the next prepare command. This can be done with the command below:
adop phase=fs_clone
If there is an error, examine the log files and correct the problem, then restart fs_clone by running the command again. Use the command below if fs_clone does not restart correctly and you want to force the process to restart from the beginning.
adop phase=fs_clone force=yes

Aborting an online patching cycle

If an online patching cycle encounters problems that cannot be fixed immediately, you can abort the patching cycle and return to normal runtime operation. An online patching cycle can be aborted as below:
adop phase=abort
Note that once the cutover phase is complete, the patching cycle can no longer be aborted.
The abort command drops the database patch edition and returns the system to normal runtime state. Immediately following abort, you must also run a full cleanup and fs_clone operation to fully remove effects of the failed online patching cycle.
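Putting that together, a typical recovery sequence after abandoning a failed cycle looks like the sketch below:

adop phase=abort
adop phase=cleanup cleanup_mode=full
adop phase=fs_clone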

Dropping old database editions

As online patching cycles are completed, the system will build up a number of old database editions. When the number of old database editions reaches about 25, you should consider running a special maintenance operation to drop them. This can be done as below:
adop phase=prepare
adop phase=actualize_all
adop phase=finalize
adop phase=cutover
adop phase=cleanup cleanup_mode=full
This maintenance operation will take much longer than a typical online patching cycle, and should only be performed when there is no immediate need to start a new online patching cycle. The actualize_all and full cleanup can be run separately as shown above, or in conjunction with an online patching cycle.

Log File Location

The adop log files are located on the non-editioned file system (fs_ne), under:
$NE_BASE/EBSapps/log/adop/<adop_session_id>/<phase>_<date>_<time>

Session

The adop utility maintains a session for each online patching cycle. A new session is created when you run the prepare phase. Each session is given a numeric ID number. The session is used to maintain the state of the online patching cycle across the various adop phases and commands. You can only run one adop session at a time on a particular Oracle E-Business Suite system.
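To check which adop session is currently active and which phases have already been completed, the status report can be used (a hedged sketch; the exact output layout varies by release):

adop -status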