Feed aggregator

Date format

Tom Kyte - Mon, 2019-03-11 21:06
Hi, I have data in the format 2019-02-21T13:58:18.000+00:00 in a varchar2 column (loaded into a staging table from a CSV file). When merging this staging table I need to put this value into a DATE data type column. How to do this? Format of the dat...
Categories: DBA Blogs
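One way to handle this (a sketch; the format mask is an assumption based on the sample value shown in the question):

```sql
-- Parse the time-zone-aware string with TO_TIMESTAMP_TZ, then CAST to DATE;
-- the DATE keeps the time of day but drops fractional seconds and the zone.
select cast(
         to_timestamp_tz('2019-02-21T13:58:18.000+00:00',
                         'YYYY-MM-DD"T"HH24:MI:SS.FF3TZH:TZM')
         as date) as dt
from dual;
```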

TO_DATE() on SYSDATE failed in insert all query.

Tom Kyte - Mon, 2019-03-11 21:06
Hello, Yesterday I used an insert all query to enter 8 rows into two tables directly from Toad. At that time I used TO_DATE(SYSDATE, 'DD-MON-YYYY HH24:MI') for inserting sysdate into my table (I know sysdate was enough but 'coz of since last four m...
Categories: DBA Blogs
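For context, SYSDATE is already a DATE, so wrapping it in TO_DATE() forces an implicit conversion to a string using the session's NLS_DATE_FORMAT (often DD-MON-RR, with no time component) and then a parse with the supplied mask, which can silently lose the time of day or raise an error when the two formats disagree. A sketch with hypothetical tables:

```sql
-- Fragile: to_date(sysdate, 'DD-MON-YYYY HH24:MI') depends on NLS settings.
-- SYSDATE needs no conversion at all:
insert all
  into t_a (created_dt) values (sysdate)
  into t_b (created_dt) values (sysdate)
select * from dual;
```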

Hardening and Securing The Oracle Database Training in London

Pete Finnigan - Mon, 2019-03-11 21:06
I posted last week that I will teach my two day class " How to Perform a Security Audit of an Oracle Database " with Oracle University in London on the 29th and 30th April 2019. We have now added....[Read More]

Posted by Pete On 11/03/19 At 11:52 AM

Categories: Security Blogs

Schedule Management for Oracle Scheduler Integration

Anthony Shorten - Mon, 2019-03-11 11:15

One of the most common questions I get from my product users is how to manage batch schedules when using the Oracle Scheduler Integration.  As Oracle Scheduler is part of the Oracle Database, Oracle provides a number of ways of managing your schedule:

  • Command Line. If you are an administrator who manages the database using commands and PL/SQL calls, you can use the DBMS_SCHEDULER interface directly from any SQL tool. You have full access to the scheduler objects.
  • Oracle SQL Developer. The latest versions of Oracle SQL Developer include capabilities to manage your schedule directly from that tool. The advantage of this is that the tool supports techniques such as drag and drop to simplify the management of scheduler objects. For example, you can create a chain and then drop the programs into the chain and "wire" them together. This interface generates the direct DBMS_SCHEDULER calls to implement your changes. Refer to the Oracle SQL Developer documentation for details of maintaining individual scheduler objects. For example:

SQL Developer Interface

  • Oracle Enterprise Manager. From Oracle Database 12c onwards, Oracle Enterprise Manager automatically includes DBA functions and is the recommended tool for all database work. Most DBAs will use this capability to manage the database. This includes Oracle Scheduler management. For example:

Enterprise Manager Interface

Implementations have a range of options for managing their schedule. Customers on the cloud use the Oracle Utilities Cloud Service Foundation to manage their schedule through an interface similar to Enterprise Manager, via our Scheduler API.
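As an illustration of the command-line option, a minimal DBMS_SCHEDULER sketch (the job name, procedure and calendar string are hypothetical):

```sql
begin
  dbms_scheduler.create_job(
    job_name        => 'NIGHTLY_BATCH',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'BATCH_OWNER.RUN_NIGHTLY',
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => true);
end;
/
```

The graphical tools above ultimately generate calls like this one.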


HTTPOnly Cookie Flag Now Available for EBS 12.2

Steven Chan - Mon, 2019-03-11 08:55

We are pleased to announce an enhancement to Oracle E-Business Suite security whereby the HTTPOnly cookie flag is set automatically for the EBS session cookie (sometimes also called ICX session cookie) when the requirements listed below are met.  Setting the HTTPOnly cookie flag provides additional security by concealing the cookie from client-side scripts. 
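For illustration only (this is not the literal EBS header), an HTTPOnly cookie is one whose Set-Cookie response header carries the HttpOnly attribute, which tells the browser to withhold the cookie from document.cookie and other client-side script access:

```
Set-Cookie: SESSION_COOKIE=<opaque value>; path=/; Secure; HttpOnly
```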

Requirements

This feature is automatically available to all EBS 12.2.x customers who have met the following requirements:

References
Related Articles
Categories: APPS Blogs

sys_op_lbid

Jonathan Lewis - Mon, 2019-03-11 08:23

I’ve made use of the function a few times in the past, for example in this posting on the dangers of using reverse key indexes, but every time I’ve mentioned it I’ve only been interested in the “leaf blocks per key” option. There are actually four different variations of the function, relevant to different types of index and controlled by setting a flag parameter to one of 4 different values.

The call to sys_op_lbid() takes three parameters: an index (or index [sub]partition) object id, a flag value, and a table “rowid”, where the flag value can be one of L, R, O, or G. The variations of the call are as follows:

  • L – the function will return the row directory address (i.e. something that looks like a rowid) of the first index entry in the leaf block that holds the index entry for the referenced table rowid. The effect of this is that the number of distinct values returned by calling the function for every row in the table is equal to the number of index leaf blocks which currently hold an active entry.
  • R – Relevant only to bitmap indexes; the function will return the row directory address of the bitmap index entry for the referenced table rowid. The effect of this is that the number of distinct values returned by calling the function for every row in the table is equal to the number of index entries in the bitmap index.
  • O – Relevant only to the primary key index of an index organized table with an overflow. The function is used with a non-key column instead of a rowid and returns a rowid that corresponds to the row directory entry in the overflow segment. An interesting detail of the overflow entries is that there is an “nrid” (next rowid) pointer in the primary key index entry that does not get deleted when all the columns in the related overflow entry are set null – so you can delete all the data from the overflow (set every overflow column in every row to null) and the primary key clustering factor would not change.
  • G – Relevant only to secondary indexes on an index organized table. Like the L and R options this function takes a rowid (which is a special case for IOTs) as one of its inputs and uses the block guess from the secondary index to construct a row directory entry for the first entry in the primary key leaf block that corresponds to that block guess. This serves two purposes – it allows Oracle to calculate the clustering factor of the secondary index (as you walk the secondary index in order, how much do you jump around the leaf blocks of the primary key?), and it allows Oracle to produce the pct_direct_access figure for the secondary index by joining the secondary index to the primary key index on primary key, and comparing the ‘G’ result for the secondary with the ‘L’ result from the primary, which gives a count of the number of times the guess is correct.

These observations can be confirmed by gathering stats on different structures with trace enabled, and doing a couple of block dumps. For reference the following is just a simple script to create an index organized table with overflow and secondary index:


rem
rem     Script:         sys_op_lbid_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem

create table t1(
        id      constraint t1_pk primary key,
        v1      ,
        v2      ,
        v3      ,
        padding 
)
organization index
pctthreshold 2
overflow
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum,
        lpad(rownum,30),
        lpad(rownum,30),
        lpad(rownum,40),
        rpad('x',100,'x')
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

create index t1_i1 on t1(v3);

alter session set sql_trace = true;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                method_opt  => 'for all columns size 1'
        );
end;
/

alter session set sql_trace = false;

select
        object_id, data_object_id, object_name
from
        user_objects
order by
        object_id
;

The significance of the query for object_id and data_object_id shows up in the trace file (and subsequent dumps) when Oracle uses one or other of the values in its SQL and rowid construction.

Here are the interesting SQL statements generated as the stats are gathered – but cosmetically altered to be reader-friendly. In order they are:

  1. Stats for the primary key of the IOT: using the ‘L’ option for counting leaf blocks and the ‘O’ option for the clustering factor into the overflow segment.
  2. Stats for the secondary index of the IOT: using the ‘L’ option for counting leaf blocks and the ‘G’ option for the clustering factor into the primary key index.
  3. Calculating pct_direct_access: the ‘L’ option gives the actual leaf block in the primary key index, the ‘G’ option gives the leaf block guessed by the secondary index.

select 
        /*+ index(t,t1_pk) */ 
        count(*) as nrw,
        count(distinct sys_op_lbid(351334,'L',t.rowid)) as nlb,
        null as ndk,
        sys_op_countchg(sys_op_lbid(351334,'O',V1),1) as clf
from
        t1 t 
where 
        id is not null
;


select 
        /*+ index(t,t1_i1) */ 
        count(*) as nrw,
        count(distinct sys_op_lbid(351335,'L',t.rowid)) as nlb,
        null as ndk,
        sys_op_countchg(sys_op_lbid(351335,'G',t.rowid),1) as clf
from
        t1 t 
where 
        v3 is not null
;


select
        case when count(*) = 0
                then 100
                else round(
                        count(
                                case when substr(gdba,7,9)=substr(lbid,7,9)
                                        then 1
                                        else null
                                end
                        )/count(*)*100
                )
        end
from    (
        select
                /*+
                        ordered
                        use_hash(i.t1 t2)
                        index_ffs(t2,t1_pk)
                */
                sys_op_lbid(351334,'L',t2.rowid) lbid,
                gdba
        from (
                select
                        /*+ index_ffs(t1,t1_i1) */
                        sys_op_lbid(351335,'G',t1.rowid) gdba,
                        t1.ID
                from
                        t1 t1
                ) i,
                t1 t2
        where
                i.id = t2.id
        )
;

The strange substr(,7,9) that appears in the join between the primary key index and the secondary index is needed because the ‘G’ option uses the object_id of the table to turn an absolute block guess into a rowid while the ‘L’ option is using the data_object_id of the primary key index to turn its block address into a rowid. (This means there may be variants of this SQL for IOTs using partitioning.)


Database operations monitor 12c

Tom Kyte - Mon, 2019-03-11 02:46
Team, I was reading about the Real-Time Operations Monitor in Oracle 12c at https://docs.oracle.com/en/database/oracle/oracle-database/12.2/tgsql/monitoring-database-operations.html#GUID-DC573FB7-40C5-4D6D-BE81-EF356900B444 and here is a quick...
Categories: DBA Blogs

SQL fetch phase need library cache lock/pin

Tom Kyte - Mon, 2019-03-11 02:46
Hi, thanks for all your work here, I've learned a lot! Now I have some questions on the SQL fetch phase and library cache lock/pin which have stuck me for days. Here is something I found on a website: SELECT statement follow...
Categories: DBA Blogs

Need to find the Oracle EBS Version 12 Data Model

Tom Kyte - Mon, 2019-03-11 02:46
Need to find the Oracle EBS Version 12 Data Model
Categories: DBA Blogs

Information related to AWR/ASH

Tom Kyte - Mon, 2019-03-11 02:46
Dear Sirs, Please help me with a good link or book for AWR/ASH report analysis and various wait events, including RAC. I also want to understand rollback segment header contention in Oracle. Please help me to sort this. Oracle version: 12.2....
Categories: DBA Blogs

Oracle 19c Automatic Indexing - How well it's understood?

Syed Jaffar - Sun, 2019-03-10 13:05
I wrote an article about Oracle 19c Automatic Indexing. Go to the below link for more details:

https://medium.com/@sjaffarhussain/oracle-19c-automatic-indexing-how-well-its-understood-1798d1b3d097

Jamie Loeb, JC Aragone Receive Annual Oracle US Tennis Awards

Oracle Press Releases - Sat, 2019-03-09 14:00
Press Release
Jamie Loeb, JC Aragone Receive Annual Oracle US Tennis Awards
Former collegians win grants to aid career development

INDIAN WELLS, Calif.—Mar 9, 2019

Oracle CEO Mark Hurd, along with tennis Hall-of-Famer Chris Evert, announced today that tennis professionals Jamie Loeb and JC Aragone are the recipients of the third annual Oracle US Tennis Awards.

Loeb and Aragone each will receive a $100,000 grant to help develop their tennis careers. The two young professionals were selected by a six-member advisory council comprised of former tennis professionals and executives.  The Intercollegiate Tennis Association, the governing body of college tennis, will administer the grants.

Previous winners were Danielle Collins and Mackenzie McDonald in 2017, and Francesca Di Lorenzo and Chris Eubanks in 2018.

“JC and Jamie are talented young players with bright futures,’’ said Oracle’s Hurd. “We created these awards to help young players with college experience as they transition into professional tennis. We’re very proud of the success of our previous award winners, and hope that these grants help Jamie and JC in their continued development.’’

Loeb, 24, turned professional in 2015 after playing two years at the University of North Carolina, where she won the 2015 NCAA Division I Tennis Championships. She amassed an 84-9 singles record while at UNC. Loeb, a native of New York, has won seven ITF singles events and six doubles events since 2012. As a junior player, Loeb won the singles and doubles 18s championships at the 2012 USTA National Winter Championship. She currently has a UTR of 12.13 and is ranked No. 219 on the WTA tour.

“This means so much to me; I was in absolute shock when I heard,’’ Loeb said. “The success of past recipients proves how big of an impact an award like this can have. It’s great that Oracle recognizes the challenges we face. I am beyond excited to get to work!’’

Juan Cruz “JC” Aragone, 23, turned pro in 2017 after playing tennis at the University of Virginia.  A California native, Aragone was a member of three teams that won the NCAA Men’s Tennis Championships. His collegiate record was 109-22, and he was named to the NCAA All-Tournament team twice in singles and doubles. Aragone has won two ITF singles titles and one doubles title. He currently has a UTR of 14.69 and is No. 243 in the ATP rankings.

“This is such an honor,’’ Aragone said. “Having the right support and coaching makes a huge impact on a tennis player’s career. Oracle’s generosity will allow me to use that help to make it to the next level.’’

All four previous award winners have found professional success and improved their rankings since receiving the award. Collins, ranked No. 237 when she received the award, today is ranked No. 25. McDonald today is ranked No. 62, up from No. 257 in 2017.  Eubanks is No. 154, compared to 284 when he received the award a year ago. Francesca Di Lorenzo is up from No. 259 to No. 167.

Lindsay Davenport, on behalf of the advisory council, congratulated Loeb and Aragone and applauded Oracle for its continuing commitment to tennis.

“Jamie and JC are exciting players and the advisory council is confident these grants will help them reach their potential,’’ Davenport said. “We’re all grateful to Oracle for these awards and everything else it is doing to support American tennis.’’

The Oracle US Tennis Awards Advisory Council includes:

  • Lindsay Davenport: Former singles world No. 1; 1996 Olympic gold medalist; current Tennis Channel analyst.
  • Ilana Kloss: Former singles world No. 19; former CEO and Commissioner of World Team Tennis.
  • Peggy Michel: Three-time grand slam doubles champion; played college tennis at Arizona State; current Assistant Tournament Director & Vice President of Sales and Sponsorship, BNP Paribas Open.
  • Dr. Timothy Russell: CEO, ITA; college educator for three decades.
  • Martin Blackman: General Manager, USTA Player Development; played college tennis at Stanford.
  • Todd Martin: Former singles world No. 4; CEO, International Tennis Hall of Fame and Tournament Director, Dell Technologies Hall of Fame Open; played college tennis at Northwestern University.
Contact Info
Deborah Hellinger
Oracle
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Stop The DBA Reading Data in Subtle Ways

Pete Finnigan - Fri, 2019-03-08 18:46
The Problem: Dan asked me a question about whether the DBA can be stopped from accessing views such as V$SQL or V$SQL_BIND_CAPTURE with Database Vault because these views can be used to read data from the SGA. I have covered....[Read More]

Posted by Pete On 08/03/19 At 03:41 PM

Categories: Security Blogs

Effective PeopleSoft Performance Monitoring

David Kurtz - Fri, 2019-03-08 12:00
This advice note describes how to configure PeopleSoft systems on Oracle so that performance metrics are collected that are useful for performance monitoring and subsequent performance tuning.
Contents
  • Oracle RDBMS Instrumentation
    • On-Line Component/Page Information
    • Application Engine Step Information
    • Cobol/nVision Instrumentation
    • nVision Layout Instrumentation
    • 2nd Database Connection Instrumentation
  • PeopleTools Performance Metrics
    • Cobol and Application Engine Batch Timings
    • PeopleSoft Performance Monitor
Summary of Recommendations
  • Set EnableAEMonitoring=1 in all Application Server and process scheduler domains in order to enable PeopleSoft instrumentation on-line, in the integration broker, and in Application Engine programs.
  • Implement the PSFTAPI package and trigger described below to set module and action at the start of all processes.
  • Implement Fine-Grained Audit policy and handler to instrument nVision processes if required.
  • Reduce AWR snapshot frequency from the default of 1 hour to every 15 minutes.
  • Enable Application Engine batch timings to at least database and preferably also file by setting AETrace=1152 in all Application Server and Process Scheduler domain configuration files.
  • Ensure the Cobol statement timings report is written to the log file by setting TraceSQL=128 in all Process Scheduler domain configuration files.
  • Performance Monitor is complex to set up and the delivered analytics are limited.  Nonetheless, the information that can be obtained from the data collected can be worth the effort of configuration to address on-line configuration and performance issues.
Oracle RDBMS Instrumentation

Oracle provides the dbms_application_info package to set certain attributes on the current database session.  These attributes are visible in some of the dynamic performance views, are picked up by Active Session History (ASH), and can also be seen in AWR reports, Enterprise Manager screens, SQL trace files, and other performance utilities such as EDB360.  The package was first documented in Oracle 7.3.3:
"Application developers can use the DBMS_APPLICATION_INFO package to record the name of the executing module or transaction in the database for use later when tracking the performance of various modules… System administrators can also use this information to track resource usage by module. When an application registers with the database, its name and actions are recorded in the V$SESSION and V$SQLAREA views. Your applications should set the name of the module and name of the action automatically each time a user enters that module. The module name could be the name of a … code segment in an … application. The action name should usually be the name or description of the current transaction within a module." Oracle 7 Tuning, release 7.3.3 ©Oracle 1997, Chapter 23 Registering Applications See also One of my Favourite Database Things: DBMS_APPLICATION_INFO
If module and action are consistently set to meaningful values as the application executes it is then possible to determine from where SQL statements originated and how different parts of the application are performing.
ASH is separately licenced as part of the Diagnostics Pack, which is only available on the Enterprise Edition of the database.  However, most PeopleSoft customers running on Oracle do so on Enterprise Edition and are licenced for the Diagnostics Pack.
PeopleSoft has set client_info since PeopleTools 7.53 so that the PeopleSoft operator ID can be associated with the database session, mainly to allow auditing to be done with database triggers.  However, this attribute is not persisted to the ASH data.  Client_id is also set to the operator ID, and this is collected by ASH.
PeopleTools has set module and action since PeopleTools 8.50, though the exact values used have changed sometimes with the PeopleTools version.
A new application server/process scheduler domain parameter, EnableAEMonitoring, was introduced in PeopleTools 8.54 to control this behaviour; it is not enabled by default.  If monitoring is not enabled, module defaults to the program name and action remains blank.
Recommendation: Set EnableAEMonitoring=1 in all application server and process scheduler domains in order to enable PeopleSoft instrumentation on-line, in the integration broker, and in Application Engine programs.
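As a sketch, the setting goes in the domain configuration files (the section name shown is an assumption; verify against your PeopleTools version, and reconfigure the domain after changing it):

```
; psappsrv.cfg / psprcs.cfg
[PSTOOLS]
EnableAEMonitoring=1
```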
On-Line Component/Page Information

In the online application, module and action are set to the component and page name respectively.  In a search dialogue, Action is set to 'xyzzy'.
In the Integration Broker module and action are set to service name and queue name.
Application Engine Step Information

In Application Engine, module is set to a string that includes the name of the Application Engine main program that was called and the session ID number for the current process instance recorded on the process scheduler request table PSPRCSRQST.  For example: PSAE.PSPMCSOSUM.1448
Later on, it may be necessary to use regular expressions in SQL to process this string before profiling the ASH data.
Action is set to string concatenated from the Application Engine program, section, step name and step type.  For example: PSPMCSOSUM.GETCNT.CNT.P
The program name may be different from that shown in module if one AE program calls another.
Note: Application Engine still doesn't reset ACTION on DO SELECT steps
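When profiling the ASH data, the module string above can be split back into its parts with REGEXP_SUBSTR, for example (querying this view requires the Diagnostics Pack licence):

```sql
select module,
       regexp_substr(module, '[^.]+', 1, 2)            as ae_program,
       to_number(regexp_substr(module, '[^.]+', 1, 3)) as process_instance
from   v$active_session_history
where  module like 'PSAE.%';
```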
Cobol/nVision Instrumentation

Cobol, nVision and SQR do not set module and action.  Instead, they can be set at the start of every program initiated by the Process Scheduler using a trigger on PSPRCSRQST.  The first thing a process does when it is initiated is to set the run status on its scheduler request record to 7, indicating that it is processing.  This is visible in the Process Monitor component.  A trigger on this transition can set module and action for the session to the program name and process instance number on that row of data.
This technique was used prior to PeopleSoft adding instrumentation to PeopleTools.  It can still be applied to all processes, including Application Engine, because any PeopleSoft instrumentation will simply overwrite the value set by the trigger.
Recommendation: Implement the PSFTAPI package and trigger described above in order to set module and action at the start of all processes.
The same technique was also used prior to the introduction of ASH to enable Oracle SQL Trace if a particular run control was specified, and is still occasionally useful.
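A heavily simplified sketch of the trigger technique (the full PSFTAPI package referred to above does rather more; the column names are those of the delivered PSPRCSRQST record):

```sql
create or replace trigger sysadm.psprcsrqst_module_action
before update of runstatus on sysadm.psprcsrqst
for each row
when (new.runstatus = '7' and old.runstatus != '7')
begin
  -- Tag the session with the process name and instance so ASH/AWR can
  -- attribute its activity, even for Cobol, SQR and nVision processes.
  dbms_application_info.set_module(
    module_name => :new.prcsname,
    action_name => 'PI='||:new.prcsinstance);
end;
/
```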
nVision Layout Instrumentation

One of the challenges of tuning and monitoring nVision is being able to identify each report being run.  nVision reports always run under the same process name: NVSRUN for a single report, RPTBOOK for a report book of many reports, or DRILLDWN for an nVision drill-down query.  Knowing the process instance is useful because then we can look up the operator and run control ID.
However, it would also be useful to know the report ID being run.  When each individual nVision report starts it queries the runtime parameters from the PS_NVS_REPORT PeopleTools table.  There is no update, so it is not possible to capture this with a DML trigger.  Instead, it is possible to define a fine-grained audit policy on the query and set module within a PL/SQL handler package that is invoked by the fine-grained audit.
Recommendation: Implement a Fine-Grained Audit policy and handler to instrument nVision processes if required.
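A sketch of such a policy (the handler package NVS_HANDLER is hypothetical; it would call dbms_application_info.set_module with the report details):

```sql
begin
  dbms_fga.add_policy(
    object_schema   => 'SYSADM',
    object_name     => 'PS_NVS_REPORT',
    policy_name     => 'NVS_REPORT_INSTRUMENT',
    handler_schema  => 'SYSADM',
    handler_module  => 'NVS_HANDLER.SET_MODULE',
    statement_types => 'SELECT');
end;
/
```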
2nd Database Connection Instrumentation

PeopleSoft programs use a second database connection to increment sequence numbers, minimising row-level locking on such tables.  It is like an AUTONOMOUS_TRANSACTION in PL/SQL.  There is no PeopleSoft instrumentation on this session.  It is possible to use an AFTER LOGON trigger to set client_info, module and action.
Oracle Automatic Workload Repository (AWR) Snapshots

PeopleSoft generates a lot of non-shareable SQL.
  • Dynamically generated SQL, often in PeopleCode, concatenates strings of SQL with bind variables, thus the bind variables become literals in the final SQL statement.  Statements with different literal values are considered to be different statements with different SQL_IDs.
  • Similarly, dynamic Cobol statements result in literal values in the SQL statement.
  • %BIND() variables in Application Engine will also become literal values in the SQL statement unless the ReUseStatement attribute is set on the AE step; however, this cannot be set on statements where dynamic fragments of code are introduced with %BIND(…,NOQUOTES).
  • Application Engine programs that use temporary records can use different non-shared instances of the record in different executions, and this also results in different statements with different SQL_IDs.
  • See also Performance Benefits of ReUse Statement Flag in Application Engine
Consequently, the library cache is not used particularly efficiently by PeopleSoft, and dynamic SQL statements are often quickly aged out of it.  AWR snapshots can only capture the SQL that is in the library cache at the time of the snapshot.  If the SQL statement, or at least a statement with the same force matching signature or plan hash value, cannot be found in the AWR then it cannot be identified or analysed.  Therefore, it is advantageous to increase the snapshot frequency on PeopleSoft systems.
Recommendation: Reduce AWR snapshot frequency from the default of 1 hour to every 15 minutes.  This change results in only a modest increase in processing overhead and space usage in the AWR, and it is worth it for the additional information that is captured.
This advice also applies to Statspack, which may be used if you are not licenced for the Diagnostics Pack.
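The snapshot interval can be changed with a single call (the interval is in minutes; retention is left unchanged here):

```sql
begin
  dbms_workload_repository.modify_snapshot_settings(interval => 15);
end;
/
```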
PeopleTools Performance Metrics

Batch Timings

PeopleSoft Application Engine and Cobol programs can emit batch timings reports on successful completion.
Application Engine

Application Engine batch timings are controlled by the AETrace flag in the Process Scheduler domain configuration file and, for on-line AE programs, in the Application Server domain configuration files.
  • AETrace=128: the batch timings report is written to the AE trace file
  • AETrace=1024: batch timings are written to PS_BAT_TIMINGS% tables in the database
The overhead of batch timings is negligible while the program is running because the timings are accumulated in memory and only written to file or database when the process completes successfully.
Recommendation: Enable Application Engine batch timings to at least database and preferably also file by setting AETrace=1152 in all Application Server and Process Scheduler domain configuration files.
The trace setting in the Process Scheduler configuration can be overridden by setting process-specific command line parameter overrides in the process definition.  This is often done to set other trace settings; it is also common to see these unintentionally left in place longer than necessary.  If trace is set in this way it should always also set the batch timings flags.
See PeopleBooks -> Development Tools -> Application Engine -> Enabling Application Engine Tracing
Cobol

PeopleSoft Cobol programs can only write batch timings reports to file, not to the database.  This is controlled by a different parameter.
  • TraceSQL = 128: Enable Cobol statement timings report 
Recommendation: Ensure the Cobol statement timings report is written to the log file by setting TraceSQL=128 in all Process Scheduler domain configuration files.
This trace setting can also be overridden by setting process specific command line parameter overrides in the process definition.  If trace is set in this way it should always also set the batch timings flags.
PeopleSoft Performance Monitor

This provides information about the performance of the PIA, including response times for the online transactions.  Metrics are stored in a separate monitoring PeopleSoft system to minimize the effect of measurement intrusion.  It optionally samples the state of each web server, application server and process scheduler, collecting operating system and Tuxedo metrics.  It also has a PIA session trace capability.
The sampled data includes the number of busy application server processes and length of inbound Tuxedo service queues.  This data can be used to validate the sizing of the application servers.
Recommendation: Performance Monitor is complex to set up and the delivered analytics are limited.  Nonetheless, the information that can be obtained from the data collected can be worth the effort of configuration to address on-line configuration and performance issues.

How to patch your ODA lite to 18.3.0.0.0

Yann Neuhaus - Fri, 2019-03-08 11:00

Even if you don’t need 18c, or you’re not ready for this release, this patch will also update your 11gR2, 12cR1 and 12cR2 databases to the latest PSU available on ODA, for example the patchset from last July if you’re using 12cR1. Here is how to apply this latest patch to your ODA lite. In this example, the patch was applied on an X6-2S ODA running the previous release, 12.2.1.4.0.

1) Download the patch

The patch number is 28864490. As usual, this patch will update the following components: dcs (odacli), operating system, BIOS/firmwares, storage (on the lite models this means data disk firmware), ILOM, GI, dbhomes and databases.
Download and copy the patch to a temporary folder on the server, for example /opt/patch. You’ll have to be root to apply the patch.

2) Check the actual versions and free space on disk

First check the current version:

odacli describe-component

System Version
---------------
12.2.1.4.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       12.2.1.4.0            up-to-date
GI                                        12.2.0.1.180417       up-to-date
DB                                        12.1.0.2.180417       up-to-date
DCSAGENT                                  18.2.1.0.0            up-to-date
ILOM                                      3.2.9.23.r116695      up-to-date
BIOS                                      38070200              up-to-date
OS                                        6.9                   up-to-date
FIRMWARECONTROLLER {
[ c2 ]                                    4.650.00-7176         up-to-date
[ c0,c1 ]                                 KPYAGR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

For the moment, the “available version” column doesn’t know that a newer patch has been released.

Check that the /, /u01 and /opt file systems have enough free space for the process (>= 20GB each). This 18c patch is quite big, but don’t forget that you can extend the /u01 and /opt logical volumes online quite easily, for example if you need to increase /opt:

lvextend -L +30G /dev/VolGroupSys/LogVolOpt
resize2fs /dev/VolGroupSys/LogVolOpt

If you never changed the logical volume configuration, about 200GB are available on local disks on this kind of ODA.
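Before extending anything, the free-space check itself can be scripted. A minimal sketch, assuming a POSIX `df`; the `has_free_gb` helper and the 20 GB threshold are illustrative, not part of the ODA tooling:

```shell
# Sketch: check that a filesystem has at least <min_gb> GB available (POSIX df).
# has_free_gb is a hypothetical helper, not an odacli command.
has_free_gb() {   # usage: has_free_gb <mountpoint> <min_gb>
  avail_kb=$(df -Pk "$1" | awk 'NR==2 {print $4}')
  [ "$avail_kb" -ge $(( $2 * 1024 * 1024 )) ]
}
has_free_gb / 20 && echo "/ OK" || echo "/ needs extending"
# on the ODA you would check / /u01 /opt the same way
```

If a mount point fails the check, extend the corresponding logical volume as shown above.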

3) Prepare the patch files

You first need to unzip the three downloaded files, then register the resulting files (which are themselves zip files) in the repository.

cd /opt/patch
unzip p28864490_183000_Linux-x86-64_1of3.zip
unzip p28864490_183000_Linux-x86-64_2of3.zip
unzip p28864490_183000_Linux-x86-64_3of3.zip
odacli update-repository -f /opt/patch/oda-sm-18.3.0.0.0-181205-server1of3.zip
odacli update-repository -f /opt/patch/oda-sm-18.3.0.0.0-181205-server2of3.zip
odacli update-repository -f /opt/patch/oda-sm-18.3.0.0.0-181205-server3of3.zip

Updating the repository, like other tasks performed through odacli, will generate a job. Check that the 3 latest jobs completed successfully:

odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 4
ID                                       Description               Created                             Status
---------------------------------------- ------------------------- ----------------------------------- ----------
add82ae5-3295-49ad-811d-c8c57ebb0cb1     Repository Update         March 7, 2019 1:11:13 PM CET        Success
a057e961-2584-467f-9fc3-d8951dcae213     Repository Update         March 7, 2019 1:11:49 PM CET        Success
0483dbf8-7562-424e-b7bb-3786558d62b1     Repository Update         March 7, 2019 1:15:31 PM CET        Success
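Instead of eyeballing the list, a small filter can report only the jobs that did not succeed. This is a sketch assuming the `odacli list-jobs` output layout shown above (two header lines, Status as the last column); `failed_jobs` is a hypothetical helper and a sample of the output stands in for the real command:

```shell
# Sketch: print any job whose Status column is not "Success"; exit non-zero
# if one is found. In real use you would pipe `odacli list-jobs` in.
failed_jobs() { awk 'NR>2 && NF>0 && $NF != "Success" {print; n=1} END {exit n}'; }

# Sample output (assumed format) standing in for `odacli list-jobs`:
cat <<'EOF' | failed_jobs && echo "all jobs OK"
ID                                       Description               Created                             Status
---------------------------------------- ------------------------- ----------------------------------- ----------
add82ae5-3295-49ad-811d-c8c57ebb0cb1     Repository Update         March 7, 2019 1:11:13 PM CET        Success
0483dbf8-7562-424e-b7bb-3786558d62b1     Repository Update         March 7, 2019 1:15:31 PM CET        Success
EOF
```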

4) Run the prepatch report

It’s strongly advised to run the prepatch report before patching:

odacli create-prepatchreport -s -v 18.3.0.0.0
odacli describe-prepatchreport -i 9112e726-62f1-4e85-9d9c-aec46e8e8210
Patch pre-check report
------------------------------------------------------------------------
                 Job ID:  9112e726-62f1-4e85-9d9c-aec46e8e8210
            Description:  Patch pre-checks for [OS, ILOM, GI]
                 Status:  SUCCESS
                Created:  March 7, 2019 1:41:23 PM CET
                 Result:  All pre-checks succeeded

Node Name
---------------
dbi02

Pre-Check                      Status   Comments
------------------------------ -------- --------------------------------------
__OS__
Validate supported versions     Success   Validated minimum supported versions
Validate patching tag           Success   Validated patching tag: 18.3.0.0.0
Is patch location available     Success   Patch location is available
Verify OS patch                 Success   Verified OS patch

__ILOM__
Validate supported versions     Success   Validated minimum supported versions
Validate patching tag           Success   Validated patching tag: 18.3.0.0.0
Is patch location available     Success   Patch location is available
Checking Ilom patch Version     Success   Successfully verified the versions
Patch location validation       Success   Successfully validated location

__GI__
Validate supported GI versions  Success   Validated minimum supported versions
Validate available space        Success   Validated free space under /u01
Verify DB Home versions         Success   Verified DB Home versions
Validate patching locks         Success   Validated patching locks

Success on all the pre-checks does not guarantee a successful patch: as with every patch, you'll need to manually remove extra rpms and unsupported configurations, such as a public yum repository (all package updates must be done through ODA patches only).

5) Update the dcs-agent

It seems it is no longer mandatory to update the dcs-agent before patching the server (it will probably be updated at the same time), but if you want to do it separately, it still works:

/opt/oracle/dcs/bin/odacli update-dcsagent -v 18.3.0.0.0
{
  "jobId" : "5cd58876-3db5-48d5-880e-2ce934545d2f",
  "status" : "Created",
  "message" : "Dcs agent will be restarted after the update. Please wait for 2-3 mins before executing the other commands",
  "reports" : [ ],
  "createTimestamp" : "March 07, 2019 13:35:12 PM CET",
  "resourceList" : [ ],
  "description" : "DcsAgent patching",
  "updatedTime" : "March 07, 2019 13:35:12 PM CET"
}
odacli list-jobs | head -n 3;  odacli list-jobs | tail -n 2

ID                                       Description                         Created                             Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
5cd58876-3db5-48d5-880e-2ce934545d2f     DcsAgent patching                   March 7, 2019 1:35:12 PM CET        Success
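The jobId in the JSON reply can be captured directly so you don't have to copy and paste it into `odacli describe-job`. A sketch, based on the JSON format shown above; the sample string stands in for the real odacli output:

```shell
# Sketch: extract the jobId from odacli's JSON reply so it can be passed
# to `odacli describe-job -i "$job_id"`. Sample JSON stands in for real output.
reply='{ "jobId" : "5cd58876-3db5-48d5-880e-2ce934545d2f", "status" : "Created" }'
job_id=$(printf '%s' "$reply" | sed -n 's/.*"jobId" *: *"\([^"]*\)".*/\1/p')
echo "$job_id"
# odacli describe-job -i "$job_id"
```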

6) Update the server

Updating the server is the biggest and longest part of the patch, so make sure that everything is OK before starting.

odacli update-server -v 18.3.0.0.0
odacli describe-job -i "3a568fb1-8517-405f-9a60-0a1ee285e1ff"

Job details
----------------------------------------------------------------
                     ID:  3a568fb1-8517-405f-9a60-0a1ee285e1ff
            Description:  Server Patching
                 Status:  Running
                Created:  March 7, 2019 4:48:32 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                March 7, 2019 4:48:32 PM CET        March 7, 2019 4:48:32 PM CET        Success
dcs-controller upgrade                   March 7, 2019 4:48:32 PM CET        March 7, 2019 4:48:32 PM CET        Success
Patch location validation                March 7, 2019 4:48:32 PM CET        March 7, 2019 4:48:32 PM CET        Success
dcs-cli upgrade                          March 7, 2019 4:48:33 PM CET        March 7, 2019 4:48:33 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:33 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:38 PM CET        Success
Updating YumPluginVersionLock rpm        March 7, 2019 4:48:38 PM CET        March 7, 2019 4:48:39 PM CET        Success
Applying OS Patches                      March 7, 2019 4:48:39 PM CET        March 7, 2019 4:51:31 PM CET        Success
Creating repositories using yum          March 7, 2019 4:51:31 PM CET        March 7, 2019 4:51:32 PM CET        Success
Applying HMP Patches                     March 7, 2019 4:51:32 PM CET        March 7, 2019 4:51:32 PM CET        Success
Patch location validation                March 7, 2019 4:51:32 PM CET        March 7, 2019 4:51:32 PM CET        Success
oda-hw-mgmt upgrade                      March 7, 2019 4:51:33 PM CET        March 7, 2019 4:51:33 PM CET        Success
Applying Firmware Disk Patches           March 7, 2019 4:51:33 PM CET        March 7, 2019 4:51:51 PM CET        Success
Applying Firmware Expander Patches       March 7, 2019 4:51:51 PM CET        March 7, 2019 4:52:03 PM CET        Success
Applying Firmware Controller Patches     March 7, 2019 4:52:03 PM CET        March 7, 2019 4:52:16 PM CET        Success
Checking Ilom patch Version              March 7, 2019 4:52:17 PM CET        March 7, 2019 4:52:19 PM CET        Success
Patch location validation                March 7, 2019 4:52:19 PM CET        March 7, 2019 4:52:20 PM CET        Success
Save password in Wallet                  March 7, 2019 4:52:21 PM CET        March 7, 2019 4:52:21 PM CET        Success
Apply Ilom patch                         March 7, 2019 4:52:21 PM CET        March 7, 2019 4:52:22 PM CET        Success
Copying Flash Bios to Temp location      March 7, 2019 4:52:22 PM CET        March 7, 2019 4:52:22 PM CET        Success
Starting the clusterware                 March 7, 2019 4:52:22 PM CET        March 7, 2019 4:52:22 PM CET        Success
Creating GI home directories             March 7, 2019 4:52:22 PM CET        March 7, 2019 4:52:22 PM CET        Success
Cloning Gi home                          March 7, 2019 4:52:22 PM CET        March 7, 2019 4:54:55 PM CET        Success
Configuring GI                           March 7, 2019 4:54:55 PM CET        March 7, 2019 4:55:54 PM CET        Success
Running GI upgrade root scripts          March 7, 2019 4:55:54 PM CET        March 7, 2019 5:08:19 PM CET        Failure

Bad news: the GI upgrade failed in my case. Fortunately it's possible to relaunch the patching, and it will skip the components that are already patched. But another attempt failed 2 steps earlier (Cloning Gi home: obviously the GI home was already deployed on disk).

Actually, if you already patched your ODA with the previous release, you are hitting a bug introduced by that previous patch:

ODA GI Patching from 12.2 to 18.3 Failed, CLSRSC-697: Failed to get the value of environment variable ‘TZ’ from the environment file (Doc ID 2502972.1)

An XML file incorrectly describing the previous Grid Infrastructure home needs to be deleted before applying this newest patch:

cat '/u01/app/grid/crsdata/@global/crsconfig/ckptGridHA_global.xml' | grep 12
         <PROPERTY NAME="VERSION" TYPE="STRING" VAL="12.2.0.1.0"/>
         <PROPERTY NAME="OLD_CRS_HOME" TYPE="STRING" VAL="/u01/app/12.1.0.2/grid"/>
         <PROPERTY NAME="OLD_CRS_VERSION" TYPE="STRING" VAL="12.1.0.2.0"/>
         <PROPERTY NAME="MANUAL_BACKUP_FILE_NAME" TYPE="STRING" VAL="+DATA:/dbi02-c/OCRBACKUP/dbi02-c_backup12.1.0.2.0.ocr.261.987245221"/>
rm '/u01/app/grid/crsdata/@global/crsconfig/ckptGridHA_global.xml'
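The check-then-delete above can be wrapped so the file is only removed when it really references the old home. A sketch; `remove_stale_ckpt` is a hypothetical helper that takes the path as an argument, so it can be tried on a copy first, and uses OLD_CRS_HOME as the marker (an assumption based on the grep output above):

```shell
# Sketch: remove the GI checkpoint file only if it still references an old
# CRS home (marker word assumed from the XML content shown above).
remove_stale_ckpt() {
  if [ -f "$1" ] && grep -q 'OLD_CRS_HOME' "$1"; then
    rm "$1" && echo "removed $1"
  else
    echo "nothing to do for $1"
  fi
}
remove_stale_ckpt '/u01/app/grid/crsdata/@global/crsconfig/ckptGridHA_global.xml'
```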

Before running the patch again, the cloned GI home needs to be deleted and its reference removed from the oraInventory:

vi /u01/app/oraInventory/ContentsXML/inventory.xml
Delete this line ==>  <HOME NAME="OraGrid180" LOC="/u01/app/18.0.0.0/grid" TYPE="O" IDX="7" CRS="true"/> 

rm -rf /u01/app/18.0.0.0
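The vi edit above can also be done non-interactively. A sketch using sed; `drop_gi18_entry` is a hypothetical helper that keeps a backup and takes the file as an argument, so it can be tested on a copy before touching the real inventory:

```shell
# Sketch: delete the OraGrid180 <HOME .../> line from an inventory.xml copy,
# keeping a .bak backup. Entry name taken from the line quoted above.
drop_gi18_entry() {
  cp "$1" "$1.bak" && sed -i '/HOME NAME="OraGrid180"/d' "$1"
}
# drop_gi18_entry /u01/app/oraInventory/ContentsXML/inventory.xml
```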

Now the patching will work:

odacli update-server -v 18.3.0.0.0
odacli describe-job -i "3a568fb1-8517-405f-9a60-0a1ee285e1ff"

Job details
----------------------------------------------------------------
                     ID:  3a568fb1-8517-405f-9a60-0a1ee285e1ff
            Description:  Server Patching
                 Status:  Success
                Created:  March 7, 2019 5:48:32 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Patch location validation                March 7, 2019 5:48:32 PM CET        March 7, 2019 5:48:32 PM CET        Success
dcs-controller upgrade                   March 7, 2019 5:48:32 PM CET        March 7, 2019 5:48:32 PM CET        Success
Patch location validation                March 7, 2019 5:48:32 PM CET        March 7, 2019 5:48:32 PM CET        Success
dcs-cli upgrade                          March 7, 2019 5:48:33 PM CET        March 7, 2019 5:48:33 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:33 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Creating repositories using yum          March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:38 PM CET        Success
Updating YumPluginVersionLock rpm        March 7, 2019 5:48:38 PM CET        March 7, 2019 5:48:39 PM CET        Success
Applying OS Patches                      March 7, 2019 5:48:39 PM CET        March 7, 2019 5:51:31 PM CET        Success
Creating repositories using yum          March 7, 2019 5:51:31 PM CET        March 7, 2019 5:51:32 PM CET        Success
Applying HMP Patches                     March 7, 2019 5:51:32 PM CET        March 7, 2019 5:51:32 PM CET        Success
Patch location validation                March 7, 2019 5:51:32 PM CET        March 7, 2019 5:51:32 PM CET        Success
oda-hw-mgmt upgrade                      March 7, 2019 5:51:33 PM CET        March 7, 2019 5:51:33 PM CET        Success
Applying Firmware Disk Patches           March 7, 2019 5:51:33 PM CET        March 7, 2019 5:51:51 PM CET        Success
Applying Firmware Expander Patches       March 7, 2019 5:51:51 PM CET        March 7, 2019 5:52:03 PM CET        Success
Applying Firmware Controller Patches     March 7, 2019 5:52:03 PM CET        March 7, 2019 5:52:16 PM CET        Success
Checking Ilom patch Version              March 7, 2019 5:52:17 PM CET        March 7, 2019 5:52:19 PM CET        Success
Patch location validation                March 7, 2019 5:52:19 PM CET        March 7, 2019 5:52:20 PM CET        Success
Save password in Wallet                  March 7, 2019 5:52:21 PM CET        March 7, 2019 5:52:21 PM CET        Success
Apply Ilom patch                         March 7, 2019 5:52:21 PM CET        March 7, 2019 5:52:22 PM CET        Success
Copying Flash Bios to Temp location      March 7, 2019 5:52:22 PM CET        March 7, 2019 5:52:22 PM CET        Success
Starting the clusterware                 March 7, 2019 5:52:22 PM CET        March 7, 2019 5:52:22 PM CET        Success
Creating GI home directories             March 7, 2019 5:52:22 PM CET        March 7, 2019 5:52:22 PM CET        Success
Cloning Gi home                          March 7, 2019 5:52:22 PM CET        March 7, 2019 5:54:55 PM CET        Success
Configuring GI                           March 7, 2019 5:54:55 PM CET        March 7, 2019 5:55:54 PM CET        Success
Running GI upgrade root scripts          March 7, 2019 5:55:54 PM CET        March 7, 2019 6:08:19 PM CET        Success
Resetting DG compatibility               March 7, 2019 6:08:19 PM CET        March 7, 2019 6:08:29 PM CET        Success
Running GI config assistants             March 7, 2019 6:08:29 PM CET        March 7, 2019 6:11:50 PM CET        Success
restart oakd                             March 7, 2019 6:11:53 PM CET        March 7, 2019 6:12:03 PM CET        Success
Updating GiHome version                  March 7, 2019 6:12:03 PM CET        March 7, 2019 6:12:09 PM CET        Success
Setting AUDIT SYSLOG LEVEL               March 7, 2019 6:12:30 PM CET        March 7, 2019 6:14:55 PM CET        Success
Update System version                    March 7, 2019 6:14:55 PM CET        March 7, 2019 6:14:55 PM CET        Success
preRebootNode Actions                    March 7, 2019 6:14:55 PM CET        March 7, 2019 6:15:37 PM CET        Success
Reboot Ilom                              March 7, 2019 6:15:37 PM CET        March 7, 2019 6:15:37 PM CET        Success

Once this part of the patch is successfully applied, let the server reboot automatically (it happens about 5 minutes later) and then check the components again:

odacli describe-component

System Version
---------------
18.3.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            up-to-date
GI                                        18.3.0.0.180717       up-to-date
DB                                        12.1.0.2.180417       12.1.0.2.180717
DCSAGENT                                  18.3.0.0.0            up-to-date
ILOM                                      4.0.4.22.r126940      up-to-date
BIOS                                      38110100              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER {
[ c2 ]                                    4.650.00-7176         up-to-date
[ c0,c1 ]                                 KPYAGR3Q              kpyair3q
}
FIRMWAREDISK                              0R3Q                  up-to-date
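To spot at a glance which components still need patching, you can filter the "Available Version" column. A sketch assuming the `odacli describe-component` layout shown above; `pending_components` is a hypothetical helper, and the firmware sub-lines inside braces are skipped for simplicity:

```shell
# Sketch: list components whose Available Version is not "up-to-date".
# In real use, pipe `odacli describe-component` in; a sample stands in here.
pending_components() { awk 'NF>=3 && $NF != "up-to-date" && $1 ~ /^[A-Z]/ {print $1}'; }
cat <<'EOF' | pending_components
OAK                                       18.3.0.0.0            up-to-date
GI                                        18.3.0.0.180717       up-to-date
DB                                        12.1.0.2.180417       12.1.0.2.180717
EOF
# prints: DB
```

Here the DB homes are the remaining items, which matches the next two steps.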

7) Patch the storage

You now need to patch the storage, meaning the firmware of the data disks:

odacli update-storage -v 18.3.0.0.0
{
  "jobId" : "38a779cf-ac6c-4514-a838-f5cd1bec637c",
  "status" : "Created",
  "message" : "Success of Storage Update may trigger reboot of node after 4-5 minutes. Please wait till node restart",
  "reports" : [ ],
  "createTimestamp" : "March 07, 2019 17:28:34 PM CET",
  "resourceList" : [ ],
  "description" : "Storage Firmware Patching",
  "updatedTime" : "March 07, 2019 17:28:35 PM CET"
}

odacli describe-job -i "38a779cf-ac6c-4514-a838-f5cd1bec637c"
Job details
----------------------------------------------------------------
                     ID:  38a779cf-ac6c-4514-a838-f5cd1bec637c
            Description:  Storage Firmware Patching
                 Status:  Success
                Created:  March 7, 2019 5:28:34 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Applying Firmware Disk Patches           March 7, 2019 5:28:35 PM CET        March 7, 2019 5:28:49 PM CET        Success
Applying Firmware Controller Patches     March 7, 2019 5:28:49 PM CET        March 7, 2019 5:33:02 PM CET        Success
preRebootNode Actions                    March 7, 2019 5:33:02 PM CET        March 7, 2019 5:33:02 PM CET        Success
Reboot Ilom                              March 7, 2019 5:33:02 PM CET        March 7, 2019 5:33:02 PM CET        Success

odacli describe-component
System Version
---------------
18.3.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            up-to-date
GI                                        18.3.0.0.180717       up-to-date
DB                                        12.1.0.2.180417       12.1.0.2.180717
DCSAGENT                                  18.3.0.0.0            up-to-date
ILOM                                      4.0.4.22.r126940      up-to-date
BIOS                                      38110100              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER {
[ c2 ]                                    4.650.00-7176         up-to-date
[ c0,c1 ]                                 KPYAIR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

The server will also reboot after this update.

8) Patch the dbhomes

You now need to patch the dbhomes separately. First of all, list them:

odacli list-dbhomes

ID                                       Name                 DB Version        Home Location                                 Status
---------------------------------------- -------------------- ----------------- ------------------------------------------- ----------
2c28acde-f041-4283-b984-fe6b73dd724d     OraDB12102_home2     12.1.0.2.180417   /u01/app/oracle/product/12.1.0.2/dbhome_2     Configured
ebac9543-337b-4edd-8e00-d593abd52ca6     OraDB12102_home10    12.1.0.2.180417   /u01/app/oracle/product/12.1.0.2/dbhome_10    Configured
0ab15f33-e1b7-4193-a110-fa4aee01cc21     OraDB12102_home21    12.1.0.2.180417   /u01/app/oracle/product/12.1.0.2/dbhome_21    Configured

Patch the dbhomes one by one. Note that the target version is the version of the patch, not of the database. Remember that updating the ODA through a patch will never upgrade your databases to a newer release: only the PSU level will change (the fifth number, which is actually a date).

odacli update-dbhome -i 2c28acde-f041-4283-b984-fe6b73dd724d -v 18.3.0.0.0
{
  "jobId" : "bc3e5564-7283-4497-ac70-9e41c834183d",
  "status" : "Created",
  "message" : null,
  "reports" : [ ],
  "createTimestamp" : "March 07, 2019 17:45:46 PM CET",
  "resourceList" : [ ],
  "description" : "DB Home Patching: Home Id is 2c28acde-f041-4283-b984-fe6b73dd724d",
  "updatedTime" : "March 07, 2019 17:45:46 PM CET"
}
odacli describe-job -i "bc3e5564-7283-4497-ac70-9e41c834183d"

Job details
----------------------------------------------------------------
                     ID:  bc3e5564-7283-4497-ac70-9e41c834183d
            Description:  DB Home Patching: Home Id is 2c28acde-f041-4283-b984-fe6b73dd724d
                 Status:  Success
                Created:  March 7, 2019 5:45:46 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
clusterware patch verification           March 7, 2019 5:45:47 PM CET        March 7, 2019 5:45:52 PM CET        Success
Patch location validation                March 7, 2019 5:45:52 PM CET        March 7, 2019 5:46:03 PM CET        Success
Opatch updation                          March 7, 2019 5:48:05 PM CET        March 7, 2019 5:48:09 PM CET        Success
Patch conflict check                     March 7, 2019 5:48:09 PM CET        March 7, 2019 5:49:45 PM CET        Success
db upgrade                               March 7, 2019 5:49:45 PM CET        March 7, 2019 5:55:04 PM CET        Success
SqlPatch upgrade                         March 7, 2019 5:55:04 PM CET        March 7, 2019 5:55:48 PM CET        Success
Update System version                    March 7, 2019 5:55:48 PM CET        March 7, 2019 5:55:48 PM CET        Success
updating the Database version            March 7, 2019 5:55:51 PM CET        March 7, 2019 5:55:54 PM CET        Success

...

odacli list-dbhomes

ID                                       Name                 DB Version        Home Location                                 Status
---------------------------------------- -------------------- ----------------- ------------------------------------------ ----------
2c28acde-f041-4283-b984-fe6b73dd724d     OraDB12102_home2     12.1.0.2.180717   /u01/app/oracle/product/12.1.0.2/dbhome_2     Configured
ebac9543-337b-4edd-8e00-d593abd52ca6     OraDB12102_home10    12.1.0.2.180717   /u01/app/oracle/product/12.1.0.2/dbhome_10    Configured
0ab15f33-e1b7-4193-a110-fa4aee01cc21     OraDB12102_home21    12.1.0.2.180717   /u01/app/oracle/product/12.1.0.2/dbhome_21    Configured
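If you have many homes, the per-home patching can be scripted. A sketch assuming the `odacli list-dbhomes` layout shown above (two header lines, ID first); `dbhome_ids` is a hypothetical helper, the odacli calls stay commented since each patching job should be verified before moving to the next home:

```shell
# Sketch: extract dbhome IDs from `odacli list-dbhomes` output; a sample
# of the listing above stands in for the real command.
dbhome_ids() { awk 'NR>2 && NF>0 {print $1}'; }
cat <<'EOF' | dbhome_ids
ID                                       Name                 DB Version        Home Location                                 Status
---------------------------------------- -------------------- ----------------- ------------------------------------------- ----------
2c28acde-f041-4283-b984-fe6b73dd724d     OraDB12102_home2     12.1.0.2.180717   /u01/app/oracle/product/12.1.0.2/dbhome_2     Configured
EOF
# for id in $(odacli list-dbhomes | dbhome_ids); do
#   odacli update-dbhome -i "$id" -v 18.3.0.0.0
# done
```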

9) Optional: deploy the latest db clone files

If you will never deploy a new dbhome, this is not necessary. If you will, or simply if you don't know whether you'll have to do that later, download and register the new db clone files in the repository, to be able to create a new dbhome at the same patch level as the one already deployed. With this release, you can choose dbhomes from various versions: 11gR2, 12cR1, 12cR2 and 18c.

10) Control the final version of the components

Now the patching is done. About 2.5 hours were needed, provided everything runs fine.

odacli describe-component

System Version
---------------
18.3.0.0.0

Component                                Installed Version    Available Version
---------------------------------------- -------------------- --------------------
OAK                                       18.3.0.0.0            up-to-date
GI                                        18.3.0.0.180717       up-to-date
DB                                        12.1.0.2.180717       up-to-date
DCSAGENT                                  18.3.0.0.0            up-to-date
ILOM                                      4.0.4.22.r126940      up-to-date
BIOS                                      38110100              up-to-date
OS                                        6.10                  up-to-date
FIRMWARECONTROLLER {
[ c2 ]                                    4.650.00-7176         up-to-date
[ c0,c1 ]                                 KPYAIR3Q              up-to-date
}
FIRMWAREDISK                              0R3Q                  up-to-date

11) Latest steps

Don't forget to remove the patch files and the unzipped files, and to check the remaining space on your local disks. Remember to keep your ODA clean and up-to-date.

The article How to patch your ODA lite to 18.3.0.0.0 appeared first on Blog dbi services.

Support for Mixed Platforms in EBS 12.2

Steven Chan - Fri, 2019-03-08 08:40

The Oracle E-Business Suite architecture comprises a client tier, an application tier with one or more nodes, and a database tier with one or more nodes. The different tiers can run on various hardware and operating systems, together often referred to as the platform.

Many customers standardize on a single platform for both the application and database tiers. This has the advantage that they only need to follow maintenance guidelines from a single vendor.

However, some customers choose to deploy the EBS application and database tiers on different platforms. This deployment is referred to as mixed platforms, a database tier only platform, or a split tier configuration. There can be various reasons for doing this, for example running the Oracle Database on a platform to which the applications code was not ported.

For releases earlier than EBS 12.2, it was also supported to deploy mixed platforms in a multi-node application tier environment. However, it is important to be aware that using different platforms for the application tier is not supported in EBS Release 12.2, where key architectural features need the application code to be common (shared) across all nodes.

Most notably, the use of the adop patching utility in EBS 12.2 is optimized by running patching activities on the primary node, with remote processes triggered on secondary nodes as applicable. This distributed patching mechanism requires all the binaries to run on the same hardware and operating system.

Categories: APPS Blogs

Append hint

Jonathan Lewis - Fri, 2019-03-08 07:13

One of the questions that came up on the CBO Panel Session at the UKOUG Tech2018 conference was about the /*+ append */ hint – specifically how to make sure it was ignored when it came from a 3rd party tool that was used to load data into the database. The presence of the hint resulted in increasing amounts of space in the table being “lost”: as older data was deleted by the application, the space was never reused because the inserts always went above the table’s highwater mark; and it wasn’t possible to change the application code.

The first suggestion aired was to create an SQL Patch to associate the hint /*+ ignore_optim_embedded_hints */ with the SQL in the hope that this would make Oracle ignore the append hint. This won’t work, of course, because the append hint is not an optimizer hint, it’s a “behaviour” hint.

There are, however, various things that will make the append hint invalid – sometimes to the great surprise of the people using it. The three things I can think of at present are:

  • row-level triggers
  • enabled foreign key constraints
  • non-unique indexes enforcing unique constraints

It’s been some time since I last ran a detailed check so I’m not going to guarantee that the following claims are still true – if it matters to you then it’s easy enough to create a little test of (say) 10,000 rows inserted into a non-empty, indexed table.

  • Row level triggers turn array processing into single row processing, so there’s a significant increase in redo generation.
  • Non-unique indexes enforcing unique constraints have (small but) potentially undesirable effects on the optimizer and on run-time and may turn array processing into single row processing.
  • Foreign key constraints require checking, which may have some impact, but they don’t turn array processing into single row processing.

Of the three options the foreign key constraint seemed to me to be the best strategy to disable the hint with minimal side effects, so my answer was:

“Create a new table with no data and a primary key constraint; add an invisible column to the target table, index it (online) and add a foreign key constraint from the column to the new table”.

My thinking on this was that the foreign key will disable the append hint. The column will always be null which means it will always satisfy the foreign key constraint without being checked and it won’t introduce any index maintenance overheads. Net effect: /*+ append */ is disabled with no extra overheads.

Footnote

The append hint is also ignored if the table is an Index Organized Table (IOT), but changing a heap table to an IOT is probably not a sensible choice if all you want to do is disable the hint – the potential for unexpected client side anomalies is too high, and if the table is heavily indexed the processing overhead for the secondary indexes could be fairly significant.

If I recall correctly the person asking the question said that the “do nothing” trigger option sounded like something they would be most comfortable with and they’d live with the overheads. I understand the feeling – an invisible column with an invisible index and extra foreign key constraint sounds too close to the boundary where mixing and matching simple features ends up hitting some unexpected behaviour (i.e. a bug).

 

 

[BLOG] How to Access My Services Dashboard: Oracle Cloud Updated Feature

Online Apps DBA - Fri, 2019-03-08 04:57

Facing issues while accessing My Services Dashboard? If Yes! Check the blog which will show the steps to access My Services Dashboard at https://k21academy.com/oci31 Want more useful Oracle updates weekly? Subscribe here: http://k21academy.com/subscribe […]

The post [BLOG] How to Access My Services Dashboard: Oracle Cloud Updated Feature appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Comparison between #Oracle and #Exasol

The Oracle Instructor - Fri, 2019-03-08 04:41

After having worked with both databases for quite some time, this is what I consider to be the key differences between Oracle and Exasol. Of course the two have much in common: Both are relational databases with a transaction management system that supports the ACID model and both follow the ANSI SQL standard – both with some enhancements. Coming from Oracle as I do, much in Exasol looks quite familiar. But let’s focus on the differences:

Strengths

Oracle is leading technology for Online Transaction Processing (OLTP). If you have a high data volume with many users doing concurrent changes, this is where Oracle shines particularly.

Exasol is leading technology for analytical workloads. If you want to do real-time ad hoc reporting on high data volume, this is where Exasol shines particularly.

Architecture

Data Format & In-Memory processing

Oracle uses a row-oriented data format, which is well suited for OLTP but not so much for analytical workloads. That’s why Hybrid Columnar Compression (only available on Engineered Systems or on Oracle proprietary storage) and the In-Memory Column Store (an extra-charged option) have been added in recent years.

Exasol uses natively a compressed columnar data format and processes this format in memory. That is very good for analytical queries but bad for OLTP because one session that does DML on a table locks that table against DML from other sessions. Read Consistent SELECT is possible for these other sessions, though.

Oracle was designed for OLTP at times when memory was scarce and expensive. Exasol was designed to process analytical workloads in memory.
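The row-versus-column difference can be sketched in a few lines. This is a toy illustration, not Oracle's or Exasol's actual storage internals: an analytical aggregate over one attribute only needs to touch a single contiguous array in the columnar layout.

```python
rows = [  # row-oriented: each record kept together (good for OLTP point lookups)
    {"id": 1, "region": "EU", "amount": 120.0},
    {"id": 2, "region": "US", "amount": 75.5},
    {"id": 3, "region": "EU", "amount": 30.25},
]

columns = {  # column-oriented: each attribute stored contiguously
    "id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [120.0, 75.5, 30.25],
}

# Row store: SUM(amount) has to walk every whole record.
total_row_store = sum(r["amount"] for r in rows)

# Column store: SUM(amount) scans just the one contiguous column, which
# also compresses well because similar values are stored together.
total_col_store = sum(columns["amount"])

assert total_row_store == total_col_store == 225.75
```

The same effect is why a point lookup of one full record ("give me order 2") favors the row layout, while "sum one column over millions of rows" favors the columnar one.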

Clustering

Oracle started as a non-clustered (single-instance) system; Real Application Clusters (RAC) was added much later. The majority of Oracle installations are still non-clustered. RAC (an extra-cost option) is the exception rather than the rule. Most RAC installations are two-node clusters, with availability as the prime reason and scalability rather a side aspect.

Exasol was designed from the start to run on clusters of commodity Intel servers. The prime reasons were MPP performance and scalability, with availability rather a side aspect.

Data Distribution

This doesn’t matter for most Oracle installations, only for RAC. Here, Oracle uses a shared-disk architecture, while Exasol uses a shared-nothing architecture, which is optimal for performance because every Exasol cluster node can operate on a different part of the data in parallel. The drawback is that after adding nodes to an Exasol cluster, the data has to be redistributed.

With Exadata, Oracle tries to compensate for the performance disadvantage of the shared-disk architecture by enabling the storage servers to filter data locally for analytical workloads. This approach delivers better performance than Oracle can achieve on other (non-proprietary) platforms.
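A shared-nothing distribution, and the redistribution cost after growing the cluster, can be sketched with a simple hash scheme. This is a hypothetical illustration, not Exasol's actual distribution algorithm; the node-assignment function and row shapes are invented for the example.

```python
def distribute(rows, n_nodes, key="id"):
    """Assign each row to a node by hashing its distribution key."""
    nodes = [[] for _ in range(n_nodes)]
    for row in rows:
        nodes[hash(row[key]) % n_nodes].append(row)
    return nodes

rows = [{"id": i, "amount": i * 10} for i in range(100)]

three_nodes = distribute(rows, 3)
# Shared nothing: every row lives on exactly one node, so each node can
# scan its own share of the data in parallel with the others.
assert sum(len(n) for n in three_nodes) == 100

# Adding a node changes hash(key) % n for most keys, so data must be
# re-shuffled across the cluster:
moved = sum(
    1 for i in range(100)
    if hash(i) % 3 != hash(i) % 4  # row would land on a different node
)
print(f"{moved} of 100 rows move when growing from 3 to 4 nodes")
```

With this naive modulo scheme most rows relocate on resize, which is the cost the text alludes to when it says data has to be redistributed after adding nodes.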

Availability & Recoverability

Clearly, Oracle is better in this area. A non-clustered Oracle database running in archivelog mode lets you recover every single committed transaction since the last backup. With Exasol, you can only restore the last backup; all changes since then are lost. You can safeguard an Oracle database against site failure with a standby database at a large distance without performance impact; Exasol doesn’t offer that. With RAC, you can protect an Oracle database against node failure: the database stays up on node failure (although the Global Resource Directory is frozen for a couple of seconds) with no data loss.

If an Exasol cluster node fails, the database restarts. That means no availability for a couple of seconds, and all sessions get disconnected, but also no data loss. Optionally, Exasol can be configured as a Synchronous Dual Data Center, similar to Oracle’s Extended RAC.
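The recoverability difference boils down to whether committed changes made since the last backup can be replayed. A minimal conceptual sketch, with invented operations rather than Oracle's actual redo record format:

```python
backup = {"balance": 100}                        # state at backup time
redo_log = [("deposit", 50), ("withdraw", 30)]   # committed changes since backup

def restore_backup_only(backup):
    """Restore-from-backup only: everything after the backup is lost."""
    return dict(backup)

def recover_with_redo(backup, redo_log):
    """Media recovery: restore the backup, then roll the change log forward."""
    state = dict(backup)
    for op, amount in redo_log:
        state["balance"] += amount if op == "deposit" else -amount
    return state

assert restore_backup_only(backup) == {"balance": 100}       # post-backup work gone
assert recover_with_redo(backup, redo_log) == {"balance": 120}  # fully recovered
```

The second function is the idea behind archivelog-mode recovery: as long as the log of committed changes survives, no committed transaction is lost.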

Complexity & Manageability

I realized there is a big difference between Exasol and Oracle in this area when I taught an Exasol admin class recently: some seasoned Oracle DBAs in the audience kept asking questions like “We can do this and that in Oracle; how does that work with Exasol?” (e.g. creating Materialized Views, Bitmap Indexes, or an extra KEEP cache), and my answer was always along the lines of “We don’t need that with Exasol to get good performance”.

Let’s face it: an Oracle database is probably one of the most complex commercial software products ever developed. You need years of experience to administer an Oracle database with confidence. See this recent Oracle Database Administration manual to get an impression: it has 1690 pages! And that doesn’t yet cover Real Application Clusters, which adds another 492 pages. That is over 2100 pages of documentation to dig through, and after having worked with Oracle for over 20 years, I can proudly say that I actually know most of it.

In comparison, Exasol is very easy to use and manage, because the system largely takes care of itself. That is why our admin class can run for only two days and still leave attendees feeling empowered to manage Exasol afterwards.

That was intentional from the start: Exasol customers are not supposed to study the database for years (or pay someone who did) in order to get great performance. Oracle realized that being complex and difficult to manage is an obstacle and came out with the Autonomous Database, but that is only available in the proprietary Oracle Cloud.

Performance

Using comparable hardware and processing the same (analytical) workload, Exasol outperforms any competitor, and that includes Oracle on Exadata. Our presales consultants regard Exadata as a sitting duck, waiting to get shot in a POC. I was personally shocked to learn that, after drinking the Oracle Kool-Aid myself for years.

In my opinion, these two points matter most: Exasol is faster and at the same time much easier to manage! Anything useless could be easy to manage, so that is not an asset on its own; but combined with striking performance, it really is a big deal.

Licensing

This is, and has always been, a pain point for Oracle customers: the licensing of an Oracle database is so complex and fine-grained that you always wonder, “Am I allowed to do this without violating my license? Do we really need these features that we paid for? Are we safe if Oracle does a license audit?” With Exasol, all features are always included, and the two most popular license types are easy to understand: you pay either for the data volume loaded into the cluster or for the amount of memory assigned to the database. No sleepless nights over that!
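The two license models mentioned above each reduce to a single measured quantity. A sketch with invented prices (these are not real Exasol list prices, just placeholders for illustration):

```python
def data_volume_license(raw_tb_loaded, price_per_tb):
    """Pay for the raw data volume loaded into the cluster."""
    return raw_tb_loaded * price_per_tb

def memory_license(db_ram_gb, price_per_gb):
    """Pay for the amount of RAM assigned to the database."""
    return db_ram_gb * price_per_gb

# With invented numbers, comparing the two models is a one-liner --
# no feature matrix, no per-core factors, no option packs to audit:
cost_by_volume = data_volume_license(raw_tb_loaded=10, price_per_tb=1000)
cost_by_memory = memory_license(db_ram_gb=512, price_per_gb=20)
print(cost_by_volume, cost_by_memory)  # 10000 10240
```

The point is the shape of the formula, not the numbers: each model depends on one quantity you can measure yourself at any time.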

Cloud

This topic becomes increasingly important as many of our new customers want to deploy Exasol in the cloud. And you may have noticed that Oracle has been pushing the cloud hard over the last few years.

Exasol runs with all features enabled in the cloud: you can choose between Amazon Web Services (AWS), Microsoft Azure, and ExaCloud.

AWS

This is presently the most popular way our customers run Exasol in the cloud. See here for more details.

MS Azure

Microsoft’s cloud can also be used to run Exasol, which gives you the option to choose between two major public cloud platforms. See here for more details.

ExaCloud

Hosted and managed by Exasol, ExaCloud is a full database-as-a-service offering. See here for more details.

Hybrid Exasol deployments that combine cloud with on-premises systems are also possible, depending on customer requirements.

Oracle offers RAC only on the Oracle Cloud platform, not on other public clouds. Various other features are likewise available only in Oracle’s own cloud, and the licensing model has been tweaked to generally favor Oracle’s own cloud over other public clouds.

Customer Experience

Customers love Exasol, as the recent Dresner report confirms: we get a perfect recommendation score. I can also tell that from personal encounters: literally every customer I have met is pleased with our product and our services!

Conclusion

Oracle is great for OLTP and okay for analytical workloads, especially if you pay extra for things like Partitioning, RAC, the In-Memory Column Store, and Exadata. Then the performance you get for your analytical workload might suit your present demand.

Exasol is unsuitable for OLTP but the best in the world for analytical workloads. Do you think your data volume and your analytical demands will grow?

Categories: DBA Blogs
