Feed aggregator

12c Snapshots

Jonathan Lewis - Wed, 2019-03-06 04:35

I published a note a few years ago about using the 12c “with function” mechanism for writing simple SQL statements to take deltas of dynamic performance views. The example I supplied was for v$event_histogram, but I’ve just been prompted by a question on ODC to supply a couple more – v$session_event and v$sesstat (joined to v$statname) – so that you can use one session to get an idea of the work done and time spent by another session. The first script reports wait time:


rem
rem     Program:        12c_with_function_2.sql
rem     Dated:          July 2013
rem
rem     See also
rem     12c_with_function.sql
rem     https://jonathanlewis.wordpress.com/2013/06/30/12c-fun/
rem
rem     Notes:
rem             Reports session WAIT time
rem             Modify the list of SIDs of interest
rem             Set the time in seconds
rem

define m_snap_time = 60
define m_sid_list  = '3, 4, 121, 127'

set timing on
set sqlterminator off

set linesize 180

break on sid skip 1

with
        function wait_row (
                i_secs  number, 
                i_return        number
        ) return number
        is
        begin
                dbms_lock.sleep(i_secs);
                return i_return;
        end;
select
        sid, 
        sum(total_waits),
        sum(total_timeouts), 
        sum(time_waited), 
        event
from    (
        select
                sid, event_id, 
                -total_waits total_waits, 
                -total_timeouts total_timeouts, 
                -time_waited time_waited, 
                -time_waited_micro time_waited_micro, 
                event
        from    v$session_event
        where   sid in ( &m_sid_list )
        union all
        select
                null, null, null, null, null, wait_row(&m_snap_time, 0), null
        from    dual
        union all
        select
                sid, event_id, total_waits, total_timeouts, time_waited, time_waited_micro, event
        from    v$session_event
        where   sid in ( &m_sid_list )
        )
where
        time_waited_micro != 0
group by
        sid, event_id, event
having
        sum(time_waited) != 0
order by
        sid, sum(time_waited) desc
/


And this one reports session activity:

rem
rem     Program:        12c_with_function_3.sql
rem     Dated:          July 2013
rem
rem     See also
rem     12c_with_function.sql
rem     https://jonathanlewis.wordpress.com/2013/06/30/12c-fun/
rem
rem     Notes:
rem             Reports session stats
rem             Modify the list of SIDs of interest
rem             Set the time in seconds
rem

define m_snap_time = 60
define m_sid_list  = '3, 4, 13, 357'


set timing on
set sqlterminator off

set linesize 180

break on sid skip 1
column name format a64

with
        function wait_row (
                i_secs  number, 
                i_return        number
        ) return number
        is
        begin
                dbms_lock.sleep(i_secs);
                return i_return;
        end;
select
        sid, 
        name,
        sum(value)
from    (
        select
                ss.sid, 
                ss.statistic#,
                sn.name,
                -ss.value value
        from
                v$sesstat       ss,
                v$statname      sn
        where   ss.sid in ( &m_sid_list )
        and     sn.statistic# = ss.statistic#
        union all
        select
                null, null, null, wait_row(&m_snap_time, 0)
        from    dual
        union all
        select
                ss.sid, ss.statistic#, sn.name, ss.value value
        from
                v$sesstat       ss,
                v$statname      sn
        where   ss.sid in ( &m_sid_list )
        and     sn.statistic# = ss.statistic#
        )
where
        value != 0
group by
        sid, statistic#, name
having
        sum(value) != 0
order by
        sid, statistic#
/


You’ll notice that I’ve used dbms_lock.sleep() in my wait function – and the session running the SQL can be granted the execute privilege on the package through a role to make this work – but if you’re running Oracle 18 then you’ve probably noticed that the sleep() function and procedure have been copied to the dbms_session package.
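For reference, a minimal sketch (an illustration rather than part of the original scripts) of the same wait_row() function rewritten against dbms_session.sleep() for 18c and later might look like this:

rem
rem     Hypothetical 18c+ variant: dbms_session.sleep() replaces dbms_lock.sleep()
rem

with
        function wait_row (
                i_secs          number,
                i_return        number
        ) return number
        is
        begin
                dbms_session.sleep(i_secs);     -- 18c+ copy of the sleep() procedure
                return i_return;
        end;
select
        wait_row(1, 0)  slept
from
        dual
/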

 

Prepare Your Data for Machine Learning Training

Andrejus Baranovski - Wed, 2019-03-06 02:56
To me, the process of preparing data for Machine Learning model training looks somewhat similar to preparing ingredients to cook dinner. In both cases it takes time, but then you are rewarded with a tasty dinner or a great ML model.

I will not dive into the data science side here and discuss how to structure and transform data. That all depends on the use case, and there are many ways to reformat data to get the most out of it. Instead, I will focus on a simple but practical example — how to split data into training and test datasets with Python.

Make sure to check my previous post — today’s example is based on the notebook from that post, Jupyter Notebook — Forget CSV, fetch data from DB with Python. It explains how to load data from the DB and construct a data frame.

This Python code snippet builds train/test datasets:
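(The embedded snippet does not come through in this feed, so here is a minimal runnable sketch of what it does, using a hypothetical stand-in data frame and column names; in the original notebook the data frame comes from the DB query built in the previous post.)

import pandas as pd
from sklearn.model_selection import train_test_split

# Stand-in data frame; in the notebook this is built from the DB query
# described in the previous post. Column names here are hypothetical.
df = pd.DataFrame({
    'feature_1': range(20),
    'feature_2': [x * 2 for x in range(20)],
    'label':     [0, 1] * 10,
})

# X: the feature columns that drive the decision; Y: the encoded decision
X = df[['feature_1', 'feature_2']]
Y = df['label']

# test_size=0.3  -> roughly 30% of the rows are held out for testing
# stratify=Y     -> the label distribution is preserved in both sets
# random_state   -> makes the split reproducible across runs
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.3, stratify=Y, random_state=17
)

# Print shapes for convenience
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)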

The first thing is to assign X and Y. The data columns assigned to the X array are the ones that drive the decision encoded in the Y array. We assign X and Y by extracting columns from the data frame.

In the next step, the train X/Y and test X/Y sets are constructed by the train_test_split function from the sklearn module. You must import this function in the Python script:

from sklearn.model_selection import train_test_split

One of the parameters of the train_test_split function is test_size. It controls the proportion of the entire data set that is held out as the test set (~30% in this example).

The stratify parameter ensures that the distribution of Y values is preserved across the train and test data sets.

The random_state parameter makes the split reproducible from run to run; to change the split, it is enough to change this parameter value.

The train_test_split function returns four arrays. The train X/Y and test X/Y pairs can be used to train and test the ML model. The data set shapes and structure can be printed out as well for convenience.

A sample Jupyter notebook is available on GitHub, along with a sample credentials JSON file.

Oracle EBS(R12) on OCI for Apps DBAs & Architects Training: Step By Step Activity Guides/Hands-On Lab Exercise

Online Apps DBA - Wed, 2019-03-06 01:55

Visit https://k21academy.com/ebscloud05 and learn, step by step, how you can start your EBS (R12) on OCI journey, and get: ✔ The answer to what skills you need as an Apps DBA or Cloud DBA ✔ 7 Activity Guides to learn step by step ✔ A FREE guide you must read as an Oracle Apps DBA to manage and migrate EBS R12 […]

The post Oracle EBS(R12) on OCI for Apps DBAs & Architects Training: Step By Step Activity Guides/Hands-On Lab Exercise appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Kubernetes and the "Platform Engineer"

OTN TechBlog - Tue, 2019-03-05 22:56

In recent conversations with enterprise customers, it is becoming clear that a separation of concerns is emerging for those delivering production applications on top of Kubernetes infrastructure: the application developers building the containerized apps driven by business requirements, and the “Platform Engineers” owning and running the supporting Kubernetes infrastructure and platform components. For those familiar with DevOps or SRE (pick your term), this is arguably nothing new, but the consolidation of these teams around the Kubernetes API is leading to something altogether different. In short, the Kubernetes YAML file (via the Kubernetes API) is becoming the contract or hand-off between application developers and the platform team (or, more succinctly, between dev and ops).

In the beginning, there was PaaS

Well, actually there was infrastructure! – but for application developers, there were an awful lot of pieces to assemble (compute, network, storage) to deliver an application. Technologies like virtualization and Infrastructure as Code (Terraform et al) made it easier to automate the infrastructure part, but there were still a lot of moving parts. Early PaaS (Platform as a Service) pioneers, recognizing this complexity for developers, created platforms that abstracted away much of the infrastructure (and complexity), albeit for a very targeted (or “opinionated”) set of application use cases or patterns – which is fine if your application fits into that pattern, but if not, you are back to dealing with infrastructure.

Then Came CaaS

Following the success of container technology, popularized in recent years by Docker, so-called “Containers as a Service” offerings emerged a few years back. Sitting somewhere between IaaS and PaaS, CaaS services abstract some of the complexity of dealing with raw infrastructure, allowing teams to deploy and operate container-based applications without having to build, set up and maintain their own container orchestration tooling and supporting infrastructure.

The emergence of CaaS also coincided largely with the rise of Kubernetes as the de facto standard in container orchestration.  The majority of CaaS offerings today are managed Kubernetes offerings (not all offerings are created equal though, see The Journey to Enterprise Managed Kubernetes for more details).  As discussed previously, Kubernetes has essentially become the new Operating System for the Cloud, and arguably the modern application server, as Kubernetes continues to move up the stack.  At a practical level, this means that in addition to the benefits of a CaaS described above, customers benefit from standardization, and portability of their container applications across multiple cloud providers and on-prem (assuming those providers adhere to and are conformant with upstream Kubernetes).

Build your Own PaaS?

Despite CaaS and the standardization of Kubernetes for delivering these, there is still a lot of potential complexity for developers.  With “complexity”, “cultural changes” and “lack of training” recently cited as some of the most significant inhibitors to container and Kubernetes adoption, we can see there’s still work to do.  An interesting talk at KubeCon Seattle played on this with the title: “Kubernetes is Not for Developers and Other Things the Hype Never Told You”.

Enter the platform engineer. Kubernetes is broad and deep, and in many cases only a subset of it ultimately needs to be exposed to end developers. As an enterprise that wants to offer a modern container platform to its developers, there are a lot of common elements and tooling that every end developer or application team consuming the platform shouldn’t have to reinvent. Examples include (but are not limited to): monitoring, logging, service mesh, secure communication/TLS, ingress controllers, network policies, admission controllers and so on. In addition to common services being presented to developers, the platform engineer can even extend Kubernetes (via extension APIs), with things like the Service Catalog/Open Service Broker to facilitate easier integration for developers with other existing cloud services, or by providing Kubernetes Operators – essentially helpers that developers can consume for creating (stateful) services in their clusters (see examples here and here).

The platform engineer, then, in essence has an opportunity to carve out the right cross-section of Kubernetes (hence “build your own PaaS”) for the business: both in terms of the services that are exposed to developers to promote reuse, and in the enforcement of business policy (security and compliance).

Platform As Code

The fact that you can leverage the same Kubernetes API or CLI (“kubectl”) and deployment (YAML) file to drive the above platform has led some to describe the approach as “Platform as Code” – essentially an evolution of Infrastructure as Code, but in this case native Kubernetes interfaces drive the entire creation of a complete Kubernetes-based application platform for enterprise consumption.

The platform engineer and the developer now have a clear separation of concerns (with the appropriate Kubernetes RBAC roles and role bindings in place!). The platform engineer can check the complete definition of the platform described above into source control. Similarly, the developer consuming the platform checks their Kubernetes application definition into source control – and the Kubernetes YAML file/definition becomes the contract (and enforcement point) between the developer and the platform engineer.
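(As a rough illustration, not taken from the post, the namespace-scoped RBAC underpinning this separation might look something like the following, with all names being hypothetical: the platform team keeps cluster-wide concerns to itself and grants an application team rights over workloads only within its own namespace.)

# Hypothetical example: a namespace-scoped Role for an application team,
# bound to that team's group; cluster-scoped resources stay with the platform team.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-a
  name: app-developer
rules:
- apiGroups: ["", "apps"]
  resources: ["deployments", "services", "configmaps", "pods", "pods/log"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-a
  name: app-developer-binding
subjects:
- kind: Group
  name: team-a-developers          # hypothetical group from the identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-developer
  apiGroup: rbac.authorization.k8s.io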

Platform engineers ideally have a strong background in infrastructure software, networking and systems administration.  Essentially, they are working on the (Kubernetes) platform to deliver a product/service to (and in close collaboration with) end development teams.

In the future, we would expect there to be additional work in the community around both sides of this contract.  Both for developers, and how they can discover what common services are provided by the platform being offered, and for platform engineers in how they can provide (and enforce) a clear contract to their development team customers.

Spring Initializr new look and feel

Pas Apicella - Tue, 2019-03-05 20:38
Head to http://start.spring.io – the new look and feel UI is now available.


Categories: Fusion Middleware

Four New Oracle Cloud Native Services in General Availability

OTN TechBlog - Tue, 2019-03-05 17:45

This post was jointly written by Product Management and Product Marketing for Oracle Cloud Native Services. 

For those who participated in the Cloud Native Services Limited Availability Program, a sincere thank you! We have an important update: four more Cloud Native Services have just gone into General Availability.

Resource Manager for DevOps and Infrastructure as Code

Resource Manager is a fully managed service that uses open source HashiCorp Terraform to provision, update, and destroy Oracle Cloud Infrastructure resources at-scale. Resource Manager integrates seamlessly with Oracle Cloud Infrastructure to improve team collaboration and enable DevOps. It can be useful for repetitive deployment tasks such as replicating similar architectures across Availability Domains or large numbers of hosts. You can learn more about Resource Manager through this blog post.

Streaming for Event-based Architectures

Streaming Service provides a “pipe” to flow large volumes of data from producers to consumers. Streaming is a fully managed service with scalable and durable storage for ingesting large volumes of continuous data via a publish-subscribe (pub-sub) model. There are many use cases for Streaming: gathering data from mobile and IoT devices for real-time analytics, shipping logs from infrastructure and applications to an object store, and tracking current financial information to trigger stock transactions, to name a few. Streaming is accessible via the Oracle Cloud Infrastructure Console, SDKs, CLI, and REST API, and provides Terraform integration. Additional information on Streaming is available on this blog post.

Monitoring and Notifications for DevOps

Monitoring provides a consistent, integrated method to obtain fine-grained telemetry and notifications for your entire stack. Monitoring allows you to track infrastructure utilization and respond to anomalies in real-time. Besides performance and health metrics available out-of-the-box for infrastructure, you can get custom metrics for visibility across the stack, real-time alarms based on triggers and Notifications via email and PagerDuty. The Metrics Explorer provides a comprehensive view across your resources. You can learn more through these blog posts for Monitoring and Notifications. In addition, using the Data Source for Grafana, users can create Grafana dashboards for monitoring metrics. 

Next Steps

We would like to invite you to try these services and provide your feedback below. A free $300 trial is available at cloud.oracle.com/tryit. To evaluate other Cloud Native Services in Limited Availability, including Functions for serverless applications, please complete this sign-up form.

Caching for PLSQL packages over ORDS

Tom Kyte - Tue, 2019-03-05 17:26
I need to cache a few values when the PL/SQL procedure is called through a REST service multiple times, i.e. when it is executed by the same user multiple times, for optimization. I am calling the package procedure below through an ORDS REST service. Belo...
Categories: DBA Blogs

Replacing card number with *

Tom Kyte - Tue, 2019-03-05 17:26
I have a 16-digit card number where I need to replace the 3rd through 9th digits of the card number with *. I tried regular expressions, replace and translate; nothing worked. Need guidance on this, Tom!!
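(One possible approach, shown here as an illustration rather than the AskTOM answer and using hypothetical table and column names, is to rebuild the string with substr/rpad or to use regexp_replace with back-references.)

-- Masks digits 3 through 9 of a 16-digit value; table/column names are hypothetical
select substr(card_no, 1, 2) || rpad('*', 7, '*') || substr(card_no, 10) as masked_card
from   cards;

-- The same result with a regular expression and back-references
select regexp_replace(card_no, '^(\d{2})\d{7}(\d{7})$', '\1*******\2') as masked_card
from   cards;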
Categories: DBA Blogs

Declare a dynamic table type

Tom Kyte - Tue, 2019-03-05 17:26
Hello, I have a stored procedure which takes 2 input parameters - owner and table_name. I would like to create a TYPE variable based on what is passed. This is the code: create or replace PROCEDURE proc ( in_owner IN VARCHA...
Categories: DBA Blogs

Rittman Mead at Analytics and Data Summit 2019

Rittman Mead Consulting - Tue, 2019-03-05 09:49
Rittman Mead at Analytics and Data Summit 2019

The Analytics and Data Summit 2019 (formerly known as BIWA) is happening next week at Oracle HQ in Redwood Shores. I'm excited to participate since it's one of the best conferences for the Oracle Analytics crowd, where you can get three days full of content from experts as well as hints on future product developments directly from the related Product Managers!

I'll be representing Rittman Mead in two sessions:

This two-hour workshop will cover all the details of OAC: Product Overview, Instance Creation and Management, Moving from on-prem OBIEE to OAC, Data Preparation and Data Visualization, and Advanced Analytics, with interactive labs where participants can experience Data Visualization and Data Flows.


Become a Data Scientist with OAC! This session will explain how Oracle Analytics Cloud acts as an enabler for the transformation from Data Analyst to Data Scientist. Connection to the data, cleaning, transformation, and analysis are the intermediate steps before training several machine learning models, which will then be evaluated and used to predict outcomes on unseen data, with a demo showing all the steps in a real example based on a wine dataset!


There is a full list of all sessions here. You can follow the conference on twitter with the hashtag #AnDSummit2019, and I'll be tweeting about it too as @ftisiot.

The presentations that I'm delivering will be available to download on speakerdeck.

Categories: BI & Warehousing

Oracle JDeveloper Patches Required for Creating OA Extensions in EBS 12.2.8

Steven Chan - Tue, 2019-03-05 08:51

We have recently been asked which Oracle JDeveloper patches are required for use with Oracle E-Business Suite Release 12.2.8. The short answer is that the patches required for Release 12.2.8 are the same as those for Release 12.2.7.

You can learn more by referring to the recently updated master document on this subject, MOS Note 416708.1:

This note also lists the Oracle JDeveloper patches required for 12.2.8, 12.2.7, 12.2.6, 12.2.5, and many other Oracle E-Business Suite releases.

Background

These patches are intended for creating OA Extensions to OA Framework-based pages in Oracle E-Business Suite.

When you create such extensions, you must use the version of Oracle JDeveloper shipped by the Oracle E-Business Suite product development team, which is  specific to the EBS Applications Technology patch level. This means that there is a new version of Oracle JDeveloper with each new release of the EBS Applications Technology patchset.

References

Related Articles

Categories: APPS Blogs

Oracle Participates in NSF Cloud for Scientific Research Project

Oracle Press Releases - Tue, 2019-03-05 07:00
Press Release
Oracle Participates in NSF Cloud for Scientific Research Project

Organizations foster use of commercial cloud for heavy-duty scientific projects to speed answers and improve data integrity

Redwood Shores, Calif.—Mar 5, 2019

Oracle is participating in a project funded by the National Science Foundation (NSF) and facilitated by Internet2 to create innovative cloud computing capabilities for science applications and scientific computing research.

Internet2, a non-profit advanced technology community founded by colleges and universities, will manage two phases of the project. Known as Exploring Clouds for Acceleration of Science (E-CAS), the project will analyze cloud platforms to determine their strengths and capabilities for research and computational science across several academic disciplines.

Phase 1 will support six scientific and engineering applications and workflows with cloud computing capabilities. Then, two projects from the original six will be funded for a further year, measuring the scientific impact and results.

“Oracle is committed to ensuring that campus challenges are met, and the research and education communities are provided with the best technology to be successful,” said Jenny Tsai-Smith, vice president, Curriculum Development of Oracle. “Working with Internet2 and the National Science Foundation on E-CAS we are able to help these institutions gain access to the cutting-edge cloud computing technology they need to further ground-breaking discoveries through research.”

“The E-CAS project intends to accelerate scientific discovery through the integration and optimization of commercial cloud service advancements with cyberinfrastructure resources,” said Jamie Sunderland, executive director of service development at Internet2. “It also aims at identifying gaps between cloud provider capabilities and the potential for what they could provide to enhance academic research; and provide initial steps in documenting patterns and leading deployment practices to share with the community.”

Oracle partners with educators, researchers, students and university-affiliated entrepreneurship programs to create solutions that deliver significant positive impact to humanity and our world. Through the Oracle Cloud Innovation Accelerator program, Oracle grants cloud credits and provides technical support in support of those efforts. For example, one accelerator participant, The University of Bristol, is using their cloud credits for various research projects in chemistry and life sciences.

“The University of Bristol is proud to be working so closely with Oracle to explore the prospects of doing things in a different way. It is undoubtedly the case that high capability computing is ever more central to the science endeavor,” said Professor Guy Orpen, Deputy Vice-Chancellor for the Temple Quarter Enterprise Campus at the University of Bristol. “Being able to access these capabilities in Oracle Cloud through the Oracle Cloud Innovation Accelerator program lowers barriers to entry so that more exploratory research, more groundbreaking and innovative research can be done.”

Oracle is in collaboration with the Internet2 NET+ program through an agreement with Mythics, providing services on a provisional basis to Internet2 member institutions at a discounted community pricing structure. As such, research institutions can leverage Oracle’s computing capabilities in conjunction with Mythics to access a dedicated network connecting universities with federal and state agencies.

As an Internet2 industry member, Oracle has been contributing to this collaborative environment and working with a large cross-section of the research and education community to ensure that the Oracle Cloud meets campus challenges and benefits all member institutions’ teaching, learning and research needs.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


Subnets Are Now Regional (OCI: New Feature)

Online Apps DBA - Tue, 2019-03-05 04:42

Subnets & Load Balancers in the Oracle Cloud are now Regional! This is a big step towards simplifying network design and eventually means a lot fewer subnets. Are you aware of this? Watch [Video] Subnets Are Now Regional (OCI: New Feature) at https://k21academy.com/oci30 and learn: ✔ What is a Subnet? ✔ What was Subnet […]

The post Subnets Are Now Regional (OCI: New Feature) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How To Create A Recovery Services Vault In Azure

Yann Neuhaus - Tue, 2019-03-05 02:54
Creating A Recovery Services Vault In Azure

To create a recovery services vault, we first need to create a resource group by using either the CLI or the Azure interface.

Creating a resource group using the command line interface

$resourceGroup = "recovvault"
$location = "westeurope"
az group create -n $resourceGroup -l $location

The resource group recovvault has been created successfully.

Now we need to set up the recovery services vault.

From the Azure portal, select the resource group created, then click Add, search for Backup and Site Recovery (OMS), click on it and hit Create.
Specify the following information as described in the image below:

Recovery Services Backup

Hit create.

The recovery services vault has been created successfully.
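As an aside, and assuming the az backup command group is available in your version of the Azure CLI, the vault can also be created from the command line instead of the portal (the vault name below is just an example):

az backup vault create --name recovvaultrsv --resource-group $resourceGroup --location $location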

Set the vault storage redundancy to Locally-redundant
  • Select the recovery vault created
  • From the options column, select Properties, then Backup configuration
  • Choose Locally-redundant (more cost-effective than Geo-redundant)
  • Hit save
Configuring the Azure Backup Agent.

We need to create a virtual machine in order to simulate an on-premises server, so click on Backup from the left menu.
For the purpose of this example, we will choose On-premises for simulation and Files and folders to back up.
Hit Prepare the infrastructure.

Then download the agent for Windows Server and the vault credentials by clicking on the links provided (see below):

Recovery_Services_Agent

Run the Azure Recovery Services Agent installer, leave the Installation Settings and Proxy Configuration sections at their defaults, then click Install.

When the installation is done, click on Proceed to Registration.

Let’s associate the vault credential with the agent.

Browse to select the vault credentials file in order to associate the server with the backup vault in Azure.
Enter and confirm the passphrase, choose a location to save it, then hit Finish.

The server is now registered with the Azure backup and the agent properly installed.

Schedule backup for Files and Folders.
  • Click on close to launch Microsoft Azure Recovery Services Agent.
  • Click on Schedule Backup from the right column
  • Select items to backup (my docs folder)
  • Specify backup schedule (outside normal working hours)
  • Leave by default the retention policy
  • In our example, we choose Over the network because our data volume is small.
  • Hit Finish.

If you have a small amount of data, choose the option Over the network; otherwise choose Offline for large data sets.

The backup schedule is now successful.

Executing the backup.

Click on Back up Now from the right column
Hit back up and close (see below):

Backup_R

Recovering Data
  • Click on Recover data from the right column of the window backup
  • Choose This server
  • Choose Volume
  • Select the volume and date
  • Choose another recovery destination
  • Hit recover

The recovery is now successful; the folder has been properly recovered.

Information related to the cost of resources used during the implementation of the recovery vault:

Use the below PowerShell command to get an overview of the cost of the resources used:

Get-AzConsumptionUsageDetail -BillingPeriodName 201903 -ResourceGroup recovvault

If you are in a test environment, to avoid paying extra costs, make sure to delete the resource group you created (command line below) if it is no longer used.

  • az group delete -n $resourceGroup -y

If you need further details about managing and monitoring a Recovery Services vault, click here.
If you need further details about deleting a Recovery Services vault, click here.

The post How To Create A Recovery Services Vault In Azure appeared first on Blog dbi services.

Why are Index Organized tables (IOTs) not supported by interval partitioning

Tom Kyte - Mon, 2019-03-04 23:06
Hi Tom, could you please let me know why we cannot use interval partitioning on index-organized tables? Thanks in advance.
Categories: DBA Blogs

Batch Architecture - Designing Your Submitters - Part 3

Anthony Shorten - Mon, 2019-03-04 16:13

If you are using the command line submission interface, the last step in the batch architecture is to configure the Submitter configuration settings.

Note: Customers using the Oracle Scheduler Integration or Online Submission (the latter is for non-production use only) can skip this article, as the configuration outlined here is not used by those submission methods.

As with the cluster and threadpool configurations, the use of the Batch Edit (bedit.sh) utility is recommended to save costs and reduce risk with the following set of commands:

$ bedit.sh -s
 
$ bedit.sh -b <batch_control>
  • Default option (-s). This sets up a default configuration file used for any batch control where no specific batch properties file exists. This creates a submitbatch.properties file located in the $SPLEBASE/splapp/standalone/config subdirectory.
  • Batch Control Specific configuration (-b). This will create a batch control specific configuration file named process.<batchcontrol>.properties where <batchcontrol> is the Batch Control identifier (Note: Make sure it is the same case as the identified in the meta-data). This file is located in the $SPLEBASE/splapp/standalone/config/cm subdirectory. In this option, the soft parameters on the batch control can be configured as well.

Batch Submitter Configuration

Use the following guidelines:

  • Use the -s option where possible. Setup a global configuration to cover as many processes as possible and then create specific process parameter files for batch controls that require specific soft parameters.
  • Minimize the use of command line overrides. The advantage of setting up submitter configuration files is to reduce your maintenance costs. Whilst it is possible to use command line overrides to replace all the settings in the configuration, avoid overusing overrides so that your configuration stays stable and your operational costs stay low.
  • Set common batch parameters. Using the -s option specify the parameters for the common settings.
  • Change the User used. The default user AUSER is not a valid user. This is intentional, to force the appropriate configuration for your site. Avoid using SYSUSER, as that account is only to be used to load additional users into the product.
  • Set up soft parameters in process-specific configurations. For batch controls that have parameters, these need to be set in the configuration file or as overrides on the command line. To minimize maintenance costs and potential command line issues, it is recommended to set the values in a process-specific configuration file using the add soft command in bedit.sh, with the following recommendations:
    •  The parameter name in the parm setting must match the name and case of the Parameter Name on the Batch Control.
    •  The value set in the value setting must be the same or valid for the Parameter Value on the Batch Control.
    •  Optional parameters do not need to be specified unless used.

For example:

$ bedit.sh -s

Editing file /u01/demo/splapp/standalone/config/submitbatch.properties using template /u01/demo/etc/submitbatch.be
Batch Configuration Editor [submitbatch.properties]
---------------------------------------------------------------
Current Settings
  poolname (DEFAULT)
  threads (1)
  commit (10)
  maxtimeout (15)
  user (AUSER)
  lang (ENG)
  storage (false)
  role ({batchCode})
>

$ bedit.sh -b MYJOB

File /u01/demo/splapp/standalone/config/cm/job.MYJOB.properties does not exist. Create? (y/n) y
Editing file /u01/demo/splapp/standalone/config/cm/job.MYJOB.properties using template /u01/demo/etc/job.be
Batch Configuration Editor [job.MYJOB.properties]
---------------------------------------------------------------
Current Settings
  poolname (DEFAULT)
  threads (1)
  commit (10)
  user (AUSER)
  lang (ENG)
  soft.1
      parm (maxErrors)
      value (500)
>

For more advice on individual parameters use the help <parameter> command.

To use the configuration use the submitjob.sh -b <batch_code> command. Refer to the Server Administration Guide supplied with your product for more information.

Microsoft Azure / Ubuntu: Installation waagent

Dietrich Schroff - Mon, 2019-03-04 10:15
If you want to build your own Linux images for Microsoft Azure, you have to use waagent. So I created a virtual machine on my local host with Ubuntu Server.
The installation of waagent is easy if you know that the package is not called waagent on Ubuntu but walinuxagent:
# apt install walinuxagent
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following NEW packages will be installed:
  walinuxagent
0 upgraded, 1 newly installed, 0 to remove and 24 not upgraded.
Need to get 170 kB of archives.
After this operation, 1,075 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 walinuxagent amd64 2.2.32-0ubuntu1~18.04.1 [170 kB]
Fetched 170 kB in 0s (400 kB/s) 
Selecting previously unselected package walinuxagent.
(Reading database ... 66707 files and directories currently installed.)
Preparing to unpack .../walinuxagent_2.2.32-0ubuntu1~18.04.1_amd64.deb ...
Unpacking walinuxagent (2.2.32-0ubuntu1~18.04.1) ...
Processing triggers for ureadahead (0.100.0-20) ...
Setting up walinuxagent (2.2.32-0ubuntu1~18.04.1) ...
update-initramfs: deferring update (trigger activated)
Created symlink /etc/systemd/system/multi-user.target.wants/walinuxagent.service → /lib/systemd/system/walinuxagent.service.
Created symlink /etc/systemd/system/multi-user.target.wants/ephemeral-disk-warning.service → /lib/systemd/system/ephemeral-disk-warning.service.
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for initramfs-tools (0.130ubuntu3.6) ...
update-initramfs: Generating /boot/initrd.img-4.15.0-45-generic
To get more information about whether waagent is supported for your preferred distribution, just check this GitHub page: https://github.com/Azure/WALinuxAgent


Feb 2019 FND Recommended Patch Collection for EBS 12.1 Now Available

Steven Chan - Mon, 2019-03-04 08:19

The latest cumulative set of updates to the Oracle E-Business Suite Release 12.1 technology stack foundation utilities (FND) is now available in a newly released Feb 2019 Recommended Patch Collection (RPC). Oracle strongly recommends all E-Business Suite 12.1 customers apply this set of updates.

Issues Fixed in this Patch

This cumulative Recommended Patch Collection contains important fixes for issues with the Oracle EBS Application Object Library (FND) libraries that handle password hashing and resets, Forms-related interactions, key flexfields, descriptive flexfields, and more.

View the patch Readme for a complete listing of bugs fixed in this patch. 

Related Article
Categories: APPS Blogs

Thank You ALL

Michael Dinh - Mon, 2019-03-04 07:43

Oracle is like a box of chocolates: you never know what you are going to get. (Reference: the movie Forrest Gump)

After spending countless hours over the weekend, I am reminded of the quote, “Curiosity killed the cat, but satisfaction brought it back.”

Basically, I have been unsuccessful in rebuilding a 12.1.0.1 RAC VM to test and validate another upgrade bug.

The finding looks to match – root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3

Thank you to all who shared their experiences!!!

==============================================================================================================
+++ FAILED STEP: TASK [oraswgi-install : Run root script after installation (Other Nodes)] ******
==============================================================================================================

Line 771: failed: [racnode-dc1-2] /u01/app/12.1.0.1/grid/root.sh", ["Check /u01/app/12.1.0.1/grid/install/root_racnode-dc1-2_2019-03-04_05-17-39.log for the output of root script"]
TASK [oraswgi-install : Run root script after installation (Other Nodes)] ******


[oracle@racnode-dc1-2 ~]$ cat /u01/app/12.1.0.1/grid/install/root_racnode-dc1-2_2019-03-04_05-17-39.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/12.1.0.1/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.1/grid/crs/install/crsconfig_params
2019/03/04 05:17:39 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2019/03/04 05:18:06 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2019/03/04 05:18:07 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2019/03/04 05:18:44 CLSRSC-507: The root script cannot proceed on this node racnode-dc1-2 because either the first-node operations have not completed on node racnode-dc1-1 or there was an error in obtaining the status of the first-node operations.

Died at /u01/app/12.1.0.1/grid/crs/install/crsutils.pm line 3681.
The command '/u01/app/12.1.0.1/grid/perl/bin/perl -I/u01/app/12.1.0.1/grid/perl/lib -I/u01/app/12.1.0.1/grid/crs/install /u01/app/12.1.0.1/grid/crs/install/rootcrs.pl ' execution failed
[oracle@racnode-dc1-2 ~]$

[oracle@racnode-dc1-2 ~]$ tail /etc/oracle-release
Oracle Linux Server release 7.3
[oracle@racnode-dc1-2 ~]$

[root@racnode-dc1-1 ~]# crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-1] is [12.1.0.2.0]

[root@racnode-dc1-1 ~]# crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].
[root@racnode-dc1-1 ~]#

==============================================================================================================
+++ CLSRSC-507: The root script cannot proceed on this node NODE2 because either the first-node operations have not completed on node NODE1 or there was an error in obtaining the status of the first-node operations.
==============================================================================================================

https://community.oracle.com/docs/DOC-1011444

Created by 3389670 on Feb 24, 2017 6:41 PM. Last modified by 3389670 on Feb 24, 2017 6:41 PM.
Visibility: Open to anyone

Problem Summary 
--------------------------------------------------- 
root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3

Problem Description 
--------------------------------------------------- 
root.sh failing with CLSRSC-507 while installing 12c grid on Linux 7.3 
OLR initialization - successful 
2017/02/23 05:28:25 CLSRSC-507: The root script cannot proceed on this node NODE2 because either the first-node operations have not completed on node NODE1 or there was an error in obtaining the status of the first-node operations.

Died at /u01/app/12.1.0.2/grid/crs/install/crsutils.pm line 3681. 
The command '/u01/app/12.1.0.2/grid/perl/bin/perl -I/u01/app/12.1.0.2/grid/perl/lib -I/u01/app/12.1.0.2/grid/crs/install /u01/app/12.1.0.2/grid/crs/install/roo

Error Codes 
--------------------------------------------------- 
CLSRSC-507

Problem Category/Subcategory 
--------------------------------------------------- 
Database RAC / Grid Infrastructure / Clusterware/Install / Upgrade / Patching issues

Solution 
---------------------------------------------------

1. Download latest JAN 2017 PSU 12.1.0.2.170117 (Jan 2017) Grid Infrastructure Patch Set Update (GI PSU) - 24917825

https://updates.oracle.com/download/24917825.html 

Platform or Language Linux86-64

2. Unzip downloaded patch as GRID user to directory

unzip p24917825_121020_Linux-x86-64.zip -d 

3. Run deconfig on both nodes

In the 2nd node as root user, 

/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force

In the 1st node as root user, 
/u01/app/12.1.0.2/grid/crs/install/rootcrs.sh -deconfig -force -lastnode

4. Once deconfig is completed then move forward on applying patching on both nodes in GRID Home

5. Move to unzip patch directory and apply patch using opatch manual

In 1st node, as grid user

cd /24917825/24732082 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828633 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828643 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/21436941 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

In 2nd node, as grid user

cd /24917825/24732082 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828633 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/24828643 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

cd /24917825/21436941 
/u01/app/12.1.0.2/grid/OPatch/opatch apply -local

6. Once Patch apply is completed on both node then move forward on invoking config.sh  

/u01/app/12.1.0.2/grid/crs/config/config.sh

or run root.sh directly on node1 and node2

/u01/app/12.1.0.2/grid/root.sh

Cartesian Join

Jonathan Lewis - Mon, 2019-03-04 07:37

I wrote this note a little over 4 years ago (Jan 2015) but failed to publish it for some reason. I’ve just rediscovered it and it’s got a couple of details that are worth mentioning, so I’ve decided to go ahead and publish it now.

A recent [ed: 4 year old] question on the OTN SQL forum asked for help in “multiplying up” data – producing multiple rows from a single row source. This is something I’ve done fairly often when modelling a problem, for example by generating an orders table and then generating an order_lines table from the orders table, and there are a couple of traps to consider.

The problem the OP had was that their base data was the result set from a complex query – which ran “fine”, but took 10 minutes to complete when a Cartesian join to a table holding just three rows was included. Unfortunately the OP didn’t supply, or even comment on, the execution plans. The obvious guess, of course, is that the extra table resulted in a completely different execution plan rather than the expected “do the original query then multiply by 3” plan, in which case the solution to the problem is (probably) simple – stick the original query into a non-mergeable view before doing the join.

Assume we have the following tables, t1 has 100,000 rows (generated from the SQL in this article), t2 has 4 rows where column id2 has the values from 1 to 4, t3 is empty – we can model the basic requirement with the query shown below:


SQL> desc t1
 Name                    Null?    Type
 ----------------------- -------- ----------------
 ID                               NUMBER
 C1                               CHAR(2)
 C2                               CHAR(2)
 C3                               CHAR(2)
 C4                               CHAR(2)
 PADDING                          VARCHAR2(100)

SQL> desc t2
 Name                    Null?    Type
 ----------------------- -------- ----------------
 ID2                              NUMBER

SQL> desc t3
 Name                    Null?    Type
 ----------------------- -------- ----------------
 ID                               NUMBER
 ID2                              NUMBER
 C1                               CHAR(2)
 C2                               CHAR(2)
 C3                               CHAR(2)
 C4                               CHAR(2)
 PADDING                          VARCHAR2(100)


insert into t3
select
        t1.id, t2.id2, t1.c1, t1.c2, c3, t1.c4, t1.padding
from
       (select * from t1) t1,
        t2
;

If we “lose” the plan for the original “select * from t1” (assuming t1 was really a complicated view) when we extend to the Cartesian join all we need to do is the following:


insert into t3
select
        /*+ leading(t1 t2) */
        t1.id, t2.id2, t1.c1, t1.c2, c3, t1.c4, t1.padding
from
        (select /*+ no_merge */ * from t1) t1,
        t2
;

This is where the problem starts to get a little interesting. The /*+ no_merge */ hint is (usually) a winner in situations like this – but why have I included a /*+ leading() */ hint choosing to put t2 (the small table) second in the join order? It’s because of the way that Cartesian Merge Joins work, combined with an eye to where my most important resource bottleneck is likely to be. Here’s the execution plan taken from memory after executing this statement with statistics_level set to all. (11.2.0.4):


SQL_ID  azu8ntfjg9pwj, child number 0
-------------------------------------
insert into t3 select   /*+ leading(t1 t2) */  t1.id, t2.id2, t1.c1,
t1.c2, c3, t1.c4, t1.padding from  (select /*+ no_merge */ * from t1)
t1,   t2

Plan hash value: 1055157753

----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |      1 |        |      0 |00:00:10.28 |   48255 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL |      |      1 |        |      0 |00:00:10.28 |   48255 |       |       |          |
|   2 |   MERGE JOIN CARTESIAN   |      |      1 |    400K|    400K|00:00:06.30 |    1727 |       |       |          |
|   3 |    VIEW                  |      |      1 |    100K|    100K|00:00:00.94 |    1725 |       |       |          |
|   4 |     TABLE ACCESS FULL    | T1   |      1 |    100K|    100K|00:00:00.38 |    1725 |       |       |          |
|   5 |    BUFFER SORT           |      |    100K|      4 |    400K|00:00:01.78 |       2 |  3072 |  3072 | 2048  (0)|
|   6 |     TABLE ACCESS FULL    | T2   |      1 |      4 |      4 |00:00:00.01 |       2 |       |       |          |
----------------------------------------------------------------------------------------------------------------------

Let’s try that again (building from scratch, of course) with the table order reversed in the leading() hint:


SQL_ID  52qaydutssvn5, child number 0
-------------------------------------
insert into t3 select   /*+ leading(t2 t1) */  t1.id, t2.id2, t1.c1,
t1.c2, c3, t1.c4, t1.padding from  (select /*+ no_merge */ * from t1)
t1,   t2

Plan hash value: 2126214450

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | Writes |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |      1 |        |      0 |00:00:12.29 |   48311 |   6352 |   1588 |       |       |          |
|   1 |  LOAD TABLE CONVENTIONAL |      |      1 |        |      0 |00:00:12.29 |   48311 |   6352 |   1588 |       |       |          |
|   2 |   MERGE JOIN CARTESIAN   |      |      1 |    400K|    400K|00:00:06.64 |    1729 |   6352 |   1588 |       |       |          |
|   3 |    TABLE ACCESS FULL     | T2   |      1 |      4 |      4 |00:00:00.01 |       2 |      0 |      0 |       |       |          |
|   4 |    BUFFER SORT           |      |      4 |    100K|    400K|00:00:04.45 |    1727 |   6352 |   1588 |    13M|  1416K| 9244K (0)|
|   5 |     VIEW                 |      |      1 |    100K|    100K|00:00:00.80 |    1725 |      0 |      0 |       |       |          |
|   6 |      TABLE ACCESS FULL   | T1   |      1 |    100K|    100K|00:00:00.28 |    1725 |      0 |      0 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

There’s one important detail that’s not explicit in the execution plans – I’ve set the workarea_size_policy to manual and the sort_area_size to 10MB to demonstrate the impact of having a dataset that is too large for the session’s workarea limit.

The results, in terms of timing, are border-line. With the “correct” choice of order the completion time is 10.28 seconds compared to 12.29 seconds, though if you look at the time for the Merge Join Cartesian operation the difference is much less significant. The critical point, though, appears at operation 4 – the Buffer Sort. I set my sort_area_size to something that I know was smaller than the data set I needed to buffer – so the operation had to spill to disc. Ignoring overheads and rounding errors the data from the 1,727 blocks I read from the table at pctfree = 10 were dumped to the temporary space in 1,588 packed blocks (sanity check: 1,727 * 0.9 = 1,554); and then those blocks were read back once for each row from the driving t2 table (sanity check: 1,588 * 4 = 6,352).

With my setup I had a choice of bottlenecks:  scan a very small data set in memory 100,000 times to burn CPU, or scan a large data set from disc 4 times. There wasn’t much difference in my case: but the difference could be significant on a full-scale production system.  By default the optimizer happened to pick the “wrong” path with my data sets.

But there’s something even more important than this difference in resource usage to generate the data: what does the data look like after it’s been generated.  Here’s a simple query to show you the first few rows of the stored result sets in the two different test:


SQL> select id, id2, c1, c2, c3, c4 from t3 where rownum <= 8;

Data from leading (t1 t2)
=========================
        ID        ID2 C1 C2 C3 C4
---------- ---------- -- -- -- --
         1          1 BV GF JB LY
         1          2 BV GF JB LY
         1          3 BV GF JB LY
         1          4 BV GF JB LY
         2          1 YV LH MT VM
         2          2 YV LH MT VM
         2          3 YV LH MT VM
         2          4 YV LH MT VM


Data from leading (t2 t1)
=========================
        ID        ID2 C1 C2 C3 C4
---------- ---------- -- -- -- --
         1          1 BV GF JB LY
         2          1 YV LH MT VM
         3          1 IE YE TS DP
         4          1 DA JY GS AW
         5          1 ZA DC KD CF
         6          1 VJ JI TC RI
         7          1 DN RY KC BE
         8          1 WP EQ UM VY

If we had been using code like this to generate an order_lines table from an orders table, with  leading(orders t2) we would have “order lines” clustered around the “order number” – which is a realistic model; when we have leading(t2 orders) the clustering disappears (technically the order numbers are clustered around the order lines). It’s this aspect of the data that might have a much more important impact on the suitability (and timing) of any testing you may be doing rather than a little time lost or gained in generating the raw data.

Footnote

If you try to repeat this test on your own system don’t expect my timing to match yours. Bear in mind, also, that with statistics_level set to all there’s going to be a CPU overhead that varies between the two options for the leading() hint – the CPU usage on rowsource execution stats could be much higher for the case where one of the operations starts 100,000 times.

 
