Feed aggregator

Oracle Database 19c Automatic Indexing: Predicted Back In 2008 (A Better Future)

Richard Foote - Mon, 2019-03-18 19:24
I’ve recently received a number of correspondences regarding one of my most popular blog posts, dating back to February 2008: Index Rebuild vs. Coalesce vs. Shrink Space (Pigs – 3 Different Ones). In the comments section, there’s an interesting discussion where I mention: “If Oracle19 does everything for you and all the various indexes structures get […]
Categories: DBA Blogs

How to count pairs in a consecutive number of rows

Tom Kyte - Mon, 2019-03-18 18:06
I have the following example: COLUMN 19 20 26 28 29 32 33 34. I need to count the rows based on pairs: 19-20, 28-29, 32-33. I'm having difficulty checking whether a pair has already been counted or not, any suggestions? The result should be s...
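One possible approach (a sketch, assuming Oracle 12c or later, a table t with a single numeric column col, and that pairs are non-overlapping consecutive values): MATCH_RECOGNIZE consumes the rows of a match, so a value already counted in a pair is not reused, which gives 3 for the sample data (19-20, 28-29, 32-33, with 26 and 34 left over).

-- Hypothetical table t(col) holding 19,20,26,28,29,32,33,34
SELECT COUNT(*) AS pair_count
FROM   t
MATCH_RECOGNIZE (
  ORDER BY col
  MEASURES a.col AS pair_start, b.col AS pair_end
  ONE ROW PER MATCH
  AFTER MATCH SKIP PAST LAST ROW
  PATTERN (a b)
  DEFINE b AS b.col = a.col + 1
);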
Categories: DBA Blogs

Error generating DUMP: ORA-39006: internal error

Tom Kyte - Mon, 2019-03-18 18:06
Hi, I have a problem creating a dump with SQL Developer, the PL/SQL generated is: <code> set scan off set serveroutput on set escape off whenever sqlerror exit DECLARE h1 number; s varchar2(1000):=NULL; errorvarchar varchar2(1...
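For context, a hand-written DBMS_DATAPUMP export block looks roughly like the one SQL Developer generates. This is only a minimal sketch (the directory object DATA_PUMP_DIR, the file names, and the SCOTT schema filter are assumptions, not taken from the question):

set serveroutput on
DECLARE
  h1        NUMBER;
  job_state VARCHAR2(30);
BEGIN
  -- Open a schema-mode export job and attach dump and log files
  h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'scott.dmp',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'scott.log',
                         directory => 'DATA_PUMP_DIR',
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Restrict the job to a single schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h1, name => 'SCHEMA_EXPR',
                                value => 'IN (''SCOTT'')');
  DBMS_DATAPUMP.START_JOB(h1);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h1, job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/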
Categories: DBA Blogs

ORA-06533: Subscript Beyond Count error

Tom Kyte - Mon, 2019-03-18 18:06
Hi, I have the following PL/SQL code. If run the 1st time, it works fine; running it a 2nd or 3rd time, it fails with a "Subscript beyond count" error. If I make the declaration of g_response private to the procedure (not global in the package), it works...
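The symptom points at package state: package-level variables live for the whole session, so a global collection keeps its contents between calls, while a local one is re-initialized on every call. A minimal sketch of the usual fix (hypothetical names):

create or replace package pkg_demo as
  type t_tab is table of varchar2(100);
  g_response t_tab := t_tab();   -- session-lifetime state
  procedure run;
end pkg_demo;
/
create or replace package body pkg_demo as
  procedure run is
  begin
    -- Reset the global on every call; without this, elements left over from
    -- the previous call can break subscript arithmetic that assumes the
    -- collection starts empty (raising ORA-06533 on the 2nd or 3rd run)
    g_response := t_tab();
    g_response.extend(2);
    g_response(1) := 'first';
    g_response(2) := 'second';
  end run;
end pkg_demo;
/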
Categories: DBA Blogs

Partner Webcast – Oracle Cloud Business Analytics Data Visualizations

Providing fast and flexible analysis of any data from any source is a business requirement these days. Oracle Analytics Cloud is a cloud-first analytics platform, built on the industry-leading Oracle...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Insights into IT Governance and Security

Chris Warticki - Mon, 2019-03-18 11:52
CXOTALK with Oracle’s Brennan Baybeck

Businesses continue to expand their digital footprint while cyber risks are rapidly increasing in quantity and complexity, which is why it’s more important than ever to embed security solutions and IT governance processes into the fabric of your business—to protect your digital assets, your customers, and your bottom line.

As an IT professional, you should make it a priority to understand how these crucial relationships, combined with ongoing support from a trusted source, can help you develop a solid IT governance program to protect your business now and in the future.

Good Governance Program

“Having a good governance program gives you many benefits,” says Oracle’s Vice President of Global IT Risk Management Brennan Baybeck. “The first one is protecting the business and the critical assets of that business. One of the main critical assets is the data. Good governance also helps drive compliance with contractual requirements with customers [and] regulatory compliance with regulators.”

Brennan continues, “Additionally, it also helps with ensuring that the various components of your security program are covered, whether it's security operations, change management, configuration management, patching, or threat intelligence.”

Hear from Brennan Baybeck, Oracle VP, Global IT Risk Management

In this short CXOTALK, Baybeck and industry analyst and CXOTALK host Michael Krigsman further explore why a strong security governance program is absolutely essential for every business today, including:

  • What is excellent security
  • Top three benefits of a strong security governance program
  • Security measures you need to think about for the future
  • Best practices for managing threats and vulnerabilities
  • The growing role of governance and risk management in DevOps

Watch the CXOTALK: Insights into IT Governance and Security to learn more.

Brennan Baybeck, Oracle VP, Global IT Risk Management

Resources:
  • See the CXOTALK #1: Why Every Business Needs Trusted Support.
  • Learn more about Oracle Premier Support.
  • Protect Your Business From Cybercrime

Data Pump Exit Codes

Learn oracle 12c database management - Mon, 2019-03-18 11:48


oracle@Linux01:[/u01/oracle/DPUMP] $ exp atoorpu file=abcd.dmp logfile=test.log table=sys.aud$
About to export specified tables via Conventional Path ...
. . exporting table                           AUD$     494321 rows exported
Export terminated successfully without warnings.

oracle@qpdbuat211:[/d01/oracle/DPUMP] $ echo $?
0


oracle@Linux01:[/u01/oracle/DPUMP] $ imp atoorpu file=abcd.dmp logifle=test.log
LRM-00101: unknown parameter name 'logifle'

IMP-00022: failed to process parameters, type 'IMP HELP=Y' for help
IMP-00000: Import terminated unsuccessfully

oracle@Linux01:[/u01/oracle/DPUMP] $ echo $?
1
The exit code can be used in export shell scripts for status verification:
status=$?
if test $status -eq 0
then
echo "export was successful."
else
echo "export was not successful."
fi
Categories: DBA Blogs

Automate recyclebin purge in oracle

Learn oracle 12c database management - Mon, 2019-03-18 11:46


Set up this simple Scheduler job as SYSDBA to purge the objects in the recycle bin.
This is one of the most space-consuming locations that DBAs often forget to clean up, and the
dropped objects pile up, occupying a lot of space. Based on how long you want to retain these dropped objects, set up a Scheduler job to run the PL/SQL block below either daily, weekly, or monthly.


 I suggest running it weekly.


--For user_recyclebin purge--
-- plsql --

declare
VSQL varchar2(500);
begin
VSQL:='purge user_recyclebin';
execute immediate VSQL;
dbms_output.put_line('USER RECYCLEBIN has been purged.');
end;

/




--For dba_recyclebin purge--
-- plsql --

declare
VSQL varchar2(500);
begin
VSQL:='purge dba_recyclebin';
execute immediate VSQL;
dbms_output.put_line('DBA RECYCLEBIN has been purged.');
end;

/





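To automate the weekly run, a Scheduler job along these lines can wrap the block above (a sketch; the job name and the Sunday 3 a.m. schedule are assumptions):

begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_DBA_RECYCLEBIN_WEEKLY',  -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin execute immediate ''purge dba_recyclebin''; end;',
    start_date      => systimestamp,
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=3',
    enabled         => true,
    comments        => 'Weekly purge of the DBA recycle bin');
end;
/
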
Prerequisites
The database object must reside in your own schema or you must have the DROP ANY ... system privilege for the type of object to be purged, or you must have the SYSDBA system privilege. To perform the PURGE DBA_RECYCLEBIN operation, you must have the SYSDBA or PURGE DBA_RECYCLEBIN system privilege.
Categories: DBA Blogs

Oracle Again Cited as a Leader in Data Management Solutions for Analytics

Oracle Press Releases - Mon, 2019-03-18 07:00
Press Release
Oracle Again Cited as a Leader in Data Management Solutions for Analytics Oracle positioned highest for ability to execute and furthest for completeness of vision in latest Gartner Magic Quadrant for Data Management Solutions for Analytics

Redwood Shores, Calif.—Mar 18, 2019

Oracle today announced it was positioned highest for ability to execute and furthest for completeness of vision in Gartner’s 2019 “Magic Quadrant for Data Management Solutions for Analytics” report1.

Oracle’s leadership in data management emanates from its deep portfolio of database management solutions, including the self-driving Oracle Autonomous Database. Oracle believes these innovations enabled the company to be positioned 13 consecutive times as a Leader in this report.

“Oracle is proud to be positioned highest for ability to execute and furthest for completeness of vision in Gartner's 2019 Magic Quadrant for Data Management Solutions for Analytics," said Andrew Mendelsohn, Executive Vice President, Oracle Database Server Technologies. "Oracle's self-driving Autonomous Data Warehouse combines the power of a Data Warehouse with the flexibility of Big Data to drive all analytic data management use cases."

Oracle Autonomous Database consists of Oracle Autonomous Transaction Processing, optimized for running transactions and mixed workloads, and Oracle Autonomous Data Warehouse for running analytic database workloads. Both options of autonomous database offer self-driving, self-securing, and self-repairing capabilities that can automatically discover threats and remediate them while the database is running. Oracle Autonomous Database can help customers dramatically improve data security, be more efficient in the face of budget constraints, and quickly drive innovation that creates a competitive advantage.

Download a complimentary copy of Gartner’s 2019 Magic Quadrant for Data Management Solutions for Analytics here.

[1] Source: Gartner, Magic Quadrant for Data Management Solutions for Analytics, Adam Ronthal, Roxane Edjlali, Rick Greenwald, 21 January 2019.

Contact Info
Nicole Maloney
Oracle
+1.650.506.0806
nicole.maloney@oracle.com
Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

[BLOG] The Heartbeat Table of Oracle GoldenGate (12.2)

Online Apps DBA - Mon, 2019-03-18 05:25

Are you learning GoldenGate but are unaware of the built-in Heartbeat Table feature that has been added in Oracle GoldenGate 12.2? If yes, then visit: https://k21academy.com/goldengate34 and learn all about: ✔Features of Heartbeat Table ✔How you can add it on Target and Source ✔Managing and Viewing Heartbeat Data & much more…

The post [BLOG] The Heartbeat Table of Oracle GoldenGate (12.2) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

When you change the UNDO_RETENTION parameter, the LOB segment’s retention value is not modified

Yann Neuhaus - Mon, 2019-03-18 03:30

Below, I will try to explain a particular case of the general error ORA-01555: snapshot too old.

Normally, when we hit this error, we try to adapt the retention parameters or to tune our queries.

SQL> show parameter undo;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled                    boolean     FALSE
undo_management                      string      AUTO
undo_retention                       integer     3600 --extended from 900,
undo_tablespace                      string      UNDOTBS1

But there are some scenarios where the above rule does not work.

From the alert log file of the database, we got the sql_id that caused the issue: pmrbk5fdfd665

But when you search for it in V$SQL/V$SQLAREA, it is not there:

SQL> select sql_fulltext from v$sql where sql_id like '%pmrbk5fdfd665%';

no rows selected

Why?

It seems that the sql_id is present in V$OPEN_CURSOR, with an entry in the SQL_TEXT column.
The issue comes from the fact that the statement is accessing a LOB column, which causes Oracle to generate a new sql_id.
The execution part related to the LOBs will not appear in V$SQL/V$SQLAREA and is not captured in AWR reports.

SQL>  select distinct * from v$open_cursor
  2     where rownum < 25
  3     and sql_id like '%pmrbk5fdfd665%';

SADDR                   SID USER_NAME                      ADDRESS          HASH_VALUE SQL_ID        SQL_TEXT                                                     LAST_SQL SQL_EXEC_ID CURSOR_TYPE
---------------- ---------- ------------------------------ ---------------- ---------- ------------- ------------------------------------------------------------ -------- ----------- ---------------
0000000670A19780         74 my_user                   00000002EB91F1F0 3831220380 pmrbk5fdfd665 table_104_11_XYZT_0_0_0
00000006747F0478        131 my_user                   00000002EB91F1F0 3831220380 pmrbk5fdfd665 table_104_11_XYZT_0_0_0

Apparently, the string in the SQL_TEXT column is a hex representation of the object_id that is being accessed.
In our case it is: XYZT

SQL>    select owner, object_name, object_type
  2    from dba_objects
  3    where object_id = (select to_number('&hex_value','XXXXXX') from dual);
Enter value for hex_value: XYZT
old   3:   where object_id = (select to_number('&hex_value','XXXXXX') from dual)
new   3:   where object_id = (select to_number('XYZT','XXXXXX') from dual)

                                                                                                                    
OWNER                  OBJECT_TYPE                                               OBJECT_NAME
---------------------- --------------------------------------------------------------------------
my_user                TABLE                                                     my_table


SQL> desc my_user.my_table;
 Name                  Type
 -------------------   ----------------
 EXPERIMENT_ID          VARCHAR2(20)
 DOCUMENT               BLOB
............….

If we look at the retention on the DOCUMENT column, we see:

SQL> select table_name, pctversion, retention, segment_name from dba_lobs where table_name in ('my_table');

TABLE_NAME     PCTVERSION   RETENTION   SEGMENT_NAME
-------------- ------------ ----------- ---------------------------
my_table                    900         SYS_LOB0000027039C00002$$

To fix it, run the following command to adapt the retention of the BLOB column to the new value of the UNDO_RETENTION parameter (the bare RETENTION keyword makes the LOB pick up the current UNDO_RETENTION, here 3600):

ALTER TABLE my_table MODIFY LOB (DOCUMENT) (RETENTION);
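
Afterwards, re-running the DBA_LOBS query from above should show the new value (3600) in the RETENTION column:

SQL> select table_name, retention, segment_name from dba_lobs where table_name in ('my_table');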

The article When you change the UNDO_RETENTION parameter, the LOB segment’s retention value is not modified appeared first on Blog dbi services.

Microsoft Azure: How to use waagent (Microsoft Azure Linux Agent)

Dietrich Schroff - Sat, 2019-03-16 15:35
After installing waagent on my Ubuntu server, I tried to use this tool.
My first guess was to read the man pages, but there is no entry for waagent:
root@ubuntuserver:~# man waagent
No manual entry for waagent
See 'man 7 undocumented' for help when manual pages are not available.
So for documentation you have to visit the Microsoft Azure portal:
https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/agent-linux



Here are some commands I tried:
root@ubuntuserver:~# waagent -show-configuration
AutoUpdate.Enabled = True
AutoUpdate.GAFamily = Prod
Autoupdate.Frequency = 3600
CGroups.EnforceLimits = False
CGroups.Excluded = customscript,runcommand
DVD.MountPoint = /mnt/cdrom/secure
DetectScvmmEnv = False
EnableOverProvisioning = True
Extension.LogDir = /var/log/azure
Extensions.Enabled = True
HttpProxy.Host = None
HttpProxy.Port = None
Lib.Dir = /var/lib/waagent
Logs.Verbose = False
OS.AllowHTTP = False
OS.CheckRdmaDriver = False
OS.EnableFIPS = False
OS.EnableFirewall = True
OS.EnableRDMA = False
OS.HomeDir = /home
OS.OpensslPath = /usr/bin/openssl
OS.PasswordPath = /etc/shadow
OS.RootDeviceScsiTimeout = 300
OS.SshClientAliveInterval = 180
OS.SshDir = /etc/ssh
OS.SudoersDir = /etc/sudoers.d
OS.UpdateRdmaDriver = False
Pid.File = /var/run/waagent.pid
Provisioning.AllowResetSysUser = False
Provisioning.DecodeCustomData = False
Provisioning.DeleteRootPassword = True
Provisioning.Enabled = False
Provisioning.ExecuteCustomData = False
Provisioning.MonitorHostName = False
Provisioning.PasswordCryptId = 6
Provisioning.PasswordCryptSaltLength = 10
Provisioning.RegenerateSshHostKeyPair = False
Provisioning.SshHostKeyPairType = rsa
Provisioning.UseCloudInit = True
ResourceDisk.EnableSwap = False
ResourceDisk.Filesystem = ext4
ResourceDisk.Format = False
ResourceDisk.MountOptions = None
ResourceDisk.MountPoint = /mnt
ResourceDisk.SwapSizeMB = 0
or list all commands:
root@ubuntuserver:~# waagent -help
usage: /usr/sbin/waagent [-verbose] [-force] [-help] -configuration-path:-deprovision[+user]|-register-service|-version|-daemon|-start|-run-exthandlers|-show-configuration]
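
From the usage listing above, the -deprovision option is worth a note: it prepares a VM for capture as a generalized image by removing system-specific data such as SSH host keys and DHCP leases, and the +user suffix also deletes the administrative user account. Only run it on a machine you intend to capture, for example:
root@ubuntuserver:~# waagent -deprovision+user -force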


Monitoring Database in AWS Aurora After Migrating from Oracle to PostgreSQL

Pakistan's First Oracle Blog - Fri, 2019-03-15 19:08
Suppose you have an on-premises Oracle database which you have now moved to AWS Aurora PostgreSQL.
For your Oracle database, you have been using v$ views to monitor the runtime performance of the instance: long-running operations, top SQLs from ASH, blocking, and so on. How do you continue doing that when you migrate your database to the cloud, especially to AWS Aurora PostgreSQL?

Well, PostgreSQL provides statistics collection views: a subsystem that collects runtime information about certain server activities, such as statistical performance information. For example, you can use the pg_stat_activity view to check for long-running queries.
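A query along these lines (a sketch; the 5-minute threshold is arbitrary) surfaces them:

SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM   pg_stat_activity
WHERE  state <> 'idle'
  AND  now() - query_start > interval '5 minutes'
ORDER  BY runtime DESC;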

There are various other statistics views in PostgreSQL, such as pg_stat_all_tables to see how a table is accessed (sequential scans versus index scans), and so on. There are other views to check I/O on tables and indexes, and a plethora of others.

In addition to these statistics views, Aurora PostgreSQL provides a nifty tool called Performance Insights. Performance Insights monitors Amazon RDS and Aurora databases (both MySQL and PostgreSQL) and captures workloads so that you can analyze and troubleshoot database performance. Performance Insights visualizes the database load and provides very useful filtering by attributes such as waits, SQL statements, hosts, or users.

As part of operational excellence, it’s imperative after a database migration that performance is monitored, documented, and continuously improved. Performance Insights and the statistics views are great for proactive and reactive database tuning in AWS RDS and AWS Aurora.
Categories: DBA Blogs

Playing with oracleasm and ASMLib

Michael Dinh - Fri, 2019-03-15 19:02

I forgot about a script I wrote some time ago: Be Friend With awk/sed | ASM Mapping

[root@racnode-dc1-1 ~]# cat /sf_working/scripts/asm_mapping.sh
#!/bin/sh -e
# For each ASMLib disk label, print its [major,minor] device numbers and
# match them against /dev to find the underlying block device
for disk in `/etc/init.d/oracleasm listdisks`
do
oracleasm querydisk -d $disk
#ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed s'/.$//'|sed '1s/^.//'|awk -F, '{print $1 ",.*" $2}'`
# Alternate option to remove []
ls -l /dev/*|grep -E `oracleasm querydisk -d $disk|awk '{print $NF}'|sed 's/[][]//g'|awk -F, '{print $1 ",.*" $2}'`
echo
done

[root@racnode-dc1-1 ~]# /sf_working/scripts/asm_mapping.sh
Disk "CRS01" is a valid ASM disk on device [8,33]
brw-rw---- 1 root    disk      8,  33 Mar 16 10:25 /dev/sdc1

Disk "DATA01" is a valid ASM disk on device [8,49]
brw-rw---- 1 root    disk      8,  49 Mar 16 10:25 /dev/sdd1

Disk "FRA01" is a valid ASM disk on device [8,65]
brw-rw---- 1 root    disk      8,  65 Mar 16 10:25 /dev/sde1

[root@racnode-dc1-1 ~]#

HOWTO: Which Disks Are Handled by ASMLib Kernel Driver? (Doc ID 313387.1)

[root@racnode-dc1-1 ~]# oracleasm listdisks
CRS01
DATA01
FRA01

[root@racnode-dc1-1 dev]# ls -l /dev/oracleasm/disks
total 0
brw-rw---- 1 oracle dba 8, 33 Mar 15 10:46 CRS01
brw-rw---- 1 oracle dba 8, 49 Mar 15 10:46 DATA01
brw-rw---- 1 oracle dba 8, 65 Mar 15 10:46 FRA01

[root@racnode-dc1-1 dev]# ls -l /dev | grep -E '33|49|65'|grep -E '8'
brw-rw---- 1 root    disk      8,  33 Mar 15 23:47 sdc1
brw-rw---- 1 root    disk      8,  49 Mar 15 23:47 sdd1
brw-rw---- 1 root    disk      8,  65 Mar 15 23:47 sde1

[root@racnode-dc1-1 dev]# /sbin/blkid | grep oracleasm
/dev/sde1: LABEL="FRA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="205115d9-730d-4f64-aedd-d3886e73d123"
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"
/dev/sdc1: LABEL="CRS01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="232e214d-07bb-4f36-aba8-fb215437fb7e"
[root@racnode-dc1-1 dev]#

Various commands to retrieve oracleasm info and more.

[root@racnode-dc1-1 ~]# cat /etc/oracle-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# cat /etc/system-release
Oracle Linux Server release 7.3

[root@racnode-dc1-1 ~]# uname -r
4.1.12-61.1.18.el7uek.x86_64

[root@racnode-dc1-1 ~]# rpm -q oracleasm-`uname -r`
package oracleasm-4.1.12-61.1.18.el7uek.x86_64 is not installed

[root@racnode-dc1-1 ~]# rpm -qa |grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# oracleasm -V
oracleasm version 2.1.9

[root@racnode-dc1-1 ~]# oracleasm -h
Usage: oracleasm [--exec-path=<exec_path>] <command> [ <args> ]
       oracleasm --exec-path
       oracleasm -h
       oracleasm -V

The basic oracleasm commands are:
    configure        Configure the Oracle Linux ASMLib driver
    init             Load and initialize the ASMLib driver
    exit             Stop the ASMLib driver
    scandisks        Scan the system for Oracle ASMLib disks
    status           Display the status of the Oracle ASMLib driver
    listdisks        List known Oracle ASMLib disks
    querydisk        Determine if a disk belongs to Oracle ASMlib
    createdisk       Allocate a device for Oracle ASMLib use
    deletedisk       Return a device to the operating system
    renamedisk       Change the label of an Oracle ASMlib disk
    update-driver    Download the latest ASMLib driver

[root@racnode-dc1-1 ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [oracle]:
Default group to own the driver interface [dba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done

[root@racnode-dc1-1 ~]# oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=dba
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"

[root@racnode-dc1-1 ~]# cat /etc/sysconfig/oracleasm
#
# This is a configuration file for automatic loading of the Oracle
# Automatic Storage Management library kernel driver.  It is generated
# By running /etc/init.d/oracleasm configure.  Please use that method
# to modify this file
#

# ORACLEASM_ENABLED: 'true' means to load the driver on boot.
ORACLEASM_ENABLED=true

# ORACLEASM_UID: Default user owning the /dev/oracleasm mount point.
ORACLEASM_UID=oracle

# ORACLEASM_GID: Default group owning the /dev/oracleasm mount point.
ORACLEASM_GID=dba

# ORACLEASM_SCANBOOT: 'true' means scan for ASM disks on boot.
ORACLEASM_SCANBOOT=true

# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER=""

# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE=""

# ORACLEASM_USE_LOGICAL_BLOCK_SIZE: 'true' means use the logical block size
# reported by the underlying disk instead of the physical. The default
# is 'false'
ORACLEASM_USE_LOGICAL_BLOCK_SIZE=false

[root@racnode-dc1-1 ~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes

[root@racnode-dc1-1 ~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...

[root@racnode-dc1-1 ~]# oracleasm querydisk -d DATA01
Disk "DATA01" is a valid ASM disk on device [8,49]

[root@racnode-dc1-1 ~]# oracleasm querydisk -p DATA01
Disk "DATA01" is a valid ASM disk
/dev/sdd1: LABEL="DATA01" TYPE="oracleasm" PARTLABEL="primary" PARTUUID="714e56a4-210c-4836-a9cd-ff2162c1dea7"

[root@racnode-dc1-1 ~]# oracleasm-discover
Using ASMLib from /opt/oracle/extapi/64/asm/orcl/1/libasm.so
[ASM Library - Generic Linux, version 2.0.4 (KABI_V2)]
Discovered disk: ORCL:CRS01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:DATA01 [104853504 blocks (53684994048 bytes), maxio 1024]
Discovered disk: ORCL:FRA01 [104853504 blocks (53684994048 bytes), maxio 1024]

[root@racnode-dc1-1 ~]# lsmod | grep oracleasm
oracleasm              57344  1

[root@racnode-dc1-1 ~]# ls -la /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Mar  5 20:21 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm

[root@racnode-dc1-1 ~]# rpm -qa | grep oracleasm
oracleasmlib-2.0.4-1.el6.x86_64
oracleasm-support-2.1.8-3.1.el7.x86_64
kmod-oracleasm-2.0.8-17.0.1.el7.x86_64

[root@racnode-dc1-1 ~]# rpm -qi oracleasmlib-2.0.4-1.el6.x86_64
Name        : oracleasmlib
Version     : 2.0.4
Release     : 1.el6
Architecture: x86_64
Install Date: Tue 18 Apr 2017 10:56:40 AM CEST
Group       : System Environment/Kernel
Size        : 27192
License     : Oracle Corporation
Signature   : RSA/SHA256, Mon 26 Mar 2012 10:22:51 PM CEST, Key ID 72f97b74ec551f03
Source RPM  : oracleasmlib-2.0.4-1.el6.src.rpm
Build Date  : Mon 26 Mar 2012 10:22:44 PM CEST
Build Host  : ca-build44.us.oracle.com
Relocations : (not relocatable)
Packager    : Joel Becker <joel.becker@oracle.com>
Vendor      : Oracle Corporation
URL         : http://oss.oracle.com/
Summary     : The Oracle Automatic Storage Management library userspace code.
Description :
The Oracle userspace library for Oracle Automatic Storage Management
[root@racnode-dc1-1 ~]#

Explicitly providing values in a WHERE clause showing much better performance compared to using sub query

Tom Kyte - Fri, 2019-03-15 16:46
Hi, I am new to Oracle and not sure how to provide the LiveSQL link. I have 2 tables to join: huge_table contains about 1 billion rows, big_table contains about 100 million rows, and a small table contains 999 rows providing the condition to fil...
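The pattern being compared is typically something like this (a sketch with assumed join and filter columns, since the question is truncated):

-- Literal values: the optimizer sees the exact values at parse time
select count(*)
from   huge_table h
join   big_table  b on b.id = h.big_id
where  b.code in (1, 2, 3);

-- Subquery: the values come from small_table at run time, which can lead to
-- a different join order or join method unless the optimizer unnests it well
select count(*)
from   huge_table h
join   big_table  b on b.id = h.big_id
where  b.code in (select code from small_table);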
Categories: DBA Blogs

Compare columns in two tables and report which column is different

Tom Kyte - Fri, 2019-03-15 16:46
Compare columns in two tables and list out the column names which are different. For example: create table t1(c1 number(2), c2 varchar2(10)); create table t2(c1 number(2), c2 varchar2(10)); insert into t1 values(1,'a'); insert into t2 values(1,'b'); result ...
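One way to get that result for the single-row example above (a sketch: unpivot each table into column-name/value pairs, then keep the names whose values differ):

select a.col_name
from  (select 'C1' as col_name, to_char(c1) as val from t1
       union all
       select 'C2', c2 from t1) a
join  (select 'C1' as col_name, to_char(c1) as val from t2
       union all
       select 'C2', c2 from t2) b
  on  a.col_name = b.col_name
where a.val <> b.val;
-- With the sample data this returns C2 ('a' vs 'b')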
Categories: DBA Blogs

Documentum : Dctm job locked after docbase installation

Yann Neuhaus - Fri, 2019-03-15 11:19

A correct configuration of Documentum jobs is paramount; that’s why it is the first thing we do after a docbase installation.
A few days ago, I configured the jobs on a new docbase using DQL, and I got an error because a job was locked by the user dmadmin.

The error message was:

DQL> UPDATE dm_job OBJECTS SET target_server=' ' WHERE target_server!=' ' ;
...
[DM_QUERY_F_UP_SAVE]fatal:  "UPDATE:  An error has occurred during a save operation."

[DM_SYSOBJECT_E_LOCKED]error:  "The operation on dm_FTQBS_WEEKLY sysobject was unsuccessful because it is locked by user dmadmin."

I checked the status of this job:

API> ?,c,select r_object_id from dm_job where object_name ='dm_FTQBS_WEEKLY';
r_object_id
----------------
0812D68780000ca6
(1 row affected)

API> dump,c,0812D68780000ca6
...
USER ATTRIBUTES

  object_name                     : dm_FTQBS_WEEKLY
  title                           :
  subject                         : qbs weekly job
...
  start_date                      : 2/28/2019 05:21:15
  expiration_date                 : 2/28/2027 23:00:00
...
  is_inactive                     : T
  inactivate_after_failure        : F
...
  run_now                         : T
...

SYSTEM ATTRIBUTES

  r_object_type                   : dm_job
  r_creation_date                 : 2/28/2019 05:21:15
  r_modify_date                   : 2/28/2019 05:24:48
  r_modifier                      : dmadmin
...
  r_lock_owner                    : dmadmin
  r_lock_date                     : 2/28/2019 05:24:48
...

APPLICATION ATTRIBUTES

...
  a_status                        :
  a_is_hidden                     : F
...
  a_next_invocation               : 3/7/2019 05:21:15

INTERNAL ATTRIBUTES

  i_is_deleted                    : F
...

The job was locked 3 minutes after the creation date… and it had remained locked ever since (4 days).

Let’s check job logs:

[dmadmin@CONTENT_SERVER1 ~]$ ls -rtl $DOCUMENTUM/dba/log/repository1/agentexec/*0812D68780000ca6*
-rw-r--r--. 1 dmadmin dmadmin   0 Feb 28 05:24 /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6.lck
-rw-rw-rw-. 1 dmadmin dmadmin 695 Feb 28 05:24 /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6
[dmadmin@CONTENT_SERVER1 ~]$
[dmadmin@CONTENT_SERVER1 ~]$ cat /app/dctm/server/dba/log/repository1/agentexec/job_0812D68780000ca6
Thu Feb 28 05:24:50 2019 [ERROR] [LAUNCHER 20749] Detected while preparing job ? for execution: Command Failed: connect,repository1.repository1,dmadmin,'',,,try_native_first, 
status: 0, with error message [DM_DOCBROKER_E_NO_SERVERS_FOR_DOCBASE]error:  "The DocBroker running on host (CONTENT_SERVER1:1489) does not know of a server for the specified docbase (repository1)"
...NO HEADER (RECURSION) No session id for current job.
Thu Feb 28 05:24:50 2019 [FATAL ERROR] [LAUNCHER 20749] Detected while preparing job ? for execution: Command Failed: connect,repository1.repository1,dmadmin,'',,,try_native_first, status: 0, with error message .
..NO HEADER (RECURSION) No session id for current job.

I noted three important pieces of information here:
1. The DocBroker considered that the docbase was stopped when the AgentExec sent the request.
2. The timestamp corresponds to the installation date of the docbase.
3. LAUNCHER 20749.

I checked the install logs to confirm the first point:

[dmadmin@CONTENT_SERVER1 ~]$ egrep " The installer will s.*. repository1" $DOCUMENTUM/product/7.3/install/logs/install.log*
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:03:24,757  INFO [main]  - The installer will start component process for repository1.
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:24:39,588  INFO [main]  - The installer will stop component process for repository1.
/app/dctm/server/product/7.3/install/logs/install.log.2019.2.28.8.7.22:05:26:49,110  INFO [main]  - The installer will start component process for repository1.

The AgentExec logs:

[dmadmin@CONTENT_SERVER1 ~]$ ls -rtl $DOCUMENTUM/dba/log/repository1/agentexec/*agentexec.log*
-rw-rw-rw-. 1 dmadmin dmadmin    640 Feb 28 05:24 agentexec.log.save.02.28.19.05.27.54
-rw-rw-rw-. 1 dmadmin dmadmin    384 Feb 28 05:36 agentexec.log.save.02.28.19.05.42.26
-rw-r-----. 1 dmadmin dmadmin      0 Feb 28 05:42 agentexec.log.save.02.28.19.09.51.24
...
-rw-r-----. 1 dmadmin dmadmin 569463 Mar  8 09:11 agentexec.log
[dmadmin@CONTENT_SERVER1 ~]$
[dmadmin@CONTENT_SERVER1 ~]$ cat $DOCUMENTUM/dba/log/repository1/agentexec/agentexec.log.save.02.28.19.05.27.54
Thu Feb 28 05:17:48 2019 [INFORMATION] [LAUNCHER 19584] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:22:19 2019 [INFORMATION] [LAUNCHER 20191] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:22:49 2019 [INFORMATION] [LAUNCHER 20253] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:24:19 2019 [INFORMATION] [LAUNCHER 20555] Detected during program initialization: Version: 7.3.0050.0039  Linux64
Thu Feb 28 05:24:49 2019 [INFORMATION] [LAUNCHER 20749] Detected during program initialization: Version: 7.3.0050.0039  Linux64

I found here the LAUNCHER 20749 noted above ;) So, this job corresponds to the last job executed by the AgentExec before it was stopped.
The AgentExec was up, so the docbase should have been up also, but the DocBroker said that the docbase was down :(

Now, the question is: when exactly was the DocBroker informed that the docbase was shut down?

[dmadmin@CONTENT_SERVER1 ~]$ cat $DOCUMENTUM/dba/log/repository1.log.save.02.28.2019.05.26.49
...
2019-02-28T05:24:48.644873      20744[20744]    0112D68780000003        [DM_DOCBROKER_I_PROJECTING]info:  "Sending information to Docbroker located on host (CONTENT_SERVER1) with port (1489).  
Information: (Config(repository1), Proximity(1), Status(Server shut down by user (dmadmin)), Dormancy Status(Active))."

To recapitulate:
– 05:24:48.644873: docbase shut down and DocBroker informed
– 05:24:49: AgentExec sent its request to the DocBroker

So, we can say that the AgentExec was still alive after the docbase stopped!

Now, resolving the issue is easy :D

API> unlock,c,0812D68780000ca6
...
OK

I didn’t find in the logs when exactly the docbase stop the AgentExec, I guess the docbase request the stop (kill) but don’t check if it has been really stopped.
I confess that I encounter this error many times after docbase installation, that’s why it is useful to know why and how to resolve it quickly. I advise you to configure Dctm jobs after each installation, at least check if the r_lock_date is set and if it is justified.
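
To spot any other jobs in the same state, a quick DQL check (a sketch, reusing the same != ' ' convention as the UPDATE statement above) lists every job still carrying a lock:

DQL> SELECT r_object_id, object_name, r_lock_owner, r_lock_date
     FROM dm_job
     WHERE r_lock_owner != ' ';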

The article Documentum : Dctm job locked after docbase installation appeared first on Blog dbi services.

New OA Framework 12.2.6 Update 18 Now Available

Steven Chan - Fri, 2019-03-15 11:04

Web-based content in Oracle E-Business Suite Release 12 runs on the Oracle Application Framework (also known as OA Framework, OAF, or FWK) user interface libraries and infrastructure.

We periodically release updates to Oracle Application Framework to fix performance, security, and stability issues.

These updates are provided in cumulative Release Update Packs, and cumulative Bundle Patches that can be applied on top of the Release Update Packs. In this context, cumulative means that the latest RUP or Bundle Patch contains everything released earlier.

The latest OAF update for Oracle E-Business Suite Release 12.2.6 is now available:

Where is this update documented?

Instructions for installing this OAF Release Update Pack are in the following My Oracle Support knowledge document:

Who should apply this patch?

All Oracle E-Business Suite Release 12.2.6 users should apply this patch. Future OAF patches for EBS Release 12.2.6 will require this patch as a prerequisite. 

What's new in this update?

This bundle patch includes fixes for the following issues:

  • When a mandatory check error is encountered, the physical name of the checkbox is displayed in the error message.
  • After closing the dialog window, column hide on an advanced table does not work.


Categories: APPS Blogs

Enterprise applications meet cloud native

OTN TechBlog - Fri, 2019-03-15 08:54

Speaking with enterprise customers, many are adopting a cloud-native strategy for new, in-house development projects. This approach of short development cycles, iterative functional delivery, and automated CI/CD tooling is allowing them to deliver innovation for users and customers quicker than ever before. One of Oracle’s top 10 predictions for developers in 2019 is that legacy enterprise applications jump to cloud-native development approaches.

The need to move to cloud native is rooted in the fact that, at heart, all companies are software companies. Those that can use software to their advantage, to speed up and automate their business and to make it easier for their customers to interact with them, win. This is the nature of business today, and the reason that start-ups, such as Uber, can disrupt whole existing industries.

Cloud native technologies like Kubernetes, Docker containers, micro-services and functions provide the basis to scale, secure and enable these new solutions. 

However, enterprises typically have a complex stack of applications and infrastructure; this usually means monolithic custom or ISV applications that are anything but cloud native. These new cloud-native solutions need to be able to interact with these legacy systems, but they run in the cloud rather than on-premises and need delivery cycles of days rather than months. Enterprises need to address this technical debt in order to realise the full benefits of a cloud-native approach. Re-writing these monoliths is not practical in the short term due to the resources and time needed. So, what are the options for modernising enterprise applications?

Move the Monolith

Moving these applications to the cloud can realise the cloud economics of elasticity and paying for what you use. This approach treats infrastructure as code rather than as physical compute, network, and storage. Using tools such as Terraform – https://www.terraform.io – to create and delete infrastructure resources and Packer – https://www.packer.io – to manage machine images means we can create environments when needed and tear them down when not. Although this does not immediately address modernisation of the application itself, it does start to automate the infrastructure and begin to integrate these applications into cloud-native development and delivery. https://blogs.oracle.com/developers/build-oracle-cloud-infrastructure-custom-images-with-packer-on-oracle-developer-cloud

Containerise and Orchestrate 

A cloud native strategy is largely based on running applications in Docker containers to give the flexibility of deployment on premises and across different cloud providers. A common approach is to containerise existing applications and run them on premises before moving to the cloud. 

Many enterprise applications, both in-house developed and ISV supplied, are Weblogic based, and enterprises are looking to do the same with these. Weblogic now runs in Docker containers, so the same approach can be taken – https://hub.docker.com/_/oracle-weblogic-server-12c.

As initial and suitable workloads (workloads that have fewer on-prem integration points, or are good candidates from a compliance standpoint) become containerised and moved to the cloud, the management and orchestration of containers into solutions begins to become an issue. Container management or orchestration platforms such as Kubernetes and Docker Swarm are being adopted, and Kubernetes is emerging as the platform of choice for enterprises to manage containers in the cloud. Oracle has developed a Weblogic Kubernetes operator that allows Kubernetes to understand and manage Weblogic domains, clustering, etc.: https://github.com/oracle/weblogic-kubernetes-operator

Integrating with version control like GitHub, using secure Docker repositories, and using CI/CD tooling to deploy to Kubernetes really brings these enterprise applications to the core of a cloud-native strategy. It also means existing Weblogic and Java skills in the organisation continue to be relevant in the cloud.

Breaking It Down

To fully benefit from running these applications in the cloud, the functionality needs to be integrated with the new cloud-native services and also to become more agile. An evolving pattern is to take an agile approach, using a series of iterations to refactor the enterprise application. A first step is to separate the UI from the functional code and create APIs to access the business functionality. This allows new cloud-native applications access to the required functionality and facilitates the shorter delivery cycles enterprises are demanding. Over time, these services can be rebuilt and deployed as cloud services, eventually migrating away from the legacy application. Helidon is a collection of Java libraries for writing microservices that helps re-use existing Java skills when re-developing the code behind the services.
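
To give a flavour of how small such a service can be, a minimal Helidon SE endpoint looks roughly like this (a sketch against the Helidon 1.x SE API; the path and message are placeholders):

import io.helidon.webserver.Routing;
import io.helidon.webserver.WebServer;

public class GreetService {
    public static void main(String[] args) {
        // Route GET /greet to a handler, then start the server asynchronously
        Routing routing = Routing.builder()
                .get("/greet", (req, res) -> res.send("Hello from a microservice"))
                .build();
        WebServer.create(routing)
                 .start()
                 .thenAccept(ws ->
                     System.out.println("Server is up: http://localhost:" + ws.port()));
    }
}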

As more and more services are deployed, management, versioning, and monitoring become increasingly important, and a service mesh is evolving as the way to handle them. A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It’s responsible for the reliable delivery of requests through the complex topology of services that comprise a modern, cloud-native application. Istio is evolving as an enterprise choice and can easily be installed on Kubernetes.

In Conclusion

More and more enterprises are adopting a cloud-native approach for new development projects. They are also struggling with the technical debt of large monolithic enterprise applications when trying to modernise them. However, there are a number of strategies and technologies that can be used to help migrate and modernise these legacy applications in the cloud. With the right approach, existing skills can be maintained and evolved into a container-based, cloud-native environment.

When Speed of Change Becomes a Competitive Advantage

Oracle Press Releases - Fri, 2019-03-15 07:00
Blog
When Speed of Change Becomes a Competitive Advantage

By Steve Miranda, executive vice president, Applications Product Development, Oracle—Mar 15, 2019

In my conversations with customers, they almost always talk about change. This change—in technology, in expectations, in their businesses, and in their teams—is not unusual, but as I am sure you have seen, its impact is growing.

That’s why we made a change ourselves. In the past, we held separate events for different business audiences. This year we wanted to create an applications community that reflects the way you work—with integrated systems across departments and business units, no matter the size, industry, or location of your organization.

And that’s exactly why we are hosting Modern Business Experience (MBX) and Modern Customer Experience (MCX) in Las Vegas next week. At these two co-located events, we are bringing together marketing, sales, and service with HR, finance, and supply chain, because we are seeing more and more similarities between each area. Of course, the specifics are incredibly important and change significantly, but whether I am speaking with finance, supply chain, or marketing leaders, the same themes keep coming up. See if they sound familiar to you:

1. Changing expectations: You’re trying to stay ahead of changing expectations. Not just from customers, but from employees, partners, and a myriad of other stakeholders. And we all know the catch. Expectations are not only skyrocketing, but are constantly evolving. It’s a moving target.

2. Adaptable Organizations: As things erupt externally, you are feeling some of the same turmoil internally. Are your teams in flux? Are your employees’ roles, skill sets, and positions changing to meet the needs of today’s experience and service-driven economy? If they aren’t, they will be soon.

3. Humans + Machines: Emerging technology is coming, but you’re not sure how or when. I see artificial intelligence and machine learning (AI/ML) reshaping workforces, improving decision making, accelerating processes, and driving efficiencies. Machines are freeing up business users to be more creative problem solvers, and this is changing the working relationship between people and technology.

Our customers (including you) are facing some very real challenges—staying ahead of changing expectations, building adaptable organizations, and realizing the potential of the latest innovations. To turn these challenges into opportunities, we are focused on five key areas:

Your experience: It’s all about you. We are committed to partnering with you to tackle your most complex challenges, help run your operations, and help you achieve the best business outcomes.

Most complete suite of apps: We don’t want you to have to compromise by choosing between breadth and depth. That’s why we are committed to providing one, integrated, best-of-breed applications suite, which brings teams together, creates a single source of truth, leverages data to its fullest, and drives end-to-end business innovation.

Best technology: We want to help you eliminate IT complexity so you can focus on what matters to your business. This means providing you with bullet-proof security, high-end scalability, mission-critical performance, and strong integration capabilities. You shouldn’t have to worry about how it all works.

Speed of innovation: We want to help you stay ahead of constantly changing expectations and business demands. To do this we send continuous product updates to keep you ahead of the latest innovation cycle. We’ve also infused the suite with AI/ML capabilities. The result: quick value and a leg up on the competition.

Modern UX: We want to make using enterprise technology as engaging and intuitive as possible. No matter the device, we deliver a simple yet powerful user experience. And across the organization structure, everyone gets a unified experience, which can be personalized.

We will be sharing the latest on each of these areas at the events through customers, partners, and sessions. We promise that you will walk away with a much better understanding of what emerging technologies can do for your business and why the time for being in the cloud is now. And Magic Johnson promises to keep things lively—he and I will wrap things up at noon on Thursday. I’ll be the short one.

 
