Feed aggregator

Unity and Difference

Greg Pavlik - 7 hours 24 min ago
One of the themes that traveled from Greek philosophy through to the unfolding of modernity was the neoplatonic notion of "the One". A simple unity in which all "transcendentals" - beauty, truth, goodness - both originate and in some sense coalesce. In its patristic and medieval development, these transcendentals were "en-hypostasized" or made present in persons - the idea of the Trinity, where a communion of persons exists in perfect love, perfect peace and mutual self-offering: most importantly, a perfect unity in difference. All cultures have their formative myths, and this particular myth made its mark on a broad swath of humanity over the centuries - though I think in ways that usually obscured its underlying meaning (unfortunately).

Now I have always identified with this comment of Dostoevsky: "I will tell you that I am a child of this century, a child of disbelief and doubt. I am that today and will remain so until the grave" - sometimes more strongly than others. But myths are not about what we believe is "real" at any point in time. The meaning of these symbols, I think, says something to all of us today - particularly in the United States: that the essence of humanity may be best realized in a unity in difference that can only be realized through self-offering love. In political terms we are all citizens of one country, and our obligation as a society is to care for each other. This much ought to be obvious - we cannot exclude one race, one economic class, one geography, one party, from mutual care. The whole point of our systems, in fact, ought to be to realize, however imperfectly, some level of that mutual care, of mutual up-building and mutual support.

That isn't happening today. Too often we are engaged in the opposite - mutual tearing down and avoiding our responsibilities to each other. I wish there were a magic fix for this: it clearly has been a problem that has plagued our history for a long, long time. The one suggestion I can make is to find a way to reach out across boundaries with care on a day-by-day basis. It may seem like a person cannot make a difference. No individual drop of rain thinks it is responsible for the flood.

Block Corruption

Tom Kyte - 9 hours 21 min ago
Oracle 8i has some new packages for detecting and resolving issues related to block corruption. My question is: how does block corruption happen at all, when the operating system would, most of the time, remap bad clusters?
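(Not part of the original question, but the 8i package being referred to is presumably DBMS_REPAIR; a minimal sketch of checking a table for corrupt blocks might look like this, with schema and table names as placeholders.)

-- one-time setup: create the repair table that CHECK_OBJECT writes to
begin
    dbms_repair.admin_tables(
        table_name => 'REPAIR_TABLE',
        table_type => dbms_repair.repair_table,
        action     => dbms_repair.create_action);
end;
/

-- scan a table and report how many corrupt blocks were found
set serveroutput on
declare
    l_corrupt_count pls_integer := 0;
begin
    dbms_repair.check_object(
        schema_name   => 'SCOTT',
        object_name   => 'EMP',
        corrupt_count => l_corrupt_count);
    dbms_output.put_line('Corrupt blocks found: ' || l_corrupt_count);
end;
/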
Categories: DBA Blogs

unique index with null values

Tom Kyte - 9 hours 21 min ago
Hello Tom, I have this situation: With a table like create table test (id number not null, name varchar2(10) not null, source_id number); (actually the real tables have more columns, but for this question these are enough) and with th...
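(The excerpt is cut off, but as a generic illustration, not necessarily the asker's exact case: Oracle only skips an index entry when every indexed column is NULL, so a composite unique index still rejects duplicate keys that are only partly NULL.)

create table test (id number not null, name varchar2(10) not null, source_id number);
create unique index test_uq on test(name, source_id);

insert into test values (1, 'A', null);   -- ok
insert into test values (2, 'A', null);   -- fails with ORA-00001: key ('A', NULL) is already indexed
insert into test values (3, 'B', null);   -- ok, different name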
Categories: DBA Blogs

SQL*Loader - How can I put header info on each record

Tom Kyte - 9 hours 21 min ago
Hi, I have a question regarding SQL*Loader. I am parsing a datafile that has a header record that contains information that I need to store on each record being inserted into the database. My data file looks something like: BOF - 09/01/2003 ...
Categories: DBA Blogs

AWS RDS: 5 Must-Know Actions for Oracle DBAs

Pythian Group - 17 hours 6 min ago

Managing Oracle on AWS has some twists. Here are five daily DBA activities that have changed on AWS:

Kill Sessions:

begin
    rdsadmin.rdsadmin_util.kill(
        sid    => &sid,
        serial => &serial,
        method => 'IMMEDIATE');
end;
/
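The sid and serial# to plug in can be looked up with an ordinary query against V$SESSION (a quick sketch; the username filter is only illustrative):

select sid, serial#, username, status
from   v$session
where  username = 'PYTHIAN'
and    status   = 'ACTIVE';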

 

Flush shared_pool or buffer_cache:

exec rdsadmin.rdsadmin_util.flush_shared_pool;
exec rdsadmin.rdsadmin_util.flush_buffer_cache;

 

Perform RMAN Operations:

BEGIN
 rdsadmin.rdsadmin_rman_util.validate_database(
 p_validation_type => 'PHYSICAL+LOGICAL',
 p_parallel => 4,
 p_section_size_mb => 10,
 p_rman_to_dbms_output => FALSE);
END;
/

 

Grant Privileges to SYS Objects:

# Grant

begin
    rdsadmin.rdsadmin_util.grant_sys_object(
        p_obj_name  => 'V_$SESSION',
        p_grantee   => 'PYTHIAN',
        p_privilege => 'SELECT');
end;
/

# Grant with Grant Option

begin
    rdsadmin.rdsadmin_util.grant_sys_object(
        p_obj_name     => 'V_$SESSION',
        p_grantee      => 'PYTHIAN',
        p_privilege    => 'SELECT',
        p_grant_option => true);
end;
/

# Revoke

begin
    rdsadmin.rdsadmin_util.revoke_sys_object(
        p_obj_name  => 'V_$SESSION',
        p_revokee   => 'PYTHIAN',
        p_privilege => 'SELECT');
end;
/

 

Create Custom Functions to Verify Passwords:

begin
    rdsadmin.rdsadmin_password_verify.create_verify_function(
        p_verify_function_name => 'CUSTOM_PASSWORD_FUNCTION', 
        p_min_length           => 12, 
        p_min_uppercase        => 2, 
        p_min_digits           => 1, 
        p_min_special          => 1,
        p_disallow_at_sign     => true);
end;
/
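Once created, the verify function is attached to a profile with standard DDL, for example (using the DEFAULT profile here just as an illustration):

alter profile DEFAULT limit
    password_verify_function CUSTOM_PASSWORD_FUNCTION;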

If you want to double-check the generated code, here's a simple trick: check DBA_SOURCE:

col text format a150
select TEXT  from DBA_SOURCE 
where OWNER = 'SYS' and NAME = 'CUSTOM_PASSWORD_FUNCTION' order by LINE;

I hope this helps!

Categories: DBA Blogs

GoldenGate – Supplemental Logging Is A Mess

Michael Dinh - Tue, 2020-06-02 22:22

I was tasked with finding supplemental logging details for an Oracle database used with GoldenGate.

Note: this is not a pluggable database.

With ADD TRANDATA, use dba_log_groups and dba_log_group_columns.

With ADD SCHEMATRANDATA, use select * from table(logmnr$always_suplog_columns( SCHEMA, TABLE ));

Basically, one would need to run the query with the logmnr pipelined function for all the tables in the schema.

Here is one process I used.

Create info_schematrandata.prm

$ cat info_schematrandata.prm
dblogin USERID ggs, PASSWORD *
info schematrandata *

Run ggsci using info_schematrandata.prm (full path is required)

$ ggsci paramfile /home/oracle/working/dinh/info_schematrandata.prm > info_schematrandata.log

Here is an example of the results (the actual output contains 12 schemas)

$ grep -i enable info_schematrandata.log
2020-06-01 05:19:35  INFO    OGG-06480  Schema level supplemental logging, excluding non-validated keys, is enabled on schema "SCOTT".
2020-06-01 05:19:35  INFO    OGG-01981  Schema level supplemental logging is enabled on schema "SCOTT" for all columns.

After finding the schemas, use the logmnr pipelined function to find all the details.

select * from table(logmnr$always_suplog_columns('SCOTT','EMP')) order by intcol;
select * from table(logmnr$always_suplog_columns('SCOTT','BONUS')) order by intcol;
select * from table(logmnr$always_suplog_columns('SCOTT','DEPT')) order by intcol;

You can find a demo of logmnr$always_suplog_columns at: GoldenGate 12c Features Found in 11.2.1.0.21 ???

References:

OGG: How To Log All Columns With Add Schematrandata To Get NOCOMPRESSUPDATES (Doc ID 1413142.1)

ADD SCHEMATRANDATA does not allow columns to be specified.
This enables logging of Primary Key columns only.
By default, updates are compressed.
In order to log all columns ADD TRANDATA would have to be used.
The ADD TRANDATA can be used in conjunction with ADD SCHEMATRANDATA to specify the non-primary key columns.

How to Check Supplemental Logging When ADD SCHEMATRANDATA is Enabled (Doc ID 1537837.1)

It is not listed in dba_log_groups or dba_log_group_columns.
select * from table(logmnr$always_suplog_columns( SCHEMA, TABLE ));

Effects of ADD TRANDATA and ADD SCHEMATRANDATA on an Oracle databases’ Supplemental Logging (Doc ID 2070331.1)

Some useful commands from ggsci:

INFO TRANDATA [container.]owner.table (info trandata *) did not work
INFO SCHEMATRANDATA schema            (info schematrandata *)
LIST TABLES table                     (list tables SCOTT.*)

Note to self:

$ cat list_table.prm
dblogin USERID ggs, PASSWORD *
list tables SCOTT.*

$ ggsci paramfile /home/oracle/working/dinh/list_table.prm > list_table.log

$ grep '\.' list_table.log | egrep -iv 'found|ggsci'| grep -A 10000 "Successfully logged into database."|grep -v database > table.log

$ cat table.log
SCOTT.EMP
SCOTT.BONUS
SCOTT.DEPT

$ cat read.sh
#!/bin/bash
IFS="."
while read f1 f3
do
echo "select * from table(logmnr\$always_suplog_columns('$f1','$f3')) order by intcol;"
done < /home/oracle/working/dinh/table.log
exit

$ ./read.sh > /tmp/suplog.sql

$ head /tmp/suplog.sql
select * from table(logmnr$always_suplog_columns('SCOTT','EMP')) order by intcol;
select * from table(logmnr$always_suplog_columns('SCOTT','BONUS')) order by intcol;
select * from table(logmnr$always_suplog_columns('SCOTT','DEPT')) order by intcol;

$ cat suplog.sql
set numw 8 lines 200 timing off echo off pages 10000 trimsp on tab off
column NAME_COL_PLUS_SHOW_PARAM format a30
column VALUE_COL_PLUS_SHOW_PARAM format a65 wrap
col owner for a20
col table_name for a20
col column_name for a30
col log_group_type for a20
col column_list for a80
col log_group_name for a30
col table_name for a30
spool Database_Supplemental_Logging_Details.log
pro ******** Database ********
SELECT
name,db_unique_name,open_mode,database_role,remote_archive,switchover_status,dataguard_broker,primary_db_unique_name
FROM v$database
;
pro ******** Database Supplemental Logging ********
SELECT
supplemental_log_data_min MIN,
supplemental_log_data_pk PK,
supplemental_log_data_ui UI,
supplemental_log_data_fk FK,
supplemental_log_data_all "ALL"
FROM v$database
;
pro ******** Table Supplemental Logging ********
pro
pro ******** GoldenGate: ADD TRANDATA ********
SELECT
g.owner, g.table_name, g.log_group_name, g.log_group_type,
DECODE(always,'ALWAYS','Unconditional',NULL,'Conditional') always,
LISTAGG(c.column_name, ', ') WITHIN GROUP (ORDER BY c.POSITION) column_list
FROM dba_log_groups g, dba_log_group_columns c
WHERE g.owner = c.owner(+)
AND g.log_group_name = c.log_group_name(+)
AND g.table_name = c.table_name(+)
GROUP BY g.owner, g.log_group_name, g.table_name, g.log_group_type, DECODE(always,'ALWAYS','Unconditional',NULL,'Conditional')
ORDER BY g.owner, g.log_group_name, g.table_name, g.log_group_type
;
pro ******** Schema Supplemental Logging ********
pro
pro ******** GoldenGate: ADD SCHEMATRANDATA ********
@/tmp/suplog.sql
exit

Introduction to Apache Kafka

Gerger Consulting - Tue, 2020-06-02 13:03
Apache Kafka is a product that should be in every IT professional's toolbox.

Attend the webinar by developer advocate Ricardo Ferreira and learn how to use Kafka in your daily work


About the Webinar:

The use of distributed streaming platforms is becoming increasingly popular among developers, but have you ever wondered why?
Part pub/sub messaging system, part distributed storage, part CEP-style event processing engine, this type of technology brings a whole new perspective on how developers capture, store, and process events. This talk will explain what distributed streaming platforms are and how they can be a game changer for modern data architectures. We'll discuss the road in IT that led to the need for this type of platform, the current state of Apache Kafka, as well as scenarios where this technology can be implemented.


About the Presenter:
Ricardo is a Developer Advocate at Confluent, the company founded by the original co-creators of Apache Kafka. He has over 22 years of experience in software engineering and specializes in service-oriented architecture, big data, cloud, and serverless architecture. Prior to Confluent, he worked for other vendors, such as Oracle, Red Hat, and IONA Technologies, as well as several consulting firms. When not working he enjoys grilling steaks in his backyard with his family and friends, where he gets the chance to talk about anything that is not IT related. Currently, he lives in Raleigh, North Carolina, with his wife and son.
Categories: Development

New SWOT Report: Running Mission-Critical Workloads In Multi-Cloud Environments Is Oracle’s Super Power

Oracle Press Releases - Tue, 2020-06-02 07:00
Blog
New SWOT Report: Running Mission-Critical Workloads In Multi-Cloud Environments Is Oracle’s Super Power

By Sasha Banks-Louie, Oracle—Jun 2, 2020

Multi-cloud environments are becoming the de facto cloud strategy among the majority of US businesses that have moved their applications to the cloud, but managing these complex infrastructures is creating new challenges that many companies are struggling to surmount—if they decide to move to the cloud at all.

These are among the key conclusions from a new report by Omdia Consulting, which analyzes the strengths, weaknesses, opportunities, and threats of running workloads in Oracle Cloud Infrastructure.  

Key findings from the Omdia SWOT report include:

  • More than 52% of businesses report that the inability to move workloads between clouds is slowing their adoption of cloud computing
  • The alliance between Oracle Cloud Infrastructure and Microsoft Azure aims to speed up cloud adoption by offering businesses a direct interconnection between the two clouds, integrated identity management, and a collaborative support agreement
  • Oracle’s open, enterprise-grade cloud architecture not only provides businesses with near-zero downtime and no cost to onboard and offboard users, it also offers one of the most comprehensive sets of security standards and customer support services among competing cloud vendors
 

While most cloud infrastructure vendors offer companies an environment on which to run their mission-critical applications without having to manage a data center, invest in hardware, or install and update software, those vendors’ service, pricing, and support plans can vary widely.

In its recently published SWOT Assessment of Oracle Cloud Infrastructure, Omdia Consulting offers new insight into why companies should consider running their mission-critical workloads in the Oracle Cloud.   

Oracle Cloud Infrastructure has built a reputation for reliability: companies are guaranteed more than 99.99% availability, with fewer than four minutes per month of maintenance downtime, the report says.

Such high availability is particularly important because banks that can’t process high-speed financial transactions, or retailers that aren’t able to synchronize their ecommerce websites with their on-hand inventories and point-of-sale data, can lose revenue, frustrate customers, and damage their brands.

Mastering Multi-Cloud Environments

As an increasing number of businesses today live in a multi-cloud world, it’s important for cloud vendors to integrate their offerings with those of their competitors.

The Oracle and Microsoft alliance announced in June 2019 enables joint customers to deploy mission-critical enterprise workloads that span both Microsoft Azure and Oracle Cloud Infrastructure environments.

Such customers can run Azure analytics and AI with Oracle Autonomous Database on the same workload, for example. This not only makes it easier for companies to have a backup cloud to aid in disaster recovery, but also to split up workloads so that data architects and application developers can choose their preferred environments and tools.

The Oracle and Microsoft alliance also removes the burden of managing multiple service orders, networking configurations, and data transfers from different clouds across workloads.

Raising The Bar For Security Standards

The range of standards that Oracle provides compliance with is one of the most comprehensive among the leading cloud providers, according to Omdia’s SWOT report.

While currently compliant with ISO 27001, SOC 1, SOC 2, PCI DSS, HIPAA/HITECH, FedRAMP Moderate, and FedRAMP High, Oracle Cloud Infrastructure also follows a media destruction process adhering to NIST SP 800-88r1 and DoD emergency destruction and secret classification standards.

A new feature in Oracle’s Gen 2 Cloud is Isolated Network Virtualization, which isolates physical network interfaces and cards from each other, isolating an attacker who has gained unauthorized access to the network. Through this process, Oracle helps companies protect against bad actors attacking their networks when an instance, whether bare metal, virtual machine, or container, has been compromised.

Gaining Share Through Human Customer Support

While all cloud infrastructure vendors let their customers access online documentation and community forums for free, many of those vendors charge hefty fees for hands-on, expert support—like the kind you’ll need to fix a latency problem or network outage.

But companies running their workloads on the Oracle Cloud Infrastructure Free Tier can get an enterprise-level support package, which includes two Oracle Autonomous Databases with powerful tools like Oracle Application Express (APEX) and Oracle SQL Developer, two Oracle Cloud Infrastructure Compute virtual machines, block, object, and archive storage, load balancer and data egress, and monitoring and notifications—for free.

It’s this kind of human customer care that has driven approximately 80% of Oracle’s customers to stay in the Oracle Cloud for between one and three years, with 21% of them committing to three-year subscriptions, according to Omdia’s SWOT analysis. The report also shows that more than 50% of Oracle’s customers increase their spend once they have moved to Oracle Cloud Infrastructure, and that the number of new customers moving to Oracle Cloud is growing by more than 150% year on year.

This level of premium support, including zero fees for onboarding or offboarding customers to its eponymous Cloud Infrastructure, demonstrates Oracle’s strong commitment to being an open, enterprise-grade cloud—earning its position as a top-five cloud provider in the world.

We live in a multi-cloud world and customers expect cloud providers to excel at interconnecting various platforms, applications, and workloads. Omdia’s report highlights several critical elements that make Oracle uniquely qualified to provide this interconnectivity, while offering exceptional performance, pricing, and support.

Click here for more Industry Analyst Research.

Oracle Cloud Infra [1Z0-1072] Certification For Architects

Online Apps DBA - Tue, 2020-06-02 05:38

Hear us when we say Oracle Cloud (OCI) is everywhere, whether it is Zoom, 8×8, Air Asia, Michelin, Vodafone, Xerox, Airtel, Netflix & many more. Things are moving from On-Premise to Cloud, and so are DBAs, Apps DBAs, System Administrators, Network, Storage & Security Admins & Architects, and everyone else working On-Premise. The 1Z0-1072 - Oracle Cloud […]

The post Oracle Cloud Infra [1Z0-1072] Certification For Architects appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Automating External Table load

Tom Kyte - Mon, 2020-06-01 19:06
Hi There, Would like some thoughts on the following (working on 12c). We get flat files from various sources, and each of these might come in with a different delimiter ('|', ',', 'tab'). Is there a way to create an automated process to do the f...
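(Not from the original thread, just a rough sketch of one possible building block: an external table whose access parameters, including the field delimiter, can be altered per incoming file without recreating the table. Directory, table, and file names below are placeholders.)

create table ext_stage (
    id   number,
    name varchar2(100)
)
organization external (
    type oracle_loader
    default directory data_dir
    access parameters (
        records delimited by newline
        fields terminated by '|'
        missing field values are null
    )
    location ('incoming.dat')
)
reject limit unlimited;

-- switch to a comma-delimited file for the next load
alter table ext_stage access parameters (
    records delimited by newline
    fields terminated by ','
    missing field values are null
);
alter table ext_stage location ('next_file.csv');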
Categories: DBA Blogs

ORA-06530: Reference to uninitialized composite

Tom Kyte - Mon, 2020-06-01 19:06
Getting ORA-06530: Reference to uninitialized composite with the code below; you will be able to simulate the issue. Thanks in advance. CREATE TYPE "NUMBER_TABLE" AS TABLE OF NUMBER / CREATE TYPE TYP_BOARD_PACKAGE_OBJ AS OBJECT ( ...
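(For reference, a self-contained sketch of the usual cause, using a hypothetical type rather than the poster's code: an object variable is atomically NULL until it is constructed, so touching an attribute before the constructor call raises ORA-06530.)

create type demo_obj as object (id number, name varchar2(10));
/

set serveroutput on
declare
    v demo_obj;                 -- atomically NULL at this point
begin
    -- v.id := 1;               -- would raise ORA-06530 here
    v := demo_obj(1, 'OK');     -- initialize with the constructor first
    v.id := 2;                  -- attribute assignment is now legal
    dbms_output.put_line(v.id || ' ' || v.name);
end;
/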
Categories: DBA Blogs

ORA-02070: database DBLINK does not support subqueries in this context when updating SQL Server via gateway

Tom Kyte - Mon, 2020-06-01 19:06
Hi Tom, I am trying to update an MS SQL Server table from Oracle via dblink. Using Oracle 11g with a Gateway HS configuration, I can query the SQL Server table from Oracle but cannot update it; the detailed error is shown below. Error starting at ...
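(Not from the thread, but one common workaround sketch, with placeholder table and link names: resolve the subquery into PL/SQL variables locally first, then send a simple single-table UPDATE across the database link.)

declare
    v_new_value varchar2(100);
begin
    select some_col
    into   v_new_value
    from   local_table
    where  id = 42;

    update remote_table@sqlserver_link
    set    col1 = v_new_value
    where  id   = 42;

    commit;
end;
/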
Categories: DBA Blogs

Order By

Jonathan Lewis - Mon, 2020-06-01 07:05

This is a brief note with an odd history – and the history is more significant than the note.

While searching my library for an example of an odd costing effect for the “order by” clause I discovered a script that looked as if I’d written it for 11.1.0.6 in 2008 to demonstrate a redundant sort operation appearing in an execution plan; and then I discovered a second script, written for 11.2.0.4 in 2014, demonstrating a variant of the same thing (presumably because I’d not found the original script in 2014). The second script referenced a MOS bug number:

Bug 18701129 : SORT ORDER BY ISN’T AVOIDED WHEN ROWID IS ADDED TO ORDER BY CLAUSE

Whenever I “discover” an old bug test I tend to re-run it to check whether or not the bug has been fixed.  So that’s what I did, and found that the anomaly was still present in 19.3.0.0. The really odd thing, though, was that the bug note no longer existed – and even after I’d done a few searches involving the text in the description I couldn’t manage to find it!

For the record, here’s the original 2008 script (with a couple of minor edits)


rem
rem     Script:         order_by_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2008
rem     Purpose:        
rem
rem     Last tested 
rem             19.3.0.0        Still sorting
rem             12.2.0.1
rem             11.1.0.6
rem

set linesize 180
set pagesize 60

create table test 
as 
select  * 
from    all_objects 
where   rownum <= 10000 -- >  comment to avoid wordpress format issue
;

alter table test modify object_name not null;
create index i_test_1 on test(object_name);

analyze table test compute statistics;

set serveroutput off
alter session set statistics_level = all;

select  * 
from    (select * from test order by object_name) 
where 
        rownum < 11 -- > comment to avoid wordpress format issue
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));



select  * 
from    (select /*+ index(test) */ * from test order by object_name,rowid) 
where
        rownum < 11 -- > comment to avoid wordpress format issue
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

alter session set statistics_level = typical;
set serveroutput on

Yes, that is an analyze command – it’s a pretty old script and I must have been a bit lazy about writing it. (Or, possibly, it’s a script from an Oracle-l or Oracle forum posting and I hadn’t re-engineered it.)

I’ve run two queries – the first uses an inline view to impose an order on some data and then selects the first 10 rows. The second query does nearly the same thing but adds an extra column to the “order by” clause – except it’s not a real column, it’s the rowid pseudo-column. Conveniently there’s an index on the table that is a perfect match for the “order by” clause, and it’s on a non-null column, so the optimizer can walk the index in order and stop after 10 rows.

Adding the rowid to the “order by” clause shouldn’t make any difference to the plan as the index Oracle is using is a single column non-unique index, which means that the internal representation makes it a two-column index where the rowid is (quite literally) stored as the second column. But here are the two execution plans:


----------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |          |      1 |        |     10 |00:00:00.01 |       7 |
|*  1 |  COUNT STOPKEY                |          |      1 |        |     10 |00:00:00.01 |       7 |
|   2 |   VIEW                        |          |      1 |     10 |     10 |00:00:00.01 |       7 |
|   3 |    TABLE ACCESS BY INDEX ROWID| TEST     |      1 |  10000 |     10 |00:00:00.01 |       7 |
|   4 |     INDEX FULL SCAN           | I_TEST_1 |      1 |     10 |     10 |00:00:00.01 |       3 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<11)



----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                              | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                       |          |      1 |        |     10 |00:00:00.01 |    4717 |       |       |          |
|*  1 |  COUNT STOPKEY                         |          |      1 |        |     10 |00:00:00.01 |    4717 |       |       |          |
|   2 |   VIEW                                 |          |      1 |  10000 |     10 |00:00:00.01 |    4717 |       |       |          |
|*  3 |    SORT ORDER BY STOPKEY               |          |      1 |  10000 |     10 |00:00:00.01 |    4717 |  4096 |  4096 | 4096  (0)|
|   4 |     TABLE ACCESS BY INDEX ROWID BATCHED| TEST     |      1 |  10000 |  10000 |00:00:00.01 |    4717 |       |       |          |
|   5 |      INDEX FULL SCAN                   | I_TEST_1 |      1 |  10000 |  10000 |00:00:00.01 |      44 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<11)
   3 - filter(ROWNUM<11)


When I add the rowid to the “order by” clause the optimizer no longer sees walking the index as an option for avoiding work; it wants to collect all the rows from the table, sort them, and then report the first 10. In fact walking the index became such an expensive option that I had to hint the index usage (hence the non-null declaration) to make the optimizer choose it; the default plan for 19.3 was a full tablescan and sort.

It’s just a little example of an edge case, of course. It’s a pity that the code doesn’t recognise the rowid as (effectively) a no-op addition to the ordering when the rest of the “order by” clause matches the index declaration, but in those circumstances the rowid needn’t be there at all and you wouldn’t expect anyone to include it.

As I said at the start – the interesting thing about this behaviour is that it was once described in a bug note that has since disappeared from public view.

 

[AZ-300] Microsoft Azure Architect Training: Step By Step Activity Guides/Hands-On Lab Exercise

Online Apps DBA - Mon, 2020-06-01 05:25

The Azure Architect should have advanced experience and knowledge across various aspects of IT operations, including networking, virtualization, identity, security, business continuity, disaster recovery, data management, budgeting, and governance. Check out k21academy’s blog post at https://k21academy.com/az30005 to find out all about our Hands-On Lab exercises that help you prepare for the certification course as well […]

The post [AZ-300] Microsoft Azure Architect Training: Step By Step Activity Guides/Hands-On Lab Exercise appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Targeting specific namespaces with kubectl

Pas Apicella - Mon, 2020-06-01 00:45
Note to self, given that kubectl does not allow specifying multiple namespaces in a single CLI invocation.

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get pod;'

Or, if you want to see all resources, use get all:

$ eval 'kubectl  --namespace='{cf-system,kpack,istio-system}' get all;'
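An equivalent without eval, for anyone who prefers a plain shell loop over the same namespaces:

$ for ns in cf-system kpack istio-system; do kubectl --namespace "$ns" get pod; done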
  
$ eval 'kubectl --namespace='{cf-system,kpack,istio-system}' get pod;'
NAME READY STATUS RESTARTS AGE
ccdb-migrate-995n7 0/2 Completed 1 3d23h
cf-api-clock-7595b76c78-94trp 2/2 Running 2 3d23h
cf-api-deployment-updater-758f646489-k5498 2/2 Running 2 3d23h
cf-api-kpack-watcher-6fb8f7b4bf-xh2mg 2/2 Running 0 3d23h
cf-api-server-5dc58fb9d-8d2nc 5/5 Running 5 3d23h
cf-api-server-5dc58fb9d-ghwkn 5/5 Running 4 3d23h
cf-api-worker-7fffdbcdc7-fqpnc 2/2 Running 2 3d23h
cfroutesync-75dff99567-kc8qt 2/2 Running 0 3d23h
eirini-5cddc6d89b-57dgc 2/2 Running 0 3d23h
fluentd-4fsp8 2/2 Running 2 3d23h
fluentd-5vfnv 2/2 Running 1 3d23h
fluentd-gq2kr 2/2 Running 2 3d23h
fluentd-hnjgm 2/2 Running 2 3d23h
fluentd-j6d5n 2/2 Running 1 3d23h
fluentd-wbzcj 2/2 Running 2 3d23h
log-cache-7fd48cd767-fj9k8 5/5 Running 5 3d23h
metric-proxy-695797b958-j7tns 2/2 Running 0 3d23h
uaa-67bd4bfb7d-v72v6 2/2 Running 2 3d23h
NAME READY STATUS RESTARTS AGE
kpack-controller-595b8c5fd-x4kgf 1/1 Running 0 3d23h
kpack-webhook-6fdffdf676-g8v9q 1/1 Running 0 3d23h
NAME READY STATUS RESTARTS AGE
istio-citadel-589c85d7dc-677fz 1/1 Running 0 3d23h
istio-galley-6c7b88477-fk9km 2/2 Running 0 3d23h
istio-ingressgateway-25g8s 2/2 Running 0 3d23h
istio-ingressgateway-49txj 2/2 Running 0 3d23h
istio-ingressgateway-9qsqj 2/2 Running 0 3d23h
istio-ingressgateway-dlbcr 2/2 Running 0 3d23h
istio-ingressgateway-jdn42 2/2 Running 0 3d23h
istio-ingressgateway-jnx2m 2/2 Running 0 3d23h
istio-pilot-767fc6d466-8bzt8 2/2 Running 0 3d23h
istio-policy-66f4f99b44-qhw92 2/2 Running 1 3d23h
istio-sidecar-injector-6985796b87-2hvxw 1/1 Running 0 3d23h
istio-telemetry-d6599c76f-ps6xd 2/2 Running 1 3d23h
Categories: Fusion Middleware

Custom ROMs: Installing via TWRP (e.g. Prometheus ROM)

Dietrich Schroff - Sun, 2020-05-31 02:15
After installing TWRP and installing some ROMs via adb sideload (or this story with my old Nexus 7), I learned that there is another way to install custom ROMs:

  • Boot into recovery (Samsung: Home Button + Power + Volume Up)
  • Copy the ROM.zip to the SD card of the smartphone
  • and then follow these screenshots
Click on "Install"

Click "Select Storage"

Select your SD card

Select the ROM you want to install:

Swipe to the right

Then some of the ROMs (like Prometheus) will guide you through an installation wizard, where you can choose some options.

If you want to try Prometheus you can download it here.

[Solved]Upgrading Oracle Apps (EBS) To 12.2 ? ORA-29283: Invalid File Operation

Online Apps DBA - Sun, 2020-05-31 01:41

While running an American English Upgrade patch driver, when upgrading to Oracle EBS R12.2, if you are facing the error: ORA-29283: Invalid File Operation Then check out our post at http://k21academy.com/ebsupgrade14 in which we cover: · ISSUE of the Error while running the American English update patch driver · Root Cause of the Error · […]

The post [Solved]Upgrading Oracle Apps (EBS) To 12.2 ? ORA-29283: Invalid File Operation appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

EBS R12.2 Upgrade High Level Overview/Steps

Online Apps DBA - Sun, 2020-05-31 01:29

On October 18, 2019, Oracle announced that it was moving to a Continuous Innovation support model for Oracle E-Business Suite 12.2 and that there will be no 12.3 release. At the moment, only EBS R12.1.3 & R12.2.3+ are supported. To know more, check out our post at https://k21academy.com/ebsupgrade12 This post covers a high-level overview of […]

The post EBS R12.2 Upgrade High Level Overview/Steps appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

19c Database In Oracle EBS R12: Everything You Must Know

Online Apps DBA - Sun, 2020-05-31 01:18

Oracle has recently announced the certification of Oracle Database 19c with Oracle E-Business Suite. With the Database 19c certification, EBS 12.2 on-premises databases are now certified with the CDB architecture (multitenant architecture). Check out the K21 Academy post at https://k21academy.com/ebsupgrade11 which covers: • Overview of Oracle Database 19c • What’s New for EBS with Oracle Database 19c • R12.x […]

The post 19c Database In Oracle EBS R12: Everything You Must Know appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

ORA-06512 - PL/SQL: numeric or value error%s

Tom Kyte - Sat, 2020-05-30 12:06
Apologies... this is related to a previous question answered earlier today, but I did not know how to ask a 'follow up' question... I've written the code below to generate an email report on users in an instance which have not been logged into i...
Categories: DBA Blogs
