Anthony Shorten


Supported Platform Guide/UTA Pack Guide

Wed, 2019-03-20 09:44

As with all Oracle products, we publish the supported platforms in the Installation documentation. But, as platform changes happen between releases, Oracle products also produce Certification and Supported Platform documentation in My Oracle Support.

Oracle Utilities is no different and we publish the up to date information in an article within My Oracle Support. It is recommended to use this article as the reference, as the Installation Guides may become stale with respect to platforms.

The article for Oracle Utilities provides a spreadsheet centralizing all the information, including certified platforms, certified database releases and even the versions of Oracle Utilities products supported by content in the Oracle Utilities Testing Accelerator.

The article, Certification Matrix for Oracle Utilities Products (Doc Id: 1454143.1), is available from My Oracle Support.

 

Schedule Management for Oracle Scheduler Integration

Mon, 2019-03-11 11:15

One of the most common questions I get from product users is how to manage batch schedules when using the Oracle Scheduler Integration. As Oracle Scheduler is part of the Oracle Database, Oracle provides a number of ways of managing your schedule:

  • Command Line. If you are an administrator who manages your database using commands and PL/SQL calls, then you can use the DBMS_SCHEDULER interface directly from any SQL tool. You have full access to the scheduler objects (a simple sketch appears at the end of this article).
  • Oracle SQL Developer. The latest versions of Oracle SQL Developer include capabilities to manage your schedule directly from that tool. The advantage of this is that the tool supports techniques such as drag and drop to simplify the management of scheduler objects. For example, you can create a chain and then drop the programs into the chain and "wire" them together. This interface generates the direct DBMS_SCHEDULER calls to implement your changes. Refer to the Oracle SQL Developer documentation for details of maintaining individual scheduler objects. For example:

SQL Developer Interface

  • Oracle Enterprise Manager. From Oracle Database 12c and above, Oracle Enterprise Manager automatically includes the DBA functions and is the recommended tool for all database work. Most DBAs will use this capability to manage the database. This includes Oracle Scheduler management. For example:

Enterprise Manager Interface

Implementations have a range of options for managing their schedule. Customers on the cloud use the Oracle Utilities Cloud Service Foundation to manage their schedule, with a similar interface to Enterprise Manager, via our Scheduler API.
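For the command line option above, here is a minimal sketch of maintaining a job directly with DBMS_SCHEDULER from SQL*Plus. The connection, job name and scheduled procedure are illustrative only and are not the objects shipped with the product integration:

$ sqlplus /nolog <<'EOF'
CONNECT batch_admin
-- Create a simple job that runs daily at 1am (all names are illustrative)
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_BILLING',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'BATCH_ADMIN.RUN_BILLING',
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',
    enabled         => TRUE);
END;
/
-- Check the current state of the job
SELECT job_name, state FROM user_scheduler_jobs WHERE job_name = 'NIGHTLY_BILLING';
EOF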

 

Batch Architecture - Designing Your Submitters - Part 3

Mon, 2019-03-04 16:13

If you are using the command line submission interface, the last step in the batch architecture is to configure the submitter settings.

Note: Customers using the Oracle Scheduler Integration or Online Submission (the latter is for non-production use only) can skip this article, as the configuration outlined here is not used by those submission methods.

As with the cluster and threadpool configurations, the use of the Batch Edit (bedit.sh) utility is recommended to save costs and reduce risk with the following set of commands:

$ bedit.sh -s
 
$ bedit.sh -b <batch_control>
  • Default option (-s). This sets up a default configuration file used for any batch control where no specific batch properties file exists. This creates a submitbatch.properties file located in the $SPLEBASE/splapp/standalone/config subdirectory.
  • Batch Control specific configuration (-b). This will create a batch control specific configuration file named job.<batchcontrol>.properties, where <batchcontrol> is the Batch Control identifier (Note: make sure it uses the same case as identified in the meta-data). This file is located in the $SPLEBASE/splapp/standalone/config/cm subdirectory. With this option, the soft parameters on the batch control can be configured as well.

Batch Submitter Configuration

Use the following guidelines:

  • Use the -s option where possible. Set up a global configuration to cover as many processes as possible and then create specific process parameter files for batch controls that require specific soft parameters.
  • Minimize the use of command line overrides. The advantage of setting up submitter configuration files is to reduce your maintenance costs. Whilst it is possible to use command line overrides to replace all the settings in the configuration, avoid overuse of overrides to stabilize your configuration and minimize your operational costs.
  • Set common batch parameters. Using the -s option, specify the parameters for the common settings.
  • Change the User used. The default user AUSER is not a valid user. This is intentional, to force the appropriate configuration for your site. Avoid using SYSUSER as that is only to be used to load additional users into the product.
  • Set up soft parameters in process specific configurations. For batch controls that have parameters, these need to be set in the configuration file or as overrides on the command line. To minimize maintenance costs and potential command line issues, it is recommended to set the values in a process specific configuration file using the add soft command in bedit.sh, with the following recommendations:
    •  The parameter name in the parm setting must match the name and case of the Parameter Name on the Batch Control.
    •  The value set in the value setting must be the same or valid for the Parameter Value on the Batch Control.
    •  Optional parameters do not need to be specified unless used.

For example:

$ bedit.sh -s

Editing file /u01/demo/splapp/standalone/config/submitbatch.properties using template /u01/demo/etc/submitbatch.be
Batch Configuration Editor [submitbatch.properties]
---------------------------------------------------------------
Current Settings
  poolname (DEFAULT)
  threads (1)
  commit (10)
  maxtimeout (15)
  user (AUSER)
  lang (ENG)
  storage (false)
  role ({batchCode})
>

$ bedit.sh -b MYJOB

File /u01/demo/splapp/standalone/config/cm/job.MYJOB.properties does not exist. Create? (y/n) y
Editing file /u01/demo/splapp/standalone/config/cm/job.MYJOB.properties using template /u01/demo/etc/job.be
Batch Configuration Editor [job.MYJOB.properties]
---------------------------------------------------------------
Current Settings
  poolname (DEFAULT)
  threads (1)
  commit (10)
  user (AUSER)
  lang (ENG)
  soft.1
      parm (maxErrors)
      value (500)
>
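The soft parameter shown in the MYJOB session above can be maintained with in-line commands along the following lines. This is a sketch based on the add soft command mentioned earlier; the exact set syntax follows the same pattern as the other bedit.sh objects and may vary slightly by version:

> add soft
> set soft.1 parm maxErrors
> set soft.1 value 500
> save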

For more advice on individual parameters use the help <parameter> command.

To use the configuration use the submitjob.sh -b <batch_code> command. Refer to the Server Administration Guide supplied with your product for more information.

Batch Scheduler Integration (Doc Id: 2196486.1) Updated

Wed, 2019-02-27 21:00

In line with the update to the Batch Best Practices whitepaper, the Batch Scheduler Integration whitepaper has also been updated to reflect the new advice. The Batch Scheduler Integration whitepaper explains the DBMS_SCHEDULER (also known as Oracle Scheduler) interface implemented within the Oracle Utilities Application Framework.

The Oracle Scheduler is included in the database licensing for the Oracle Utilities Application Framework and can be deployed locally or enterprise wide (the latter may incur additional licensing depending on the deployment model). The Oracle Utilities Application Framework includes a prebuilt interface that allows the Oracle Scheduler to use its objects to execute Oracle Utilities Application Framework based processes.

The advantages of the Oracle Scheduler include:

  • Licensing is included in existing Oracle Database licenses. It is already available to those customers.
  • Someone in your organization is already using it. The Oracle Scheduler is used by a variety of products, including the Database itself and Oracle Enterprise Manager, to schedule and perform work. It is a key element in the autonomous database.
  • It is used by Oracle Utilities SaaS Cloud implementations. We use it for our Oracle Utilities SaaS Cloud implementations natively and via an API for external usage. We also built a scheduling interface within the Oracle Utilities Cloud Services Foundation which is included exclusively for all Oracle Utilities SaaS Cloud implementations.
  • Choice of interfaces to manage your schedule. As the Oracle Scheduler is part of the database, Oracle provides a management and monitoring capability interface via Command line, Oracle SQL Developer, Oracle JDeveloper and/or Oracle Enterprise Manager.

The new version of the whitepaper is available from My Oracle Support at Batch Scheduler Integration (Doc Id: 2196486.1).

Batch Best Practices (Doc Id: 836362.1) Updated

Wed, 2019-02-27 19:55

The Batch Best Practices whitepaper has been completely rewritten to optimize the information and provide a simpler mechanism for helping implementations configure and manage their batch architecture. The new whitepaper covers building and maintaining an effective batch architecture, and then guidelines for optimizations around that architecture. It also separates out the different techniques for the various submission methods.

The whitepaper now covers the following topics:

  • Batch Concepts. How the batch architecture and its objects work together to form the batch functionality in the Oracle Utilities Application Framework.
  • Batch Architecture. A new simpler view of the various layers in the batch architecture.
  • Configuration. A look at the configuration process and guidelines using the Batch Edit to simplify the process.
  • Batch Best Practices. These are generic but important best practices collected from our cloud and on-premise implementations that may prove useful to implementations.
  • Plug In Batch. This is a primer for the Plug In Batch capability (it will be explored in detail in other documents).

The whitepaper is available from My Oracle Support at Batch Best Practices (Doc Id: 836362.1).

Batch Architecture - Designing Your Threadpools - Part 2

Mon, 2019-02-25 16:45

In the last article we discussed the setup of a cluster. Once the cluster is setup, the next step is to design and configure your threadpools. Before I illustrate how to quickly configure your threadpools here are a few things to understand about threadpools:

  • Threadpools are Long Running JVMs. The idea behind threadpools is that they are long running JVMs that accept work (from submitters) and process that work. Each individual instance of a threadpool is an individual running JVM on a host (or hosts).
  • Threadpools are Named. Each threadpool is named with a tag (known as the Threadpool Name). This tag is used when running a batch process to target specific JVMs to perform the processing. The names are up to the individual implementation.
  • Threadpools Can Have Multiple Instances. Threadpools can be a single instance or have multiple instances within a host or across hosts.
  • Threadpools have thread limits. Each instance of a threadpool has a physical thread limit. This is not the Java thread limit but the maximum number of threads that can be safely executed in the instance.
  • Threadpools with the same name have cumulative limits when clustered. When multiple instances of the same threadpool name are available, the number of threads available is the sum total of all instances. This applies regardless of whether the instances are on the same host or across hosts, as long as they are in the same cluster.

A summary of the above is shown in the figure below:

Thread Limit Example

For the above scenarios:

  • Scenario A - Single Thread Pool on a Single Host (example pool POOL1). This is the simplest scenario.
  • Scenario B - Multiple Thread Pool Instances on a Single Host (example pool POOL2). The number of threads in this scenario is cumulative; in this example there are ten (10) threads available.
  • Scenario C - Multiple Thread Pool Instances on Multiple Hosts (example pool POOL3). This is a clustered setup across hosts. Again the threads available are cumulative and in this case there are twelve (12) threads available.

Note: The second instance of POOL3 can have different thread limits to reflect the capacity on the host. In most cases, the number of threads will be the same but it is possible to change the configuration on a host to reflect the capacity of that individual machine.

Note: You can combine any of these scenarios for a complex batch architecture.

Building Your Threadpool Configuration

As with the cluster configuration, the best way of building your threadpool configuration is using the Batch Edit (bedit.sh) utility. There are two commands available to you:

Threadpool Configuration

  • The bedit.sh -w command is recommended as your initial command to create a set of default thread configurations in the threadpoolworker.properties file. This file is used by the threadpoolworker.sh command by default.
  • The bedit.sh -l <arg> command is used to create custom threadpools, with <arg> denoting the template to base the configuration on. The product ships a cache and a job template, and it is possible to create custom templates directly from the command. The utility generates a threadpool.<arg>.properties file. To use the template, use the threadpoolworker.sh -l <arg> command line.

Here are some guidelines when building the threadpool configuration:

  • Some Settings Are At The Template Level. Some of the settings are common to all threadpools using the template. For example, JVM Settings, Caching and Internal Threads are specified at  the template level.
  • Create An Appropriate List Of Threadpool Names And Thread Limits In The File. The list of threadpools can be configured with names and limits.
  • Use Cache Template for Cache Threadpools. In a complex architecture with lots of threadpool instances, it is recommended to invest in creating a cache threadpool to optimize network traffic.
  • Use Job Template for Focussed Threadpools. If optimization of JVM settings, caching etc is required use the job template or create a custom template to configure these settings.
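For example, a dedicated cache threadpool based on the supplied cache template might be created and started along the following lines (a sketch; the generated file follows the threadpool.<arg>.properties convention described above):

$ bedit.sh -l cache              # creates/edits threadpool.cache.properties from the cache template
$ threadpoolworker.sh -l cache   # starts a threadpool worker instance using that template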

The bedit.sh utility allows for configuration of settings from the templates. For example:

$ bedit.sh -w

Batch Configuration Editor [threadpoolworker.properties]
--------------------------------------------------------------------------
Current Settings
  minheap (1024m)
  maxheap (1024m)
  daemon (false)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_TPW)
  jmxstartport (7540)
  l2 (READ_WRITE)
  devmode (false)
  ollogdir (/u02/sploutput/demo)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (POOL1)
      threads (10)
  pool.2
      poolname (POOL2)
      threads (5)
  pool.3
      poolname (POOL3)
      threads (7)
>   

 

Use the help command on each setting for advice.

When designing your threadpools there are several guidelines that can apply:

  • Simple Is Best. One of the key capabilities of the architecture is that you have access to a full range of alternative configurations to suit your needs. To quote a famous movie, "With Great Power Comes Great Responsibility", so it is recommended not to go overboard with the complexity of the configuration. For example, in non-production environments use a small number of threadpools to keep it simple. When designing your threadpool architecture, balance maintenance against technical elegance. More complex solutions can increase maintenance costs, so keep the solution as simple as you can.
  • Consider Specialist Threadpools for L2 Caching. Some of the ancillary processes in the product require the L2 Cache to be disabled. This is because they are updating the information and the cache will actually adversely affect the performance of these processes. Processes such as Configuration Migration Assistant, LDAP Import and some of the conversion processes require the caching to be disabled. In this case, create a template for a dedicated threadpool that turns off L2 Caching.
  • Threadpools Need Only To Be Running As Needed. One misconception about threadpools is that they must be up ALL the time to operate. This is not true. They only need to be active when a batch process needs access to the particular threadpool. When they are not being used, any threadpool JVMs are still consuming memory and CPU (even just a little). There is a fundamental principle in Information Technology: "Thou Shalt Not Waste Resources". An idle threadpool is still consuming resources, including memory, that can be better used by other active processes.
  • Consider Specialist Threadpools for Critical Processing. A threadpool will accept any work from any process it is targeted for. Whilst this is flexible, it can cause issues with critical resources. When a critical process is executed in your architecture, it is best to make sure there are resources available to process it efficiently. If other processes are sharing the same threadpools, then those critical processes are competing for resources with less critical processes. One technique is to set up dedicated threadpools, with optimized settings, for the exclusive use of critical processes.
  • There Are Limits. Even though it is possible to run many threadpools in your architecture, there are limits to consider. The most obvious is memory. Each threadpool instance is a running JVM reserving memory for use by threads. By default, this is between 768MB and 1GB (or more) per JVM. Your physical memory may be the limit that decides how many active JVMs are possible on a particular host (do not forget the operating system also needs some memory). Another limit will be contention on resources such as records and disk. One technique that has proven useful is to monitor throughput (records per second) and to increase threading or threadpools until this starts to reduce.

The above techniques are but a few that are useful in designing your threadpools. The next step in the process is to decide the submitters and the number of threads to consider across these threadpools. This will be subject of the next part of the series.

 

 

Batch Architecture - Designing Your Cluster - Part 1

Sun, 2019-02-17 18:42

The Batch Architecture for the Oracle Utilities Application Framework is both flexible and powerful. To simplify the configuration and prevent common mistakes the Oracle Utilities Application Framework includes a capability called Batch Edit. This is a command line utility, named bedit.sh, that provides a wizard style capability to build and maintain your configuration. By default the capability is disabled and can be enabled by setting the Enable Batch Edit Functionality to true in the Advanced Configuration settings using the configureEnv.sh script:

$ configureEnv.sh -a

*************************************************
* Environment Configuration demo                *
*************************************************

 50. Advanced Environment Miscellaneous Configuration
...
       Enable Batch Edit Functionality:                    true
...

Once enabled the capability can be used to build and maintain your batch architecture.

Using Batch Edit

The Batch Edit capability is an interactive utility to build the environment. The capability is easy to use with the following recommendations:

  • Flexible Options. When invoking the command you specify the object type you want to configure (cluster, threadpool or submitter) and any template you want to use. The command options will vary. Use the -h option for a full list.
  • In Built Help. If you do not know what a parameter is about, or even the object type, you can use the help <topic> command. For example, when configuring threadpools, help threadpoolworker gives you advice about the approaches you can take for threadpools. If you want a list of topics, type help with no topic.
  • Simple Commands. The utility has a simple set of commands to interact with the settings. For example, if you want to set the role within the cluster to, say, fred, you would use the set role fred command within the utility.
  • Save the Configuration. There is a save command to make all changes in the session reflect in the relevant file and conversely if you make a mistake you can exit without saving the session.
  • Informative. It will tell you which file you are editing at the start of the session so you can be sure you are in the right location.

Here is an example of an edit session:

$ bedit.sh -w

Editing file /u01/ugtbk/splapp/standalone/config/threadpoolworker.properties using template /u01/ugtbk/etc/threadpoolworker.be
Includes the following push destinations:
  dir:/u01/ugtbk/etc/conf/tpw

Batch Configuration Editor 4.4.0.0.0_1 [threadpoolworker.properties]
--------------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  daemon (true)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  jmxstartport (7540)
  l2 (READ_ONLY)
  devmode (false)
  ollogdir (/u02/sploutput/ugtbk)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)
> save
Changes saved
Pushing file threadpoolworker.properties to /u01/ugtbk/etc/conf/tpw ...
> exit

Cluster Configuration

The first step in the process is to design your batch cluster. This is the group of servers that will execute batch processes. The Oracle Utilities Application Framework uses a Restricted Use License of Oracle Coherence to cluster batch processes and resources. The use of Oracle Coherence allows you to implement different architectures from simple to complex. Using Batch Edit there are three cluster types supported (you must choose one type per environment).

  • Single Server (ss). The cluster is restricted to a single host. This is useful for non-production environments such as demonstration, development and testing as it is the simplest to implement.
  • Uni-Cast (wka). The cluster uses the unicast protocol with the hosts that are part of the cluster explicitly named in the configuration. This is recommended for sites wanting to lock down a cluster to specific hosts and that do not want to use multi-cast protocols. Administrators will have to name the list of hosts, known as Well Known Addresses, that are part of the cluster as part of this configuration.
  • Multi-Cast (mc). The cluster uses the multi-cast protocol with a valid multi-cast IP address and port. This is recommended for sites who want a dynamic configuration where threadpools and submitters are accepted on demand. This requires the lowest amount of configuration for product clusters as the threadpools can join a cluster dynamically from any server with the right configuration. It is not recommended for sites that do not use the multi-cast protocol.

Single Server Configuration

This is the simplest configuration, with the cluster restricted to a single host and the networking restricted accordingly within the configuration. To use this cluster type, simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t ss

Uni-Cast Configuration

This is a multi-host cluster where the hosts in the configuration are defined explicitly in host and port number combinations. The port number is used for communication to that host in the cluster. This style is useful where the site does not want to use the multi-cast protocol or wants to micro-manage their configuration. To use this cluster type simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t wka

You then add each host as a socket using the command:

add socket

This will add a new socket collection in the format socket.<socketnumber>. To set the values use the command:

set socket.<socketnumber> <parameter> <value>

where:

  <socketnumber> - the host number to edit
  <parameter> - either wkaaddress (the host or IP address of the server) or wkaport (the port number on that host to use)
  <value> - the value for the parameter

For example:

set socket.1 wkaaddress host1
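A fuller sketch of defining a two host cluster (the host names and port are illustrative only; prompts may vary by version):

$ bedit.sh -c -t wka
> add socket
> set socket.1 wkaaddress host1
> set socket.1 wkaport 42020
> add socket
> set socket.2 wkaaddress host2
> set socket.2 wkaport 42020
> save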

To use this cluster style ensure the following:

  • Use the same port number per host. Try to use the same broadcast port on each host in the cluster. If they are different, then the port number in the main configuration file for the machines that are in the cluster has to be changed to define that port.
  • Ensure each host has a copy of the configuration file. When you build the configuration file, ensure the same file is on each of the servers in the cluster (each host will require a copy of the product).
Multi-Cast Configuration

This is the most common multi-host configuration. The idea with this cluster type is that a multi-cast port and IP Address are broadcast across your network per cluster. It requires very little configuration and the threadpools can dynamically connect to that cluster. It uses the multi-cast protocol, which network administrators either love or hate. The configuration is similar to the Single Server configuration, but the cluster settings are actually managed in the installation configuration (ENVIRON.INI) using the COHERENCE_CLUSTER_ADDRESS and COHERENCE_CLUSTER_PORT settings. Refer to the Server Administration Guide for additional configuration advice.
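For example, the cluster address and port end up in ENVIRON.INI as settings along these lines (the address and port values are illustrative only):

COHERENCE_CLUSTER_ADDRESS=231.1.1.1
COHERENCE_CLUSTER_PORT=42020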

Cluster Guidelines

When setting up the cluster there are a few guidelines to follow:

  • Use Single Server for Non-Production. Unless you need multi-host clusters, use the Single Server cluster to save configuration effort.
  • Name Your Cluster Uniquely. Ensure your cluster is named appropriately and uniquely per environment to prevent unintentional cross-environment clustering.
  • Set a Cluster Type and Stick with it. It is possible to migrate from one cluster type to another (without changing other objects) but to save time it is better to lock in one type and stick with it for the environment.
  • Avoid using Prod Mode. There is a mode in the configuration which is set to dev by default. It is recommended to leave the default for ALL non-production environments to avoid cross cluster issues. The prod mode is recommended for Production systems only. Note: There are further safeguards built into the Oracle Utilities Application Framework to prevent cross cluster connectivity.

The cluster configuration generates a tangosol-coherence-override.xml configuration file used by Oracle Coherence to manage the cluster.

Cluster Configuration

Now that we have the cluster configured, the next step is to design the threadpools to be housed in the cluster. That will be discussed in Part 2 (coming soon).

See You At The Edge Conference in Austin

Thu, 2019-02-14 16:31
I will be attending the Oracle Utilities Edge Conference in Austin, TX in March. This year the agenda is slightly different to past years, with the Technical Sessions being part of the Cloud and Technology Track alongside the Cloud sessions, so I will have lots more speakers this year. I will be running a few sessions around our next generation Testing solution, migration to the cloud and a deep dive into our futures, as well as co-chairing an exciting session on our directions in Machine Learning in the Oracle Utilities stack.

Use Of Oracle Coherence in Oracle Utilities Application Framework

Sun, 2019-02-10 17:03

In the batch architecture for the Oracle Utilities Application Framework, a Restricted Use License of Oracle Coherence is included in the product. The Distributed and Named Cache functionality of Oracle Coherence are used by the batch runtime to implement clustering of threadpools and submitters to help support the simple and complex architectures necessary for batch.

Partners ask about the libraries and their potential use in their implementations. There are a few things to understand:

  • Restricted Use License conditions. The license is for exclusive use in managing executing members (i.e. submitters and threadpools) across hardware licensed for use with Oracle Utilities Application Framework based products. It cannot be used in any code outside of that restriction. Partners cannot use the libraries directly in their extensions. It is all embedded in the Oracle Utilities Application Framework.
  • Limited Libraries. The Oracle Coherence libraries are restricted to the subset needed by the license. It is not a full implementation of Oracle Coherence. As it is a subset, Oracle does not recommend using the Oracle Coherence Plug-In available for Oracle Enterprise Manager with the Oracle Utilities Application Framework implementation of the Oracle Coherence cluster. Using this plug-in against the batch cluster will result in missing and incomplete information being presented to it, causing inconsistent results.
  • Patching. The Oracle Coherence libraries are shipped with the Oracle Utilities Application Framework and therefore are managed by patches for the Oracle Utilities Application Framework not Coherence directly. Unless otherwise directed by Oracle Support, do not manually manipulate the Oracle Coherence libraries.

The Oracle Coherence implementation with the Oracle Utilities Application Framework has been optimized for use with the batch architecture with a combination of prebuilt Oracle Coherence and Oracle Utilities Application Framework based configuration files.

Note: If you need to find out the version of the Oracle Coherence libraries used in the product at any time, they are listed in the file $SPLEBASE/etc/ouaf_jar_versions.txt

The following command can be used to see the version:

cat $SPLEBASE/etc/ouaf_jar_versions.txt | grep coh

For example in the latest version of the Oracle Utilities Application Framework (4.4.0.0.0):

cat /u01/umbk/etc/ouaf_jar_versions.txt | grep coh

coherence-ouaf                   12.2.1.3.0
coherence-work                   12.2.1.3.0

Special Tables in OUAF based products

Wed, 2019-02-06 22:47

Long time users of the Oracle Utilities Application Framework might recognize two common table types, identified by their name suffixes, that are attached to most Maintenance Objects within the product:

  • Language Tables (Suffix _L). The Oracle Utilities Application Framework is multi-lingual and can support multiple languages at a particular site (for example, customers who have multi-lingual call centers or operate across jurisdictions where multiple languages are required). The language table holds the tags for each language for any fields that need to display text on a screen. The Oracle Utilities Application Framework matches the right language records based upon the user's language profile (and active language code).
  • Key Tables (Suffix _K). These tables hold the key values (and the now less used environment code) used in the main object tables. The original use for these tables was key tracking in the original Archiving solution (which has now been replaced by ILM). Now that the original Archiving is not available, the role of these tables has changed and they are used in a number of areas:
    • Conversion. The conversion toolkit in Oracle Utilities Customer Care and Billing and now in the Cloud Service Foundation, uses the key table for efficient key generation and black listing of identifiers.
    • Key Generation. The Key generation utilities now use the key tables to quickly ascertain the uniqueness of a key. This is far more efficient than using the main table for this, especially with caching support in the database.
    • Information Life-cycle Management. The ILM capability uses the key tables to drive some of its processes including recognizing when something is archived and when it has been restored.

These tables are important for the operation of the Oracle Utilities Application Framework across all parts of the product. When you see them now, you understand why they are there.
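As a purely illustrative sketch of the Language table pattern (the connection, table and column names below are invented for the example and are not a specific product schema), a screen description would typically be read from the _L table for the user's active language:

$ sqlplus /nolog <<'EOF'
CONNECT app_read
-- Join an invented main table to its language table for the English descriptions
SELECT w.widget_cd, l.descr
  FROM ci_widget w
  JOIN ci_widget_l l ON l.widget_cd = w.widget_cd
 WHERE l.language_cd = 'ENG';
EOF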

Batch Scheduler Integration Questions

Sun, 2019-02-03 21:57

One of the most common questions I get from partners is around batch scheduling and execution. Oracle Utilities Application Framework has a flexible set of methods of managing, executing and monitoring batch processes. The alternatives available are as follows:

  • Third Party Scheduler Integration. If the site has an investment in a third party batch scheduler to define schedules and execute product batch processes alongside non-product processes at an enterprise level, then the Oracle Utilities Application Framework includes a set of command line utilities, via scripts, that can be invoked by a wide range of third party schedulers to execute the process. This allows scheduling to be managed by the third party scheduler and the scripts to be used to execute and manage product batch processes. The scripts return standard return codes that the scheduler can use to determine next actions if necessary (a simple sketch follows this list). For details of the command line utilities refer to the Server Administration Guide supplied with your version of the product.
  • Oracle Scheduler Integration. The Oracle Utilities Application Framework provides a dedicated API that allows implementations to use the Oracle DBMS Scheduler, included in all editions of the database, as a local or enterprise wide scheduler. The advantage of this is that the scheduler is already included in your existing database license and has inbuilt management capabilities provided via the base functionality of Oracle Enterprise Manager (12+) (via Scheduler Central) and also via Oracle SQL Developer. Oracle uses this scheduler in the Oracle Utilities SaaS Cloud solutions. Customers of those cloud services can use the interface provided by the included Oracle Utilities Cloud Service Foundation to manage their schedules, or use the provided REST based scheduler API to execute schedules and/or processes from a third party scheduler. For more details of the scheduler interface refer to the Batch Scheduler Integration (Doc Id: 2138193.1) whitepaper available from My Oracle Support.
  • Online Submission. The Oracle Utilities Application Framework provides a development and testing tool to execute individual batch processes from the online system. It is basic and only supports execution of individual processes (not groups of jobs like the alternatives do). This online submission capability is designed for cost effective developer and non-production testing, if desired, and is not supported for production use. For more details, refer to the online documentation provided with the version of the product you are using.
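As a simple illustration of the first option, a third party scheduler step typically just calls the supplied command line script and acts on its return code. A minimal sketch (the batch code MYJOB is illustrative; refer to the Server Administration Guide for the full set of options):

submitjob.sh -b MYJOB
rc=$?
if [ $rc -ne 0 ]; then
  # A non-zero return code tells the scheduler the step failed
  echo "Batch control MYJOB ended with return code $rc" >&2
  exit $rc
fi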

Note: For customers of legacy versions of Oracle Utilities Customer Care and Billing, a basic workflow based scheduler was provided for development and testing purposes. This interface is not supported for production use and one of the alternatives outlined above should be used instead.

All the above methods use the same architecture for executing batch processes (though some have additional features that need to be enabled). For details of each of the configurations, refer to the Server Administration Guide supplied with your version of the product.

When asked about which technology should be used I tend to recommend the following:

  • If you have an existing investment, that you want to retain, in a third party scheduler then use the command line interface. This will retain your existing investment and you can integrate across products or even integrate non-product batch such as backups from the same scheduler.
  • If you do not have an existing scheduler, then consider using the DBMS Scheduler provided with the database. It is likely your DBAs are already using it for their tasks and it is used by a lot of Oracle products already. The advantage of this scheduler is that you already have the license somewhere in your organization. It can be deployed locally within the product database or remotely as an enterprise wide solution. It has a lot of good features and Oracle Utilities uses this scheduler as a foundation of our cloud implementations. If you are on the cloud, use the provided interface in Oracle Utilities Cloud Service Foundation and, if you have an external scheduler, the REST based Scheduler API. If you are on-premise, then use the Oracle Enterprise Manager (12+) interface (Scheduler Central) in preference to the SQL Developer interface (though the latter is handy for developers). Oracle also ships a command line interface to the scheduler objects if you like PL/SQL style administration.

Note: Scheduler Central in Oracle Enterprise Manager is included in the base functionality for Oracle Enterprise Manager and does not require any additional packs.

  • I would only recommend using online submission for demonstrations, development and perhaps testing (where you are not using the Oracle Utilities Testing Accelerator or have not implemented a scheduler). It has very limited support and will only execute individual processes.

 

Configuration Management for Oracle Utilities

Thu, 2019-01-31 18:45

An updated series of whitepapers is now available for managing configuration and code in Oracle Utilities products, whether the implementation is on-premise, hybrid or on the Oracle Utilities SaaS Cloud. It has been updated for the latest Oracle Utilities Application Framework release. The series highlights the generic tools, techniques and practices available for use in Oracle Utilities products. The series is split into a number of documents:

  • Concepts. Overview of the series and the concept of Configuration Management for Oracle Utilities products.
  • Environment Management. Establishing and managing environments for use on-premise, hybrid and on the Oracle Utilities SaaS Cloud. There are some practices and techniques discussed to reduce implementation costs.
  • Version Management. Understanding the inbuilt and third party integration for managing individual versions of individual extension assets. There is a discussion of managing code on the Oracle Utilities SaaS Cloud.
  • Release Management. Understanding the inbuilt release management capabilities for creating extension releases and accelerators.
  • Distribution. Installation advice for releasing extensions across the environments on-premise, hybrid and Oracle Utilities SaaS Cloud.
  • Change Management. A generic change management process to approve extension releases including assessment criteria.
  • Configuration Status. The information available for reporting state of extension assets.
  • Defect Management. A generic defect management process to handle defects in the product and extensions.
  • Implementing Fixes. A process and advice on implementing single fixes individually or in groups.
  • Implementing Upgrades. The common techniques and processes for implementing upgrades.
  • Preparing for the Cloud. Common techniques and assets that need to be migrated prior to moving to the Oracle Utilities SaaS Cloud.

For more information and for the whitepaper associated with these topics refer to the Configuration Management Series (Doc Id: 560401.1) available from My Oracle Support.

Oracle Utilities Customer To Meter V2.7.0.1.0 now available

Mon, 2019-01-28 21:00

Oracle Utilities Customer To Meter V2.7.0.1.0 is now available for download from Oracle Software Delivery Cloud. This release is based upon Oracle Utilities Application Framework V4.4.0.0.0 and contains the following software:

  • Oracle Utilities Application Framework V4.4.0.0.0
  • Oracle Customer Care and Billing V2.7.0.1.0
  • Oracle Meter Data Management V2.3.0.0.0 including SGG, SOM and Settlements
  • Oracle Work and Asset Management V2.2.0.3.0 (also known as Operational Device Management)

Refer to the release notes provided with this product and related products for a full list of changes and new functionality in this release.

Oracle Utilities Application Framework V4.4.0.0.0 Released

Tue, 2019-01-22 12:47

The latest release of the Oracle Utilities Application Framework, V4.4.0.0.0, is now available, with the first products becoming available on-premise and on the Oracle Utilities Cloud. This release is significant as it forms the micro-services based foundation of the next generation of the Oracle Utilities Cloud offering, and whilst the bulk of the changes are cloud based, there are some significant changes for on-premise customers as well:

  • New Utilities User Experience. Last year we previewed our directions in terms of the user experience across the Oracle Utilities product portfolio. Oracle Utilities Application Framework V4.4.0.0.0 delivers the initial set of this experience with a new look and feel which forms its basis. The new user experience is based upon feedback from various user experience teams, customers and partners to deliver a better experience, including reducing eye strain and supporting a wider range of platforms/devices now and in the future. For example (edited for publishing purposes):

Example User Interface

  • New To Do Portals. Based upon customer feedback, the first of the new management portals, for To Do Management, is now included with Oracle Utilities Application Framework V4.4.0.0.0. These portals can be used alongside the legacy To Do functionality and can be migrated to over time. The idea of these portals is to focus on finding, managing and resolving To Do's with the minimum amount of effort from the portals. The new portals support dynamic criteria based upon data and To Do Type, date ranges, saved views, search on text and multi-query. For example:

Example To Do Management Portal

  • Improved User Error Field Marking. In line with the New Utilities User Experience, the indication of fields in error has been improved both visually and programmatically. This allows implementations to be flexible on how fields in error are indicated in error routines and how they are indicated on the user experience.
  • To Do Pre-creation Algorithm. In past releases, it was possible to use a To Do Pre-creation algorithm, which resided exclusively on the Installation record, to implement logic targeting when, in a lifecycle, a To Do can be created. This was seen as inefficient if implementations had a large number of To Do Types. It is now possible to introduce this logic at the To Do Type level to override the global algorithm.
  • Cloud Enhancements. This release contains a set of enhancements for the Oracle Utilities Cloud SaaS versions which are not typically applicable to non-cloud implementations. These enhancements are shipped with the product in an appropriate format (some features are not available to non-cloud implementations).
  • More Granular To Do Security. The Complete All functionality in To Do now has a separate application service to provide a more focused capability. This allows more granular security to be implemented, if desired.
  • External Message Enhancements. The External Messages capability has been extended to now support URI substitutions to support Cloud/Hybrid implementations and isolate developers from configuration changes to reduce costs.

Products using this new version of the framework, including Oracle Utilities Customer Care And Billing 2.7.0.1.0, are now available from Oracle Software Delivery Cloud. Refer to the release notes shipped with those products for details of these enhancements and other enhancements available in this release.

New ILM Planning Guide available

Sun, 2019-01-20 15:37

All the whitepapers are going through a major overhaul and one of the first of these is the ILM Planning Guide. In past releases, the ILM solution used a feature called ILM Assistant, which was an add-on to the database that generated some partitioning capabilities and had some additional planning capabilities. The ILM Assistant has largely been replaced by the base functionality in Oracle Enterprise Manager, with superior functionality and capability in a comprehensive interface. Given that Oracle Enterprise Manager (with or without ANY packs) is becoming the de facto standard for DBAs to manage their on-premise and cloud databases, the ILM whitepaper has been updated to reflect this.

The new whitepaper covers the above as well as new advice and a simpler set of scenarios that illustrate the implementation of the Oracle Utilities ILM solution.

The whitepaper, ILM Planning Guide (Doc Id: 1682436.1), is available from My Oracle Support.

 

Welcome to 2019 for the Oracle Utilities products

Sun, 2019-01-06 17:33

Welcome to 2019 to all my blog readers. 2019 is looking like a stellar year for the Oracle Utilities products, with exciting new features and new versions being introduced across the year. I look forward to writing articles about new features as well as articles outlining advanced techniques for existing features. I will endeavor to write an article every week or so, which is a challenge as we get very busy over certain periods, but I will try and keep up as much as practical.

It is going to be an exciting year for the Oracle Utilities Application Framework and Utilities Testing Accelerator with exciting new and updated features. Also remember this year we have Edge conferences in the USA, Australia and England which I will be attending, so feel free to come and chat to me there as well.

Registering a Service Request With Oracle Support

Tue, 2018-11-20 22:25

As with other vendors, Oracle provides a web site, My Oracle Support, for customers to find information as well as register service requests for bugs and enhancements they want us to pursue. Effective use of this facility can both save you time and help you find the information you might need to answer your questions.

Most customers think My Oracle Support is just the place to get patches, but it is far more of a resource than that. Apart from patches for all its products, it provides some important resources for customers and partners including:

  • Knowledge Base - A set of articles that cover all the Oracle products with announcements and advice. For example, all the whitepapers you see available from this blog end up as Knowledge Base articles. Product Support and Product Development regularly publish to this part of My Oracle Support to provide customers and partners with up to date information.
  • Communities - For most Oracle products, there are communities of people who can answer questions. Some partners actually share implementation tips in these communities and they can become self sustaining, with advice about features that have actually been implemented and tips from partners and customers on how to best implement them. Oracle Product Support and Development monitor those communities to see trends as well as determine possible optimizations to our products. They are yet another way you can contribute to the product.

Now, to help you navigate the site, I have compiled a list of the most common articles that you might find useful. This list is not comprehensive and I would recommend that you look at the site to find more than what is listed here.

For more articles I suggest you use the terms OUAF or Oracle Utilities Application Framework in the search. For specific product advice, use the product acronym or product name in the search to find articles.

Building Your Batch Architecture with Batch Edit

Tue, 2018-11-13 17:08

The introduction of Oracle Coherence to our batch architecture brings both power and flexibility to support a wide range of batch workloads. But, to quote one of the late Stan Lee's iconic characters, "With great power comes great responsibility": configuration of this architecture can be challenging for complex scenarios. To address this, the product introduced a text based utility called Batch Edit.

The Batch Edit utility is a text based wizard to help implementers build simple and complex batch architectures without the need for complex editing of configuration files. The latter is still supported for more experienced implementers but the main focus is for inexperienced implementers to be able to build and maintain a batch architecture easily using this new wizard.

The use of Batch Edit (the bedit.sh command) is optional. To use it, you must change the Enable Batch Edit Functionality setting to true using the configureEnv.sh -a command; it is disabled by default for backward compatibility.
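For example, the relevant prompt in the configureEnv.sh -a dialog looks like this (the menu numbering may vary by version):

$ configureEnv.sh -a
 50. Advanced Environment Miscellaneous Configuration
       Enable Batch Edit Functionality:                    true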

Now that it is enabled, the wizard can be used to build the configuration. Here are a few tips on using it:

  • The bedit.sh command has command options that need to be specified to edit parts of the architecture. The table below lists the command options:
Command Line     Usage
bedit.sh -c      Edit the cluster configuration.
bedit.sh -w      Edit the threadpoolworker configuration.
bedit.sh -s      Edit the submitter configuration.
  • Use the bedit.sh -h option for a full list of options and combinations.
  • Within bedit.sh you can setup different scenarios using predefined templates:
Cluster

We support the following templates in respect to clusters:

  • Single Server (ss) - This is ideal for simple non-production environments where the cluster is restricted to a single host.
  • Unicast Server (wka) - Well Known Address based clusters with the ability to define nodes in your cluster within the wizard.
  • Multi-cast Server (mc) - Using Multi-cast for dynamic node management and configuration.
Threadpool

It is possible to set up an individual threadpool or groups of threadpools using the tool with the following templates:

  • Standard Threadpools - Setting up individual threadpools or groups of threadpools with macro and micro level configuration (including sizing and caching)
  • Cache Threadpools - Support for caching threadpools which are popular with complex setups to drastically reduce the network traffic across nodes.
  • Admin Threadpools - Support for reserving threadpools for management and monitoring capabilities (this reduces the JMX overhead)
Submitter

If you are not using the DBMS_SCHEDULER interface, which uses the Batch Control for parameters, then properties files for the submitters will be required. The bedit.sh utility allows the following types of submitter files to be generated and maintained:

  • Global configuration - Set global defaults for all batch controls. For example, it is possible to specify the product user used for authorization purposes.
  • Specific Batch Control configuration - Set the parameters for specific Batch Controls.

For example:
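A minimal sketch of the two submitter options (the batch code MYJOB is illustrative):

$ bedit.sh -s            # global submitter defaults (submitbatch.properties)
$ bedit.sh -b MYJOB      # settings specific to batch control MYJOB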

  • Within each template there is a list of settings, with help on each setting to help you decide the values. The bedit.sh utility allows each parameter to be set individually using in-line commands. Use the set command to set values. Use help to get context sensitive help on individual parameters. For example:
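A minimal sketch of these in-line commands within a bedit.sh session (the parameter and value shown are illustrative):

> set threads 5
> help threads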

  • One of the most important commands is save which applies the changes you have made.

The Batch Edit utility uses templates provided by the product to build the configuration files without the need for manual editing. It is highly recommended for customers who do not want to manually manage templates or configurations for batch, or who do not have in-depth Oracle Coherence knowledge.

For more information about the Batch Edit utility refer to the Server Administration Guide shipped with the product and Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support. Customers wanting to know about the DBMS_SCHEDULER interface should refer to Batch Scheduler Integration (Doc Id: 2196486.1) available from My Oracle Support.

Revision Control Basics

Sun, 2018-10-21 17:53

One of the features of the Oracle Utilities Application Framework is Revision Control. This is an optional facility where you can version manage your ConfigTools objects natively within the browser. Revision Control supports the following capabilities:

  • Adding an object for the first time.  The new object is automatically checked out by the system on behalf of the current user.  The revision is finalized when the user checks in the object or reverts all changes.  The latter causes the object to be restored to the version at check-out time.
  • Updating an object requires the object to be checked out prior to making any change.  A user can either manually check out an object or the first save confirms an automatic check out with the user.  The revision is finalized when the user checks in the object or reverts all changes.  
  • Deletion is captured. Deleting an object results in a new revision record capturing the object at deletion time. This does not remove the object from Object Revision. It also allows for restores in the future if necessary.
  • Restoring generates a new revision. Restoring an object also results in a new revision capturing the object after being restored to an older version.
  • State is important. An object is considered checked out if it has a revision in a non-final state.
  • Algorithms control the revision. A Revision Control Maintenance Object plug-in spot is introduced to enforce revision rules, such as preventing a user from working on an object checked out by another user. Revision control is an optional feature that can be turned on for each eligible maintenance object. To turn the feature on for a Maintenance Object, a revision control algorithm has to be plugged into it.
  • Automatic revision dashboard zones. A new Revision Control context sensitive dashboard zone is provided to manage the above revision events, except the restore, for the current object being maintained.  The zone is visible only if the Maintenance Object has this feature turned on.  A hyperlink from this zone navigates to a new Revision History portal listing the history of revision events for the current object.      
  • Tracking objects. A dashboard zone is provided that shows all the objects currently being checked out by the user.  
  • Search revision history. In addition, a Revision History Search portal is provided to search and see information about a user's historical work and the historical revisions to an object.

Revision Control supports the ability to check in, check out and revert versions of objects you develop in ConfigTools from within the browser interface. Additionally, Revision Control supports team based development, with supervisor functions to force the state of versions allocated to individuals. The diagram below summarizes the facilities:

Note: Revision Control is disabled by default and the F1-REVCTL algorithm must be added as a Revision Control algorithm on the ConfigTools Maintenance Objects it applies to.

Once enabled, whenever the configured object is edited, a Revision Control dashboard zone will be displayed, depending on the state of the object within the Revision Control system.

The state of a revision can be queried using the Revision Control Search.

This has just been a summary of some of the features of Revision Control. Refer to the online documentation for additional advice and a full description of the features.

Oracle Utilities Documentation

Mon, 2018-10-08 22:20

One of the most common questions I get from partners and customers is the location of the documentation for the product. In line with most Oracle products there are three locations for documentation:

  • Online Help. The product ships with online help which has information about the screens and advice on the implementation and extension techniques available. Of course, this assumes you have installed the product first. Help can be accessed using the assigned keyboard shortcut or using the help icon on the product screens.
  • Oracle Software Delivery Cloud. Along with the installation media for the products, it is possible to download PDF versions of all the documentation for offline use. This is usually indicated on the download when selecting the version of the product to download, though it can be downloaded at any time.
  • Oracle Utilities Help Center. As with other Oracle products, all the documentation is available online via the Oracle Help Center (under Industries --> Utilities).

The following documentation is available:

  • Release Notes. Summary of the changes and new features in the Oracle Utilities product. (Audience: Implementation Teams)
  • Quick Install. Summary of the technical installation process including prerequisites. (Audience: UNIX Administrators)
  • Installation Guide. Detailed software installation guide for the Oracle Utilities product. (Audience: UNIX Administrators)
  • Optional Products Installation. Summary of any optional additional or third party products used for the Oracle Utilities product. This guide only exists if optional products are certified/supported with the product. (Audience: UNIX Administrators)
  • Database Administrator's Guide. Installation, management and guidelines for use with the Oracle Utilities product. (Audience: DBA Administrators)
  • Licensing Information User Manual. Legal license information relating to the Oracle Utilities product and related products. (Audience: UNIX Administrators)
  • Administrative User Guide. Offline copy of the Administration documentation for the Oracle Utilities product. This is also available via the online help installed with the product. (Audience: Implementation Teams, Developers)
  • Business User Guide. Offline copy of the Business and User documentation for the Oracle Utilities product. This is also available via the online help installed with the product. (Audience: Implementation Teams, Developers)
  • Package Organization Summary. Summary of the different packages included in the Oracle Utilities product. This may not exist for single product installations. (Audience: Implementation Teams)
  • Server Administration Guide. Guide to the technical configuration settings, management utilities and other technical architecture aspects of the Oracle Utilities product. (Audience: UNIX Administrators)
  • Security Guide. Guide to the security aspects of the Oracle Utilities product centralized in a single document. Covers both security functionality and technical security capabilities. This is designed for use by security personnel to design their security solutions. (Audience: Implementation Teams)
  • API Reference Notes. Summary of the APIs provided with the Oracle Utilities product. This is also available via online features. (Audience: Developers)
  • Developers Guide. This is the Oracle Utilities SDK guide for using the Eclipse based development tools for extending the Oracle Utilities product using Java. Partners using the ConfigTools or Groovy should use the Administrative User Guide instead. (Audience: Developers)

Be familiar with this documentation as well as Oracle Support which has additional Knowledge Base articles.
