Anthony Shorten


Hash Keys and Security

Mon, 2019-05-13 20:11

One of the features that has changed over the last few releases of the Oracle Utilities Application Framework is security. To keep up with security requirements across the industry, the Oracle Utilities Application Framework utilizes the security features of the infrastructure (operating system, Oracle WebLogic and Oracle Database) as well as providing inbuilt security capabilities. One of the major capabilities is the support for hash keys on the user identity.

On the user object, there is a hash key that is managed by the Oracle Utilities Application Framework. The goal of this hash key is to detect any unauthorized changes to the user identity and prevent a user identity from being used after an unauthorized change has been made. From an Oracle Utilities Application Framework point of view, an unauthorized change is any change that does not go through the user object itself. For example, issuing an UPDATE statement directly against the user tables bypasses the user object and is therefore an unauthorized change.
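
To make this concrete, below is a minimal sketch of the kind of direct SQL that causes the problem. The table and column names are illustrative only, not the actual product schema:

-- Illustrative only: a direct update that bypasses the user object.
-- None of the user object's business rules run, so the stored hash key
-- no longer matches the hash recalculated by the framework.
UPDATE sc_user
   SET email_address = 'fred@example.com'
 WHERE user_id = 'FRED01';
COMMIT;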

When a user record is accessed, for example at login time, the Oracle Utilities Application Framework recalculates the hash key and compares that against the stored hash key. If they match, then the user is authorized, using the authorization model, to access the product. If the hash key does not match, then the user record has been compromised and the user action is rejected. In the case of a login, the user is refused access to the product.

The log will contain the message:

User security hash doesn't match for userid

From time to time, customers report issues with these characteristics. In most cases, the cause is one of two practices:

  • User Object Updated Directly. Some implementations update the user object via direct SQL for a particular reason. This technique is discouraged as it bypasses the business rules configured for the user object within the product. We recommend that customers update the user object via the provided methods to prevent the user record from being flagged as compromised. The user object is protected by the authentication and authorization model used.
  • Encryption Key has been changed. At some sites, the encryption key is rotated on a regular basis. When this happens, the hash key becomes stale and needs to be rebuilt to reflect the new key.  

These are the only two use cases where the hash key becomes invalid. So what can be done about it? There are two suggested techniques for resolving the issue:

Note: The utility will reset all the hash keys, not just the invalid ones.

To avoid security hash issues, it is recommended not to alter the user tables directly; always make changes through the user object.

Cube Viewer - Designing Your Cube

Sun, 2019-05-05 20:54

In the last Cube Viewer article we outlined a generic process for building a cube; the first step in this process is to design the data we want to analyze in the cube. A common misconception with Cube Viewer is that you can take any query and convert it into a cube for analysis. This is not exactly true: Cube Viewer is really designed for particular types of analysis and should be used for those types to take full advantage of the capability.

The easiest way of deciding what types of analysis are optimal for the Cube Viewer is to visualize your end result. This is not a new technique, as most designers work backwards from the result they want to determine the best approach. The easiest way I tend to visualize the Cube Viewer is as data in a more analytical view. If you are familiar with the "Pivot" functionality popular in spreadsheet programs, that is the idea. The pivot allows different columns in a list to be combined in such a way as to provide more analytical information. A very simple example is shown below:

Pivot Table

In the above example we have three columns: two are considered dimensions (how we "cut" the data) and one is the value we want to analyze. The pivot relationship in the above example is between Column A and Column B.
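
For illustration, the list feeding a pivot like the one above could come from a simple aggregation query such as the following sketch (the table and column names are purely illustrative):

-- column_a and column_b act as the dimensions; SUM(amount) is the value analyzed.
SELECT column_a,
       column_b,
       SUM(amount) AS value
  FROM sample_fact_table
 GROUP BY column_a, column_b;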

In Cube Viewer there are three concepts:

  • Dimensions. These are the columns used in the analysis. Typically dimensions represent the different ways you want to view the data in relation to other dimensions.
  • Filters. These act on the master record set (the data you want to use in the analysis) to refine the subset to focus upon. For example, you might want to limit your analysis to specific date ranges. By their nature, Filters can also become dimensions in a cube.
  • Values. These are the numeric values (including any functions) that need to be analyzed.

Note: Filters and Values can be considered dimensions as well due to the interactivity allowed in the Cube Viewer.

When designing your cube consider the following guidelines:

  • Dimensions (including filters) define the data to analyze. The dimensions and filters are used to define the data to focus upon. The SQL will be designed around all of these concepts.
  • Interactivity means analysis is fluid. Whilst dimensions, filters and values are defined in the cube definition, their roles can be altered at runtime through the user's interaction with the cube. The user can (within limits) interactively define how the data is represented.
  • Dimensions can be derived. It is possible to add ad-hoc dimensions that may not even exist in the database directly. The ConfigTools capability allows additional columns, not present directly in the SQL, to be added during configuration. For example, it is possible to pull in a value from a related object that is not in the SQL but is available via the ConfigTools objects.

Note: For large amounts of data to include or process as part of the cube it is highly recommended to build that logic into the cube query itself to improve performance.

  • Values need to be numeric. The values to be analyzed must be numeric so they can be analyzed correctly.

In the next series of articles we will explore actually building the SQL statement and then translating that into the ConfigTools objects to complete the cube.

UTA Components and License Restrictions

Thu, 2019-04-25 20:17

The Oracle Utilities Testing Accelerator is fast becoming a part of a lot of cloud and on-premise implementations as partners and customers recognize the value of pre-built assets in automated testing to reduce costs and risk. This growth necessitates a clarification of the licensing of the Oracle Utilities Testing Accelerator to ensure compliance with the license.

  • Oracle Utilities product exclusive. The focus of the Oracle Utilities Testing Accelerator is to provide an optimized solution for testing Oracle Utilities products. The Oracle Utilities Testing Accelerator is licensed for exclusive use with the Oracle Utilities products it is certified against. It will not work with products that are not certified, as there is no content or capability built into the solution for products outside that realm.
  • Named User Plus License. The Oracle Utilities Testing Accelerator uses the Named User Plus license metric. Refer to the License Definitions and Rules for a definition of the restrictions of that license. The license cannot be shared across physical users. The license gives each licensed user access to any relevant content available for any number of any supported non-production copies of certified Oracle Utilities products (including multiple certified products and multiple certified versions).
  • Non-Production Use. The Oracle Utilities Testing Accelerator is licensed for use against non-Production copies of the certified products. It cannot be used against a Production environment.
  • All components of the Oracle Utilities Testing Accelerator are covered by the license. The Oracle Utilities Testing Accelerator is provided as a number of components including the browser based Oracle Utilities Testing Workbench (including the execution engine), the Oracle Utilities Testing Repository (storing assets and results of tests), the Oracle Utilities Test Asset libraries provided by Oracle, the Oracle Utilities Testing Accelerator Eclipse Plug-in and the Oracle Utilities Testing Accelerator Testing API implemented on the target copy of the Oracle Utilities product you are testing. These are all subject to the conditions of the license. For example, you cannot use the Oracle Utilities Testing Accelerator Testing API without the Oracle Utilities Testing Accelerator. Therefore you cannot install the Testing API on a Production environment or use the API in any respect other than with the Oracle Utilities Testing Accelerator (and selected other Oracle testing products).

Oracle Utilities Testing Accelerator continues to be the cost effective way of reducing testing costs associated with Oracle Utilities products on-premise and on the Oracle Cloud.

Moving from Change Handlers to Algorithms

Thu, 2019-04-25 18:08

One of the most common questions I receive from partners is why Java based Change Handlers are not supported on the Oracle Utilities SaaS Cloud. Change Handlers are not supported for a number of key reasons:

  • Java is not Supported on Oracle Utilities SaaS Cloud. As pointed out previously, Java based extensions are not supported on the Oracle Utilities SaaS Cloud to reduce costs associated with deployment activities on the service and to restrict access to raw devices and information at the service level. We replaced Java with enhancements to both scripting and the introduction of Groovy support.
  • Change Handlers are a legacy of the history of the product. Change handlers were introduced in early versions of the products to compensate for the limited algorithm entities in those versions. Algorithm entities are points in the logic, or process, where customers/partners can manipulate data and processing for extensions using algorithms. In early versions, algorithm entities were limited to the common points of extension made available in those versions. Over time, based upon feedback from customers and partners, the Oracle Utilities products introduced a wider range of algorithm entities that can be exploited for extensions. In fact, in the latest release of C2M, there are over 370 algorithm entities available. Over the years, the need for change handlers has slowly been replaced by the provision of these new or improved algorithm entities, to the point where they are no longer as relevant as they once were.

On the Oracle Utilities SaaS Cloud, it is recommended to use the relevant algorithm entity with an appropriate algorithm, written in Groovy or ConfigTools based scripting, rather than Change Handlers. Customers using change handlers today are strongly encouraged to replace them with the appropriate algorithms.

Note: Customers and Partners not intending to use the Oracle Utilities SaaS Cloud can continue to use the Change Handler functionality but it is highly recommended to also consider moving to using the appropriate algorithms to reduce maintenance costs and risks.

Utilities Testing Accelerator 6.0.0.1.0 Now Available

Mon, 2019-04-22 14:19

Oracle Utilities is pleased to announce the general availability of Oracle Utilities Testing Accelerator Version 6.0.0.1.0 via the Oracle Software Delivery Cloud with exciting new features which provide improved test asset building and execution capabilities. This release is a foundation release for future releases with key new and improved features.

Last year, the first release of the Oracle Utilities Testing Accelerator replaced the Oracle Functional Testing Advanced Pack for Oracle Utilities to optimize the functional testing of Oracle Utilities products. The new version extends the existing feature set and adds new capabilities for the testing of Oracle Utilities products.

The key changes and new capabilities in this release include the following:

  • Accessible. This release is now accessible, making the product available to a wider user audience.
  • Extensions to Test Accelerator Repository. The Oracle Utilities Testing Accelerator was shipped with a database repository, Test Accelerator Repository, to store test assets. This repository has been extended to accommodate new objects introduced in this release including a newly redesigned Test Results API to provide comprehensive test execution information. 
  • New! Server Execution Engine. In past releases, the only way to execute tests was using the provided Oracle Utilities Testing Accelerator Eclipse Plugin. Whilst that plugin is still available and will continue to be provided, an embedded, scalable server execution engine has been implemented directly in the Oracle Utilities Testing Accelerator Workbench. This allows testers to build and execute test assets without leaving the browser. This engine will be the preferred method of executing tests in this and future releases of the Oracle Utilities Testing Accelerator.
  • New! Test Data Management. One of the identified bottlenecks in automation is the provision and re-usability of test data for testing activities. The Oracle Utilities Testing Accelerator has added an additional capability to extend the original test data capabilities by allowing test users to extract data from non-production sources for reuse in test data. The principle is based upon the notion that it is quicker to update data than create it. The tester can specify a secure connection to a non-production source to pull the data from and allow manipulation at the data level for testing complex scenarios. This test data can be stored at the component level to create reusable test data banks or at the flow level to save a particular set of data for reuse. With this capability testers can quickly get sets of data to be reused within and across flows. The capability includes the ability to save and name test data within the extended Test Accelerator repository.
  • New! Flow Groups are now supported. The Oracle Utilities Testing Accelerator supports the concept of Flow Groups. These are groups of flows that can be executed as a set, in parallel or in serial, to reduce test execution time. This capability is used by the Server Execution Engine to execute groups of flows efficiently. It is also the foundation of future functionality.
  • New! Groovy Support for Validation. In this release, it is possible to use Groovy to express rules for validation in addition to the component validation language already supported. This capability allows partners and testers to add complex rule logic at the component and flow level. As with the Groovy support within the Oracle Utilities Application Framework, the language is whitelisted and does not support external Groovy frameworks.
  • Annotation Support. In the component API, it is possible to annotate each step in the process to make it more visible. This information, if populated, is now displayed on the flow tree for greater visibility. For backward compatibility, this information may be blank on the tree until it is populated.
  • New! Test Dashboard Zones. An additional set of test dashboard zones have been added to cover the majority of the queries needed for test execution and results.
  • New! Security Enhancements. For the Oracle Utilities SaaS Cloud releases of the product, the Oracle Utilities Testing Accelerator has been integrated with Oracle Identity Cloud Service to manage identity in the product as part of the related Oracle Utilities SaaS Cloud Services.

Note: This upgrade is backward compatible with test assets built with the previous Oracle Utilities Testing Accelerator releases so no rework is anticipated on existing assets as part of the upgrade process.

For more details of this release and the capabilities of the Oracle Utilities Testing Accelerator product refer to Oracle Utilities Testing Accelerator (Doc Id: 2014163.1) available from My Oracle Support.

Cube Viewer - Process to Build the Cube Viewer

Wed, 2019-04-17 18:32

As pointed out in the last post, the Cube Viewer is a new way of displaying data for advanced analysis. The Cube Viewer functionality extends the existing ConfigTools (a.k.a Task Optimization) objects to allow the analysis to be defined as a Cube Type and Cube View. Those definitions are used by the widget to display correctly and define what level of interactivity the user can enjoy.

Note: Cube Viewer is available in Oracle Utilities Application Framework V4.3.0.6.0 and above.

The process of building a cube introduces new concepts and new objects to ConfigTools to allow for an efficient method of defining the analysis and interactivity. In summary form the process is described by the figure below:

Cube View Process

  • Design Your Cube. Decide the data and related information to be used in the Cube Viewer for analysis. This is not just a typical list of values but a design of dimensions, filters and values. This is an important step, as it helps determine whether the Cube Viewer is appropriate for the data to be analyzed.
  • Design Cube SQL. Translate the design into cube based SQL. This SQL statement is formatted specifically for use in a cube (a sketch of this style of SQL is shown after this list).
  • Setup Query Zone. The SQL statement designed in the last step needs to be defined in a ConfigTools Query Zone for use in the Cube Type later in the process. This also allows for the configuration of additional information not contained in the SQL to be added to the Cube.
  • Setup Business Service. The Cube Viewer requires a Business Service based upon the standard FWLZDEXP application service. This is also used by the Cube Type later in the process.
  • Setup Cube Type. Define a Cube Type object defining the Query Zone, Business Service and other settings to be used by the Cube Viewer at runtime. This brings all the configuration together into a new ConfigTools object.
  • Setup Cube View. Define an instance of the Cube Type with the relevant predefined settings for use in the user interface as a Cube View object. Appropriate users can use this as the initial view into the cube and use it as a basis for any Saved Views they want to implement.
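
To make the steps more concrete, the following is a minimal sketch of the style of SQL produced in the Design Cube SQL step, using illustrative table and column names (the exact format and aliases expected by the Cube Type are covered in the follow-up articles):

SELECT region        AS dimension1,  -- dimension
       bill_cycle    AS dimension2,  -- dimension
       SUM(bill_amt) AS value1       -- numeric value to analyze
  FROM sample_bill_summary
 WHERE bill_dt BETWEEN :startDate AND :endDate  -- filter, typically exposed as a zone parameter
 GROUP BY region, bill_cycle;

The query is then housed in the Query Zone, and the Cube Type ties that zone and the FWLZDEXP based Business Service together.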

Over the next few weeks, a number of articles will be available to outline each of these steps to help you understand the feature and be on your way to building your own cubes.

Cube Viewer - A new way of analyzing operational data in OUAF

Mon, 2019-04-08 19:23

In past releases of the Oracle Utilities Application Framework, Query Zones have been a flexible way of displaying lists of information, with flexible filters and dynamic interaction including creating and saving views of the lists for reuse. In Oracle Utilities Application Framework V4.3.0.6.0 and above, we introduced the Cube Viewer, which extends the query model to support pivot style analysis and visualization of operational data. The capability extends the ConfigTools (aka Task Optimization) capability to allow implementations to define cubes and provide interactivity for end users on operational data.

The Cube Viewer brings together a number of ConfigTools objects to build an interactive visualization with the following capabilities:

  • Toolbar. An interactive toolbar for the user to decide the view of the cube to be shown. This includes saving a view, including its criteria, for reuse.
  • Settings. The view and criteria/filters to use on the data set to help optimize the analysis. For example you might want to see the raw data, a pivot grid, a line chart or bar chart. You can modify the dimensions shown and even add rules for how certain values are highlighted using formats.
  • Filters. You can decide the filters and values shown in the grid within the selection criteria.
  • View. The above configuration results in a number of views of the data.

An example of the Cube Viewer is shown below:

Example Cube Viewer

The Cube Viewer has many features that allow configuration to optimize and highlight critical data whilst allowing users to interact with the information presented. In summary, the key features are:

  • Flexible View Configuration. It is possible to use the configuration at runtime to determine the subset of the data to analyze and the display format as a saved view. As with query portals, views can be saved and reused. These views can be Private, Shared (within an Access Group) or Public.
  • Formatting Support. To emphasize particular data values, it is possible at runtime to alter their display using simple rules. For example:

Example formatting

  • Visual and Analytical Views. The data to be shown can be expressed in a number of view formats including a variety of graph styles, in grid format and/or raw format. This allows users to interpret the data according to their preferences.
  • Configurable using ConfigTools. The Cube Viewer uses and extends existing ConfigTools objects to allow greater flexibility and configuration control. This allows existing resources with ConfigTools skills to configure cubes.
  • Comparison Feature. Allows different selection criteria sets to be used for comparison purposes. This allows for difference comparison between two sets of data.
  • Save View as "Snapshot". It is possible to isolate data using the interactive elements of the Cube Viewer to find the data you want to analyze. Once found, you can save the configuration, filters, etc. for recall later, very similar to the concept of a "Snapshot". For example, if you find some data that needs attention, you can save the view and then reuse it to show others later if necessary.
  • Function Support. In the details, additional functions such as Average Value, Count, Maximum Value, Median Value, Minimum Value, Standard Deviation and Sum are supported at the row and column levels. For example:

Example Functions

Cube Views may be available with each product (refer to the documentation shipped with the product) and Cube Views can be configured by implementers and reused across users as necessary. Over the next few weeks a step by step guide will be published here and in other locations to show the basic process and some best practices for building a Cube Viewer.

Utilities Testing Accelerator - A new way to test, quickly lowering your cost and risk

Sun, 2019-03-31 21:11

One of the most common pieces of feedback I got from attending the Oracle Utilities User Group and the Oracle Utilities Customer Edge Conference in Austin recently was that the approach that Oracle Utilities Testing Accelerator is taking is different and logical.

Traditionally, test automation is really coding using the language provided by the tool. The process is to record a screen, with the data for the test, and have that become a script in whatever language is supported by the tool. Using the script for other tests means either recording it again or getting a programmer, fluent in the scripting language, to modify the script. The issue arises when the user interface changes for any reason, which will most likely invalidate the script, so the whole process is repeated. You end up spending more time building scripts than actually testing.

Oracle Utilities Testing Accelerator takes a different and more cost effective approach:

  • Component Based Testing. Oracle Utilities Testing Accelerator uses the tried and tested Oracle testing approach of exposing test assets as reusable components in a pre-built library. These components do not use the online channel as the API but use the same underlying API used by the online and other channels into the product. This isolates the test from changes to any of the channels, as it is focused purely on functionality rather than user experience testing.
  • More Than Online Testing. Oracle Utilities Testing Accelerator can test all channels (online, web service, mobile and batch). This allows maximum flexibility in testing across diverse business processes.
  • Orchestrate Not Build. Oracle Utilities Testing Accelerator allows for your business processes to be orchestrated as a sequence of components which can be within a single supported Oracle Utilities product, across supported Oracle Utilities products, within the same channel or across multiple channels. The orchestration reduces the need for relying on traditional recording and maximizes flexibility. 
  • No Touch Scripting. Oracle Utilities Testing Accelerator generates Selenium based code that requires no adjustment to operate and can be executed from an Eclipse Plugin or directly from the tool (the latter is available in 6.0.0.1.0).
  • Data is Independent of Process. During the orchestration, data is not required to build out the business process. Data can be added at any time during the process, including after the business process is completed and/or at runtime. This allows business processes to be reused with multiple test data sets. Test data can be entered using the Workbench directly, via an external file, and in the latest release (6.0.0.1.0) it can be populated via test data extractors.
  • Content from Product QA. The Oracle Utilities Testing Accelerator comes with pre-built component libraries, provided by Product QA, for over 30 version/product combinations of Oracle Utilities products. The license, which is user based, gives you access to any of the libraries appropriate for your site, regardless of the number of non-production environments or the number of Oracle Utilities products it is used against.

These differences reduce your costs and risk when adopting automated testing. For more information about Oracle Utilities Testing Accelerator refer to Oracle Utilities Testing Accelerator (Doc Id: 2014163.1) available from My Oracle Support.

Supported Platform Guide/UTA Pack Guide

Wed, 2019-03-20 09:44

As with all Oracle products, we release supported platforms in the installation documentation. But, as platform changes happen between releases, Oracle products publish certification and supported platform documentation on My Oracle Support.

Oracle Utilities is no different, and we publish the up to date information in an article within My Oracle Support. It is recommended to use this article as a reference, as the Installation Guides may become stale in respect of platforms.

The article for Oracle Utilities provides a spreadsheet centralizing all the information, including certified platforms, certified database releases and even the versions of Oracle Utilities products supported with content in the Oracle Utilities Testing Accelerator.

The article is available at Certification Matrix for Oracle Utilities Products (Doc Id: 1454143.1) available from My Oracle Support.


Schedule Management for Oracle Scheduler Integration

Mon, 2019-03-11 11:15

One of the most common questions I get from my product users is how to manage your batch schedules when using the Oracle Scheduler Integration.  As Oracle Scheduler is part of the Oracle Database, Oracle provides a number of ways of managing your schedule:

  • Command Line. If you are an administrator who manages your database using commands and PL/SQL calls, then you can use the DBMS_SCHEDULER interface directly from any SQL tool. You have full access to the scheduler objects (a sketch is shown after this list).
  • Oracle SQL Developer. The latest versions of Oracle SQL Developer include capabilities to manage your schedule directly from that tool. The advantage of this is that the tool supports techniques such as drag and drop to simplify the management of scheduler objects. For example, you can create a chain and then drop the programs into the chain and "wire" them together. This interface generates the direct DBMS_SCHEDULER calls to implement your changes. Refer to the Oracle SQL Developer documentation for details of maintaining individual scheduler objects. For example:

SQL Developer Interface

  • Oracle Enterprise Manager. From Oracle Database 12c and above, Oracle Enterprise Manager automatically includes DBA functions and is the recommended tool for all database work. Most DBAs will use this capability to manage the database, and this includes Oracle Scheduler management. For example:

Enterprise Manager Interface
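
For illustration, here is a minimal DBMS_SCHEDULER sketch of the chain technique described above, runnable from any SQL tool. The chain, step and program names are illustrative only; the programs themselves would wrap your batch submissions:

BEGIN
  -- Create a chain and wire two illustrative programs together.
  DBMS_SCHEDULER.CREATE_CHAIN(chain_name => 'NIGHTLY_CHAIN');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP(chain_name => 'NIGHTLY_CHAIN', step_name => 'STEP1', program_name => 'BILLING_PROG');
  DBMS_SCHEDULER.DEFINE_CHAIN_STEP(chain_name => 'NIGHTLY_CHAIN', step_name => 'STEP2', program_name => 'EXTRACT_PROG');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE(chain_name => 'NIGHTLY_CHAIN', condition => 'TRUE', action => 'START STEP1');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE(chain_name => 'NIGHTLY_CHAIN', condition => 'STEP1 COMPLETED', action => 'START STEP2');
  DBMS_SCHEDULER.DEFINE_CHAIN_RULE(chain_name => 'NIGHTLY_CHAIN', condition => 'STEP2 COMPLETED', action => 'END');
  DBMS_SCHEDULER.ENABLE('NIGHTLY_CHAIN');
  -- Schedule the chain to run daily at 1am.
  DBMS_SCHEDULER.CREATE_JOB(job_name        => 'NIGHTLY_JOB',
                            job_type        => 'CHAIN',
                            job_action      => 'NIGHTLY_CHAIN',
                            repeat_interval => 'FREQ=DAILY;BYHOUR=1',
                            enabled         => TRUE);
END;
/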

Implementations therefore have a range of options for managing their schedule. Customers on the cloud use the Oracle Utilities Cloud Service Foundation to manage their schedule in a similar interface to Enterprise Manager via our Scheduler API.


Batch Architecture - Designing Your Submitters - Part 3

Mon, 2019-03-04 16:13

If you are using the command line submission interface, the last step in the batch architecture is to configure the Submitter configuration settings.

Note: Customers using the Oracle Scheduler Integration or Online Submission (the latter is for non-production only) need to skip this article as the configuration outlined in this article is not used by those submission methods.

As with the cluster and threadpool configurations, the use of the Batch Edit (bedit.sh) utility is recommended to save costs and reduce risk with the following set of commands:

$ bedit.sh -s
 
$ bedit.sh -b <batch_control>
  • Default option (-s). This sets up a default configuration file used for any batch control where no specific batch properties file exists. This creates a submitbatch.properties file located in the $SPLEBASE/splapp/standalone/config subdirectory.
  • Batch Control Specific configuration (-b). This will create a batch control specific configuration file named job.<batchcontrol>.properties, where <batchcontrol> is the Batch Control identifier (Note: make sure it is the same case as identified in the meta-data). This file is located in the $SPLEBASE/splapp/standalone/config/cm subdirectory. With this option, the soft parameters on the batch control can be configured as well.

Batch Submitter Configuration

Use the following guidelines:

  • Use the -s option where possible. Setup a global configuration to cover as many processes as possible and then create specific process parameter files for batch controls that require specific soft parameters.
  • Minimize the use of command line overrides. The advantage of setting up submitter configuration files is to reduce your maintenance costs. Whilst it is possible to use command line overrides to replace all the settings in the configuration, avoid overuse of overrides to stabilize your configuration and minimize your operational costs.
  • Set common batch parameters. Using the -s option specify the parameters for the common settings.
  • Change the User used. The default user AUSER is not a valid user. This is intentional, to force the appropriate configuration for your site. Avoid using SYSUSER, as that is only to be used to load additional users into the product.
  • Setup soft parameters in process specific configurations. For batch controls that have parameters, these need to be set in the configuration file or as overrides on the command line. To minimize maintenance costs and potential command line issues, it is recommended to set the values in a process specific configuration file using the add soft command in bedit.sh, with the following recommendations:
    •  The parameter name in the parm setting must match the name and case of the Parameter Name on the Batch Control.
    •  The value set in the value setting must be the same or valid for the Parameter Value on the Batch Control.
    •  Optional parameters do not need to be specified unless used.

For example:

$ bedit.sh -s

Editing file /u01/demo/splapp/standalone/config/submitbatch.properties using template /u01/demo/etc/submitbatch.be
Batch Configuration Editor [submitbatch.properties]
---------------------------------------------------------------
Current Settings
  poolname (DEFAULT)
  threads (1)
  commit (10)
  maxtimeout (15)
  user (AUSER)
  lang (ENG)
  storage (false)
  role ({batchCode})
>

$ bedit.sh -b MYJOB

File /u01/demo/splapp/standalone/config/cm/job.MYJOB.properties does not exist. Create? (y/n) y
Editing file /u01/demo/splapp/standalone/config/cm/job.MYJOB.properties using template /u01/demo/etc/job.be
Batch Configuration Editor [job.MYJOB.properties]
---------------------------------------------------------------
Current Settings
  poolname (DEFAULT)
  threads (1)
  commit (10)
  user (AUSER)
  lang (ENG)
  soft.1
      parm (maxErrors)
      value (500)
>

For more advice on individual parameters use the help <parameter> command.

To use the configuration use the submitjob.sh -b <batch_code> command. Refer to the Server Administration Guide supplied with your product for more information.

Batch Scheduler Integration (Doc Id: 2196486.1) Updated

Wed, 2019-02-27 21:00

In line with the update to the Batch Best Practices whitepaper, the Batch Scheduler Integration whitepaper has also been updated to reflect the new advice. The Batch Scheduler Integration whitepaper explains the DBMS_SCHEDULER (also known as Oracle Scheduler) interface implemented within the Oracle Utilities Application Framework.

The Oracle Scheduler is included in the database licensing for the Oracle Utilities Application Framework and can be deployed locally or enterprise wide (the latter may incur additional licensing depending on the deployment model). The Oracle Utilities Application Framework includes a prebuilt interface that allows the Oracle Scheduler to use its objects to execute Oracle Utilities Application Framework based processes.

The advantages of the Oracle Scheduler:

  • Licensing is included in Oracle Database licenses already. It is already available to those customers.
  • Someone in your organization is already using it. The Oracle Scheduler is used by a variety of products, including the Database itself and Oracle Enterprise Manager, to schedule and perform work. It is a key element in the autonomous database.
  • It is used by Oracle Utilities SaaS Cloud implementations. We use it for our Oracle Utilities SaaS Cloud implementations natively and via an API for external usage. We also built a scheduling interface within the Oracle Utilities Cloud Services Foundation, which is included exclusively for all Oracle Utilities SaaS Cloud implementations.
  • Choice of interfaces to manage your schedule. As the Oracle Scheduler is part of the database, Oracle provides a management and monitoring capability interface via Command line, Oracle SQL Developer, Oracle JDeveloper and/or Oracle Enterprise Manager.

The new version of the whitepaper is available from My Oracle Support at Batch Scheduler Integration (Doc Id: 2196486.1).

Batch Best Practices (Doc Id: 836362.1) Updated

Wed, 2019-02-27 19:55

The Batch Best Practices whitepaper has been completely rewritten from scratch to optimize the information and provide a simpler mechanism for helping implementations configure and manage their batch architecture. The new whitepaper covers building and maintaining an effective batch architecture and then guidelines for optimizations around that architecture. It also separates different techniques for the various submission methods.

The whitepaper now covers the following topics:

  • Batch Concepts. How the batch architecture and its objects work together to form the batch functionality in the Oracle Utilities Application Framework.
  • Batch Architecture. A new simpler view of the various layers in the batch architecture.
  • Configuration. A look at the configuration process and guidelines using the Batch Edit to simplify the process.
  • Batch Best Practices. These are generic but important best practices collected from our cloud and on-premise implementations that may prove useful to implementations.
  • Plug In Batch. This is a primer for the Plug In Batch capability (it will be explored in detail in other documents).

The whitepaper is available from My Oracle Support at Batch Best Practices (Doc Id: 836362.1).

Batch Architecture - Designing Your Threadpools - Part 2

Mon, 2019-02-25 16:45

In the last article we discussed the setup of a cluster. Once the cluster is setup, the next step is to design and configure your threadpools. Before I illustrate how to quickly configure your threadpools here are a few things to understand about threadpools:

  • Threadpools are Long Running JVMs. The idea behind threadpools is that they are long running JVMs that accept work (from submitters) and process that work. Each instance of a threadpool is an individual running JVM on a host (or hosts).
  • Threadpools are Named. Each threadpool is named with a tag (known as the Threadpool Name). This tag is used when running a batch process to target specific JVMs to perform the processing. The names are up to the individual implementation.
  • Threadpools Can Have Multiple Instances. Threadpools can have a single instance or multiple instances within a host or across hosts.
  • Threadpools have thread limits. Each instance of a threadpool has a physical thread limit. This is not the Java thread limit but the maximum number of threads that can be safely executed in the instance.
  • Threadpools with the same name have cumulative limits when clustered. When multiple instances of the same threadpool name are available, the number of threads available is the sum total of all instances. This is regardless of whether the instances are on the same host or across hosts, as long as they are in the same cluster.

A summary of the above is shown in the figure below:

Thread Limit Example

For the above scenarios:

  • Scenario A: Single Thread Pool on a Single Host (example pool POOL1). This is the simplest scenario.
  • Scenario B: Multiple Thread Pool Instances on a Single Host (example pool POOL2). The number of threads across this scenario is cumulative. In this example there are ten (10) threads available.
  • Scenario C: Multiple Thread Pool Instances on Multiple Hosts (example pool POOL3). This is a clustered setup across hosts. Again the threads available are cumulative; in this case there are twelve (12) threads available.

Note: The second instance of POOL3 can have different thread limits to reflect the capacity on the host. In most cases, the number of threads will be the same but it is possible to change the configuration on a host to reflect the capacity of that individual machine.

Note: You can combine any of these scenarios for a complex batch architecture.

Building Your Threadpool Configuration

As with the Cluster configuration the best way of building your threadpool configuration is using the Batch Edit (bedit.sh) utility. There are two commands available to you:

Threadpool Configuration

  • The bedit.sh -w command is recommended as your initial command to create a set of default thread configurations in the threadpoolworker.properties file. This file is used by the threadpoolworker.sh command by default.
  • The bedit.sh -l <arg> command is used to create custom threadpools, with <arg> used to denote the template to base the configuration on. The product ships cache and job templates, and it is possible to create custom templates directly from the command. The utility generates a threadpool.<arg>.properties file. To use the template, use the threadpoolworker.sh -l <arg> command line.

Here are some guidelines when building the threadpool configuration:

  • Some Settings Are At The Template Level. Some of the settings are common to all threadpools using the template. For example, JVM Settings, Caching and Internal Threads are specified at  the template level.
  • Create An Appropriate List Of Threadpool Names And Thread Limits In The File. The list of threadpools can be configured with names and limits.
  • Use Cache Template for Cache Threadpools. In a complex architecture with lots of threadpool instances, it is recommended to invest in creating a cache threadpool to optimize network traffic.
  • Use Job Template for Focussed Threadpools. If optimization of JVM settings, caching etc is required use the job template or create a custom template to configure these settings.

The bedit.sh utility allows for configuration of settings from the templates. For example:

$ bedit.sh -w

Batch Configuration Editor [threadpoolworker.properties]
--------------------------------------------------------------------------
Current Settings
  minheap (1024m)
  maxheap (1024m)
  daemon (false)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_TPW)
  jmxstartport (7540)
  l2 (READ_WRITE)
  devmode (false)
  ollogdir (/u02/sploutput/demo)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (POOL1)
      threads (10)
  pool.2
      poolname (POOL2)
      threads (5)
  pool.3
      poolname (POOL3)
      threads (7)
>   

 

Use the help command on each setting for advice.

When designing your threadpools there are several guidelines that can apply:

  • Simple Is Best. One of the key capabilities of the architecture is that you have access to a full range of alternative configurations to suit your needs. To quote a famous movie, "With Great Power Comes Great Responsibility", so it is recommended not to go overboard with the complexity of the configuration. For example, in non-production environments use a small number of threadpools to keep it simple. When designing your threadpool architecture, balance maintenance over technical elegance. More complex solutions can increase maintenance costs, so keep the solution as simple as you can.
  • Consider Specialist Threadpools for L2 Caching. Some of the ancillary processes in the product require the L2 Cache to be disabled. This is because they are updating the information, and the cache will actually adversely affect the performance of these processes. Processes such as Configuration Migration Assistant, LDAP Import and some of the conversion processes require the caching to be disabled. In this case, create a template for a dedicated threadpool that turns off L2 Caching.
  • Threadpools Need Only Be Running As Needed. One misconception about threadpools is that they must be up ALL the time to operate. This is not true. They only need to be active when a batch process needs access to the particular threadpool. When they are not being used, any threadpool JVMs are still consuming memory and CPU (even just a little). There is a fundamental principle in Information Technology: "Thou Shalt Not Waste Resources". An idle threadpool is still consuming resources, including memory, that could be better used by other active processes.
  • Consider Specialist Threadpools for Critical Processing. A threadpool will accept any work from any process it is targeted for. Whilst this is flexible, it can cause issues with critical resources. When a critical process is executed in your architecture, it is best to make sure there are resources available to process it efficiently. If other processes share the same threadpools, then critical processes are competing for resources with less critical processes. One technique is to set up dedicated threadpools, with optimized settings, for the exclusive use of critical processes.
  • There Are Limits. Even though it is possible to run many threadpools in your architecture, there are limits to consider. The most obvious is memory. Each threadpool instance is a running JVM reserving memory for use by threads. By default, this is between 768MB and 1GB (or more) per JVM. Your physical memory may be the limit that decides how many active JVMs are possible on a particular host (do not forget the operating system also needs some memory). Another limit will be contention on resources such as records and disk. One technique that has proven useful is to monitor throughput (records per second) and increase threading or threadpools until this starts to reduce.

The above techniques are but a few that are useful in designing your threadpools. The next step in the process is to decide the submitters and the number of threads to use across these threadpools. This will be the subject of the next part of the series.


Batch Architecture - Designing Your Cluster - Part 1

Sun, 2019-02-17 18:42

The Batch Architecture for the Oracle Utilities Application Framework is both flexible and powerful. To simplify the configuration and prevent common mistakes, the Oracle Utilities Application Framework includes a capability called Batch Edit. This is a command line utility, named bedit.sh, that provides a wizard style capability to build and maintain your configuration. By default the capability is disabled; it can be enabled by setting Enable Batch Edit Functionality to true in the Advanced Configuration settings using the configureEnv.sh script:

$ configureEnv.sh -a

*************************************************
* Environment Configuration demo                *
*************************************************
...
50. Advanced Environment Miscellaneous Configuration
...
       Enable Batch Edit Functionality:                    true
...

Once enabled the capability can be used to build and maintain your batch architecture.

Using Batch Edit

The Batch Edit capability is an interactive utility to build the environment. The capability is easy to use with the following recommendations:

  • Flexible Options. When invoking the command you specify the object type you want to configure (cluster, threadpool or submitter) and any template you want to use. The command options will vary. Use the -h option for a full list.
  • In Built Help. If you do not know what a parameter, or even an object type, is about, you can use the help <topic> command. For example, when configuring threadpools, help threadpoolworker gives you advice about the approaches you can take. If you want a list of topics, type help with no topic.
  • Simple Commands. The utility has a simple set of commands to interact with the settings. For example, if you want to set the role within the cluster to, say, fred, you would use the set role fred command within the utility.
  • Save the Configuration. There is a save command to make all changes in the session reflect in the relevant file; conversely, if you make a mistake, you can exit without saving the session.
  • Informative. It will tell you which file you are editing at the start of the session so you can be sure you are in the right location.

Here is an example of an edit session:

$ bedit.sh -w

Editing file /u01/ugtbk/splapp/standalone/config/threadpoolworker.properties using template /u01/ugtbk/etc/threadpoolworker.be
Includes the following push destinations:
  dir:/u01/ugtbk/etc/conf/tpw

Batch Configuration Editor 4.4.0.0.0_1 [threadpoolworker.properties]
--------------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  daemon (true)
  rmiport (7540)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  jmxstartport (7540)
  l2 (READ_ONLY)
  devmode (false)
  ollogdir (/u02/sploutput/ugtbk)
  ollogretain ()
  thdlogretain ()
  timed_thdlog_dis (false)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)
> save
Changes saved
Pushing file threadpoolworker.properties to /u01/ugtbk/etc/conf/tpw ...
> exit

Cluster Configuration

The first step in the process is to design your batch cluster. This is the group of servers that will execute batch processes. The Oracle Utilities Application Framework uses a Restricted Use License of Oracle Coherence to cluster batch processes and resources. The use of Oracle Coherence allows you to implement different architectures from simple to complex. Using Batch Edit there are three cluster types supported (you must choose one type per environment).

  • Single Server (ss). The cluster is restricted to a single host. This is useful for non-production environments such as demonstration, development and testing, as it is the simplest to implement.
  • Uni-Cast (wka). The cluster uses the unicast protocol, with the hosts that are part of the cluster explicitly named within the configuration. This is recommended for sites wanting to lock down a cluster to specific hosts or not wanting to use multi-cast protocols. Administrators have to name the list of hosts, known as Well Known Addresses, that are part of the cluster as part of this configuration.
  • Multi-Cast (mc). The cluster uses the multi-cast protocol with a valid multi-cast IP address and port. This is recommended for sites that want a dynamic configuration where threadpools and submitters are accepted on demand. This requires the least configuration for product clusters, as threadpools can join a cluster dynamically from any server with the right configuration. It is not recommended for sites that do not use the multi-cast protocol.

Single Server Configuration

This is the simplest configuration, with the cluster restricted to a single host; networking is restricted accordingly within the configuration. To use this cluster type, simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t ss

Uni-Cast Configuration

This is a multi-host cluster where the hosts in the configuration are defined explicitly in host and port number combinations. The port number is used for communication to that host in the cluster. This style is useful where the site does not want to use the multi-cast protocol or wants to micro-manage their configuration. To use this cluster type simply use the following command and follow the configuration generated for you from the template.

bedit.sh -c -t wka

You then add each host as a socket using the command:

add socket

This will add a new socket collection in the format socket.<socketnumber>. To set the values use the command:

set socket.<socketnumber> <parameter> <value>

where:

  <socketnumber> - the host number to edit
  <parameter> - either wkaaddress (the host or IP address of the server) or wkaport (the port number on that host to use)
  <value> - the value for the parameter

For example:

set socket.1 wkaaddress host1

To use this cluster style ensure the following:

  • Use the same port number per host. Try to use the same broadcast port on each host in the cluster. If they differ, the port number in the main configuration file on the affected machines has to be changed to define that port.
  • Ensure each host has a copy of the configuration file. When you build the configuration file, ensure the same file is on each of the servers in the cluster (each host will require a copy of the product).
Multi-Cast Configuration

This is the most common multi-host configuration. The idea with this cluster type is that a multi-cast port and IP address are broadcast across your network per cluster. It requires very little configuration, and threadpools can dynamically connect to the cluster with little configuration. It uses the multi-cast protocol, which network administrators either love or hate. The configuration is similar to the Single Server, but the cluster settings are actually managed in the installation configuration (ENVIRON.INI) using the COHERENCE_CLUSTER_ADDRESS and COHERENCE_CLUSTER_PORT settings. Refer to the Server Administration Guide for additional configuration advice.

Cluster Guidelines

When setting up the cluster there are a few guidelines to follow:

  • Use Single Server for Non-Production. Unless you need multi-host clusters, use the Single Server cluster to save configuration effort.
  • Name Your Cluster Uniquely. Ensure your cluster is named appropriately and uniquely per environment to prevent cross environment unintentional clustering.
  • Set a Cluster Type and Stick with it. It is possible to migrate from one cluster type to another (without changing other objects) but to save time it is better to lock in one type and stick with it for the environment.
  • Avoid using Prod Mode. There is a mode in the configuration which is set to dev by default. It is recommended to leave the default for ALL non-production environments to avoid cross cluster issues. The Prod mode is recommended for Production systems only. Note: There are further safeguards built into the Oracle Utilities Application Framework to prevent cross cluster connectivity.

The cluster configuration generates a tangosol-coherence-override.xml configuration file used by Oracle Coherence to manage the cluster.

Cluster Configuration

Now that we have the cluster configured, the next step is to design the threadpools to be housed in the cluster. That will be discussed in Part 2 (coming soon).

See You At The Edge Conference in Austin

Thu, 2019-02-14 16:31
I will be attending the Oracle Utilities Edge Conference in Austin TX in March. This year the agenda is slightly different to past years, with the Technical Sessions being part of the Cloud and Technology Track alongside the Cloud sessions, so there will be lots more speakers this year. I will be running a few sessions around our next generation testing solution, migration to the cloud and a deep dive into our futures, as well as co-chairing an exciting session on our directions in Machine Learning in the Oracle Utilities stack.

Use Of Oracle Coherence in Oracle Utilities Application Framework

Sun, 2019-02-10 17:03

In the batch architecture for the Oracle Utilities Application Framework, a Restricted Use License of Oracle Coherence is included in the product. The Distributed and Named Cache functionality of Oracle Coherence is used by the batch runtime to implement clustering of threadpools and submitters, to help support the simple and complex architectures necessary for batch.

Partners ask about the libraries and their potential use in their implementations. There are a few things to understand:

  • Restricted Use License conditions. The license is for exclusive use in the management of executing members (i.e. submitters and threadpools) across hardware licensed for use with Oracle Utilities Application Framework based products. It cannot be used in any code outside of that restriction. Partners cannot use the libraries directly in their extensions. It is all embedded in the Oracle Utilities Application Framework.
  • Limited Libraries. The Oracle Coherence libraries are restricted to the subset needed by the license. It is not a full implementation of Oracle Coherence. As it is a subset, Oracle does not recommend using the Oracle Coherence Plug-in available for Oracle Enterprise Manager with the Oracle Utilities Application Framework implementation of the Oracle Coherence cluster. Use of this plug-in against the batch cluster will result in missing and incomplete information being presented, causing inconsistent results in that plug-in.
  • Patching. The Oracle Coherence libraries are shipped with the Oracle Utilities Application Framework and therefore are managed by patches for the Oracle Utilities Application Framework not Coherence directly. Unless otherwise directed by Oracle Support, do not manually manipulate the Oracle Coherence libraries.

The Oracle Coherence implementation with the Oracle Utilities Application Framework has been optimized for use with the batch architecture with a combination of prebuilt Oracle Coherence and Oracle Utilities Application Framework based configuration files.

Note: If you need to find out the version of the Oracle Coherence libraries used in the product at any time, they are listed in the file $SPLEBASE/etc/ouaf_jar_versions.txt.

The following command can be used to see the version:

cat $SPLEBASE/etc/ouaf_jar_versions.txt | grep coh

For example in the latest version of the Oracle Utilities Application Framework (4.4.0.0.0):

cat /u01/umbk/etc/ouaf_jar_versions.txt | grep coh

coherence-ouaf                   12.2.1.3.0
coherence-work                   12.2.1.3.0

Special Tables in OUAF based products

Wed, 2019-02-06 22:47

Long time users of the Oracle Utilities Application Framework might recognize two common table types, identified by their name suffixes, that are attached to most Maintenance Objects within the product:

  • Language Tables (Suffix _L). The Oracle Utilities Application Framework is multi-lingual and can support multiple languages at a particular site (for example, customers who have multi-lingual call centers or operate across jurisdictions where multiple languages are required). The language table holds the tags for each language for any fields that need to display text on a screen. The Oracle Utilities Application Framework matches the right language records based upon the user's language profile (and active language code). A sketch of a typical join is shown after this list.
  • Key Tables (Suffix _K). These tables hold the key values (and the now less used environment code) that are used in the main object tables. The original use for these tables was key tracking in the original Archiving solution (which has now been replaced by ILM). Now that the original Archiving is not available, the role of these tables has changed and they are now used in a number of areas:
    • Conversion. The conversion toolkit in Oracle Utilities Customer Care and Billing and now in the Cloud Service Foundation, uses the key table for efficient key generation and black listing of identifiers.
    • Key Generation. The Key generation utilities now use the key tables to quickly ascertain the uniqueness of a key. This is far more efficient than using the main table for this, especially with caching support in the database.
    • Information Life-cycle Management. The ILM capability uses the key tables to drive some of its processes including recognizing when something is archived and when it has been restored.

These tables are important to the operation of the Oracle Utilities Application Framework across many parts of the product; when you see them, you now know why they are there. A sketch of how a companion table relates to its main table is shown below.
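
For illustration, the following is a minimal sketch of how a language table joins back to its parent table. The table and key names (CI_WIDGET, CI_WIDGET_L, WIDGET_CD) are hypothetical, and the connection details are placeholders; LANGUAGE_CD and DESCR are the conventional column names on _L tables, but check your data dictionary for the actual objects:

sqlplus -s /nolog <<'SQL'
CONNECT readonly_user/example_password
-- Return each record's description in the user's active language (here 'ENG')
SELECT m.widget_cd, l.descr
  FROM ci_widget m
  JOIN ci_widget_l l
    ON  l.widget_cd   = m.widget_cd
    AND l.language_cd = 'ENG';
SQL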

Batch Scheduler Integration Questions

Sun, 2019-02-03 21:57

One of the most common questions I get from partners is around batch scheduling and execution. The Oracle Utilities Application Framework has a flexible set of methods for managing, executing and monitoring batch processes. The alternatives available are as follows:

  • Third Party Scheduler Integration. If the site has an investment in a third party batch scheduler, used to define schedules and execute product batch processes alongside non-product processes at an enterprise level, then the Oracle Utilities Application Framework includes a set of command line utilities, via scripts, that can be invoked by a wide range of third party schedulers. This allows scheduling to be managed by the third party scheduler while the scripts are used to execute and manage product batch processes. The scripts return standard return codes that the scheduler can use to determine next actions if necessary (see the sketch after this list). For details of the command line utilities, refer to the Server Administration Guide supplied with your version of the product.
  • Oracle Scheduler Integration. The Oracle Utilities Application Framework provides a dedicated API that allows implementations to use the Oracle DBMS Scheduler, included in all editions of the database, as a local or enterprise-wide scheduler. The advantage of this is that the scheduler is already included in your existing database license and has inbuilt management capabilities provided via the base functionality of Oracle Enterprise Manager (12+) (via Scheduler Central) and also via Oracle SQL Developer. Oracle uses this scheduler in the Oracle Utilities SaaS Cloud solutions. Customers of those cloud services can use the interface provided by the included Oracle Utilities Cloud Service Foundation to manage their schedules, or use the provided REST based scheduler API to execute schedules and/or processes from a third party scheduler. For more details of the scheduler interface, refer to the Batch Scheduler Integration (Doc Id: 2138193.1) whitepaper available from My Oracle Support.
  • Online Submission. The Oracle Utilities Application Framework provides a development and testing tool to execute individual batch processes from the online system. It is basic and only supports execution of individual processes (not groups of jobs like the alternatives do). This online submission capability is designed for cost-effective developer and non-production testing, if desired, and is not supported for production use. For more details, refer to the online documentation provided with the version of the product you are using.
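
For illustration, the following is a minimal sketch of a wrapper script a third party scheduler might invoke. It assumes the standard utility scripts (splenviron.sh and submitjob.sh); the path, environment name, batch code and thread number are examples only, so check the Server Administration Guide for the exact script names and flags in your version:

#!/bin/sh
# Set the product environment (example path and environment name)
. /u01/umbk/bin/splenviron.sh -e UMBK

# Submit an example batch process; the flags shown are illustrative
submitjob.sh -b F1-SYNRQ -t 0

# Pass the standard return code back so the scheduler can branch on it
exit $?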

Note: For customers of legacy versions of Oracle Utilities Customer Care and Billing, a basic workflow-based scheduler was provided for development and testing purposes. This interface is not supported for production use and one of the alternatives outlined above should be used instead.

All the above methods use the same architecture for executing batch processes (though some have additional features that need to be enabled). For details of each of the configurations, refer to the Server Administration Guide supplied with your version of the product.

When asked which technology should be used, I tend to recommend the following:

  • If you have an existing investment in a third party scheduler that you want to retain, then use the command line interface. This retains your existing investment, and you can integrate across products or even integrate non-product batch, such as backups, from the same scheduler.
  • If you do not have an existing scheduler, then consider using the DBMS Scheduler provided with the database. It is likely your DBAs are already using it for their own tasks, and it is used by a lot of Oracle products; the advantage is that you already have the license somewhere in your organization. It can be deployed locally within the product database or remotely as an enterprise-wide solution. It has a lot of good features and Oracle Utilities uses this scheduler as a foundation of its cloud implementations. If you are on the cloud, use the interface provided in Oracle Utilities Cloud Service Foundation, and if you have an external scheduler, integrate via the REST based Scheduler API. If you are on-premise, use the Oracle Enterprise Manager (12+) interface (Scheduler Central) in preference to the SQL Developer interface (though the latter is handy for developers). Oracle also ships a command line interface to the scheduler objects if you prefer PL/SQL style administration (see the sketch after this list).

Note: Scheduler Central is included in the base functionality of Oracle Enterprise Manager and does not require any additional packs.

  • I would only recommend using online submission for demonstrations, development and perhaps testing (where you are not using Oracle Utilities Testing Accelerator or have not implemented a scheduler). It has very limited support and will only execute individual processes.
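
For illustration, the following is a minimal sketch of defining a nightly job with the DBMS Scheduler from the command line. The connection details are placeholders and the procedure name (OUAF_BATCH_SUBMIT) is hypothetical, standing in for whatever interface the Batch Scheduler Integration whitepaper directs you to install; only the DBMS_SCHEDULER.CREATE_JOB call itself is standard database functionality:

sqlplus -s /nolog <<'SQL'
CONNECT scheduler_admin/example_password
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_BILLING',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'OUAF_BATCH_SUBMIT',     -- hypothetical wrapper procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=1',  -- run nightly at 1am
    enabled         => TRUE);
END;
/
SQL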


Configuration Management for Oracle Utilities

Thu, 2019-01-31 18:45

An updated series of whitepapers is now available for managing configuration and code in Oracle Utilities products, whether the implementation is on-premise, hybrid or using the Oracle Utilities SaaS Cloud. It has been updated for the latest Oracle Utilities Application Framework release. The series highlights the generic tools, techniques and practices available for use in Oracle Utilities products. The series is split into a number of documents:

  • Concepts. Overview of the series and the concept of Configuration Management for Oracle Utilities products.
  • Environment Management. Establishing and managing environments for use on-premise, hybrid and on the Oracle Utilities SaaS Cloud. There are some practices and techniques discussed to reduce implementation costs.
  • Version Management. Understanding the inbuilt and third party integration for managing versions of individual extension assets. There is a discussion of managing code on the Oracle Utilities SaaS Cloud.
  • Release Management. Understanding the inbuilt release management capabilities for creating extension releases and accelerators.
  • Distribution. Installation advice for releasing extensions across the environments on-premise, hybrid and Oracle Utilities SaaS Cloud.
  • Change Management. A generic change management process to approve extension releases including assessment criteria.
  • Configuration Status. The information available for reporting the state of extension assets.
  • Defect Management. A generic defect management process to handle defects in the product and extensions.
  • Implementing Fixes. A process and advice on implementing single fixes individually or in groups.
  • Implementing Upgrades. The common techniques and processes for implementing upgrades.
  • Preparing for the Cloud. Common techniques and assets that need to be migrated prior to moving to the Oracle Utilities SaaS Cloud.

For more information and for the whitepaper associated with these topics refer to the Configuration Management Series (Doc Id: 560401.1) available from My Oracle Support.
