
PL/SQL Cop for Trivadis PL/SQL & SQL Coding Guidelines Version 3.2


The PL/SQL Cop tool suite supports the new Trivadis PL/SQL & SQL Coding Guidelines 3.2. Download the new versions from the Download section. So, what’s new?

Numbering and Categorisation Scheme

The guidelines have been renumbered, extended, categorised by severity (blocker, critical, major, minor and info) and assigned to one or more SQALE characteristics (changeability, efficiency, maintainability, portability, reliability, reusability, security and testability).

This new categorisation makes it possible to sort issues by severity. The most important issues are listed first, even if you do not disable less important guidelines.

Severity and characteristics may now be used alongside guideline numbers in check and skip lists (white- and blacklists).

Validator Plugins

Would you like to create your own guidelines? Or extend the existing guidelines? Then validators are your friends. A validator is basically a Java class implementing the PLSQLCopValidator interface. Validators may be used in the command-line utility, in SonarQube and in the SQL Developer extension.

A complete example is provided with source code as a Maven project. This example extends the default validator with 15 additional guidelines to check naming conventions according to chapter 2.2 of the Trivadis PL/SQL & SQL Coding Guidelines 3.2. The following screenshot shows the violation of a custom guideline (G-9013: Exceptions should start with ‘e’) within SonarQube.

PL/SQL Editor for Eclipse

The editor is mainly provided to understand the PL/SQL model better. The full model is available as PLSQL.ecore file, which may be explored best within the Eclipse IDE. Understanding the PL/SQL model is important if you plan to develop your own validators.

Use the plsqleditor.zip file in the eclipse folder to install the PL/SQL editor in Eclipse. Besides an outline view, the editor supports syntax colouring, bracket matching, code formatting and error integration into the Eclipse workbench.



plscope-utils – Utilities for PL/Scope in Oracle Database 12.2


PL/Scope was introduced with Oracle Database version 11.1 and covered PL/SQL only. SQL statements such as SELECT, INSERT, UPDATE, DELETE and MERGE were simply ignored. Analysing PL/SQL source code without covering SQL does not provide a lot of value, so PL/Scope was neglected by the Oracle community. But this changes with version 12.2: PL/Scope finally covers SQL statements. This makes fine-grained dependency analysis possible, fine-grained meaning on column level and on package unit level (procedure/function).

PL/Scope is something like a software development kit (SDK) for source code analysis. It consists basically of the following two components:

  • The compiler, which collects information when compiling source code (e.g. plscope_settings='identifiers:all')
  • The dictionary views, providing information about collected identifiers (dba_identifiers) and SQL statements (dba_statements).

The provided views are based on a recursive data structure which is not that easy to understand. Querying them soon requires recursive self-joins and joins to other Oracle data dictionary views. Everybody is going to build some tools (scripts, reports, views, etc.) for their analysis. Wouldn’t it make sense to have an open source library doing that once and for all? Obviously the answer is yes. This library exists. It is available as a GitHub repository named plscope-utils.
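To get a feeling for what such queries look like, here is a minimal sketch of a recursive query against user_identifiers (load_from_tab is the example procedure compiled in the next section). Be aware that since 12.2 identifiers used within static SQL statements reference rows in user_statements as their parent, so a query against user_identifiers alone may miss them; resolving such gaps is one of the things plscope-utils does for you.

-- minimal sketch: walk the usage hierarchy of a single PL/SQL unit
WITH ids AS (
   SELECT usage, type, name, line, col, usage_id, usage_context_id
     FROM user_identifiers
    WHERE object_type = 'PROCEDURE'
      AND object_name = 'LOAD_FROM_TAB'
)
 SELECT lpad(' ', 3 * (level - 1)) || usage AS usage,
        type, name, line, col
   FROM ids
  START WITH usage_context_id = 0
CONNECT BY PRIOR usage_id = usage_context_id
  ORDER SIBLINGS BY line, col;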

Compile

First you have to compile the source code you’d like to analyse: all source code, in all relevant schemas. Units that are not compiled with PL/Scope cannot be referenced and will simply be missing from your analysis. This step is completely independent of whether you are going to use plscope-utils or not.

Here’s a simplistic ETL example using emp and dept as source data.

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE PROCEDURE load_from_tab IS
BEGIN
   INSERT INTO deptsal (dept_no, dept_name, salary)
   SELECT /*+ordered */
          d.deptno, d.dname, SUM(e.sal + NVL(e.comm, 0)) AS sal
     FROM dept d
     LEFT JOIN (SELECT *
                  FROM emp
                 WHERE hiredate > DATE '1980-01-01') e
       ON e.deptno = d.deptno
    GROUP BY d.deptno, d.dname;
   COMMIT;
END load_from_tab;
/

Views by plscope-utils

After installing plscope-utils, you have access to the following five views from any user within the database instance:

  • PLSCOPE_IDENTIFIERS
  • PLSCOPE_STATEMENTS
  • PLSCOPE_TAB_USAGE
  • PLSCOPE_COL_USAGE
  • PLSCOPE_INS_LINEAGE

For each of these views you find an example query and result in the readme of plscope-utils.

Nonetheless I’m going to show some query results in this post as well.

PLSCOPE_IDENTIFIERS

The view plscope_identifiers combines dba_identifiers and dba_statements and provides all columns from dba_identifiers plus procedure_name, name_path, path_len, ref_owner, ref_object_type and ref_object_name.

Here’s an example query for the procedure defined above.

SELECT procedure_name, line, col, name, name_path, path_len, type, usage,
       ref_owner, ref_object_type, ref_object_name
  FROM plscope_identifiers
 WHERE object_name = 'LOAD_FROM_TAB'
   AND owner = USER
 ORDER BY line, col;

PLSCOPE_STATEMENTS

This view is very similar to dba_statements. It just adds an is_duplicate column to easily identify duplicate SQL statements.

SELECT owner, object_type, object_name, line, col, type, sql_id, is_duplicate, full_text
  FROM plscope_statements S
 WHERE owner = USER
   AND is_duplicate = 'YES'
 ORDER BY sql_id, owner, object_type, object_name, line, col;

PLSCOPE_TAB_USAGE

This view reports synonym, view and table usages. The column direct_dependency indicates whether the usage has been resolved via a synonym or a view.

SELECT *
  FROM plscope_tab_usage
 WHERE object_name = 'LOAD_FROM_TAB'
   AND owner = USER
 ORDER BY owner, object_type, object_name, line, col, direct_dependency;

PLSCOPE_COL_USAGE

This view reports view/table column usages. Column-less accesses (e.g. insert statements without column-list or *) and accesses on synonyms and views are resolved. The value ‘NO’ in the column direct_dependency indicates such an access.

SELECT *
  FROM plscope_col_usage
 WHERE object_name = 'LOAD_FROM_TAB'
   AND owner = USER
 ORDER BY owner, object_type, object_name, line, col, direct_dependency;

PLSCOPE_INS_LINEAGE

This view reports the where-lineage of “INSERT … SELECT” statements. I’ll give a talk named “SQL Lineage Made Easy with PL/Scope 12.2?” at the Trivadis TechEvent tomorrow and at APEX Connect 17 on May 11, 2017, where I’ll cover this topic in more detail. I’ll also write a dedicated blog post about it soon.

Here’s a query which shows that the target column salary is based on the source columns sal and comm in the table emp.

SELECT line, col,
       from_owner, from_object_type, from_object_name, from_column_name,
       to_owner, to_object_type, to_object_name, to_column_name
  FROM plscope_ins_lineage
 WHERE object_name = 'LOAD_FROM_TAB'
 ORDER BY to_column_name;

Summary

Dependency analysis within Oracle Database 12.2 has become much easier, thanks to PL/Scope. And the views provided by plscope-utils make it almost trivial.

 

Get Ready for Oracle 12.2


The Oracle Database 12.2 grammar is now supported in the most current versions of PL/SQL Cop, PL/SQL Cop for SQL Developer, PL/SQL Cop for SonarQube and PL/SQL Analyzer.

The following screenshot shows a query based on analytic views. Have a look at the full example on Oracle Live SQL, if you are interested in analytic views.

The next screenshot shows some of the new PL/SQL and SQL features in action.

Get ready for Oracle Database 12.2 and grab your copy of the updated tools from the download area.

 

plscope-utils for SQL Developer – Simplify the Use of PL/Scope


In this post I showed how to do some code analysis with PL/Scope and how the views and packages of plscope-utils might simplify this task. However, these views and packages are based on dba_* views, and it is sometimes not that easy to get such privileges for an additional user in a non-personal database instance. To simplify the use of PL/Scope even more, I thought it would be helpful to come up with a solution which does not require the installation of database components and works with pre-12.2 connections as well. Hence I’ve built my first bundled XML extension for SQL Developer as part of the plscope-utils GitHub project.

plscope-utils for SQL Developer extends the database navigator tree by a “PL/Scope” node, context menus, detail views for PL/SQL units and some reports.

Compile

Before you analyse code with PL/Scope, you have to compile the packages, procedures, functions, triggers, types and synonyms after enabling PL/Scope in the current session. Using dbms_utility.compile_schema sounds like the right choice, but unfortunately it is not enough. The documentation reveals why: “This procedure compiles all procedures, functions, packages, views and triggers in the specified schema.” Ah, types and synonyms are not compiled.
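If you want to script this yourself, a minimal sketch for your own schema could look as follows. The loop over user_objects is my illustration, not necessarily the statement the extension generates:

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

BEGIN
   -- compiles procedures, functions, packages, views and triggers only
   dbms_utility.compile_schema(schema => USER, compile_all => TRUE);
   -- types and synonyms have to be recompiled explicitly
   FOR r IN (SELECT object_type, object_name
               FROM user_objects
              WHERE object_type IN ('TYPE', 'SYNONYM')) LOOP
      EXECUTE IMMEDIATE 'ALTER ' || r.object_type || ' "'
                        || r.object_name || '" COMPILE';
   END LOOP;
END;
/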

To compile the code with plscope-utils for SQL Developer, right-click on the “PL/SQL” or the connection name node and select the “Compile with PL/Scope…” option.

Amend the settings

and press the “Apply” button. It is also a good idea to peek at the SQL statement first, to better understand what exactly will be executed. You could integrate that part into your installation scripts as well.

Column Usages

Select a package within the “PL/Scope” node of the Connections window to show the column usage and other details. Please note that the associated package procedure name is provided as well.

Click on the “Link” column to open the PL/SQL editor at the right line and column.

Reports

The “plscope-utils Reports” folder provides the following reports:

The “UDF Calls in SQL Statements” report shows static SELECT, INSERT, UPDATE, DELETE and MERGE statements within packages, procedures, functions, triggers and types that use user-defined function calls. Such calls might be costly, especially when they access data as well. See also Steven Feuerstein’s post More 12.2 PL/Scope Magic: Find SQL statements that call user-defined functions. In fact I snitched the idea from Steven.
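A rough sketch of the underlying idea (my approximation, not the report’s actual query): join the identifier and statement views and look for function calls whose direct parent in the usage hierarchy is a static SQL statement. Depending on what is compiled with PL/Scope in your database, you may need to filter out calls to supplied functions.

-- sketch: function calls whose parent usage is a static SQL statement;
-- user_identifiers and user_statements share the usage_id numbering per object
SELECT i.object_type, i.object_name, i.name AS udf_name,
       s.type AS stmt_type, s.line, s.col
  FROM user_identifiers i
  JOIN user_statements s
    ON s.object_type = i.object_type
   AND s.object_name = i.object_name
   AND s.usage_id = i.usage_context_id
 WHERE i.usage = 'CALL'
   AND i.type = 'FUNCTION'
 ORDER BY i.object_type, i.object_name, s.line, s.col;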

Download / Installation

You may download plscope-utils for SQL Developer from here, here or here.

Use the “Install From Local File” option within the “Check for Updates…” dialog to install the previously downloaded zip file, or configure the http://update.salvis.com/ update site as described at the bottom of the plscope-utils for SQL Developer page.

Continuous Code Quality for PL/SQL with Docker


In this blog post I show step by step how to set up a continuous code quality inspection environment for a PL/SQL project hosted on GitHub. I’m going to use a Docker container for SonarQube and another container for Jenkins.

Here is the table of contents of the major steps.

  1. Install Docker
  2. Create SonarQube and Jenkins Container
  3. Install PL/SQL Cop for SonarQube
  4. Install PL/SQL Cop (Command Line Utility)
  5. Configure SonarQube
  6. Configure Jenkins
  7. Create Code Analysis Job
  8. View Result in SonarQube
  9. Summary

In the summary of this post you will find an audioless video that walks through the complete process in 3.5 minutes.

1. Install Docker

I’m assuming that you have already installed Docker on your machine. If not, download and install the free Docker Community Edition including Docker Compose. I’m going to use Docker for Mac, but you may use Docker for Windows or a Docker server for one of the various supported Linux distributions. It is also possible to use a cloud provider, but the installation procedure will differ slightly.

You are ready for the next steps when the command

docker run --rm hello-world

produces this output

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
   executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
   to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://cloud.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/engine/userguide/

in your terminal window.

2. Create SonarQube and Jenkins Container

In this step we create a container for SonarQube and another one for Jenkins. Jenkins will need to communicate with SonarQube. Docker Compose allows us to use hostnames instead of hard coded IP addresses and to manage all involved containers together in a single YAML configuration file.

Create a “docker-compose.yml” file in a “plsqlcop” directory. The directory name is used to derive names for containers and volumes. In this case the volumes will be named “plsqlcop_sonardata” and “plsqlcop_cidata”. For the container names the default naming has been overridden.

version: "3.0"
services:
   sonar:
      image: "sonarqube:5.1.2"
      container_name: "sonar"
      environment:
         - SONARQUBE_JDBC_USERNAME=sonar
         - SONARQUBE_JDBC_PASSWORD=sonar
         - SONARQUBE_JDBC_URL=
      ports:
         - "9010:9000"
      volumes:
         - "sonardata:/opt/sonarqube/data"
   ci:
      image: "jenkins/jenkins:2.79"
      container_name: "ci"
      ports:
         - "9020:8080"
      volumes:
         - "cidata:/var/jenkins_home"
volumes:
   sonardata:
      external: false
   cidata:
      external: false

Make sure you are located in the directory where the “docker-compose.yml” file is stored. Then create the containers in the background by running

docker-compose up -d

You should get an output similar to the following:

Creating volume "plsqlcop_sonardata" with default driver
Creating volume "plsqlcop_cidata" with default driver
Creating ci ...
Creating sonar ...
Creating sonar
Creating sonar ... done

3. Install PL/SQL Cop for SonarQube

To install the current version of PL/SQL Cop for SonarQube within the “sonar” container run

docker exec sonar wget --no-check-certificate https://www.salvis.com/blog/?ddownload=6822 -O \
/opt/sonarqube/extensions/plugins/sonar-plsql-cop-plugin-2.1.1.jar

Windows users please note that the \ (backslash) at the end of the first line is just the Unix line-continuation character (like the ` (grave accent) in PowerShell); you should omit it. The wget command is executed within the sonar container.

To load the plugin we need to restart the container.

docker restart sonar

4. Install PL/SQL Cop (Command Line Utility)

To install the current version of PL/SQL Cop within the “ci” container run

docker exec -u 0 ci wget --no-check-certificate https://www.salvis.com/blog/?ddownload=6680 -O /opt/tvdcc.zip
docker exec -u 0 ci unzip /opt/tvdcc.zip -d /opt
docker exec -u 0 ci bash -c "mv /opt/tvdcc-* /opt/tvdcc"
docker exec -u 0 ci rm /opt/tvdcc.zip

Why do we need to install this component in the “ci” container and not in the “sonar” container? Well, the SonarQube scanner is executed in the “ci” container and gets the code analyzer from the SonarQube server. This increases the scalability of the code analyzers since just the analysis reports need to be sent to the SonarQube server. However, the PL/SQL Cop SonarQube plugin is basically a wrapper to the command line utility. This means that every Jenkins slave needs to have PL/SQL Cop installed in the location configured on the SonarQube server.

In an environment with multiple servers and multiple Jenkins slaves, the PL/SQL Cop installations need to be identical on every slave.

5. Configure SonarQube

Open “http://localhost:9010” in your web browser.

Press “Log in” in the upper right corner and log in to SonarQube with the username “admin” and password “admin”.

Click on “Settings”, select the category “Trivadis PL/SQL Cop” and change the “Path to PL/SQL Cop command line tvdcc executable” to “/opt/tvdcc/tvdcc.sh”.

Press “Save Trivadis PL/SQL Cop Settings” and you’re done.

6. Configure Jenkins

Open “http://localhost:9020” in your web browser.

During the container creation, Jenkins generated a password for the user admin. To get this password, execute the following in a terminal window:

docker exec ci cat /var/jenkins_home/secrets/initialAdminPassword

Copy the output into the clipboard and paste it into the “Administrator password” field in the browser window and press “Continue”.

Click on “Install Suggested Plugins”.

Wait until all plugins are installed.

Fill in the requested fields and press “Save and Finish”. On the next page, click on “Start Using Jenkins”.

Click on “Manage Jenkins”.

Click on “Manage Plugins”.

Select “SonarQube Scanner for Jenkins” in the “Available” tab. Use the filter to find the entry. Press “Install without restart”.

Click on “Go back to top page” when the plugins have been installed successfully.

Click on “Manage Jenkins” (again) and then on “Global Tool Configuration”.

Within the Global Tool Configuration click on “Add SonarQube Scanner”, add a name for the scanner and press “Save”.

Click on “Configure System”.

Scroll down to the “SonarQube servers” section and click on “Add SonarQube”.

Enter “SonarQube 5.1.2” for name, enter “http://sonar:9000” for “Server URL”, select “5.1 or lower” in “Server version” and enter “jdbc:h2:tcp://sonar/sonar” for “Database URL”. Click on “Save” to complete the Jenkins configuration.

Please note that we are accessing the host sonar via the internal network Docker Compose has set up for us. Therefore we have to use the default port 9000 and not port 9010, which we use from outside, e.g. to access SonarQube in the web browser.

 

7. Create Code Analysis Job

Click on “create new jobs”.

Enter “plscope-utils” and click on “Freestyle project”.

Click on the “Source Code Management” tab, select “Git”, enter “https://github.com/PhilippSalvisberg/plscope-utils” for “Repository URL”, add “Sparse Checkout paths” in “Additional Behaviours”, enter “database/utils” for “Paths” and click on “Apply”.

Click on the “Build Triggers” tab and select “Poll SCM”. Enter “H/15 * * * *” for “Schedule” and press “Apply”. This will look for changes in the Git repository every 15 minutes and start a build if a change is detected.

Click on the “Build” tab, scroll down to the Build section and add an “Execute SonarQube Scanner” build step.

Copy the following lines into the “Analysis properties”.

sonar.projectKey=plscope-utils:master
sonar.projectName=plscope-utils
sonar.projectVersion=0.5.0
sonar.sources=database
sonar.language=plsqlcop

and click on “Save”.

Wait until the initial build is started or click on “Build Now” and then on “#1” (the first build job in the Build History).

Click on “Console Output”.

The analysis result has been stored successfully in SonarQube and can now be queried via http://localhost:9010.

Please note that the link to SonarQube provided in the console output does not work, since it refers to the internal Docker Compose network, which is not accessible to your browser. However, it is possible to configure the external IP address and port for SonarQube in Jenkins, and then this link will work.

8. View Result in SonarQube

Open “http://localhost:9010” in your web browser.

Click on “plscope-utils”, select the “Issues” tab for this project and select all rules.

Click on “>” at the right side of an issue to see the source code line causing this issue.

See https://www.sonarqube.org/features/clean-code/ for more information.

9. Summary

Setting up a continuous code quality inspection environment for a PL/SQL project with Docker is quite simple. The audioless video documents the complete process.

Granted, for production use you need a different database as the SonarQube backend, and you have to define some roles, manage users and probably integrate an Active Directory. That should not be too difficult, at least not technically.

But the most challenging part will be to agree on some rules, adapt your development process and really improve your code quality over time. So, let’s get started.

Limitations of PL/Scope and How to Deal with Them


My first car was a Renault R5 TX. The motor cooling of this car was really bad. On a hot summer day it was simply not possible to drive slowly in high traffic without overheating the engine. To cool the engine you could either stop the car, open the front lid and hope for the best or turn on the heating. I decided for the latter. It was impressive how well the heating system worked on hot days. Not very pleasant to drive uphill behind a slow Dutch caravan in the Alps, but a funny experience in retrospect. My R5 was a reliable companion for years and I loved it.

When you are aware of the limitations of PL/Scope and know how to deal with them, you will find PL/Scope a very useful tool. This post is supposed to enable you to use PL/Scope more effectively. I’m fond of PL/Scope because it provides very reliable insights into your static PL/SQL code. I hope you are going to use PL/Scope, even if it sometimes requires you “to turn on the heating”.

PL/Scope gathers metadata about your PL/SQL code at compile time and stores it in dedicated tables. This metadata is accessible for analysis via the views dba/all/user_identifiers (since 11.1) and dba/all/user_statements (since 12.2). If you are not familiar with PL/Scope, I recommend reading Steven Feuerstein’s article Powerful Impact Analysis or having a look at the introduction chapters of the documentation. If you are fluent in German, I can also recommend Sabine Heimsath’s article Schöner Coden – PL/SQL analysieren mit PL/Scope.

The limitations covered in this post are based on Oracle Database version 12.2.0.1.170814. Most of the limitations are bugs. You may track the progress on MOS with the provided bug numbers.

1. Missing Results After NULL Statement

In the following example we analyse the procedure p2. Look at the result of the user_identifiers query below: all three calls to procedure p1 have been detected. That’s good and correct.

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE PROCEDURE p1 (in_p IN INTEGER) IS
BEGIN
   sys.dbms_output.put_line(sqrt(in_p));
END;
/

CREATE OR REPLACE PROCEDURE p2 IS
BEGIN
   p1(4);
   p1(9);
   p1(16);
END;
/

SELECT usage, type, name, line, col
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P2'
 ORDER BY line, col;

USAGE       TYPE               NAME             LINE        COL
----------- ------------------ ---------- ---------- ----------
DEFINITION  PROCEDURE          P2                  1         11
DECLARATION PROCEDURE          P2                  1         11
CALL        PROCEDURE          P1                  3          4
CALL        PROCEDURE          P1                  4          4
CALL        PROCEDURE          P1                  5          4

But when we add a NULL statement before the first call of p1, the calls after the NULL statement are not reported anymore.

CREATE OR REPLACE PROCEDURE p2 IS
BEGIN
   NULL;
   p1(4);
   p1(9);
   p1(16);
END;
/

SELECT usage, type, name, line, col
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P2'
 ORDER BY line, col;

USAGE       TYPE               NAME             LINE        COL
----------- ------------------ ---------- ---------- ----------
DECLARATION PROCEDURE          P2                  1         11
DEFINITION  PROCEDURE          P2                  1         11

This is a bug. See bug 24916492 on MOS.

I am not aware of a simple workaround. This means you have to change the code. In this case it is easy: just remove the NULL statement.

The good news is that only the basic block containing the NULL statement is affected. Other basic blocks are not. Here’s an example of a complete result, even though a NULL statement is used. The term “basic block” has been introduced with PL/SQL Basic Block Coverage in version 12.2. However, the definition is valid for all versions of PL/SQL. I like Chris Saxon’s definition: “It’s a piece of code that either runs completely or not at all”.

CREATE OR REPLACE PROCEDURE p2 IS
BEGIN
   IF FALSE THEN
      NULL;
   END IF;
   p1(4);
   p1(9);
   p1(16);
END;
/

SELECT usage, type, name, line, col
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P2'
 ORDER BY line, col, usage_id;

USAGE       TYPE               NAME             LINE        COL
----------- ------------------ ---------- ---------- ----------
DEFINITION  PROCEDURE          P2                  1         11
DECLARATION PROCEDURE          P2                  1         11
CALL        PROCEDURE          P1                  6          4
CALL        PROCEDURE          P1                  7          4
CALL        PROCEDURE          P1                  8          4

In cases where PL/SQL requires at least one statement and the NULL statement is the only one, you should not have a problem. Only unnecessary NULL statements might cause problems. So make sure that you do not use them. They are noise and may lead to incomplete code analysis results.

2. Broken Usage Hierarchy

Let’s look at the usage hierarchy of the procedure p2 in the previous example. The hierarchy level is represented in the result column usage: three leading spaces for each sub-level. The usage_id identifies a row for an object, p2 in this case. The usage_id starts with 1 and ends with 5; there are no gaps. The column usage_context_id is the foreign key to the parent usage_id. Oracle decided to start the hierarchy with the non-existing usage_id 0 (zero). That’s what we use in the START WITH clause. The recursive query produces the same number of result rows as the non-recursive query before. That’s important, and that’s how it should be. Always.

WITH ids AS (
   SELECT usage, type, name, line, col, usage_id, usage_context_id
     FROM user_identifiers ids
    WHERE object_type = 'PROCEDURE'
      AND object_name = 'P2'
)
 SELECT lpad(' ', 3 * (level - 1)) || ids.usage AS usage,
        ids.type,
        ids.name,
        ids.line,
        ids.col,
        ids.usage_id,
        ids.usage_context_id
   FROM ids
  START WITH ids.usage_context_id = 0
CONNECT BY PRIOR ids.usage_id = ids.usage_context_id
  ORDER BY ids.line, ids.col, usage_id;

USAGE          TYPE       NAME  LINE  COL USAGE_ID USAGE_CONTEXT_ID
-------------- ---------- ----- ---- ---- -------- ----------------
DECLARATION    PROCEDURE  P2       1   11        1                0
   DEFINITION  PROCEDURE  P2       1   11        2                1
      CALL     PROCEDURE  P1       6    4        3                2
      CALL     PROCEDURE  P1       7    4        4                2
      CALL     PROCEDURE  P1       8    4        5                2

There are two reasons for broken usage hierarchies.

  1. Static SQL statements (expected behaviour)
  2. References to uncompiled synonyms (bug)

2.1. Static SQL Statements

Since version 12.2, PL/Scope covers static SQL statements in the user_statements view. Static SQL statements are missing in the user_identifiers view, probably for compatibility reasons. To get the full usage hierarchy you have to combine the two views using UNION ALL, as in the next example. Please note that the identifier usages within the UPDATE statement refer to the EXECUTE row from user_statements as their parent. A recursive query on user_identifiers alone would return just the DECLARATION and DEFINITION rows of the procedure, a quite incomplete result set. Therefore you should think twice before applying your “old” PL/Scope version 11.1 scripts against an Oracle 12.2 database.

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE FUNCTION f1 (in_val NUMBER) RETURN NUMBER IS
BEGIN
   RETURN SQRT(in_val);
END;
/

CREATE OR REPLACE PROCEDURE p3 IS
BEGIN
   UPDATE emp SET sal = sal + f1(comm);
END;
/

SELECT usage, type, name, line, col, usage_id, usage_context_id
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P3'
 ORDER BY line, col, usage_id;

USAGE          TYPE       NAME  LINE  COL USAGE_ID USAGE_CONTEXT_ID
-------------- ---------- ----- ---- ---- -------- ----------------
DECLARATION    PROCEDURE  P3       1   11        1                0
DEFINITION     PROCEDURE  P3       1   11        2                1
REFERENCE      TABLE      EMP      3   11        4                3
REFERENCE      COLUMN     SAL      3   19        6                3
REFERENCE      COLUMN     SAL      3   25        5                3
CALL           FUNCTION   F1       3   31        7                3
REFERENCE      COLUMN     COMM     3   34        8                7

7 rows selected.

SELECT text, type, sql_id, line, col, usage_id, usage_context_id
  FROM user_statements
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P3'
 ORDER BY line, col;

TEXT                                TYPE   SQL_ID        LINE  COL USAGE_ID USAGE_CONTEXT_ID
----------------------------------- ------ ------------- ---- ---- -------- ----------------
UPDATE EMP SET SAL = SAL + F1(COMM) UPDATE 8kyysdc8m75ag    3    4        3                2

WITH
   ids AS (
      SELECT usage, type, name, line, col, usage_id, usage_context_id
        FROM user_identifiers
       WHERE object_type = 'PROCEDURE'
         AND object_name = 'P3'
    UNION ALL
      SELECT 'EXECUTE' AS usage, type, sql_id AS name, line, col, usage_id, usage_context_id
       FROM user_statements
       WHERE object_type = 'PROCEDURE'
         AND object_name = 'P3'
   )
 SELECT lpad(' ', 3 * (level - 1)) || usage AS usage,
        type,
        name,
        line,
        col,
        usage_id,
        usage_context_id
   FROM ids
  START WITH usage_context_id = 0
CONNECT BY PRIOR usage_id = usage_context_id
  ORDER BY line, col, usage_id;

USAGE                 TYPE       NAME           LINE  COL USAGE_ID USAGE_CONTEXT_ID
--------------------- ---------- -------------- ---- ---- -------- ----------------
DECLARATION           PROCEDURE  P3                1   11        1                0
   DEFINITION         PROCEDURE  P3                1   11        2                1
      EXECUTE         UPDATE     8kyysdc8m75ag     3    4        3                2
         REFERENCE    TABLE      EMP               3   11        4                3
         REFERENCE    COLUMN     SAL               3   19        6                3
         REFERENCE    COLUMN     SAL               3   25        5                3
         CALL         FUNCTION   F1                3   31        7                3
            REFERENCE COLUMN     COMM              3   34        8                7

2.2. References to Uncompiled Synonyms

In the next example the procedure p4 calls procedure p1, but p1 is not compiled with PL/Scope. The usage hierarchy is intact. Please note that the result set also contains all usages of the parameter in_p.

ALTER SESSION SET plscope_settings='identifiers:none, statements:none';

CREATE OR REPLACE PROCEDURE p1 (in_p IN INTEGER) IS
BEGIN
   sys.dbms_output.put_line(sqrt(in_p));
END;
/

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE PROCEDURE p4 (in_p IN INTEGER) IS
BEGIN
   p1(in_p => in_p);
END;
/

SELECT usage, type, name, line, col, usage_id, usage_context_id
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P4'
 ORDER BY line, col, usage_id;

USAGE                 TYPE       NAME           LINE  COL USAGE_ID USAGE_CONTEXT_ID
--------------------- ---------- -------------- ---- ---- -------- ----------------
DECLARATION           PROCEDURE  P4                1   11        1                0
DEFINITION            PROCEDURE  P4                1   11        2                1
DECLARATION           FORMAL IN  IN_P              1   15        3                2
REFERENCE             SUBTYPE    INTEGER           1   23        4                3
REFERENCE             FORMAL IN  IN_P              3   15        5                2

But the usage hierarchy becomes broken when we call procedure p1 via an uncompiled synonym, as in the following example. The last result row references a non-existing usage_context_id. In a recursive query this row will get lost.

This is a bug. See bug 26363026 on MOS.

ALTER SESSION SET plscope_settings='identifiers:none, statements:none';

CREATE OR REPLACE SYNONYM s1 FOR p1;

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE PROCEDURE p4 (in_p IN INTEGER) IS
BEGIN
   s1(in_p => in_p);
END;
/

SELECT usage, type, name, line, col, usage_id, usage_context_id
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P4'
 ORDER BY line, col, usage_id;

USAGE                 TYPE       NAME           LINE  COL USAGE_ID USAGE_CONTEXT_ID
--------------------- ---------- -------------- ---- ---- -------- ----------------
DECLARATION           PROCEDURE  P4                1   11        1                0
DEFINITION            PROCEDURE  P4                1   11        2                1
DECLARATION           FORMAL IN  IN_P              1   15        3                2
REFERENCE             SUBTYPE    INTEGER           1   23        4                3
REFERENCE             FORMAL IN  IN_P              3   15        6                5

You can fix such a broken usage hierarchy on the fly. Here’s a simplified version of the query based on the plscope_identifiers view of the plscope-utils project. The analytic function in the ids subquery fixes invalid foreign keys. The highest preceding usage_id might not always be the best choice, but it is usually not that bad either. In this case the non-existing ‘5’ was replaced with a ‘4’, as you can see in the last row of the result.

WITH
   filtered_ids AS (
      SELECT usage, type, name, line, col, usage_id, usage_context_id
        FROM user_identifiers ids
       WHERE object_type = 'PROCEDURE'
         AND object_name = 'P4'
   ),
   sanitized_ids AS (
      SELECT fids.usage, fids.type, fids.name, fids.line, fids.col,
             fids.usage_id, fids.usage_context_id,
             CASE
                WHEN fk.usage_id IS NOT NULL OR fids.usage_context_id = 0 THEN
                   'YES'
                ELSE
                   'NO'
             END AS sane_fk
        FROM filtered_ids fids
        LEFT JOIN filtered_ids fk
          ON fk.usage_id = fids.usage_context_id
   ),
   ids AS (
      SELECT usage, type, name, line, col, usage_id,
             CASE
                WHEN sane_fk = 'YES' THEN
                   usage_context_id
                ELSE
                   last_value(CASE WHEN sane_fk = 'YES' THEN usage_id END)
                      IGNORE NULLS OVER (
                         ORDER BY line, col
                         ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
                      )
             END AS usage_context_id -- fix broken hierarchies
        FROM sanitized_ids
   )
 SELECT lpad(' ', 3 * (level - 1)) || usage AS usage,
        type,
        name,
        line,
        col,
        usage_id,
        usage_context_id
   FROM ids
  START WITH usage_context_id = 0
CONNECT BY PRIOR usage_id = usage_context_id
  ORDER BY line, col, usage_id;

USAGE                 TYPE       NAME           LINE  COL USAGE_ID USAGE_CONTEXT_ID
--------------------- ---------- -------------- ---- ---- -------- ----------------
DECLARATION           PROCEDURE  P4                1   11        1                0
   DEFINITION         PROCEDURE  P4                1   11        2                1
      DECLARATION     FORMAL IN  IN_P              1   15        3                2
         REFERENCE    SUBTYPE    INTEGER           1   23        4                3
            REFERENCE FORMAL IN  IN_P              3   15        6                4

3. Missing Usages of Objects in CDB$ROOT

PL/Scope stores by default the metadata for the following SYS packages:

  • DBMS_STANDARD
  • STANDARD

If you want to analyse the usage of other supplied PL/SQL packages, you need to compile these packages with PL/Scope settings first. The next example shows how to do that. In the result, the reference to the package dbms_output and the call to its procedure put_line are properly reported. So far so good.

CONNECT sys/oracle@odb AS SYSDBA

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

ALTER PACKAGE dbms_output COMPILE;

CONNECT scott/tiger@odb

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE PROCEDURE p5 (in_p IN VARCHAR2) IS
BEGIN
   sys.dbms_output.put_line(in_p);
END;
/

SELECT usage, type, name, line, col, usage_id, usage_context_id
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P5'
 ORDER BY line, col, usage_id;

USAGE       TYPE               NAME        LINE  COL USAGE_ID USAGE_CONTEXT_ID
----------- ------------------ ----------- ---- ---- -------- ----------------
DECLARATION PROCEDURE          P5             1   11        1                0
DEFINITION  PROCEDURE          P5             1   11        2                1
DECLARATION FORMAL IN          IN_P           1   15        3                2
REFERENCE   CHARACTER DATATYPE VARCHAR2       1   23        4                3
REFERENCE   PACKAGE            DBMS_OUTPUT    3    8        5                2
CALL        PROCEDURE          PUT_LINE       3   20        6                5
REFERENCE   FORMAL IN          IN_P           3   29        7                6

We know that the non-CDB architecture has been deprecated with Oracle 12.2. So let’s try the same with the recommended CDB architecture. In the next example we compile the dbms_output package within the CDB$ROOT container, which owns this package. Compiling it in a PDB is not possible (it does not throw an error, but it simply has no effect). The first query shows the container identifiers and their names. The second query shows that the PL/Scope identifier for the procedure put_line is available in every container except PDB$SEED. So far everything still looks good.

CONNECT sys/oracle@ocdb AS SYSDBA

SHOW con_name

CON_NAME
------------------------------
CDB$ROOT

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

ALTER PACKAGE dbms_output COMPILE;

SELECT con_id, name
  FROM v$containers
 ORDER BY con_id;

CON_ID NAME
------ -----------
     1 CDB$ROOT
     2 PDB$SEED
     3 OPDB1
     4 OPDB2
     5 OPDB3

SELECT usage, type, name, line, col, origin_con_id, con_id
  FROM cdb_identifiers
 WHERE OWNER = 'SYS'
   AND object_type = 'PACKAGE'
   AND object_name = 'DBMS_OUTPUT'
   AND TYPE = 'PROCEDURE'
   AND USAGE = 'DECLARATION'
   AND NAME = 'PUT_LINE'
 ORDER BY con_id;

USAGE       TYPE               NAME        LINE  COL ORIGIN_CON_ID CON_ID
----------- ------------------ ----------- ---- ---- ------------- ------
DECLARATION PROCEDURE          PUT_LINE      83   13             1      1
DECLARATION PROCEDURE          PUT_LINE      83   13             1      3
DECLARATION PROCEDURE          PUT_LINE      83   13             1      4
DECLARATION PROCEDURE          PUT_LINE      83   13             1      5

Now we are ready to create our procedure p5, which uses dbms_output, as user SCOTT. However, the PL/Scope result is incomplete. Only 5 instead of 7 rows are reported. The two usages of dbms_output are missing.

This is a bug. See bug 26169004 on MOS.

CONNECT scott/tiger@opdb1

SHOW con_name

CON_NAME
------------------------------
OPDB1

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE PROCEDURE p5 (in_p IN VARCHAR2) IS
BEGIN
   sys.dbms_output.put_line(in_p);
END;
/

SELECT usage, type, name, line, col, usage_id, usage_context_id
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P5'
 ORDER BY line, col, usage_id;

USAGE       TYPE               NAME        LINE  COL USAGE_ID USAGE_CONTEXT_ID
----------- ------------------ ----------- ---- ---- -------- ----------------
DECLARATION PROCEDURE          P5             1   11        1                0
DEFINITION  PROCEDURE          P5             1   11        2                1
DECLARATION FORMAL IN          IN_P           1   15        3                2
REFERENCE   CHARACTER DATATYPE VARCHAR2       1   23        4                3
REFERENCE   FORMAL IN          IN_P           3   29        5                2

I see the following workarounds:

  • Use a non-CDB database for PL/Scope analysis
  • Do the analysis in the CDB$ROOT container

4. Missing Identifiers

If you are analysing the usages of identifiers, e.g. to check whether a declared identifier is used, then you will report false positives if PL/Scope does not report all identifier usages. In the next example the identifier l_stmt is referenced in the EXECUTE IMMEDIATE statement, but the usage is not reported.

This is a bug. See bug 26351814 on MOS.

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE PROCEDURE p6 IS
   l_stmt VARCHAR2(100) := 'BEGIN NULL; END;';
BEGIN
   EXECUTE IMMEDIATE l_stmt;
END;
/

SELECT usage, type, name, line, col, usage_id, usage_context_id
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P6'
 ORDER BY line, col, usage_id;

USAGE       TYPE               NAME        LINE  COL USAGE_ID USAGE_CONTEXT_ID
----------- ------------------ ----------- ---- ---- -------- ----------------
DECLARATION PROCEDURE          P6             1   11        1                0
DEFINITION  PROCEDURE          P6             1   11        2                1
DECLARATION VARIABLE           L_STMT         2    4        3                2
ASSIGNMENT  VARIABLE           L_STMT         2    4        5                3
REFERENCE   CHARACTER DATATYPE VARCHAR2       2   11        4                3

There is no feasible workaround. Ok, you could use a third party parser to verify the result, but that’s an extreme measure and a lot of work. I really hope Oracle is going to fix this bug soon.

5. Wrong Usages

In the next example the usage of the parameter in_p in the IF statement is reported as DEFINITION instead of REFERENCE (see the last row of the result).

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE PROCEDURE p7 (in_p IN BOOLEAN) IS
BEGIN
   IF in_p THEN
      NULL;
   END IF;
END;
/

SELECT usage, type, name, line, col, usage_id, usage_context_id
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P7'
 ORDER BY line, col, usage_id;

USAGE       TYPE               NAME        LINE  COL USAGE_ID USAGE_CONTEXT_ID
----------- ------------------ ----------- ---- ---- -------- ----------------
DECLARATION PROCEDURE          P7             1   11        1                0
DEFINITION  PROCEDURE          P7             1   11        2                1
DECLARATION FORMAL IN          IN_P           1   15        3                2
REFERENCE   BOOLEAN DATATYPE   BOOLEAN        1   23        4                3
DEFINITION  FORMAL IN          IN_P           3    7        5                2

This is a bug. See bug 20056796 on MOS.

Since a DEFINITION for a FORMAL IN type does not make sense, you can just replace all occurrences as follows:

SELECT CASE
          WHEN type = 'FORMAL IN' AND usage = 'DEFINITION' THEN
             'REFERENCE'
          ELSE
             usage
       END AS usage, type, name, line, col, usage_id, usage_context_id
  FROM user_identifiers
 WHERE object_type = 'PROCEDURE'
   AND object_name = 'P7'
 ORDER BY line, col, usage_id;

USAGE       TYPE               NAME        LINE  COL USAGE_ID USAGE_CONTEXT_ID
----------- ------------------ ----------- ---- ---- -------- ----------------
DECLARATION PROCEDURE          P7             1   11        1                0
DEFINITION  PROCEDURE          P7             1   11        2                1
DECLARATION FORMAL IN          IN_P           1   15        3                2
REFERENCE   BOOLEAN DATATYPE   BOOLEAN        1   23        4                3
REFERENCE   FORMAL IN          IN_P           3    7        5                2

However, when you look at the bug description, you will also find examples for the following additional wrong usage reports:

  • an ASSIGNMENT instead of a REFERENCE
  • a REFERENCE instead of an ASSIGNMENT

For these wrong usages it might not be so simple to work around the problem. You have to find a solution on a case-by-case basis until Oracle provides a bug fix.

6. Missing Usages and Structures in Static SQL Statements

In the example below the procedure p8 creates a new row in the dept table in a rather awkward manner. The analysis query prefixes each column name with its table name, via a join to the column’s DECLARATION row. Something like that is necessary if column names are not unique across tables.

However, the result has some flaws. Let’s look at the rows below the EXECUTE row. They are all direct descendants of the insert statement.

ALTER SESSION SET plscope_settings='identifiers:all, statements:all';

CREATE OR REPLACE FUNCTION f2 (in_name IN VARCHAR2) RETURN VARCHAR2 IS
BEGIN
   RETURN in_name || ' CITY';
END;
/

CREATE OR REPLACE PROCEDURE p8 IS
BEGIN
   INSERT INTO dept (deptno, dname, loc)
   WITH
      silly_data AS (
         SELECT job || substr(ename, 1, 0) AS loc,
                ename AS dname,
                deptno + 20 AS deptno
           FROM emp
          WHERE comm IS NOT NULL
            AND ROWNUM = 1
      )
   SELECT deptno, dname, f2(loc)
     FROM silly_data;
END;
/

WITH
   ids AS (
      SELECT i.usage, i.type,
             CASE
                WHEN i.type = 'COLUMN' THEN
                   p.object_name || '.' || i.name
                ELSE
                   i.name
             END AS name, i.line, i.col, i.usage_id, i.usage_context_id
        FROM user_identifiers i
        JOIN user_identifiers p
          ON p.signature = i.signature
             AND p.usage = 'DECLARATION'
       WHERE i.object_type = 'PROCEDURE'
         AND i.object_name = 'P8'
    UNION ALL
      SELECT 'EXECUTE' AS usage, type, sql_id AS name, line, col, usage_id, usage_context_id
       FROM user_statements
       WHERE object_type = 'PROCEDURE'
         AND object_name = 'P8'
   )
 SELECT lpad(' ', 3 * (level - 1)) || usage AS usage,
        type, name, line, col, usage_id, usage_context_id
   FROM ids
  START WITH usage_context_id = 0
CONNECT BY PRIOR usage_id = usage_context_id
  ORDER BY usage_id;

USAGE                 TYPE               NAME          LINE  COL USAGE_ID USAGE_CONTEXT_ID
--------------------- ------------------ ------------- ---- ---- -------- ----------------
DECLARATION           PROCEDURE          P8               1   11        1                0
   DEFINITION         PROCEDURE          P8               1   11        2                1
      EXECUTE         INSERT             7dkawtszvu0a8    3    4        3                2
         REFERENCE    TABLE              EMP              9   17        4                3
         REFERENCE    COLUMN             EMP.COMM        10   17        5                3
         REFERENCE    COLUMN             EMP.DEPTNO       8   17        6                3
         REFERENCE    COLUMN             EMP.ENAME        7   17        7                3
         REFERENCE    COLUMN             EMP.ENAME        6   31        8                3
         REFERENCE    COLUMN             EMP.JOB          6   17        9                3
         REFERENCE    COLUMN             EMP.ENAME       13   19       10                3
         REFERENCE    TABLE              DEPT             3   16       11                3
         REFERENCE    COLUMN             DEPT.LOC         3   37       12                3
         REFERENCE    COLUMN             DEPT.DNAME       3   30       13                3
         REFERENCE    COLUMN             DEPT.DEPTNO      3   22       14                3
         CALL         FUNCTION           F2              13   26       16                3

How do we identify the target table of the insert statement? Is it emp or is it dept? Ok, when we order the result by line and column instead of usage_id, we could deduce from the SQL grammar that dept must be a target table. But what about emp? Could that be the target of a multitable insert statement? Probably not, since there would be no other table to query data from, right? Probably right, but if we queried data from a table function which is not compiled with PL/Scope, emp could still be the target of a multitable insert statement. A different usage might help, but unfortunately all table and column usages within static SQL statements are reported as REFERENCE. This is not a bug. If we want to change that, we have to file an enhancement request.

Identifying target columns, source columns or columns used to filter data is impossible with PL/Scope alone. You need a SQL parser or tools using such a parser for deeper static code analysis.

Even if you are interested in the column usages only, you have to be aware that column-less accesses to tables are possible, e.g. when omitting the column list in the insert_into_clause. In such cases all visible columns of the target table are used. If synonyms and views are involved, the analysis becomes a bit harder.
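Here is a minimal sketch of such a column-less access, reusing the dept table from the example above. The target columns never appear in the statement text, so they cannot show up as identifier usages; the plscope_col_usage view of plscope-utils resolves exactly this kind of access.

-- no column list: all visible columns of dept are used as targets
INSERT INTO dept
SELECT deptno + 10, dname, loc
  FROM dept
 WHERE deptno = 10;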

Nonetheless, the metadata provided through user_statements completes the missing pure PL/SQL analysis reporting capabilities for PL/SQL identifiers. Now all usages of PL/SQL identifiers are reported with a static SQL statement context, if they have one, e.g. the call of the function f2 within the insert statement. And that alone is very useful.

7. Summary

Most of the limitations mentioned in this post are caused by bugs. Hence I recommend checking the status of the following bugs on MOS from time to time and opening an SR when you are unable to produce a correct analysis result due to PL/Scope bugs.

  • Bug 24916492: PLSCOPE_SETTINGS DOESN’T PARSE IDENTIFIER INFORMATION AFTER NULL STATEMENT
  • Bug 26363026: WRONG RESULT IN PL/SCOPE HIERARCHICAL QUERY USING SYNONYM
  • Bug 26351814: EXECUTE IMMEDIATE STATEMENT IDENTIFIER REFERENCE NOT COLLECTED BY PL/SCOPE
  • Bug 26169004: PL/SCOPE DOES NOT DETECT USAGES OF CDB OBJECTS SUCH AS SYS.DBMS_SQL
  • Bug 20056796: PLSCOPE SHOWS WRONG USAGE OF IDENTIFIERS

Some limitations of PL/Scope are by design. In the end PL/Scope provides just information about identifiers, a subset of the data produced during parse time and not a complete parse tree, which would be desirable for complex static code analysis. However, if you just want to analyse the use of identifiers in your PL/SQL code, you should consider using PL/Scope. PL/Scope stores its results after semantic analysis, so each identifier comes with a context such as the schema, nested program unit, etc. Even if you need a third-party tool for static code analysis, PL/Scope might be helpful to verify or to complete the result.

Before you start developing your own PL/Scope queries from scratch, have a look at plscope-utils. There are predefined views which address some of the mentioned limitations out of the box. There’s also a SQL Developer plugin which works against any database version with PL/Scope.

Entity Relationship Model for PL/Scope


Today I found a sketch of an ERD from last year, when I looked at the new features of PL/Scope in version 12.2. It looked a bit complicated and also wrong. So I decided to refactor it using SQL Developer Data Modeler and share the result.

You find the model in the plscope-utils GitHub project here.

The implementation provides the following views:

  • DBA/ALL/USER_IDENTIFIERS for the entities “Identifier Declaration” and “DB Object Usage”
  • DBA/ALL/USER_STATEMENTS for the entities “Identifier Declaration”, “SQL Stmt Usage” and “SQL Statement”

For me the most interesting entity was “Identifier Declaration”, because it helped me to understand the signature attribute/column. PL/Scope creates a signature for every database object and all its components when a database object is compiled with PL/Scope or when a referenced object cannot be compiled with PL/Scope. I call the latter “Secondary Objects”. Such objects are tables, views, materialized views, operators or sequences. Hence you find PL/Scope metadata for every table which is referenced in PL/SQL code that has been compiled with PL/Scope. Here’s an example:

SELECT usage, type, name, line, col, usage_id, usage_context_id
  FROM user_identifiers
 WHERE object_type = 'TABLE'
   AND object_name = 'EMP'
 ORDER BY line, col;

USAGE       TYPE   NAME     LINE  COL USAGE_ID USAGE_CONTEXT_ID
----------- ------ -------- ---- ---- -------- ----------------
DECLARATION TABLE  EMP         1   15        1                0
DECLARATION COLUMN EMPNO       1   22        2                1
DECLARATION COLUMN ENAME       1   51        3                1
DECLARATION COLUMN JOB         1   77        4                1
DECLARATION COLUMN MGR         1  100        5                1
DECLARATION COLUMN HIREDATE    1  118        6                1
DECLARATION COLUMN SAL         1  134        7                1
DECLARATION COLUMN COMM        1  152        8                1
DECLARATION COLUMN DEPTNO      1  171        9                1

The PL/Scope metadata for the table emp will be removed automatically when it is no longer used by “Primary Objects” (objects which can be compiled with PL/Scope). Quite nice.

How to Prove That Your SmartDB App Is Secured Against SQL Injection Attacks


If you are guarding your data behind a hard shell PL/SQL API, as Bryn Llewellyn, Toon Koppelaars and others recommend, then it should be quite easy to prove that your PL/SQL application is secured against SQL injection attacks. The basic idea is 1) that you do not expose data via tables or views to the Oracle users used in the middle tier, by end users and in the GUI; and 2) that you use only static SQL within PL/SQL packages. By following these two rules, you ensure that only SQL statements with bind variables are used in your application, making the injection of unwanted SQL fragments impossible. In this blog post I show how to check whether an application complies with these two rules.
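To illustrate the two rules, here is a minimal sketch of such a hard shell (the table, package and user names are illustrative and not part of the demo applications below): the middle-tier user gets EXECUTE on the package only, and the package uses nothing but static SQL with a bind variable.

-- minimal sketch of a hard shell PL/SQL API (illustrative names)
CREATE OR REPLACE PACKAGE demo_api AS
   FUNCTION get_text (in_c1 IN INTEGER) RETURN VARCHAR2;
END demo_api;
/

CREATE OR REPLACE PACKAGE BODY demo_api AS
   FUNCTION get_text (in_c1 IN INTEGER) RETURN VARCHAR2 IS
      l_c2 demo_table.c2%TYPE;
   BEGIN
      -- static SQL only: in_c1 is a bind variable, so no SQL fragment
      -- can be injected through this parameter
      SELECT c2
        INTO l_c2
        FROM demo_table
       WHERE c1 = in_c1;
      RETURN l_c2;
   END get_text;
END demo_api;
/

-- the middle-tier user gets EXECUTE on the API only, no grants on demo_table
GRANT EXECUTE ON demo_api TO middle_tier_user;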

Demo Applications

I’ve prepared three tiny demo applications to visualise what guarding data behind a hard shell PL/SQL API means and how static and dynamic SQL can be used within Oracle Database 12c Release 2 (12.2). You may install these applications using this script.

[
   {
      "c1": 2,
      "c2": "I like SQL."
   },
   {
      "c1": 3,
      "c2": "And JSON is part of SQL and PL/SQL."
   }
]

 

Rule 1) Do not expose data via tables or views

The idea behind a hard shell PL/SQL API is to expose data through PL/SQL units only. Direct access to tables or views is unwanted. The Oracle users configured in the connection pool of the middle tier need only the CONNECT role and EXECUTE rights on the PL/SQL API. These rights may be granted directly or indirectly via various levels of Oracle roles.

The following query shows whether an Oracle database user is ready to be used in the middle-tier application. Oracle users maintained by Oracle itself, such as SYS, SYSTEM, SYSAUX, etc., are excluded, along with some other users which grant objects to PUBLIC (see the app_user subquery). To execute this query you need the SELECT_CATALOG_ROLE.

WITH
   -- roles as recursive structure
   role_base AS (
      -- roles without parent (=roots)
      SELECT r.role, NULL AS parent_role
        FROM dba_roles r
       WHERE r.role NOT IN (
                SELECT p.granted_role
                  FROM role_role_privs p
             )
      UNION ALL
      -- roles with parent (=children)
      SELECT granted_role AS role, role AS parent_role
        FROM role_role_privs
   ),
   -- roles tree, calculate role_path for every hierarchy level
   role_tree AS (
      SELECT role,
             parent_role,
             sys_connect_by_path(ROLE, '/') AS role_path
        FROM role_base
      CONNECT BY PRIOR role = parent_role
   ),
   -- roles graph, child added to all ancestors including self
   -- allows simple join to parent_role to find all descendants
   role_graph AS (
      SELECT DISTINCT
             role,
             regexp_substr(role_path, '(/)(\w+)', 1, 1, 'i', 2) AS parent_role
        FROM role_tree
   ),
   -- application users in scope of the analysis
   -- other users are treated as if they were not installed
   app_user AS (
      SELECT username
        FROM dba_users
       WHERE oracle_maintained = 'N' -- SYS, SYSTEM, SYSAUX, ...
         AND username NOT IN ('FTLDB', 'PLSCOPE')
   ),
   -- user system privileges
   sys_priv AS (
      -- system privileges granted directly to users
      SELECT u.username, p.privilege
        FROM dba_sys_privs p
        JOIN app_user u ON u.username = p.grantee
      UNION
      -- system privileges granted directly to PUBLIC
      SELECT u.username, p.privilege
        FROM dba_sys_privs p
       CROSS JOIN app_user u
       WHERE p.grantee = 'PUBLIC'
         AND p.privilege NOT IN (
                SELECT r.role
                  FROM dba_roles r
             )
      UNION
      -- system privileges granted to users via roles
      SELECT u.username, p.privilege
        FROM dba_role_privs r
        JOIN app_user u ON u.username = r.grantee
        JOIN role_graph g ON g.parent_role = r.granted_role
        JOIN dba_sys_privs p ON p.grantee = g.role
      UNION
      -- system privileges granted to PUBLIC via roles
      SELECT u.username, p.privilege
        FROM dba_role_privs r
        JOIN role_graph g ON g.parent_role = r.granted_role
        JOIN dba_sys_privs p ON p.grantee = g.role
        CROSS JOIN app_user u
       WHERE r.grantee = 'PUBLIC'
   ),
   -- user object privileges
   obj_priv AS (
      -- objects granted directly to users
      SELECT u.username, p.owner, p.type AS object_type, p.table_name AS object_name
        FROM dba_tab_privs p
        JOIN app_user u ON u.username = p.grantee
       WHERE p.owner IN (
                SELECT u2.username
                  FROM app_user u2
             )
      UNION
      -- objects granted to users via roles
      SELECT u.username, p.owner, p.type AS object_type, p.table_name AS object_name
        FROM dba_role_privs r
        JOIN app_user u ON u.username = r.grantee
        JOIN role_graph g ON g.parent_role = r.granted_role
        JOIN dba_tab_privs p ON p.grantee = g.role
       WHERE p.owner IN (
                SELECT u2.username
                  FROM app_user u2
             )
      -- objects granted to PUBLIC
      UNION
      SELECT u.username, p.owner, p.type AS object_type, p.table_name AS object_name
        FROM dba_tab_privs p
       CROSS JOIN app_user u
       WHERE p.owner IN (
                SELECT u2.username
                  FROM app_user u2
             )
         AND p.grantee = 'PUBLIC'
   ),
   -- issues if user is configured in the connection pool of a middle tier
   issues AS (
      -- privileges not part of CONNECT role
      SELECT username,
             'SYS' AS owner,
             'PRIVILEGE' AS object_type,
             privilege AS object_name,
             'Privilege is not part of the CONNECT role' AS issue
        FROM sys_priv
       WHERE privilege NOT IN ('CREATE SESSION', 'SET CONTAINER')
      UNION ALL
      -- access to non PL/SQL units
      SELECT username,
             owner,
             object_type,
             object_name,
             'Access to non-PL/SQL unit'
        FROM obj_priv
       WHERE object_type NOT IN ('PACKAGE', 'TYPE', 'FUNCTION', 'PROCEDURE')
      -- own objects
      UNION ALL
      SELECT u.username,
             o.owner,
             o.object_type,
             o.object_name,
             'Connect user must not own any object'
        FROM app_user u
        JOIN dba_objects o ON o.owner = u.username
      -- missing CREATE SESSION privilege
      UNION ALL
      SELECT u.username,
             'SYS',
             'PRIVILEGE',
             'CREATE SESSION',
             'Privilege is missing, but required'
        FROM app_user u
       WHERE u.username NOT IN (
                SELECT username
                  FROM sys_priv
                 WHERE privilege = 'CREATE SESSION'
             )
   ),
   -- aggregate issues per user
   issue_aggr AS (
      SELECT u.username, COUNT(i.username) issue_count
        FROM app_user u
        LEFT JOIN issues i ON i.username = u.username
       GROUP BY u.username
   ),
   -- user summary (calculate is_connect_user_ready)
   summary AS (
      SELECT username,
             CASE
                WHEN issue_count = 0 THEN
                   'YES'
                ELSE
                   'NO'
             END AS is_connect_user_ready,
             issue_count
        FROM issue_aggr
       ORDER BY is_connect_user_ready DESC, username
   )
-- main
SELECT *
  FROM summary
 WHERE username LIKE 'THE%';

USERNAME      IS_CONNECT_USER_READY ISSUE_COUNT
------------- --------------------- -----------
THE_BAD_USER  YES                             0
THE_GOOD_USER YES                             0
THE_BAD_API   NO                              9
THE_BAD_DATA  NO                              3
THE_GOOD_API  NO                              4
THE_GOOD_DATA NO                              3
THE_UGLY_DATA NO                             10
THE_UGLY_USER NO                              1

8 rows selected.

Just THE_GOOD_USER and THE_BAD_USER are ready to be used in the middle tier. To see the issues of all other users, you may simply change the main part of the query as follows:

-- main
SELECT *
  FROM issues
 WHERE username LIKE 'THE%'
 ORDER BY username, owner, object_type, object_name;

USERNAME      OWNER         OBJECT_TYPE  OBJECT_NAME          ISSUE
------------- ------------- ------------ -------------------- -----------------------------------------
THE_BAD_API   SYS           PRIVILEGE    CREATE DATABASE LINK Privilege is not part of the CONNECT role
THE_BAD_API   SYS           PRIVILEGE    CREATE PROCEDURE     Privilege is not part of the CONNECT role
THE_BAD_API   THE_BAD_API   JAVA CLASS   C                    Connect user must not own any object
THE_BAD_API   THE_BAD_API   JAVA SOURCE  C                    Connect user must not own any object
THE_BAD_API   THE_BAD_API   PACKAGE      PKG                  Connect user must not own any object
THE_BAD_API   THE_BAD_API   PACKAGE      PKG2                 Connect user must not own any object
THE_BAD_API   THE_BAD_API   PACKAGE BODY PKG                  Connect user must not own any object
THE_BAD_API   THE_BAD_API   PACKAGE BODY PKG2                 Connect user must not own any object
THE_BAD_API   THE_BAD_DATA  TABLE        T                    Access to non-PL/SQL unit
THE_BAD_DATA  SYS           PRIVILEGE    CREATE TABLE         Privilege is not part of the CONNECT role
THE_BAD_DATA  THE_BAD_DATA  INDEX        SYS_C0012875         Connect user must not own any object
THE_BAD_DATA  THE_BAD_DATA  TABLE        T                    Connect user must not own any object
THE_GOOD_API  SYS           PRIVILEGE    CREATE PROCEDURE     Privilege is not part of the CONNECT role
THE_GOOD_API  THE_GOOD_API  PACKAGE      PKG                  Connect user must not own any object
THE_GOOD_API  THE_GOOD_API  PACKAGE BODY PKG                  Connect user must not own any object
THE_GOOD_API  THE_GOOD_DATA TABLE        T                    Access to non-PL/SQL unit
THE_GOOD_DATA SYS           PRIVILEGE    CREATE TABLE         Privilege is not part of the CONNECT role
THE_GOOD_DATA THE_GOOD_DATA INDEX        SYS_C0012873         Connect user must not own any object
THE_GOOD_DATA THE_GOOD_DATA TABLE        T                    Connect user must not own any object
THE_UGLY_DATA SYS           PRIVILEGE    CREATE CLUSTER       Privilege is not part of the CONNECT role
THE_UGLY_DATA SYS           PRIVILEGE    CREATE INDEXTYPE     Privilege is not part of the CONNECT role
THE_UGLY_DATA SYS           PRIVILEGE    CREATE OPERATOR      Privilege is not part of the CONNECT role
THE_UGLY_DATA SYS           PRIVILEGE    CREATE PROCEDURE     Privilege is not part of the CONNECT role
THE_UGLY_DATA SYS           PRIVILEGE    CREATE SEQUENCE      Privilege is not part of the CONNECT role
THE_UGLY_DATA SYS           PRIVILEGE    CREATE TABLE         Privilege is not part of the CONNECT role
THE_UGLY_DATA SYS           PRIVILEGE    CREATE TRIGGER       Privilege is not part of the CONNECT role
THE_UGLY_DATA SYS           PRIVILEGE    CREATE TYPE          Privilege is not part of the CONNECT role
THE_UGLY_DATA THE_UGLY_DATA INDEX        SYS_C0012877         Connect user must not own any object
THE_UGLY_DATA THE_UGLY_DATA TABLE        T                    Connect user must not own any object
THE_UGLY_USER THE_UGLY_DATA TABLE        T                    Access to non-PL/SQL unit

30 rows selected.

THE_UGLY_USER has access to table T owned by THE_UGLY_DATA. This clearly violates rule 1.

However, it is important to note that we have excluded all Oracle maintained users from the analysis. So let’s have a look at all the tables and views granted to PUBLIC by the Oracle maintained users.

WITH
   public_privs AS (
      SELECT p.owner,
             p.type       AS object_type,
             p.privilege,
             count(*)     AS priv_count
        FROM dba_tab_privs p
       WHERE p.grantee = 'PUBLIC'
         AND p.type IN ('VIEW', 'TABLE')
         AND p.owner IN (
                SELECT u2.username
                  FROM dba_users u2
                 WHERE u2.oracle_maintained = 'Y'
             )
       GROUP BY p.owner, p.type, p.privilege
   ),
   public_privs_pivot AS (
      SELECT owner,
             object_type,
             insert_priv,
             update_priv,
             delete_priv,
             select_priv, -- allows SELECT ... FOR UPDATE ...
             read_priv,   -- does not allow SELECT ... FOR UPDATE ...
             flashback_priv,
             nvl(insert_priv,0) + nvl(update_priv,0) + nvl(delete_priv,0)
             + nvl(select_priv,0) + nvl(read_priv,0)
             + nvl(flashback_priv,0) AS total_priv
        FROM public_privs
       PIVOT (
                sum(priv_count) FOR privilege IN (
                   'INSERT'    AS insert_priv,
                   'UPDATE'    AS update_priv,
                   'DELETE'    AS delete_priv,
                   'SELECT'    AS select_priv,
                   'READ'      AS read_priv,
                   'FLASHBACK' AS flashback_priv
                )
             )
       ORDER BY owner
   ),
   public_privs_report AS (
      SELECT owner,
             object_type,
             sum(insert_priv)    AS "INSERT",
             sum(update_priv)    AS "UPDATE",
             sum(delete_priv)    AS "DELETE",
             sum(select_priv)    AS "SELECT",
             sum(read_priv)      AS "READ",
             sum(flashback_priv) AS "FLASHBACK",
             sum(total_priv)     AS "TOTAL"
        FROM public_privs_pivot
       GROUP BY ROLLUP(owner, object_type)
      HAVING (GROUPING(owner), GROUPING(object_type)) IN ((0,0), (1,1))
       ORDER BY owner, object_type
   )
-- main
SELECT * FROM public_privs_report;

OWNER             OBJECT_TYPE     INSERT     UPDATE     DELETE     SELECT       READ  FLASHBACK      TOTAL
----------------- ----------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
APEX_050100       VIEW                 1          1          4          5        192                   203
CTXSYS            TABLE                                                 1          4                     5
CTXSYS            VIEW                11          5          8         12         51                    87
GSMADMIN_INTERNAL VIEW                                                             3                     3
LBACSYS           VIEW                                                            18                    18
MDSYS             TABLE               21         14         19         21         36                   111
MDSYS             VIEW                27         26         26         26         68                   173
OLAPSYS           VIEW                                                            18                    18
ORDDATA           VIEW                                       1                     5                     6
ORDSYS            VIEW                                                             5                     5
ORDS_METADATA     VIEW                                                 20                               20
SYS               TABLE               21         10         16         30         12                    89
SYS               VIEW                                                  1       1717          2       1720
SYSTEM            TABLE                3          3          3          4                               13
SYSTEM            VIEW                                                  1                                1
WMSYS             VIEW                                                            40                    40
XDB               TABLE                8          6          8          8         14                    44
XDB               VIEW                 2          2          2          2          3                    11
                                      94         67         87        131       2186          2       2567

19 rows selected.

Even THE_GOOD_USER has access to 2317 views and tables. To reduce this number we would have to uninstall some components, but that’s just a drop in the ocean. There is currently no way to create an Oracle user without access to views and tables. Hence we just have to focus on our application and our data.


Rule 2) Use only static SQL within PL/SQL

If you use just static SQL in your PL/SQL units, then no SQL injection is possible. The absence of dynamic SQL proves that your application is secured against SQL injection attacks. Of course, there are good reasons for dynamic SQL. But proving that a dynamic SQL statement is not injectable is difficult. Checking for the absence of dynamic SQL is the simpler approach, even if it is not as easy as I initially thought.
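
The OPEN-FOR statement shows nicely why this check is not trivial: the very same statement type covers a static and a dynamic variant. Here is a minimal sketch, assuming the well-known emp table:

DECLARE
   l_cur    SYS_REFCURSOR;
   l_deptno NUMBER := 10;
BEGIN
   -- static OPEN-FOR: the statement text is fixed at compile time, l_deptno is bound
   OPEN l_cur FOR SELECT ename FROM emp WHERE deptno = l_deptno;
   CLOSE l_cur;
   -- dynamic OPEN-FOR: the statement text is assembled at runtime and thus injectable
   OPEN l_cur FOR 'SELECT ename FROM emp WHERE deptno = ' || l_deptno;
   CLOSE l_cur;
END;
/

PL/Scope records both variants simply as OPEN statements. That is why the query further below has to look at the source text in dba_source to tell them apart.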

In How to write SQL injection proof PL/SQL the following ways are mentioned to implement dynamic SQL:

  • native dynamic SQL (EXECUTE IMMEDIATE)
  • DBMS_SQL.EXECUTE
  • DBMS_SQL.PARSE
  • DBMS_UTILITY.EXEC_DDL_STATEMENT
  • DBMS_DDL.CREATE_WRAPPED
  • DBMS_HS_PASSTHROUGH.EXECUTE_IMMEDIATE
  • DBMS_HS_PASSTHROUGH.PARSE
  • OWA_UTIL.BIND_VARIABLES
  • OWA_UTIL.LISTPRINT
  • OWA_UTIL.TABLEPRINT

But there are more ways to execute dynamic SQL, such as:

  • Open-For Statement
  • Java Stored Procedure
  • DBMS_SYS_SQL.EXECUTE
  • DBMS_SYS_SQL.PARSE
  • DBMS_SYS_SQL.PARSE_AS_USER

It’s difficult to get a complete list, due to the fact that the Oracle-supplied subprograms are wrapped and the SQL statement is often hidden behind a C API. That leaves us with basically two options:

  1. Use a list of Oracle-supplied packages and/or subprograms to identify dynamic SQL, even if the list might be incomplete
  2. Suspect that any Oracle-supplied subprogram may contain dynamic SQL, except some trusted packages such as “DBMS_STANDARD” and “STANDARD”

Neither option is very appealing, but I’m in favour of option 1. At a certain point I have to focus on my application code and assume/trust that the Oracle-supplied packages are doing their part to reduce the risk of SQL injection.

The following object types may contain PL/SQL and have to be checked for dynamic SQL:

  • FUNCTION
  • PROCEDURE
  • PACKAGE BODY
  • TYPE BODY
  • TRIGGER

What if we call services outside of the database via REST calls or via AQ messages? I think we may ignore these cases. They are not part of this application anymore, and even if those services call this database, they have to go through the hard shell, whose PL/SQL units are already covered.

We need PL/Scope metadata for some checks. The following anonymous PL/SQL block produces this data. Be aware that the application code and some SYS objects are compiled. Invalid, dependent objects will be recompiled at the end. Nonetheless, you should not run this code in your production environment.

DECLARE
   PROCEDURE enable_plscope IS
   BEGIN
      EXECUTE IMMEDIATE q'[ALTER SESSION SET plscope_settings='IDENTIFIERS:ALL, STATEMENTS:ALL']';
   END enable_plscope;
   --
   PROCEDURE compile_defs_without_plscope IS
   BEGIN
      <<compile_definition>>
      FOR r IN (
         WITH
            -- application users in scope of the analysis
            -- other users are treated as if they were not installed
            app_user AS (
               SELECT username
                 FROM dba_users
                WHERE oracle_maintained = 'N'
            ),
            -- objects for which PL/Scope metadata is required
            obj AS (
                SELECT o.owner, o.object_type, o.object_name
                  FROM dba_objects o
                 WHERE object_name IN ('DBMS_UTILITY', 'OWA_UTIL')
                   AND object_type IN ('PACKAGE', 'SYNONYM')
               UNION ALL
               SELECT o.owner, o.object_type, o.object_name
                 FROM dba_objects o
                 JOIN app_user u ON u.username = o.owner
                WHERE object_type IN ('PACKAGE BODY', 'TYPE BODY', 'FUNCTION',
                         'PROCEDURE', 'TRIGGER')
            ),
            -- objects without PL/Scope metadata
            missing_plscope_obj AS (
               SELECT o.owner, o.object_type, o.object_name
                 FROM obj o
                 LEFT JOIN dba_identifiers i
                   ON i.owner = o.owner
                      AND i.object_type = o.object_type
                      AND i.object_name = o.object_name
                      AND i.usage_context_id = 0
                WHERE i.usage_context_id IS NULL
            ),
            -- all objects to recompile and (re)gather PL/Scope metadata
            compile_scope AS (
               SELECT o.owner, o.object_type, o.object_name
                 FROM obj o
                WHERE EXISTS (
                         SELECT 1
                           FROM missing_plscope_obj o2
                          WHERE o2.owner = 'SYS'
                      )
               UNION ALL
               SELECT owner, object_type, object_name
                 FROM missing_plscope_obj
                WHERE NOT EXISTS (
                         SELECT 1
                           FROM missing_plscope_obj o2
                          WHERE o2.owner = 'SYS'
                      )
            ),
            -- compile statement required to produce PL/Scope metadata
            compile_stmt AS (
               SELECT 'ALTER ' || replace(object_type, ' BODY')
                      || ' "' || owner || '"."' || object_name || '" COMPILE'
                      || CASE
                            WHEN object_type LIKE '%BODY' THEN
                               ' BODY'
                         END AS stmt
                 FROM compile_scope
            )
         -- main
         SELECT stmt
           FROM compile_stmt
      ) LOOP
         EXECUTE IMMEDIATE r.stmt;
      END LOOP compile_definition;
   END compile_defs_without_plscope;
   --
   PROCEDURE recompile_invalids IS
   BEGIN
      <<schemas_with_invalids>>
      FOR r IN (
         SELECT DISTINCT owner
           FROM dba_objects
          WHERE status != 'VALID'
          ORDER BY CASE owner
                      WHEN 'SYS' THEN
                         1
                      WHEN 'SYSTEM' THEN
                         2
                      ELSE
                         3
                   END,
                owner
      ) LOOP
         utl_recomp.recomp_serial(r.owner);
      END LOOP schemas_with_invalids;
   END recompile_invalids;
BEGIN
   enable_plscope;
   compile_defs_without_plscope;
   recompile_invalids;
END;
/

Here’s the query to check application users.

WITH
   app_user AS (
      SELECT username
        FROM dba_users
       WHERE oracle_maintained = 'N'
   ),
   obj AS (
      SELECT o.owner, o.object_type, o.object_name
        FROM dba_objects o
        JOIN app_user u ON u.username = o.owner
       WHERE object_type IN ('PACKAGE BODY', 'TYPE BODY', 'FUNCTION', 'PROCEDURE', 'TRIGGER')
   ),
   missing_plscope_obj AS (
      SELECT o.owner, o.object_type, o.object_name
        FROM obj o
        LEFT JOIN dba_identifiers i
          ON i.owner = o.owner
             AND i.object_type = o.object_type
             AND i.object_name = o.object_name
             AND i.usage_context_id = 0
       WHERE i.usage_context_id IS NULL
   ),
   stmt AS (
      SELECT s.owner, s.object_type, s.object_name, s.type, s.line, s.col
        FROM dba_statements s
        JOIN app_user u ON u.username = s.owner
       WHERE s.type IN ('EXECUTE IMMEDIATE', 'OPEN')
   ),
   dep AS (
      SELECT d.owner, d.name as object_name, d.type as object_type, d.referenced_name
        FROM dba_dependencies d
        JOIN app_user u ON u.username = d.owner
       WHERE d.referenced_name IN (
                'DBMS_SQL', 'DBMS_DDL', 'DBMS_HS_PASSTHROUGH', 'DBMS_SYS_SQL'
             )
   ),
   issues AS (
      SELECT owner,
             object_type,
             object_name,
             type AS potential_sqli_risk
        FROM stmt
       WHERE type = 'EXECUTE IMMEDIATE'
      UNION
      SELECT stmt.owner,
             stmt.object_type,
             stmt.object_name,
             'OPEN-FOR WITH DYNAMIC SQL'
        FROM stmt
        JOIN dba_source src
          ON src.owner = stmt.owner
             AND src.type = stmt.object_type
             AND src.name = stmt.object_name
             AND src.line = stmt.line
       WHERE stmt.type = 'OPEN'
         AND regexp_substr(substr(src.text, stmt.col), '^open\s+', 1, 1, 'i') IS NULL
         AND regexp_substr(substr(src.text, stmt.col), '^("?\w+"?|q?'')', 1, 1, 'i') IS NOT NULL
      UNION
      SELECT owner,
             object_type,
             object_name,
             referenced_name
        FROM dep
      UNION
      SELECT i.owner,
             i.object_type,
             i.object_name,
             r.object_name || '.' || r.name
        FROM dba_identifiers i
        JOIN app_user u ON u.username = i.owner
        JOIN dba_identifiers r
          ON r.signature = i.signature
             AND r.usage = 'DECLARATION'
       WHERE i.usage = 'CALL'
         AND r.owner = 'SYS'
         AND r.object_type = 'PACKAGE'
         AND (r.object_name, r.name) IN (
                ('DBMS_UTILITY', 'EXEC_DDL_STATEMENT'),
                ('OWA_UTIL', 'BIND_VARIABLES'),
                ('OWA_UTIL', 'LISTPRINT'),
                ('OWA_UTIL', 'TABLEPRINT')
             )
      UNION
      SELECT o.owner,
             o.object_type,
             o.object_name,
             'SQL FROM JAVA SUSPECTED'
        FROM dba_objects o
        JOIN app_user u ON u.username = o.owner
       WHERE o.object_type = 'JAVA CLASS'
      UNION
      SELECT owner,
             object_type,
             object_name,
             'PL/SCOPE METADATA MISSING'
        FROM missing_plscope_obj
   ),
   issue_aggr AS (
      SELECT u.username AS owner, COUNT(i.owner) issue_count
        FROM app_user u
        LEFT JOIN issues i ON i.owner = u.username
       GROUP BY u.username
   ),
   summary AS (
      SELECT owner,
             CASE
                WHEN issue_count = 0 THEN
                   'YES'
                ELSE
                   'NO'
             END AS is_user_sql_injection_free,
             issue_count
        FROM issue_aggr
       ORDER BY is_user_sql_injection_free DESC, owner
   )
-- main
SELECT *
  FROM summary
 WHERE owner LIKE 'THE%';

OWNER         IS_USER_SQL_INJECTION_FREE ISSUE_COUNT
------------- -------------------------- -----------
THE_BAD_DATA  YES                                  0
THE_BAD_USER  YES                                  0
THE_GOOD_API  YES                                  0
THE_GOOD_DATA YES                                  0
THE_GOOD_USER YES                                  0
THE_UGLY_DATA YES                                  0
THE_UGLY_USER YES                                  0
THE_BAD_API   NO                                   9

8 rows selected.

To see the issues of THE_BAD_API you may simply change the main part of the query as follows:

-- main
SELECT *
  FROM issues
 WHERE owner LIKE 'THE%';

OWNER         OBJECT_TYPE   OBJECT_NAME POTENTIAL_SQLI_RISK
------------- ------------- ----------- ----------------------------------------
THE_BAD_API   JAVA CLASS    C           SQL FROM JAVA SUSPECTED
THE_BAD_API   PACKAGE BODY  PKG         EXECUTE IMMEDIATE
THE_BAD_API   PACKAGE BODY  PKG2        DBMS_DDL
THE_BAD_API   PACKAGE BODY  PKG2        DBMS_SQL
THE_BAD_API   PACKAGE BODY  PKG2        DBMS_SYS_SQL
THE_BAD_API   PACKAGE BODY  PKG2        DBMS_UTILITY.EXEC_DDL_STATEMENT
THE_BAD_API   PACKAGE BODY  PKG2        EXECUTE IMMEDIATE
THE_BAD_API   PACKAGE BODY  PKG2        OPEN-FOR WITH DYNAMIC SQL
THE_BAD_API   PACKAGE BODY  PKG2        OWA_UTIL.LISTPRINT

9 rows selected.

To suspect that every JAVA CLASS uses SQL is not a very differentiated analysis result. Further Java-specific code analysis would be necessary. However, the other results are reasonable.


Conclusion

In this blog post I showed how to prove that a PL/SQL application does not use dynamic SQL and is therefore secured against SQL injection attacks.

The use of dynamic SQL is naturally limited in PL/SQL, because it is easier and more efficient for a developer to deal with compile-time errors than with runtime errors. But there are cases when static SQL is not possible or not efficient enough. In those cases proper input validation is a necessity to mitigate the SQL injection risk (see also “Ensuring the safety of a SQL literal” in How to write SQL injection proof PL/SQL).
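
One example of such input validation is the SYS.DBMS_ASSERT package. A minimal sketch, validating an object name before it is concatenated into a dynamic statement:

DECLARE
   l_table_name VARCHAR2(128) := 'EMP';
   l_count      INTEGER;
BEGIN
   -- simple_sql_name raises ORA-44003 if l_table_name is not a simple SQL name
   EXECUTE IMMEDIATE 'SELECT count(*) FROM '
      || sys.dbms_assert.simple_sql_name(l_table_name)
      INTO l_count;
   sys.dbms_output.put_line(l_count || ' rows');
END;
/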

Views are often part of the API in applications I’m involved in. I like the power and the flexibility of these views. In fact I’m very grateful that the Oracle database provides a view API for its data dictionary, which simplified the analysis for this blog post. But views come with a SQL injection risk. Moreover, the risk and responsibility is delegated to a certain extent to the developers using the database API. Hence, in the future I will think twice before making views part of the API, but I will certainly not rule them out per se.


PL/SQL Cop for SonarQube 7.0


For the last two months my Trivadis colleague Daniel Schutzbach and I have been working on the PL/SQL Cop plugin for SonarQube. The goal was to support the most recent SonarQube versions 5.6 LTS, 6.7 LTS and 7.0. Dani was doing the heavy lifting and my job was testing and minor bug fixing. Today I can proudly announce that we were successful and have released the following three plugins:

  • PL/SQL Cop for SonarQube 4.5 LTS (tested with SonarQube 4.5, 4.5.7 and 5.1.2)
  • PL/SQL Cop for SonarQube 5.6 LTS (tested with SonarQube 5.6, 5.6.7, 6.0, 6.1, 6.2, 6.3, 6.3.1, 6.4, 6.5 and 6.6)
  • PL/SQL Cop for SonarQube 6.7 LTS (tested with SonarQube 6.7, 6.7.1, 6.7.2 and 7.0)

In this blog post I show how to set up a new SonarQube 7.0 server using Docker and analyze a PL/SQL project on my local machine with SonarQube Scanner. This post is a stripped-down version of my Continuous Code Quality for PL/SQL with Docker post. I assume you know about Docker and have it installed on your machine.

Here is the table of contents with the major steps.

  1. Create SonarQube Container
  2. Install PL/SQL Cop for SonarQube
  3. Install PL/SQL Cop (Command Line Utility)
  4. Install SonarQube Scanner
  5. Configure SonarQube
  6. Analyze a PL/SQL Project
  7. View Result in SonarQube
  8. Summary

1. Create a SonarQube Container

In this step I create a standalone container for SonarQube 7.0, using defaults to keep it simple.

docker run -d --name sq7 -p 9000:9000 sonarqube:7.0

2. Install PL/SQL Cop for SonarQube

To install the current version of PL/SQL Cop for SonarQube within the “sq7” container run

docker exec sq7 wget --no-check-certificate \
   https://www.salvis.com/blog?ddownload=8167 \
   -O /opt/sonarqube/extensions/plugins/sonar-plsql-cop-plugin-6.7.0.0.jar

The wget command will be executed within the “sq7” container. Windows users have to replace the “\” with “^” when using CMD or with “`” when using PowerShell.

To load the plugin we need to restart the container.

docker restart sq7

We will complete the installation in step 5.

3. Install PL/SQL Cop (Command Line Utility)

Download PL/SQL Cop and unzip the downloaded file in a directory of your choice. I’ve installed it on my local machine in “/usr/local/bin/tvdcc”.

4. Install SonarQube Scanner

Download SonarQube Scanner and unzip the downloaded file in a directory of your choice. I’ve installed it on my local machine in “/usr/local/opt/sonar-scanner“.

5. Configure SonarQube 7.0

Open “http://localhost:9000” in your web browser and log in with username “admin” and password “admin”.

SonarQube asks you to provide a token name. Enter “cop”, press “Generate” and then “Continue” on the next page. The token name and the token will then be shown in the upper right corner of the screen as follows:

Copy your token text (39d483241393ddd5600e9c9348ced410c7903c1a) to the clipboard and store it somewhere. We will need it in step 6. Press “Skip this tutorial” in the upper right corner.

Click on “Administration”, select the category “Trivadis PL/SQL Cop” and change the “Path to PL/SQL Cop command line tvdcc executable” to the path according to step 3. In my case this is “/usr/local/bin/tvdcc/tvdcc.sh”. Press “Save” and you are done.

6. Analyze a PL/SQL Project

Create a temporary directory (in my case “/Users/phs/demo”) and type the following:

git clone https://github.com/PhilippSalvisberg/plscope-utils.git

This will clone the plscope-utils Git repository. If you do not have Git installed, you may download the repository as a zip file and extract it.

Run the following command to analyze the PL/SQL packages of this project:

cd plscope-utils/database/utils/package
sonar-scanner \
   -Dsonar.projectKey=plscope-utils:master \
   -Dsonar.sources=. \
   -Dsonar.login=39d483241393ddd5600e9c9348ced410c7903c1a

Windows users have to replace the “\” with “^” when using CMD or with “`” when using PowerShell.

7. View Result in SonarQube

Open “http://localhost:9000” in your web browser.

Click on “plscope-utils:master”, select the “Issues” tab for this project and select all rules.

Click on an issue to see the source code line causing this issue.

8. Summary

Setting up a SonarQube 7.0 server with Docker is no big deal. Installing the PL/SQL Cop plugin is simple as well. However, I have only shown the minimum configuration. For real projects you will spend some time configuring your quality profiles and quality gates. A CI environment might help you to implement a fast quality feedback loop.

The audioless video summarizes the major installation and configuration steps. I hope this will encourage you to try PL/SQL Cop.


White Listed PL/SQL Programs in Oracle Database 18c


I’ve recently installed plscope-utils in an Oracle Database 18c instance. A package body using the SYS.UTL_XML.ParseQuery function failed to compile. The error message was: PLS-00306: wrong number or types of arguments in call to ‘PARSEQUERY’. Fixing that was easy. I just had to pass the new mandatory currUid parameter. But then I got the next error message.

PLS-00904: insufficient privilege to access object PARSEQUERY.

The following excerpt of the SYS.UTL_XML package specification reveals the cause of this error.

-- PARSEQUERY: Parse a SQL query and return in a CLOB as XML
-- PARAMS:
--      currUid         - UID of current user
--      schema          - schem to use for parse
--      sqltext         - the text of the query
--      lobloc          - a LOB locator to receive the parsed value

PROCEDURE parseQuery   (currUid IN NUMBER,
                        schema  IN VARCHAR2,
                        sqltext IN CLOB,
                        lobloc  IN OUT NOCOPY CLOB
                       )
  ACCESSIBLE BY (PACKAGE SYS.DBMS_METADATA);

The accessible_by_clause at the end of the excerpt restricts access to the procedure parseQuery to the package SYS.DBMS_METADATA. It looks like Oracle started to use this 12.1 feature in 12.2 and is tightening its APIs with new releases.

Actually, that’s good. For me, an accessor is comparable to a constraint in a data model. A constraint describes a part of a model in a machine-readable way, so that it can be enforced and visualised efficiently. An accessor is just such a constraint for a named PL/SQL unit. The accessible-by lists (aka white lists) are helpful to plan upgrade projects, to validate role concepts and their implementations, or to support development tasks such as generating utPLSQL unit test stubs for a PL/SQL package (ignoring inaccessible subprograms).
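
You can use the same feature to white-list your own code. Here is a minimal sketch with two hypothetical packages, where only pkg_api may call pkg_internal:

CREATE OR REPLACE PACKAGE pkg_internal
   ACCESSIBLE BY (PACKAGE pkg_api)
AS
   PROCEDURE do_work;
END pkg_internal;
/

A call to pkg_internal.do_work from any other unit fails to compile with PLS-00904, just like the parseQuery call above.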

However, it is currently not that easy to get information about PL/SQL units and subprograms that are restricted by an accessible_by_clause. There is no Oracle data dictionary view or API exposing this data. The Oracle enhancement request 27871459 addresses this issue. If you have an Oracle support account, please open an SR and add your CSI to this enhancement request. This should be a well-known task for an Oracle support engineer.

I’ve created a script named procedure_accessors.sql to query the accessors for all PL/SQL units and their subprograms in Oracle maintained schemas of an Oracle Database instance. The result has the granularity of the Oracle data dictionary view dba_procedures. Usually I would use the result of the named query procedure_accessors directly, but for this blog post I’ve done some aggregation and introduced the column accessor_list containing a comma-separated list of all accessors of a procedure.

Please note that this query requires an Oracle Database 12c Release 2 (12.2) instance or newer.

WITH
   --
   -- remove multi-line comments from source
   --
   FUNCTION remove_ml_comments (in_source IN CLOB) RETURN CLOB IS
   BEGIN 
      RETURN regexp_replace(in_source, '/\*.*?\*/', NULL, 1, 0, 'n');
   END remove_ml_comments;
   --
   -- remove single line comments from source
   --
   FUNCTION remove_sl_comments (in_source IN CLOB) RETURN CLOB IS
   BEGIN 
      RETURN regexp_replace(in_source, '--.*', NULL, 1, 0, NULL);
   END remove_sl_comments;
   --
   -- remove string literals from source
   -- TODO: incomplete removal of quoted literals, if quoted literals contain apostrophes
   --
   FUNCTION remove_string_literals (in_source IN CLOB) RETURN CLOB IS
   BEGIN 
      RETURN regexp_replace(in_source, q'['.*?']', NULL, 1, 0, 'n');
   END remove_string_literals;
   --
   -- get subprogram associated with the accessible_by_clause in JSON format, e.g.
   --    {
   --       "id": 5,
   --       "type": "PROCEDURE",
   --       "name": "parseQuery",
   --    }
   --
   -- return NULL if accessible_by_clause is for the PL/SQL unit
   --
   FUNCTION get_subprogram(
      in_object_type VARCHAR2,
      in_source      CLOB,
      in_pos         INTEGER
   ) RETURN VARCHAR2 IS
      l_subprogram json_object_t;
      co_pattern   CONSTANT VARCHAR2(100 CHAR) := '(function|procedure)(\s+)("?[a-zA-Z0-9_#$]+"?)';
      l_source     CLOB;
      l_count      INTEGER;
      l_match      VARCHAR2(4000 CHAR);
   BEGIN
      l_subprogram := json_object_t();
      IF in_object_type NOT IN ('FUNCTION', 'PROCEDURE') THEN
         l_source := regexp_replace(substr(in_source, 1, in_pos - 1), 'accessible\s+by\s*\(.*?\)', NULL, 1, 0, 'in');
         l_count := regexp_count(l_source, co_pattern, 1, 'in');
         IF l_count > 0 THEN
            l_match := regexp_substr(l_source, co_pattern, 1, l_count, 'in');
            l_subprogram.put('id', l_count);
            l_subprogram.put('type', regexp_substr(l_match, co_pattern, 1, 1, 'in', 1));
            l_subprogram.put('name', regexp_substr(l_match, co_pattern, 1, 1, 'in', 3));
         END IF;
      END IF;
      RETURN l_subprogram.to_string();
   END get_subprogram;
   --
   -- get the list of subprograms and its accessors as JSON array, e.g.
   --    [
   --        {
   --            "id": 5,
   --            "type": "PROCEDURE",
   --            "name": "parseQuery",
   --            "accessors": [
   --                {
   --                    "unit_kind": "PACKAGE",
   --                    "schema": "SYS",
   --                    "unit_name": "DBMS_METADATA"
   --                }
   --            ]
   --        }
   --    ]
   --
   FUNCTION get_accessors(
      in_object_type          VARCHAR2,
      in_source               CLOB,
      in_accessible_by_count  INTEGER
   ) RETURN CLOB IS
      co_full_clause_pattern CONSTANT VARCHAR2(100 CHAR) := 'accessible\s+by\s*\(.*?\)';
      co_accessor_pattern    CONSTANT VARCHAR2(200 CHAR) := 
         '(\(|,)(\s*)(function|procedure|package|trigger|type)?(\s*)("?[a-zA-Z0-9_#$]+"?\s*\.)?("?[a-zA-Z0-9_#$]+"?)';
      l_subprograms          json_array_t;
      l_subprogram           json_object_t;
      l_full_clause          VARCHAR2(4000 CHAR);
      l_accessors            json_array_t;
      l_accessor             json_object_t;
      l_pos                  INTEGER;
      l_accessor_count       INTEGER;
   BEGIN
      l_subprograms := json_array_t();
      <<accessible_by_clauses>>
      FOR i in 1 .. in_accessible_by_count LOOP
         l_pos := regexp_instr(in_source, co_full_clause_pattern, 1, i, 0, 'in');
         l_subprogram := json_object_t.parse(get_subprogram(in_object_type, in_source, l_pos));
         l_full_clause := regexp_substr(in_source, co_full_clause_pattern, 1, i, 'in');
         l_accessor_count := regexp_count(l_full_clause, co_accessor_pattern, 1, 'in');
         l_accessors := json_array_t();
         <<accessors>>
         FOR j in 1 .. l_accessor_count LOOP
            l_accessor := json_object_t();
            l_accessor.put('unit_kind', regexp_substr(l_full_clause, co_accessor_pattern, 1, j, 'in', 3));
            l_accessor.put('schema', replace(regexp_substr(l_full_clause, co_accessor_pattern, 1, j, 'in', 5),'.'));
            l_accessor.put('unit_name', regexp_substr(l_full_clause, co_accessor_pattern, 1, j, 'in', 6));
            l_accessors.append(l_accessor);
         END LOOP accessors;
         l_subprogram.put('accessors', l_accessors);
         l_subprograms.append(l_subprogram);
      END LOOP accessible_by_clauses;
      return l_subprograms.to_clob();
   END get_accessors;
   --
   -- ensure identifier matches case sensitive name in Oracle data dictionary
   --
   FUNCTION fix_identifier (in_identifier IN VARCHAR2) RETURN VARCHAR2 IS
      l_identifier VARCHAR2(128 CHAR);
   BEGIN
      IF in_identifier LIKE '"%"' THEN
         l_identifier := substr(in_identifier, 2, length(in_identifier) - 2);
      ELSE
         l_identifier := upper(in_identifier);
      END IF;
      RETURN l_identifier;
   END fix_identifier;
   --
   -- possible PL/SQL units with an accessible_by_clause (optimization step)
   -- false positives when keyword 'accessible' is not used in a accessible_by_clause
   --
   candidates AS (
      SELECT /*+ no_merge */ 
             s.owner, s.type AS object_type, s.name AS object_name, u.oracle_maintained
        FROM dba_users u
        JOIN dba_source s
          ON u.username = s.owner
       WHERE s.type IN ('FUNCTION', 'PROCEDURE', 'PACKAGE', 'TYPE')
         AND lower(text) LIKE '%accessible%'
       GROUP BY s.owner, s.type, s.name, u.oracle_maintained
   ),
   --
   -- extend result with source code (unmodified)
   --
   original_sources AS (
      SELECT owner, object_type, object_name, oracle_maintained,
             sys.dbms_metadata.get_ddl(
                schema      => owner,
                object_type => CASE object_type
                                  WHEN 'PACKAGE' THEN
                                     'PACKAGE_SPEC'
                                  WHEN 'TYPE' THEN
                                     'TYPE_SPEC'
                                  ELSE 
                                     object_type
                               END,
                name        => object_name
             ) AS source_code
        FROM candidates 
   ),
   --
   -- remove comments and string literals from source code to simplify parsing
   --
   reduced_sources AS (
      SELECT owner, 
             object_type, 
             object_name,
             oracle_maintained,
             remove_ml_comments(remove_sl_comments(remove_string_literals(source_code))) AS source_code
        FROM original_sources
   ),
   --
   -- extend result with number of accessible_by_clauses in source
   --
   counts AS (
      SELECT owner, 
             object_type, 
             object_name, 
             source_code,
             oracle_maintained,
             regexp_count(source_code, 'accessible\s+by\s*\(.*?\)', 1, 'in') accessible_by_count
        FROM reduced_sources
   ),
   --
   -- produce a row for every accessor and extend results by accessor related columns
   --
   procedure_accessors AS (
      SELECT c.owner,
             c.object_type,
             c.object_name,
             c.oracle_maintained,
             upper(a.subprogram_type)             AS subprogram_type,
             fix_identifier(a.subprogram_name)    AS procedure_name,
             coalesce(a.subprogram_id, 0)         AS subprogram_id,
             upper(a.accessor_unit_kind)          AS accessor_unit_kind,
             fix_identifier(a.accessor_schema)    AS accessor_schema,
             fix_identifier(a.accessor_unit_name) AS accessor_unit_name
        FROM counts c
       CROSS JOIN JSON_TABLE(
                get_accessors(c.object_type, c.source_code, c.accessible_by_count), 
                '$[*]' columns (
                   subprogram_type VARCHAR2(30 CHAR)  PATH '$.type',
                   subprogram_name VARCHAR2(128 CHAR) PATH '$.name',
                   subprogram_id   INTEGER            PATH '$.id',
                   nested path '$.accessors[*]' columns (
                      accessor_unit_kind VARCHAR2(30 CHAR)  PATH '$.unit_kind',
                      accessor_schema    VARCHAR2(128 CHAR) PATH '$.schema',
                      accessor_unit_name VARCHAR2(128 CHAR) PATH '$.unit_name'
                   )
                )
             ) a
       WHERE c.accessible_by_count > 0 
   ),
   --
   -- produce compact accessor column, remove duplicates from overloaded subprograms
   --
   aggr_procedure_accessors_base AS (
      SELECT DISTINCT 
             owner, 
             object_type, 
             object_name, 
             procedure_name,
             accessor_unit_kind,
             accessor_schema,
             accessor_unit_name,
             CASE
                WHEN accessor_unit_kind IS NOT NULL THEN
                   accessor_unit_kind || ' '
             END || 
             CASE
                WHEN accessor_schema IS NOT NULL THEN
                   accessor_schema || '.'
             END ||
             accessor_unit_name AS accessor,
             oracle_maintained
        FROM procedure_accessors
   ),
   --
   -- aggregated result per subprogram with accessor_list containing comma separated list of all accessors
   --
   aggr_procedure_accessors AS (
      SELECT owner, 
             object_type, 
             object_name, 
             procedure_name, 
             listagg (accessor, ', ') WITHIN GROUP(ORDER BY accessor) AS accessor_list 
        FROM aggr_procedure_accessors_base
       WHERE oracle_maintained = 'Y'
       GROUP BY owner, object_type, object_name, procedure_name
       ORDER BY owner, object_type, object_name, procedure_name
   )
-- main
SELECT *
  FROM aggr_procedure_accessors
/

The following result is based on an Oracle Database 18c installation. Filter the result table for UTL_XML, for example, and only 3 result rows remain. I hope it’s helpful.

Owner | Object Type | Object Name | Procedure Name | Accessor List
APEX_050100 | PACKAGE | WWV_FLOW_ADVISOR_CHECKS_INT | | WWV_FLOW_ADVISOR_CHECKS_API
APEX_050100 | PACKAGE | WWV_FLOW_APP_INSTALL_INT | | WWV_FLOW_IMP_PARSER, WWV_FLOW_PKG_APP_PARSER
APEX_050100 | PACKAGE | WWV_FLOW_REGION_LIST | | WWV_FLOW_REGION_NATIVE
APEX_050100 | PACKAGE | WWV_FLOW_SPATIAL_INT | | WWV_FLOW_SPATIAL_API
CTXSYS | PACKAGE | DRIACC | HELP | PACKAGE DRIACC
CTXSYS | PACKAGE | DRIUTL | PARSE_OBJECT_NAME | PACKAGE DRVUTL, PROCEDURE PARSE_OBJECT_NAME
CTXSYS | PACKAGE | DRVODM | SVM_TRAIN | CTXSYS.CTX_CLS
DVSYS | PACKAGE | CONFIGURE_DV_INTERNAL | | PROCEDURE SYS.CONFIGURE_DV
GSMADMIN_INTERNAL | PACKAGE | DBMS_GSM_COMMON | GETDBPARAMETERNUM | PACKAGE DBMS_GSM_CLOUDADMIN, PACKAGE DBMS_GSM_COMMON, PACKAGE DBMS_GSM_DBADMIN, PACKAGE DBMS_GSM_POOLADMIN, PACKAGE DBMS_GSM_UTILITY, PACKAGE GGSYS.GGSHARDING, PROCEDURE EXECUTEDDL
GSMADMIN_INTERNAL | PACKAGE | DBMS_GSM_COMMON | GETDBPARAMETERSTR | PACKAGE DBMS_GSM_CLOUDADMIN, PACKAGE DBMS_GSM_COMMON, PACKAGE DBMS_GSM_DBADMIN, PACKAGE DBMS_GSM_POOLADMIN, PACKAGE GGSYS.GGSHARDING
GSMADMIN_INTERNAL | PACKAGE | DBMS_GSM_COMMON | RESETDBPARAMETER | PACKAGE DBMS_GSM_CLOUDADMIN, PACKAGE DBMS_GSM_COMMON, PACKAGE DBMS_GSM_DBADMIN, PACKAGE DBMS_GSM_POOLADMIN, PACKAGE GGSYS.GGSHARDING
GSMADMIN_INTERNAL | PACKAGE | DBMS_GSM_COMMON | SETDBPARAMETER | PACKAGE DBMS_GSM_CLOUDADMIN, PACKAGE DBMS_GSM_COMMON, PACKAGE DBMS_GSM_DBADMIN, PACKAGE DBMS_GSM_POOLADMIN, PACKAGE GGSYS.GGSHARDING
GSMADMIN_INTERNAL | PACKAGE | DBMS_GSM_UTILITY | GENERATECHANGELOGENTRY | PACKAGE DBMS_GSM_CLOUDADMIN, PACKAGE DBMS_GSM_COMMON, PACKAGE DBMS_GSM_DBADMIN, PACKAGE DBMS_GSM_POOLADMIN, PACKAGE GGSYS.GGSHARDING
GSMADMIN_INTERNAL | PACKAGE | EXCHANGE | | PACKAGE GSMADMIN_INTERNAL.DBMS_GSM_DBADMIN, PACKAGE SYS.EXCH_TEST
SYS | FUNCTION | ISXMLTYPETABLE_INTERNAL | | ISXMLTYPETABLE
SYS | PACKAGE | CDBVIEW_INTERNAL | | PACKAGE SYS.CDBVIEW
SYS | PACKAGE | DBMS_AQADM_INV | | PACKAGE SYS.DBMS_AQADM, SYS.DBMS_AQADM_SYS, SYS.DBMS_AQJMS, SYS.DBMS_AQ_SYS_IMP_INTERNAL, SYS.DBMS_PRVTAQIM, SYS.DBMS_PRVTAQIS, SYS.DBMS_PRVTSQDS, SYS.DBMS_PRVTSQIS
SYS | PACKAGE | DBMS_AQADM_SYSCALLS | | PACKAGE SYS.DBMS_AQADM, SYS.DBMS_AQ, SYS.DBMS_AQADM_INV, SYS.DBMS_AQADM_SYS, SYS.DBMS_AQJMS, SYS.DBMS_AQJMS_INTERNAL, SYS.DBMS_AQ_IMPORT_INTERNAL, SYS.DBMS_AQ_IMP_ZECURITY, SYS.DBMS_AQ_SYS_EXP_INTERNAL, SYS.DBMS_AQ_SYS_IMP_INTERNAL, SYS.DBMS_PRVTAQIM, SYS.DBMS_PRVTAQIP, SYS.DBMS_PRVTAQIS, SYS.DBMS_PRVTSQDS, SYS.DBMS_PRVTSQIS, SYS.DBMS_RULEADM_INTERNAL, SYS.DBMS_RULE_ADM, SYS.DBMS_STREAMS_CONTROL_ADM
SYS | PACKAGE | DBMS_AWR_PROTECTED | | DBMS_AWR_REPORT_LAYOUT, DBMS_SWRF_REPORT_INTERNAL
SYS | PACKAGE | DBMS_EXPORT_EXTENSION_I | | PACKAGE SYS.DBMS_EXPORT_EXTENSION
SYS | PACKAGE | DBMS_ISCHED_CHAIN_CONDITION | DELETE_STEP_NAME_TABLE | PACKAGE DBMS_ISCHED
SYS | PACKAGE | DBMS_JSON0 | | PACKAGE XDB.DBMS_JSON
SYS | PACKAGE | DBMS_METADATA_UTIL | SET_MARKER | PACKAGE SYS.DBMS_METADATA_INT
SYS | PACKAGE | DBMS_MVIEW_STATS_INTERNAL | | DBMS_MVIEW_STATS
SYS | PACKAGE | DBMS_PCLXUTIL_INTERNAL | | PACKAGE SYS.DBMS_PCLXUTIL
SYS | PACKAGE | DBMS_PLUGTS | PLTS_NEWDATAFILE | PACKAGE SYS.KUPW$WORKER
SYS | PACKAGE | DBMS_RAT_MASK_INTERNAL | | SYS.DBMS_RAT_MASK
SYS | PACKAGE | DBMS_RECO_SCRIPT_INT | | EXTENDED_DATATYPE_SUPPORT, SYS.DBMS_RECOVERABLE_SCRIPT, SYS.DBMS_RECO_SCRIPT_INVOK, SYS.DBMS_STREAMS, SYS.DBMS_STREAMS_ADM, SYS.DBMS_STREAMS_ADM_IVK, SYS.DBMS_STREAMS_ADM_UTL, SYS.DBMS_STREAMS_ADM_UTL_INVOK, SYS.DBMS_STREAMS_MC, SYS.DBMS_STREAMS_MT, SYS.DBMS_STREAMS_RPC
SYS | PACKAGE | DBMS_RESULT_CACHE_INTERNAL | | DBMS_NETWORK_ACL_ADMIN, RC_INTERNAL_TEST
SYS | PACKAGE | DBMS_SNAPSHOT_KKXRCA | | PACKAGE DBMS_SNAPSHOT
SYS | PACKAGE | DBMS_SODA_UTIL | | PACKAGE SYS.DBMS_SODA, PACKAGE XDB.DBMS_SODA_ADMIN, PACKAGE XDB.DBMS_SODA_DML, PACKAGE XDB.DBMS_SODA_DOM
SYS | PACKAGE | DBMS_SQLTUNE_INTERNAL | EXEC_EMX_TUNING_TASK_CALLOUT | TYPE SYS.WRI$_REPT_SQLT
SYS | PACKAGE | DBMS_SQLTUNE_INTERNAL | I_PROCESS_SQL | PACKAGE SYS.DBMS_SQLDIAG, PACKAGE SYS.DBMS_SQLTCB_INTERNAL, PACKAGE SYS.DBMS_STATS_INTERNAL, PACKAGE SYS.DBMS_WORKLOAD_REPLAY_I, PACKAGE SYS.DBMS_XPLAN
SYS | PACKAGE | DBMS_SQLTUNE_INTERNAL | TEST_PROCESS_SQLSET | PACKAGE SYS.DBMS_STATS
SYS | PACKAGE | DBMS_SQLTUNE_UTIL0 | CHECK_DV_ACCESS | PACKAGE SYS.DBMS_SQLTUNE
SYS | PACKAGE | DBMS_SQLTUNE_UTIL1 | GET_SEQ_REMOTE | PACKAGE SYS.DBMS_SQLTUNE_INTERNAL
SYS | PACKAGE | DBMS_STATS_INTERNAL | CREATE_TEMP | PACKAGE DBMS_STATS
SYS | PACKAGE | DBMS_STATS_INTERNAL | POPULATE_TEMP_INSERT | PACKAGE DBMS_STATS
SYS | PACKAGE | DBMS_STATS_INTERNAL_AGG | | PACKAGE DBMS_STATS, PACKAGE DBMS_STATS_INTERNAL
SYS | PACKAGE | DBMS_STREAMS_ADM_UTL_INT | | SYS.DBMS_STREAMS_ADM_UTL, SYS.DBMS_STREAMS_MT
SYS | PACKAGE | DBMS_STREAMS_RPC | REMOVE_FILE_RC | REMOVE_FILE
SYS | PACKAGE | DBMS_STREAMS_TBS_INT | | DBMS_FILE_GROUP_UTL_INVOK, DBMS_STREAMS_ADM_UTL, DBMS_STREAMS_MT, DBMS_STREAMS_RPC, DBMS_STREAMS_TABLESPACE_ADM, DBMS_STREAMS_TBS_INT_INVOK
SYS | PACKAGE | DBMS_SUMVDM | | DBMS_DIMENSION, DBMS_SUMMARY
SYS | PACKAGE | DBMS_SYNC_REFRESH_INTERNAL | | DBMS_SYNC_REFRESH
SYS | PACKAGE | DBMS_TRANSFORM_INTERNAL | | PACKAGE SYS.DBMS_TRANSFORM, SYS.DBMS_AQADM, SYS.DBMS_TRANSFORM_EXIMP, SYS.DBMS_TRANSFORM_EXIMP_INTERNAL
SYS | PACKAGE | DBMS_TTS | CONVERTENCRYPTEDDATAFILECOPY | PACKAGE SYS.KUPW$WORKER
SYS | PACKAGE | DBMS_TTS | GET_AFN_DBID | PACKAGE SYS.KUPW$WORKER
SYS | PACKAGE | DBMS_TTS | GET_AFN_DBIDXENDIAN | PACKAGE SYS.KUPW$WORKER
SYS | PACKAGE | DBMS_TTS | PUT_PROTECTED_TSE_KEY | PACKAGE SYS.KUPW$WORKER
SYS | PACKAGE | DBMS_UMF_PROTECTED | | DBMS_ASH_INTERNAL, DBMS_AWR_REPORT_LAYOUT, DBMS_SWRF_REPORT_INTERNAL, DBMS_WORKLOAD_REPOSITORY
SYS | PACKAGE | DBMS_WRR_PROTECTED | | DBMS_RAT_MASK_INTERNAL, DBMS_WORKLOAD_CAPTURE, DBMS_WORKLOAD_REPLAY, DBMS_WORKLOAD_REPLAY_I
SYS | PACKAGE | DBMS_XDB_UTIL | | PACKAGE SYS.XDB_MIGRATESCHEMA, PACKAGE XDB.DBMS_CLOBUTIL, PACKAGE XDB.DBMS_RESCONFIG, PACKAGE XDB.DBMS_XDBZ, PACKAGE XDB.DBMS_XDBZ0, PACKAGE XDB.DBMS_XDB_ADMIN, PACKAGE XDB.DBMS_XEVENT, PACKAGE XDB.DBMS_XMLDOM
SYS | PACKAGE | DBMS_XDS_INT | | PACKAGE SYS.DBMS_XDS
SYS | PACKAGE | KUPC$QUEUE_INT | PREPARE_QUEUE_TABLE | PACKAGE SYS.KUPV$FT_INT
SYS | PACKAGE | KUPC$QUEUE_INT | SET_DEBUG | PACKAGE SYS.KUPP$PROC
SYS | PACKAGE | KUPC$QUE_INT | PREPARE_QUEUE_TABLE | PACKAGE SYS.KUPC$QUEUE_INT
SYS | PACKAGE | KUPC$QUE_INT | SET_DEBUG | PACKAGE SYS.KUPC$QUEUE_INT
SYS | PACKAGE | KUPD$DATA | SET_DEBUG | PACKAGE SYS.KUPP$PROC
SYS | PACKAGE | KUPD$DATA_INT | GET_OPT_PARAM | PACKAGE KUPD$DATA
SYS | PACKAGE | KUPF$FILE | ADD_DEVICE | PACKAGE SYS.KUPM$MCP
SYS | PACKAGE | KUPF$FILE | ADD_FILE | PACKAGE SYS.KUPM$MCP
SYS | PACKAGE | KUPF$FILE | ALLOCATE_DEVICE | PACKAGE SYS.KUPW$WORKER
SYS | PACKAGE | KUPF$FILE | CHECK_FATAL_ERROR | PACKAGE SYS.KUPD$DATA
SYS | PACKAGE | KUPF$FILE | CLOSE_CONTEXT | PACKAGE SYS.KUPW$WORKER
SYS | PACKAGE | KUPF$FILE | DELETE_UNUSED_FILE_REFS | PACKAGE SYS.KUPM$MCP
SYS | PACKAGE | KUPF$FILE | FILE_REQUEST | PACKAGE SYS.KUPM$MCP
SYS | PACKAGE | KUPF$FILE | FILE_REQUEST_NAK | PACKAGE SYS.KUPM$MCP
SYS | PACKAGE | KUPF$FILE | FLUSH_LOB | PACKAGE SYS.KUPW$WORKER
SYS | PACKAGE | KUPF$FILE | GET_BLKBUF_SIZES | PACKAGE SYS.KUPM$MCP
SYS | PACKAGE | KUPF$FILE | GET_DEFAULT_FILENAME | PACKAGE SYS.KUPM$MCP
SYS | PACKAGE | KUPF$FILE | GET_DUMPFILE_INFO | PACKAGE SYS.DBMS_DATAPUMP
SYS | PACKAGE | KUPF$FILE | GET_FILE_LIST | PACKAGE SYS.KUPM$MCP
SYS | PACKAGE | KUPF$FILE | GET_FORMATTED_TIME | PACKAGE SYS.DBMS_DATAPUMP, PACKAGE SYS.DBMS_DATAPUMP_INT, PACKAGE SYS.DBMS_DATAPUMP_UTL, PACKAGE SYS.DBMS_METADATA, PACKAGE SYS.DBMS_METADATA_INT, PACKAGE SYS.DBMS_METADATA_UTIL, PACKAGE SYS.KUPC$QUEUE_INT, PACKAGE SYS.KUPC$QUE_INT, PACKAGE SYS.KUPD$DATA, PACKAGE SYS.KUPD$DATA_INT, PACKAGE SYS.KUPM$MCP, PACKAGE SYS.KUPP$PROC, PACKAGE SYS.KUPU$UTILITIES, PACKAGE SYS.KUPU$UTILITIES_INT, PACKAGE SYS.KUPV$FT, PACKAGE SYS.KUPV$FT_INT, PACKAGE SYS.KUPW$WORKER
SYS | PACKAGE | KUPF$FILE | GET_FULL_FILENAME | PACKAGE SYS.DBMS_DATAPUMP, PACKAGE SYS.DBMS_DATAPUMP_INT, PACKAGE SYS.DBMS_DATAPUMP_UTL
SYS | PACKAGE | KUPF$FILE | GET_MAX_CSWIDTH | PACKAGE SYS.DBMS_DATAPUMP, PACKAGE SYS.DBMS_DATAPUMP_INT, PACKAGE SYS.DBMS_DATAPUMP_UTL, PACKAGE SYS.DBMS_METADATA, PACKAGE SYS.DBMS_METADATA_INT, PACKAGE SYS.DBMS_METADATA_UTIL, PACKAGE SYS.KUPC$QUEUE_INT, PACKAGE SYS.KUPC$QUE_INT, PACKAGE SYS.KUPD$DATA, PACKAGE SYS.KUPD$DATA_INT, PACKAGE SYS.KUPM$MCP, PACKAGE SYS.KUPP$PROC, PACKAGE SYS.KUPU$UTILITIES, PACKAGE SYS.KUPU$UTILITIES_INT, PACKAGE SYS.KUPV$FT, PACKAGE SYS.KUPV$FT_INT, PACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILEINITPACKAGE SYS.KUPM$MCP, PACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILEINIT_TDX_STATSPACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILEIS_DUMPFILE_SET_CONSISTENTPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILEJOB_MODESPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILELOCATE_MASTERPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILEMARK_FILES_AS_UNUSABLEPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILEMASTER_TABLE_UNLOAD_STARTEDPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILEOPEN_CONTEXTPACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILEREAD_LOBPACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILERELEASE_FILESPACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILERESET_EOFPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILESET_DEBUGPACKAGE SYS.KUPP$PROC
SYSPACKAGEKUPF$FILETERMPACKAGE SYS.KUPM$MCP, PACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILETRACEPACKAGE SYS.DBMS_DATAPUMP, PACKAGE SYS.DBMS_DATAPUMP_INT, PACKAGE SYS.DBMS_DATAPUMP_UTL, PACKAGE SYS.DBMS_METADATA, PACKAGE SYS.DBMS_METADATA_INT, PACKAGE SYS.DBMS_METADATA_UTIL, PACKAGE SYS.DBMS_PLUGTS, PACKAGE SYS.DBMS_TTS, PACKAGE SYS.KUPC$QUEUE_INT, PACKAGE SYS.KUPC$QUE_INT, PACKAGE SYS.KUPD$DATA, PACKAGE SYS.KUPD$DATA_INT, PACKAGE SYS.KUPM$MCP, PACKAGE SYS.KUPP$PROC, PACKAGE SYS.KUPU$UTILITIES, PACKAGE SYS.KUPU$UTILITIES_INT, PACKAGE SYS.KUPV$FT, PACKAGE SYS.KUPV$FT_INT, PACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILEVERIFY_DUMPFILE_SETPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILEWRITE_LOBPACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILE_INTCLOSE_CONTEXTPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTCREATE_DUMP_FILEPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTCREATE_KEY_INFOPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILE_INTDELETE_DUMP_FILEPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTENCODE_PWDPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILE_INTEXAMINE_DUMP_FILEPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTFLUSH_LOBPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTGET_BLKBUF_SIZESPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTGET_DEBUG_EVENTPACKAGE SYS.KUPF$FILE, PACKAGE SYS.UTL_XML
SYSPACKAGEKUPF$FILE_INTGET_DEFAULT_CREDENTIALPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILE_INTGET_DEFAULT_FILENAMEPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTGET_ENCODED_PWDPACKAGE SYS.KUPF$FILE, PACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILE_INTGET_FORMATTED_TIMEPACKAGE SYS.DBMS_DATAPUMP, PACKAGE SYS.DBMS_DATAPUMP_INT, PACKAGE SYS.DBMS_DATAPUMP_UTL, PACKAGE SYS.DBMS_METADATA, PACKAGE SYS.DBMS_METADATA_INT, PACKAGE SYS.DBMS_METADATA_UTIL, PACKAGE SYS.KUPC$QUEUE_INT, PACKAGE SYS.KUPC$QUE_INT, PACKAGE SYS.KUPD$DATA, PACKAGE SYS.KUPD$DATA_INT, PACKAGE SYS.KUPF$FILE, PACKAGE SYS.KUPM$MCP, PACKAGE SYS.KUPP$PROC, PACKAGE SYS.KUPU$UTILITIES, PACKAGE SYS.KUPU$UTILITIES_INT, PACKAGE SYS.KUPV$FT, PACKAGE SYS.KUPV$FT_INT, PACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILE_INTGET_FULL_FILENAMEPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTGET_MAX_CSWIDTHPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTGTOPPACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPF$FILE_INTINITPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTIS_DUMPFILE_A_RESTFILEPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILE_INTOPEN_CONTEXTPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTPARSE_FILENAMEPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTREAD_LOBPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTRELEASE_FILESPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTSET_DEBUGPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTSET_TRANS_PARAMSPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTTERMPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPF$FILE_INTVERIFY_KEY_INFOPACKAGE SYS.KUPF$FILE, PACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPF$FILE_INTWRITE_LOBPACKAGE SYS.KUPF$FILE
SYSPACKAGEKUPM$MCPCLOSE_JOBPACKAGE SYS.DBMS_DATAPUMP
SYSPACKAGEKUPM$MCPDISPATCHPACKAGE SYS.KUPC$QUEUE
SYSPACKAGEKUPM$MCPSET_DEBUGPACKAGE SYS.KUPP$PROC
SYSPACKAGEKUPP$PROCCREATE_MASTER_PROCESSPACKAGE SYS.KUPV$FT
SYSPACKAGEKUPP$PROCCREATE_WORKER_PROCESSESPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPU$UTILITIESGET_REMOTE_DBLINK_USERPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPU$UTILITIESREPLACE_XML_VALUESPACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPU$UTILITIES_INTCHECK_TBS_FOR_TDECOL_TABSPACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPU$UTILITIES_INTCREATE_DIRECTORYPACKAGE SYS.DBMS_DATAPUMP_UTL
SYSPACKAGEKUPU$UTILITIES_INTDEBUGPACKAGE SYS.KUPU$UTILITIES
SYSPACKAGEKUPU$UTILITIES_INTDIRECTORY_SCANPACKAGE SYS.DBMS_PLUGTS, PACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPU$UTILITIES_INTGET_DP_UPDATE_LOCKPACKAGE SYS.DBMS_METADATA, PACKAGE SYS.DBMS_METADATA_DIFF, PACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPU$UTILITIES_INTGET_PARAMETER_VALUEPACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPU$UTILITIES_INTGET_REMOTE_DBLINK_USERPACKAGE SYS.KUPU$UTILITIES
SYSPACKAGEKUPU$UTILITIES_INTINTALGCONVPACKAGE SYS.DBMS_METADATA_UTIL, PACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPU$UTILITIES_INTRELEASE_DP_UPDATE_LOCKPACKAGE SYS.DBMS_METADATA, PACKAGE SYS.DBMS_METADATA_DIFF, PACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPU$UTILITIES_INTSET_DEBUGPACKAGE SYS.KUPP$PROC
SYSPACKAGEKUPV$FTATTACH_JOBPACKAGE SYS.DBMS_DATAPUMP, PACKAGE SYS.KUPM$MCP, PACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPV$FTOPEN_JOBPACKAGE SYS.DBMS_DATAPUMP
SYSPACKAGEKUPV$FT_INTACTIVE_CLIENT_COUNTPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPV$FT_INTBUILD_MTABLE_INDEXESPACKAGE SYS.KUPM$MCP
SYSPACKAGEKUPV$FT_INTCREATE_NEW_JOBPACKAGE SYS.KUPV$FT
SYSPACKAGEKUPV$FT_INTDEBUG_ENABLEDPACKAGE SYS.KUPV$FT
SYSPACKAGEKUPV$FT_INTGET_DEBUG_INFOPACKAGE SYS.KUPV$FT
SYSPACKAGEKUPV$FT_INTON_BEHALFPACKAGE SYS.KUPV$FT
SYSPACKAGEKUPV$FT_INTSET_DEBUGPACKAGE SYS.KUPP$PROC
SYSPACKAGEKUPV$FT_INTSET_EVENTPACKAGE SYS.KUPC$QUE_INT, PACKAGE SYS.KUPD$DATA, PACKAGE SYS.KUPP$PROC, PACKAGE SYS.KUPW$WORKER
SYSPACKAGEKUPW$WORKERSET_DEBUGPACKAGE SYS.KUPP$PROC
SYSPACKAGEOUTLN_PKG_INTERNALPACKAGE SYS.OUTLN_PKG
SYSPACKAGEPRVT_ACCESS_ADVISORSETUP_USERPACKAGE PRVT_ADVISOR
SYSPACKAGEUTL_XMLPARSEEXPRPACKAGE SYS.DBMS_METADATA
SYSPACKAGEUTL_XMLPARSEQUERYPACKAGE SYS.DBMS_METADATA
SYSPACKAGEWWV_DBMS_SQL_APEX_050100APEX_050100.WWV_FLOW_DYNAMIC_EXEC, APEX_050100.WWV_FLOW_SESSION_RAS, APEX_050100.WWV_FLOW_UPGRADE
SYSPACKAGEXS_DATA_SECURITY_UTIL_INTPACKAGE XS_DATA_SECURITY_UTIL
SYSPROCEDUREEXECASUSERPACKAGE GGSYS.GGSHARDING, PACKAGE GSMADMIN_INTERNAL.DBMS_GSM_CLOUDADMIN, PACKAGE GSMADMIN_INTERNAL.DBMS_GSM_COMMON, PACKAGE GSMADMIN_INTERNAL.DBMS_GSM_DBADMIN, PACKAGE GSMADMIN_INTERNAL.DBMS_GSM_POOLADMIN, PACKAGE GSMADMIN_INTERNAL.DBMS_GSM_UTILITY, PROCEDURE GSMADMIN_INTERNAL.EXECUTEDDL
WMSYSPACKAGELTADMPACKAGE OWM_IEXP_PKG, PACKAGE WMSYS.LT, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTPRIV, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.LT_CTX_PKG, PACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_DML_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.OWM_MP_PKG, PACKAGE WMSYS.UD_TRIGS, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTAQPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM
WMSYSPACKAGELTDDLPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTDTRGPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.UD_TRIGS, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTPRIVPACKAGE WMSYS.LT
WMSYSPACKAGELTRICGETVARIABLE_CPACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELTRICSETVARIABLEPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM
WMSYSPACKAGELTRICPACKAGE OWM_MP_PKG, PACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_IEXP_PKG, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.UD_TRIGS, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILADDUSERDEFINEDHINTPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILADDWCPPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILALLOCATE_UNIQUEPACKAGE WMSYS.LTDDL, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILALLOWROWLEVELLOCKINGPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILCHECKADDTOPOGEOLAYERERRORSPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILCHECKDELTOPOGEOLAYERERRORSPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILCHECKDOMAININDEXPRIVSPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILCLEANUPBDDLPACKAGE WMSYS.OWM_DDL_PKG
WMSYSPACKAGELTUTILCLEANUPCDDLPACKAGE WMSYS.OWM_DDL_PKG
WMSYSPACKAGELTUTILCLEANUPDVPACKAGE WMSYS.LTDDL
WMSYSPACKAGELTUTILCLEANUPEVPACKAGE WMSYS.LTDDL
WMSYSPACKAGELTUTILCLEANUPMETADATAPACKAGE WMSYS.LTDDL
WMSYSPACKAGELTUTILCLEANUPMETADATABYUSERPACKAGE WMSYS.LTADM, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELTUTILCLEANUPSTALEMETADATAPACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELTUTILCREATEINLISTFROMQUERYPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILCREATEPKWHERECLAUSEPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTRIC
WMSYSPACKAGELTUTILDELETEFULLROLLBACKMARKERPACKAGE WMSYS.LT_CTX_PKG
WMSYSPACKAGELTUTILDELETEUNDOCODEPACKAGE WMSYS.LTDDL, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_DDL_PKG
WMSYSPACKAGELTUTILDELETEUNDOCODECHECKPOINTSPACKAGE WMSYS.LT_CTX_PKG
WMSYSPACKAGELTUTILDELETEUNDOCODERANGEPACKAGE WMSYS.LTDDL
WMSYSPACKAGELTUTILDISALLOWIFWITHVTPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILEXECEVUNDOPACKAGE WMSYS.LTDDL
WMSYSPACKAGELTUTILEXECLOGPACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILEXECUTESQLLOGPACKAGE WMSYS.LT, PACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_MIG_PKG
WMSYSPACKAGELTUTILEXISTSBIRPACKAGE WMSYS.LTRIC
WMSYSPACKAGELTUTILEXISTSBURPACKAGE WMSYS.LTRIC
WMSYSPACKAGELTUTILEXISTSCONSTRAINTPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILEXISTSFULLROLLBACKMARKERPACKAGE WMSYS.LTDDL
WMSYSPACKAGELTUTILEXISTSTOPOLOGYPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILFIXTOPOLOGYIMPORTPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILFIXVTAB_COMPRESSPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILFIXVTAB_REFRESHPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILFIXVTAB_ROLLBACKPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILGENFIXCRNONSEQNFRESHINSPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILGENWMCOLSUPDATESTMNTPACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETBASETABLENAMEPACKAGE WMSYS.LTADM, PACKAGE WMSYS.OWM_IEXP_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETBATCHWHERECLAUSESPACKAGE WMSYS.LTADM, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELTUTILGETCOLINFOPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.OWM_IEXP_PKG
WMSYSPACKAGELTUTILGETCOLLISTPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILGETCOLSTRPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_IEXP_PKG, PACKAGE WMSYS.UD_TRIGS, PACKAGE WMSYS.WM_DDL_UTIL, WMSYS.OWM_MIG_PKG
WMSYSPACKAGELTUTILGETCOLUMNPLUSEXPRESSIONPACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETCRSTATUSPACKAGE WMSYS.LT, PACKAGE WMSYS.OWM_BULK_LOAD_PKG
WMSYSPACKAGELTUTILGETCURRENTLOCKINGMODEPACKAGE WMSYS.LT_CTX_PKG
WMSYSPACKAGELTUTILGETCURVERPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_IEXP_PKG
WMSYSPACKAGELTUTILGETDISTINCTOBJECTPACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETDISVERPACKAGE WMSYS.LT, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETHISTOPTIONPACKAGE OWM_IEXP_PKG, PACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETINDEXTABLEPACKAGE WMSYS.LT, PACKAGE WMSYS.LTDTRG
WMSYSPACKAGELTUTILGETINDEXTABLESPACEPACKAGE WMSYS.LTDDL, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETNESTEDCOLUMNVIEWPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETNESTEDTABLECOLSTRPACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETNESTEDTABLEMETADATACOLUMNSPACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETNESTEDTABLETYPEPACKAGE WMSYS.OWM_DDL_PKG
WMSYSPACKAGELTUTILGETNEXTVERSIONPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILGETNOTNULLCONSTRAINTPACKAGE WMSYS.OWM_DDL_PKG
WMSYSPACKAGELTUTILGETNTPKEYCOLSPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_MP_PKG
WMSYSPACKAGELTUTILGETPKEYINFOPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILGETPKEYINFO_VTPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILGETPKINDEXINFOPACKAGE WMSYS.LTDDL, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETRLSWHERECLAUSEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILGETSEQUENCEOPTIONSPACKAGE WMSYS.OWM_DDL_PKG
WMSYSPACKAGELTUTILGETSIDPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LT_CTX_PKG
WMSYSPACKAGELTUTILGETSNOPACKAGE WMSYS.LT_CTX_PKG
WMSYSPACKAGELTUTILGETSPACEUSAGEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILGETTABLETABLESPACEPACKAGE WMSYS.LTDDL, PACKAGE WMSYS.OWM_DDL_PKG, WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETTOPOFEATURETABINFOPACKAGE WMSYS.LTDTRG
WMSYSPACKAGELTUTILGETTOPOINFOPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILGETTRIGGERSPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDTRG
WMSYSPACKAGELTUTILGETUDHINTPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETUNDOCODECLOBPACKAGE WMSYS.OWM_BULK_LOAD_PKG
WMSYSPACKAGELTUTILGETVALIDTIMEOPTIONPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.OWM_MP_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETVARIABLE_RPACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETVERINDEXNAMEPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_IEXP_PKG, PACKAGE WMSYS.OWM_MP_PKG
WMSYSPACKAGELTUTILGETVTIDPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_IEXP_PKG, PACKAGE WMSYS.WM_DDL_UTIL, TRIGGER WMSYS.WM$BCT_I_TRIG, TRIGGER WMSYS.WM$CC_I_TRIG, TRIGGER WMSYS.WM$CP_D_TRIG, TRIGGER WMSYS.WM$CP_I_TRIG, TRIGGER WMSYS.WM$CP_U_TRIG, TRIGGER WMSYS.WM$CT_I_TRIG, TRIGGER WMSYS.WM$CT_U_TRIG, TRIGGER WMSYS.WM$HT_I_TRIG, TRIGGER WMSYS.WM$LI_I_TRIG, TRIGGER WMSYS.WM$MT_I_TRIG, TRIGGER WMSYS.WM$NCT_I_TRIG, TRIGGER WMSYS.WM$RLT_I_TRIG, TRIGGER WMSYS.WM$RTT_I_TRIG, TRIGGER WMSYS.WM$RT_D_TRIG, TRIGGER WMSYS.WM$RT_I_TRIG, TRIGGER WMSYS.WM$UDP_I_TRIG, TRIGGER WMSYS.WM$UD_U_TRIG, TRIGGER WMSYS.WM$UI_I_TRIG, TRIGGER WMSYS.WM$VET_I_TRIG, TRIGGER WMSYS.WM$VET_U_TRIG
WMSYSPACKAGELTUTILGETWHERECLAUSESTRPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.OWM_MP_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILGETWORKSPACELOCKIDPACKAGE WMSYS.LTADM, TRIGGER WMSYS.WM$BCT_I_TRIG, TRIGGER WMSYS.WM$LI_I_TRIG, TRIGGER WMSYS.WM$MGWT_I_TRIG, TRIGGER WMSYS.WM$MPWT_I_TRIG, TRIGGER WMSYS.WM$MT_I_TRIG, TRIGGER WMSYS.WM$MW_I_TRIG, TRIGGER WMSYS.WM$NT_I_TRIG, TRIGGER WMSYS.WM$RWT_I_TRIG, TRIGGER WMSYS.WM$VHT_I_TRIG, TRIGGER WMSYS.WM$VT_I_TRIG, TRIGGER WMSYS.WM$WPT_D_TRIG, TRIGGER WMSYS.WM$WPT_I_TRIG, TRIGGER WMSYS.WM$WPT_U_TRIG, TRIGGER WMSYS.WM$WST_I_TRIG, TRIGGER WMSYS.WM$WT_I_TRIG, WMSYS.LT_CTX_PKG
WMSYSPACKAGELTUTILGETWORKSPACELOCKMODEPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILGET_EXPANDED_NEXTVERS_NPPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILGRANTOLSPRIVSPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILHASCRCHILDPACKAGE WMSYS.LT, PACKAGE WMSYS.LTRIC
WMSYSPACKAGELTUTILHASDEFERREDCHILDPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILHASFEATURETABLEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILHASNESTEDTABLECOLUMNPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILHASOLSPOLICYPACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILHASRICCASCADINGCONSTRAINTPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILHASRICSETNULLCONSTRAINTPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILHASVIRTUALCOLUMNPACKAGE WMSYS.LTDTRG
WMSYSPACKAGELTUTILHASWOOVERWRITEOPTIONPACKAGE WMSYS.LT, PACKAGE WMSYS.LTRIC
WMSYSPACKAGELTUTILHISTWITHDATETYPEPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDTRG
WMSYSPACKAGELTUTILHISTWITHDATETYPEEVPACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILINSERTFULLROLLBACKMARKERPACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LT_CTX_PKG
WMSYSPACKAGELTUTILINVERSIONEDSTATEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILISIMPLICITSPPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILISLEAFSTATEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILISMODIFIEDPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILISMODIFIEDINSUBTREEPACKAGE WMSYS.LT, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELTUTILISOBJECTTABLEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILISSPATIALINSTALLEDPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_MIG_PKG
WMSYSPACKAGELTUTILISSPLITINSUBTREEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILISTOPOFEATURETABLEPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDTRG
WMSYSPACKAGELTUTILISTOPOLOGYINDEXTABLEPACKAGE WMSYS.LT, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LT_EXPORT_PKG
WMSYSPACKAGELTUTILISTOPOLOGYRELATIONTABLEPACKAGE WMSYS.LTDTRG
WMSYSPACKAGELTUTILISVERSIONEDTABLEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILISVERSIONENABLEDTOPOLOGYPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILISWORKSPACEOWNERPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILMOVEWMMETADATAPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILNEEDTOEXECUTETRIGGERSPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILNUMTRIGGERSTOEXECUTEPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILPARSESTRINGLISTPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILPOPULATEROWIDRANGESPACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILPREFIXSTRPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILQB_BLOCK_REPLACEPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILREMOVEUSERDEFINEDHINTPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILRENAMESAVEPOINTPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILRENAMEWORKSPACEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILREQUIRESTRIGGERSONTOPVIEWPACKAGE WMSYS.LTDTRG
WMSYSPACKAGELTUTILRESETALLSEQUENCESPACKAGE WMSYS.LT_EXPORT_PKG
WMSYSPACKAGELTUTILRESOLVESYNONYMPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTRIC
WMSYSPACKAGELTUTILRESTARTSEQUENCEPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILSEPARATECLOBINTO2PARTSPACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILSETVARIABLEPACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELTUTILTOPOTABLECHECKPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILTO_CLOB_PACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM
WMSYSPACKAGELTUTILUPDATESDOTOPOMETADATADVPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILUPDATESDOTOPOMETADATAEVPACKAGE WMSYS.LT
WMSYSPACKAGELTUTILWM$GETDBCOMPATIBLESTRPACKAGE WMSYS.OWM_IEXP_PKG
WMSYSPACKAGELTUTILWRITETOLOGPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_IEXP_PKG, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.OWM_MP_PKG, PACKAGE WMSYS.UD_TRIGS, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELT_CTX_PKGALLOWDDLOPERATIONPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELT_CTX_PKGCHVLTLPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELT_CTX_PKGGETLTTABLENAMEWMSYS.LTDTRG
WMSYSPACKAGELT_CTX_PKGGETMULTIWORKSPACESWMSYS.LT
WMSYSPACKAGELT_CTX_PKGGETPURGEOPTIONPACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGELT_CTX_PKGGETSESSIONATTRIBUTESPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LT_EXPORT_PKG
WMSYSPACKAGELT_CTX_PKGGETVARIABLE_BPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELT_CTX_PKGGETVARIABLE_NPACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELT_CTX_PKGGETVARIABLE_VPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTAQ, PACKAGE WMSYS.LTUTIL
WMSYSPACKAGELT_CTX_PKGGETVTTABLENAMEWMSYS.LTDTRG
WMSYSPACKAGELT_CTX_PKGSETACTIVETIMEFORDMLWMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETCALLSTACKASINVALIDPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_MIG_PKG
WMSYSPACKAGELT_CTX_PKGSETCALLSTACKASVALIDPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_MIG_PKG
WMSYSPACKAGELT_CTX_PKGSETCOMMITVARSWMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETCOMPRESSWORKSPACEPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETCONFLICTSTATEPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETCOPYVARSPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETDIFFVERSPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETFLIPVERSIONONREFRESHPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETFREEZESTATUSPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM
WMSYSPACKAGELT_CTX_PKGSETIMPORTVARSPACKAGE WMSYS.LT, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELT_CTX_PKGSETINSTANTPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.OWM_IEXP_PKG
WMSYSPACKAGELT_CTX_PKGSETLOCKMODEPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETMPROOTPACKAGE WMSYS.OWM_MP_PKG
WMSYSPACKAGELT_CTX_PKGSETMPWORKSPACEPACKAGE WMSYS.OWM_MP_PKG
WMSYSPACKAGELT_CTX_PKGSETMULTIWORKSPACESPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETNEWMPVARSPACKAGE WMSYS.OWM_MP_PKG
WMSYSPACKAGELT_CTX_PKGSETNEWROOTANCVERSIONPACKAGE WMSYS.OWM_MP_PKG
WMSYSPACKAGELT_CTX_PKGSETOPCONTEXTPACKAGE WMSYS.LTUTIL
WMSYSPACKAGELT_CTX_PKGSETPOSTVARSPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETROWLOCKSTATUSPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETSAVEPOINTPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM
WMSYSPACKAGELT_CTX_PKGSETSTATEATTRIBUTESPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETTABMRGWOREMOVEEVENTPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETTABMRGWREMOVEEVENTPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETTABREFRESHEVENTPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETTRIGGEREVENTPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETTSINSTANTPACKAGE WMSYS.LTADM
WMSYSPACKAGELT_CTX_PKGSETUSERPACKAGE WMSYS.LTPRIV
WMSYSPACKAGELT_CTX_PKGSETVALIDTIMEPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETVALIDTIMEFILTEROFFPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETVALIDTIMEFILTERONPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETVARIABLEPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.OWM_IEXP_PKG
WMSYSPACKAGELT_CTX_PKGSETVERAFTINSTANTPACKAGE WMSYS.LTADM
WMSYSPACKAGELT_CTX_PKGSETVERBEFINSTANTPACKAGE WMSYS.LTADM
WMSYSPACKAGELT_CTX_PKGSETVERSIONPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.OWM_IEXP_PKG
WMSYSPACKAGELT_CTX_PKGSETVERSIONANDSTATEPACKAGE WMSYS.OWM_IEXP_PKG
WMSYSPACKAGELT_CTX_PKGSETWRITERSTATEPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETWSPCMRGWOREMOVEEVENTPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGSETWSPCMRGWREMOVEEVENTPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGTO_TIMESTAMP_TZ_PACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGUNSETCOMMITVARSPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGUNSETCOPYVARSPACKAGE WMSYS.LT
WMSYSPACKAGELT_CTX_PKGUNSETIMPORTVARSPACKAGE WMSYS.LT, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGELT_CTX_PKGUNSETPOSTVARSPACKAGE WMSYS.LT
WMSYSPACKAGELT_EXPORT_PKGEXPORT_SCHEMASPACKAGE WMSYS.LT
WMSYSPACKAGEOWM_ASSERT_PKGPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_CPKG_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_IEXP_PKG, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.OWM_MP_PKG, PACKAGE WMSYS.OWM_VSCRIPT_PKG, PACKAGE WMSYS.UD_TRIGS, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGEOWM_BULK_LOAD_PKGPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGEOWM_CPKG_PKGPACKAGE WMSYS.LTDDL, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_MIG_PKG
WMSYSPACKAGEOWM_DDL_PKGPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_BULK_LOAD_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_MIG_PKG, PACKAGE WMSYS.OWM_MP_PKG, PACKAGE WMSYS.UD_TRIGS, PACKAGE WMSYS.WM_DDL_UTIL
WMSYSPACKAGEOWM_DML_PKGTRIGGER WMSYS.WM$CP_I_TRIG, TRIGGER WMSYS.WM$CP_U_TRIG, TRIGGER WMSYS.WM$CT_I_TRIG, TRIGGER WMSYS.WM$CT_U_TRIG, TRIGGER WMSYS.WM$EI_I_TRIG, TRIGGER WMSYS.WM$EI_U_TRIG, TRIGGER WMSYS.WM$EV_I_TRIG, TRIGGER WMSYS.WM$HT_I_TRIG, TRIGGER WMSYS.WM$MGWT_I_TRIG, TRIGGER WMSYS.WM$MPWT_I_TRIG, TRIGGER WMSYS.WM$RT_I_TRIG, TRIGGER WMSYS.WM$RWT_I_TRIG, TRIGGER WMSYS.WM$SAV_I_TRIG, TRIGGER WMSYS.WM$UDP_I_TRIG, TRIGGER WMSYS.WM$UD_U_TRIG, TRIGGER WMSYS.WM$UI_I_TRIG, TRIGGER WMSYS.WM$VET_I_TRIG, TRIGGER WMSYS.WM$VET_U_TRIG, TRIGGER WMSYS.WM$VTH_I_TRIG, TRIGGER WMSYS.WM$VTH_U_TRIG, TRIGGER WMSYS.WM$WPT_D_TRIG, TRIGGER WMSYS.WM$WPT_I_TRIG, TRIGGER WMSYS.WM$WPT_U_TRIG, TRIGGER WMSYS.WM$WST_I_TRIG, TRIGGER WMSYS.WM$WT_I_TRIG, TRIGGER WMSYS.WM$WT_U_TRIG
WMSYSPACKAGEOWM_IEXP_PKGPACKAGE WMSYS.LT, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGEOWM_MIG_PKGRECOVERMIGRATINGTABLEPACKAGE WMSYS.LT
WMSYSPACKAGEOWM_MP_PKGPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTPRIV, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS
WMSYSPACKAGEOWM_VSCRIPT_PKGSTARTQUEUEPACKAGE WMSYS.LT_EXPORT_PKG
WMSYSPACKAGEOWM_VSCRIPT_PKGSTOPQUEUEPACKAGE WMSYS.LT_EXPORT_PKG
WMSYSPACKAGEOWM_VSCRIPT_PKGWM$CONVERTVERSIONSTRPACKAGE WMSYS.LTUTIL
WMSYSPACKAGEOWM_VSCRIPT_PKGWM$GETDBPARAMETERPACKAGE WMSYS.LT_CTX_PKG
WMSYSPACKAGEUD_TRIGSPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTDTRG, PACKAGE WMSYS.LTRIC, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.LT_EXPORT_PKG, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_DML_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_MIG_PKG
WMSYSPACKAGEWM_DDL_UTILPACKAGE WMSYS.LT, PACKAGE WMSYS.LTADM, PACKAGE WMSYS.LTDDL, PACKAGE WMSYS.LTUTIL, PACKAGE WMSYS.OWM_DDL_PKG, PACKAGE WMSYS.OWM_DYNSQL_ACCESS, PACKAGE WMSYS.OWM_IEXP_PKG, PACKAGE WMSYS.OWM_MIG_PKG
XDBPACKAGEDBMS_CLOBUTILPACKAGE XDB.DBMS_XMLDOM, PACKAGE XDB.DBMS_XMLPARSER, PACKAGE XDB.DBMS_XSLPROCESSOR
XDBPACKAGEDBMS_JSON_INTPACKAGE XDB.DBMS_JSON
XDBPACKAGEDBMS_SODA_DMLPACKAGE XDB.DBMS_SODA_ADMIN

Updated on 2018-05-01: amended query to produce no subprogram name for standalone procedures and functions (was SYS in two cases); included complete query (procedure_accessors.sql); updated result table.

Where is the Symbian OS stuff?


I released the last version of StartUp in December 1999 and the last version of Crypto in March 2002. That’s quite a long time ago. If you need one of these programs (source, or ARM or WINS binary), just drop me an email via the contact form.

Using UTL_XML.PARSEQUERY for SQL Dependency Analysis


Last week I gave a talk at Oracle’s OpenWorld 2011 titled Modern PL/SQL Code Checking and Dependency Analysis.

The problem I described in chapter 4 was to find all view columns using the column UNIT_COST of the table COSTS in the SH schema. Other usages of this column (e.g. in where or order by clauses) have to be ignored. To solve this problem within Oracle Database Server 11.2, a parser is necessary (at least I’m not aware of another solution). Even a DBA_DEPENDENCY_COLUMNS view, as described in Rob van Wijk’s post, is not enough to solve this problem.

However, in this particular case no custom or 3rd party parser is necessary. Oracle provides a procedure named PARSEQUERY in the PL/SQL package UTL_XML which is in fact well suited to solve this problem, as I will show later. First, I’d like to explain which columns should be found by a dependency analysis procedure, based on some sample views.
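Before turning to the sample views, here is a minimal sketch of how the parser is invoked (it assumes execute privileges on SYS.UTL_XML, which are granted in the setup script further below):

SET SERVEROUTPUT ON
DECLARE
   v_result CLOB;
BEGIN
   dbms_lob.createtemporary(v_result, TRUE);
   -- parse the query in the context of the current schema;
   -- the XML representation of the parse result is written into v_result
   sys.utl_xml.parsequery(USER, 'SELECT unit_cost FROM costs', v_result);
   -- show the first 500 characters of the XML document
   dbms_output.put_line(dbms_lob.substr(v_result, 500, 1));
   dbms_lob.freetemporary(v_result);
END;
/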

Oracle’s sales history demo schema SH provides a view named PROFITS, which is defined as follows:

CREATE OR REPLACE VIEW PROFITS AS
SELECT s.channel_id,
       s.cust_id,
       s.prod_id,
       s.promo_id,
       s.time_id,
       c.unit_cost,
       c.unit_price,
       s.amount_sold,
       s.quantity_sold,
       c.unit_cost * s.quantity_sold TOTAL_COST
  FROM costs c, sales s
 WHERE c.prod_id = s.prod_id
   AND c.time_id = s.time_id
   AND c.channel_id = s.channel_id
   AND c.promo_id = s.promo_id;

The columns using COSTS.UNIT_COST in this view are UNIT_COST and TOTAL_COST.

The following view uses the column TOTAL_COST in GROSS_MARGIN (line 14) and GROSS_MARGIN_PERCENT (lines 14 and 15). The usage is not evident at first glance, since it is based on the column GROSS_MARGIN (line 4) of the named query GM and the column COST (line 8) in GM’s subquery. This kind of dependency needs to be identified.

CREATE OR REPLACE VIEW GROSS_MARGINS AS
WITH 
   gm AS (
      SELECT time_id, revenue, revenue - cost AS gross_margin
        FROM (
           SELECT time_id,
                  unit_price * quantity_sold AS revenue,
                  total_cost AS cost
             FROM profits
        )
   )
SELECT t.fiscal_year,
       SUM(revenue) AS revenue,
       SUM(gross_margin) AS gross_margin,
       round(100 * SUM(gross_margin) / SUM(revenue), 2) 
          AS gross_margin_percent
  FROM gm
 INNER JOIN times t ON t.time_id = gm.time_id
 GROUP BY t.fiscal_year
 ORDER BY t.fiscal_year;

The next view does not present the data of COSTS.UNIT_COST as a column, even though the view depends on the table COSTS:

CREATE OR REPLACE VIEW REVENUES AS
SELECT fiscal_year, revenue
  FROM gross_margins;

The last view uses COSTS.UNIT_COST, but not as part of a column expression, and therefore must not be reported. The usage in the order by clause is considered safe.

CREATE OR REPLACE VIEW SALES_ORDERED_BY_GM AS
SELECT channel_id,
       cust_id,
       prod_id,
       promo_id,
       time_id,
       amount_sold,
       quantity_sold
  FROM profits
 ORDER BY (unit_price - unit_cost) DESC;

So, the following result of the dependency analysis is expected:

SCHEMA  VIEW           COLUMN
SH      PROFITS        UNIT_COST
SH      PROFITS        TOTAL_COST
SH      GROSS_MARGINS  GROSS_MARGIN
SH      GROSS_MARGINS  GROSS_MARGIN_PERCENT

Exactly this result is produced by the following query:

SELECT *
  FROM TABLE(coldep_pkg.get_dep('sh', 'costs', 'unit_cost'));

Now I just list all the code snippets I’ve written to produce this result. Please note that this is just proof-of-concept code to show how UTL_XML.PARSEQUERY could be used for SQL dependency analysis in conjunction with Oracle dictionary views. It is not a complete implementation. For example, wildcards (*) are not handled, which may lead to missing dependencies. Additionally, table/view sources are not checked, which may lead to false positives (in case a column is used in multiple view/table sources). Please feel free to complete the code. However, an update is highly appreciated ;-)
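To illustrate the wildcard limitation: a view like the following (a hypothetical example, not part of the SH schema) references COSTS.UNIT_COST only through the wildcard, so the code below would miss this dependency.

CREATE OR REPLACE VIEW all_profits AS
SELECT * FROM profits;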

GRANT EXECUTE ON SYS.UTL_XML TO SH;

CREATE OR REPLACE TYPE "SH"."COLDEP_TYP" AS 
OBJECT (schema_name VARCHAR2(30), 
        view_name varchar2(30), 
        column_name VARCHAR2(30))
/
CREATE OR REPLACE TYPE "SH"."COLDEP_L" IS TABLE OF coldep_typ
/

CREATE OR REPLACE PACKAGE "SH"."COLDEP_PKG" IS
   FUNCTION parse_query(p_query IN VARCHAR2) RETURN xmltype;

   FUNCTION get_dep(p_schema_name IN VARCHAR2,
                    p_object_name IN VARCHAR2,
                    p_column_name IN VARCHAR2) RETURN coldep_l
      PIPELINED;

   FUNCTION process_view(p_schema_name IN VARCHAR2,
                         p_view_name   IN VARCHAR2,
                         p_column_name IN VARCHAR2,
                         p_query       IN CLOB) RETURN coldep_l;
END coldep_pkg;
/
CREATE OR REPLACE PACKAGE BODY "SH"."COLDEP_PKG" IS
   FUNCTION parse_query(p_query IN VARCHAR2) RETURN xmltype IS
      v_clob CLOB;
      v_xml  xmltype;
   BEGIN
      dbms_lob.createtemporary(v_clob, TRUE);
      -- parse query and get XML as CLOB
      sys.utl_xml.parsequery(USER, p_query, v_clob);
      -- create XMLTYPE from CLOB 
      v_xml := xmltype.createxml(v_clob);
      dbms_lob.freetemporary(v_clob);
      RETURN v_xml;
   END parse_query;

   FUNCTION get_dep(p_schema_name IN VARCHAR2,
                    p_object_name IN VARCHAR2,
                    p_column_name IN VARCHAR2) RETURN coldep_l
      PIPELINED IS
   BEGIN
      -- query dictionary dependencies
      FOR v_dep IN (SELECT d.owner AS schema_name,
                           d.name  AS view_name,
                           v.text  AS query_text
                      FROM all_dependencies d
                     INNER JOIN all_views v
                        ON v.owner = d.owner
                           AND v.view_name = d.name
                     WHERE d.referenced_owner = upper(p_schema_name)
                           AND d.referenced_name = upper(p_object_name)
                           AND d.type = 'VIEW')
      LOOP
         -- process every fetched view
         FOR v_views IN (
            SELECT VALUE(pv) coldep
              FROM TABLE(process_view(v_dep.schema_name,
                                      v_dep.view_name,
                                      p_column_name,
                                      v_dep.query_text)) pv)
         LOOP
            -- return column usages in v_dep.view_name
            PIPE ROW(v_views.coldep);
            -- get column usages of views using v_dep.view_name (recursive calls)
            FOR v_recursive IN (
               SELECT VALUE(dep) coldep
                 FROM TABLE(get_dep(v_views.coldep.schema_name,
                                    v_views.coldep.view_name,
                                    v_views.coldep.column_name)) dep)
            LOOP
               -- return column usages of recursive call
               PIPE ROW(v_recursive.coldep);
            END LOOP;
         END LOOP;
      END LOOP;
      RETURN;
   END get_dep;

   FUNCTION process_view(p_schema_name IN VARCHAR2,
                         p_view_name   IN VARCHAR2,
                         p_column_name IN VARCHAR2,
                         p_query       IN CLOB) RETURN coldep_l IS
      v_search_l       coldep_l := coldep_l(coldep_typ(NULL,
                                                       NULL,
                                                       p_column_name));
      v_xml            xmltype;
      v_previous_count INTEGER := 0;
      v_coldep_l       coldep_l := coldep_l();
   BEGIN
      -- parse view query
      v_xml := parse_query(p_query);
      -- get inline dependencies from secondary select lists
      -- TODO: handle table/view source and wildcard properly 
      WHILE v_previous_count < v_search_l.count
      LOOP
         v_previous_count := v_search_l.count;
         FOR v_secondary IN (
            SELECT nvl(x.alias_name, x.column_reference) AS alias_name
              FROM (SELECT t.select_list_item,
                           t.alias_name,
                           extractvalue(VALUE(c), 'COLUMN') AS column_reference
                      FROM xmltable('//SELECT_LIST_ITEM[ancestor::FROM or ancestor::WITH]'
                              passing v_xml 
                              columns select_list_item xmltype path '//SELECT_LIST_ITEM',
                                      alias_name VARCHAR2(30) path '//COLUMN_ALIAS') t,
                           TABLE(xmlsequence(extract(select_list_item, '//COLUMN'))) c) x
             WHERE upper(x.column_reference) IN (SELECT upper(column_name) 
                                                   FROM TABLE(v_search_l))
               AND upper(alias_name) NOT IN (SELECT upper(column_name)
                                              FROM TABLE(v_search_l)))
         LOOP
            -- add internal column usage
            v_search_l.extend;
            v_search_l(v_search_l.count) := coldep_typ(NULL,
                                                       NULL,
                                                       v_secondary.alias_name);
         END LOOP;
      END LOOP;
      -- analyze primary select list
      -- TODO: handle table/view source and wildcard properly 
      FOR v_primary IN (
         SELECT x.column_id, atc.column_name
           FROM (SELECT t.select_list_item,
                        t.column_id,
                        extractvalue(VALUE(c), 'COLUMN') AS column_reference
                   FROM xmltable('//SELECT_LIST_ITEM[not (ancestor::FROM) and not (ancestor::WITH)]'
                           passing v_xml 
                           columns column_id FOR ordinality,
                                   select_list_item xmltype path '//SELECT_LIST_ITEM') t,
                        TABLE(xmlsequence(extract(select_list_item, '//COLUMN'))) c) x
                  INNER JOIN all_tab_columns atc
                     ON atc.owner = p_schema_name
                    AND atc.table_name = p_view_name
                    AND atc.column_id = x.column_id
                  WHERE upper(x.column_reference) IN (SELECT upper(column_name)
                                                        FROM TABLE(v_search_l))
                  ORDER BY x.column_id)
      LOOP
         -- add external column usage
         v_coldep_l.extend;
         v_coldep_l(v_coldep_l.count) := coldep_typ(p_schema_name,
                                                    p_view_name,
                                                    v_primary.column_name);
      END LOOP;
      -- return column dependencies   
      RETURN v_coldep_l;
   END process_view;
END coldep_pkg;
/

Below you find the XML parser output of the query defined in the view GROSS_MARGINS. The model becomes quite clear, even though I could not find a schema description.

<QUERY>
  <WITH>
    <WITH_ITEM>
      <QUERY_ALIAS>GM</QUERY_ALIAS>
      <QUERY>
        <SELECT>
          <SELECT_LIST>
            <SELECT_LIST_ITEM>
              <COLUMN_REF>
                <COLUMN>TIME_ID</COLUMN>
              </COLUMN_REF>
            </SELECT_LIST_ITEM>
            <SELECT_LIST_ITEM>
              <COLUMN_REF>
                <COLUMN>REVENUE</COLUMN>
              </COLUMN_REF>
            </SELECT_LIST_ITEM>
            <SELECT_LIST_ITEM>
              <SUB>
                <COLUMN_REF>
                  <COLUMN>REVENUE</COLUMN>
                </COLUMN_REF>
                <COLUMN_REF>
                  <COLUMN>COST</COLUMN>
                </COLUMN_REF>
              </SUB>
              <COLUMN_ALIAS>GROSS_MARGIN</COLUMN_ALIAS>
            </SELECT_LIST_ITEM>
          </SELECT_LIST>
        </SELECT>
        <FROM>
          <FROM_ITEM>
            <QUERY>
              <SELECT>
                <SELECT_LIST>
                  <SELECT_LIST_ITEM>
                    <COLUMN_REF>
                      <TABLE>PROFITS</TABLE>
                      <COLUMN>TIME_ID</COLUMN>
                    </COLUMN_REF>
                  </SELECT_LIST_ITEM>
                  <SELECT_LIST_ITEM>
                    <MUL>
                      <COLUMN_REF>
                        <TABLE>PROFITS</TABLE>
                        <COLUMN>UNIT_PRICE</COLUMN>
                      </COLUMN_REF>
                      <COLUMN_REF>
                        <TABLE>PROFITS</TABLE>
                        <COLUMN>QUANTITY_SOLD</COLUMN>
                      </COLUMN_REF>
                    </MUL>
                    <COLUMN_ALIAS>REVENUE</COLUMN_ALIAS>
                  </SELECT_LIST_ITEM>
                  <SELECT_LIST_ITEM>
                    <COLUMN_REF>
                      <TABLE>PROFITS</TABLE>
                      <COLUMN>TOTAL_COST</COLUMN>
                    </COLUMN_REF>
                    <COLUMN_ALIAS>COST</COLUMN_ALIAS>
                  </SELECT_LIST_ITEM>
                </SELECT_LIST>
              </SELECT>
              <FROM>
                <FROM_ITEM>
                  <TABLE>PROFITS</TABLE>
                </FROM_ITEM>
              </FROM>
            </QUERY>
          </FROM_ITEM>
        </FROM>
      </QUERY>
    </WITH_ITEM>
  </WITH>
  <SELECT>
    <SELECT_LIST>
      <SELECT_LIST_ITEM>
        <COLUMN_REF>
          <TABLE_ALIAS>T</TABLE_ALIAS>
          <COLUMN>FISCAL_YEAR</COLUMN>
        </COLUMN_REF>
      </SELECT_LIST_ITEM>
      <SELECT_LIST_ITEM>
        <SUM>
          <COLUMN_REF>
            <COLUMN>REVENUE</COLUMN>
          </COLUMN_REF>
        </SUM>
        <COLUMN_ALIAS>REVENUE</COLUMN_ALIAS>
      </SELECT_LIST_ITEM>
      <SELECT_LIST_ITEM>
        <SUM>
          <COLUMN_REF>
            <COLUMN>GROSS_MARGIN</COLUMN>
          </COLUMN_REF>
        </SUM>
        <COLUMN_ALIAS>GROSS_MARGIN</COLUMN_ALIAS>
      </SELECT_LIST_ITEM>
      <SELECT_LIST_ITEM>
        <ROUND>
          <DIV>
            <MUL>
              <LITERAL>100</LITERAL>
              <SUM>
                <COLUMN_REF>
                  <COLUMN>GROSS_MARGIN</COLUMN>
                </COLUMN_REF>
              </SUM>
            </MUL>
            <SUM>
              <COLUMN_REF>
                <COLUMN>REVENUE</COLUMN>
              </COLUMN_REF>
            </SUM>
          </DIV>
          <LITERAL>2</LITERAL>
        </ROUND>
        <COLUMN_ALIAS>GROSS_MARGIN_PERCENT</COLUMN_ALIAS>
      </SELECT_LIST_ITEM>
    </SELECT_LIST>
  </SELECT>
  <FROM>
    <FROM_ITEM>
      <JOIN>
        <INNER/>
        <JOIN_TABLE_1>
          <QUERY_ALIAS>GM</QUERY_ALIAS>
        </JOIN_TABLE_1>
        <JOIN_TABLE_2>
          <TABLE>TIMES</TABLE>
          <TABLE_ALIAS>T</TABLE_ALIAS>
        </JOIN_TABLE_2>
        <ON>
          <EQ>
            <COLUMN_REF>
              <TABLE>TIMES</TABLE>
              <TABLE_ALIAS>T</TABLE_ALIAS>
              <COLUMN>TIME_ID</COLUMN>
            </COLUMN_REF>
            <COLUMN_REF>
              <TABLE_ALIAS>GM</TABLE_ALIAS>
              <COLUMN>TIME_ID</COLUMN>
            </COLUMN_REF>
          </EQ>
        </ON>
      </JOIN>
    </FROM_ITEM>
  </FROM>
  <GROUP_BY>
    <EXPRESSION_LIST>
      <EXPRESSION_LIST_ITEM>
        <COLUMN_REF>
          <TABLE_ALIAS>T</TABLE_ALIAS>
          <COLUMN>FISCAL_YEAR</COLUMN>
        </COLUMN_REF>
      </EXPRESSION_LIST_ITEM>
    </EXPRESSION_LIST>
  </GROUP_BY>
  <ORDER_BY>
    <ORDER_BY_LIST>
      <ORDER_BY_LIST_ITEM>
        <COLUMN_REF>
          <TABLE_ALIAS>T</TABLE_ALIAS>
          <COLUMN>FISCAL_YEAR</COLUMN>
        </COLUMN_REF>
      </ORDER_BY_LIST_ITEM>
    </ORDER_BY_LIST>
  </ORDER_BY>
</QUERY>

Please note that UTL_XML.PARSEQUERY is suited for extended query dependency analysis only. DML may be parsed, but the resulting model is incomplete with 11.2.0.2 (e.g. clauses that do not appear in a select statement, such as the SET clause of an update statement, are not included in the model). If you need to analyze PL/SQL beyond PL/Scope, you still may need a 3rd party parser.

Apple iPad Camera Connection Kit – What’s faster USB or SD Card?


I’ve just bought a Sony DSC RX100 camera with an Apple iPad Camera Connection Kit. The kit contains a USB adapter and an SD card adapter. I intend to copy photos to my iPad for review and backup purposes during my holidays. My son argued that the SD card adapter must be faster, since the SD card has a read/write capability of 45 MB/s and the USB 2.0 connection is limited to 35 MB/s. – Sounds reasonable, but nonetheless I had to test that…

… so I shot 300 photos with a total size of 2’155’522’108 bytes. Here are the results of some copy scenarios.

#  Copy Scenario                                                                          Time in seconds  Transfer rate in MB/s
1  Copy photos from MacBook Retina card reader to a Mac OS folder using cp                           47.7                   43.1
2  Copy photos from RX100 camera connected via USB adapter to a Mac OS folder using cp              211.7                    9.7
3  Copy photos from SD card adapter to my iPad 3 using Photos app                                   221.1                    9.3
4  Copy photos from RX100 camera connected via USB adapter to my iPad 3 using Photos app            279.8                    7.3
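
For reference, the transfer rates in the table follow from the total size and the measured time, expressed in binary megabytes; e.g. for scenario 1:

2’155’522’108 bytes / 47.7 s / 1024² ≈ 43.1 MB/s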

My son was correct: the SD card adapter delivers better throughput than the USB adapter. Further investigation would be required to figure out why the transfer rates are far below the theoretical maximums. However, for me both options are good enough and the transfer rates are acceptable.

Building Comma Separated Values with Oracle & SQL


From time to time I’m asked to aggregate strings from multiple records into a single column using SQL. Here’s an example, showing a comma separated list of ordered employee names per department based on the famous EMP and DEPT tables.

DEPTNO  DNAME       ENAME_LIST
10      ACCOUNTING  CLARK, KING, MILLER
20      RESEARCH    ADAMS, FORD, JONES, SCOTT, SMITH
30      SALES       ALLEN, BLAKE, JAMES, MARTIN, TURNER, WARD
40      OPERATIONS

Oracle introduced the aggregate function LISTAGG for that purpose in 11.2. If you can use LISTAGG, go for it; but if you have to work with an older version of the Oracle Database Server, you might be interested in some other options, which I discuss per Oracle Database version.

I cover just some options in this post; if you are interested in more, please visit Tim Hall’s String Aggregation Techniques on oracle-base.com.

Oracle7

More than twenty years ago, PL/SQL was introduced as part of the Oracle Database Server version 7.0, allowing functions to be written for use in SQL statements. Back then, something like the following was necessary to build comma separated values:

CREATE OR REPLACE FUNCTION deptno_to_ename_list(in_deptno IN VARCHAR2)
   RETURN VARCHAR2 IS
   CURSOR l_cur IS
      SELECT ename
        FROM emp
       WHERE deptno = in_deptno
       ORDER BY ename;
   l_ret VARCHAR2(2000);
BEGIN
   FOR l_rec IN l_cur
   LOOP
      IF l_cur%ROWCOUNT > 1 THEN
         l_ret := l_ret || ', ';
      END IF;
      l_ret := l_ret || l_rec.ename;
   END LOOP;
   RETURN l_ret;
END;
/

SELECT deptno, dname, deptno_to_ename_list(deptno) AS ename_list
  FROM dept
 ORDER BY deptno;

Oracle8

Version 8.0 came with the objects option, allowing the problem to be solved in more generic ways.

CREATE OR REPLACE TYPE string_tabtype IS TABLE OF VARCHAR2(2000);

CREATE OR REPLACE FUNCTION collection_to_comma_list(
   in_strings IN string_tabtype
) RETURN VARCHAR2 IS
   l_ret VARCHAR2(2000);
BEGIN
   IF in_strings.COUNT > 0 THEN
      FOR i IN 1 .. in_strings.COUNT
      LOOP
         IF i > 1 THEN
            l_ret := l_ret || ', ';
         END IF;
         l_ret := l_ret || in_strings(i);
      END LOOP;
   END IF;
   RETURN l_ret;
END;
/

SELECT d.deptno,
       d.dname,
       collection_to_comma_list(
          CAST(
             MULTISET(
                SELECT ename
                  FROM emp e
                 WHERE e.deptno = d.deptno
                 ORDER BY ename
             ) AS string_tabtype
          )
       ) AS ename_list
  FROM dept d
 ORDER BY d.deptno;

Another option was to use a REF CURSOR instead of a collection type. The PL/SQL part was executable in Oracle7 too, but the CURSOR expression was not available back then. BTW: SYS_REFCURSOR was introduced in 9.0, so this user-defined PL/SQL type is really necessary with version 8.0.

CREATE OR REPLACE PACKAGE mytypes_pkg IS
   TYPE refcursor_type IS REF CURSOR; 
END mytypes_pkg;
/ 

CREATE OR REPLACE FUNCTION cursor_to_comma_list(
   in_refcursor IN mytypes_pkg.refcursor_type
) RETURN VARCHAR2 IS
   l_string VARCHAR2(2000);
   l_ret VARCHAR2(2000);
BEGIN
   LOOP
      FETCH in_refcursor INTO l_string;
      EXIT WHEN in_refcursor%NOTFOUND;
      IF in_refcursor%ROWCOUNT > 1 THEN
         l_ret := l_ret || ', ';
      END IF;
      l_ret := l_ret || l_string;
   END LOOP;
   CLOSE in_refcursor;
   RETURN l_ret;
END;
/

SELECT d.deptno,
       d.dname,
       cursor_to_comma_list(
          CURSOR(
             SELECT ename
               FROM emp e
              WHERE e.deptno = d.deptno
              ORDER BY ename
          ) 
       ) AS ename_list
  FROM dept d
 ORDER BY d.deptno;

Oracle9i Release 1

Version 9.0 came with basic XML support, which allowed aggregating strings without the need for a helper function.

SELECT d.deptno,
       d.dname,
       RTRIM(
          SYS_XMLAGG(
             SYS_XMLGEN(
                e.ename||', '
             )
          ).EXTRACT(
             '/ROWSET/ROW/text()'
          ).getStringVal(),
          ', '
       ) AS ename_list
  FROM dept d
  LEFT JOIN emp e ON e.deptno = d.deptno
 GROUP BY d.deptno, d.dname
 ORDER BY d.deptno, d.dname;

In this solution the sort order of the aggregated strings is not definable. Version 9i also introduced user-defined aggregate functions. To use them you need to implement the ODCIAggregate interface, which also allows you to sort the result.

CREATE OR REPLACE TYPE string_tabtype AS TABLE OF VARCHAR2(2000);

CREATE OR REPLACE TYPE mylistagg_type AS OBJECT (
  strings string_tabtype,
  STATIC FUNCTION ODCIAggregateInitialize(
     sctx        IN OUT mylistagg_type
  ) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateIterate(
     SELF        IN OUT mylistagg_type,
     value       IN     VARCHAR2 
  ) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateTerminate(
     SELF        IN     mylistagg_type,
     returnValue OUT    VARCHAR2,
     flags       IN     NUMBER
  ) RETURN NUMBER,
  MEMBER FUNCTION ODCIAggregateMerge(
     SELF        IN OUT  mylistagg_type,
     ctx2        IN      mylistagg_type)
    RETURN NUMBER
);

CREATE OR REPLACE TYPE BODY mylistagg_type IS
  STATIC FUNCTION ODCIAggregateInitialize(
     sctx        IN OUT mylistagg_type
  ) RETURN NUMBER IS
  BEGIN
    sctx := mylistagg_type(string_tabtype());
    RETURN ODCIConst.Success;
  END;

  MEMBER FUNCTION ODCIAggregateIterate(
     SELF        IN OUT mylistagg_type,
     value       IN     VARCHAR2 
  ) RETURN NUMBER IS
  BEGIN
    SELF.strings.EXTEND;
    SELF.strings(SELF.strings.COUNT) := VALUE;
    RETURN ODCIConst.Success;
  END ODCIAggregateIterate;

  MEMBER FUNCTION ODCIAggregateTerminate(
     SELF        IN     mylistagg_type,
     returnValue OUT    VARCHAR2,
     flags       IN     NUMBER
  ) RETURN NUMBER IS
     l_sorted_strings string_tabtype;
     l_return_value VARCHAR2(2000);
  BEGIN
    SELECT COLUMN_VALUE 
      BULK COLLECT INTO l_sorted_strings 
      FROM TABLE(strings) 
     ORDER BY COLUMN_VALUE;
    FOR i IN 1 .. l_sorted_strings.COUNT
    LOOP
       IF i > 1 THEN
          l_return_value := l_return_value || ', ';
       END IF;
       l_return_value := l_return_value || l_sorted_strings(i);
    END LOOP;
    returnValue := l_return_value;
    RETURN ODCIConst.Success;  
  END ODCIAggregateTerminate;

  MEMBER FUNCTION ODCIAggregateMerge(
     SELF        IN OUT  mylistagg_type,
     ctx2        IN      mylistagg_type
  ) RETURN NUMBER IS
  BEGIN
    FOR i IN 1 .. ctx2.strings.COUNT 
    LOOP
       SELF.strings.EXTEND;
       SELF.strings(SELF.strings.COUNT) := ctx2.strings(i);
    END LOOP;
    RETURN ODCIConst.Success;
  END ODCIAggregateMerge;
END;
/

CREATE OR REPLACE FUNCTION mylistagg (
   in_string IN VARCHAR2
) RETURN VARCHAR2 PARALLEL_ENABLE AGGREGATE USING mylistagg_type;
/

SELECT d.deptno, d.dname, mylistagg(e.ename) AS ename_list
  FROM dept d
  LEFT JOIN emp e ON e.deptno = d.deptno
 GROUP BY d.deptno, d.dname
 ORDER BY d.deptno, d.dname;

Oracle9i Release 2

Version 9i Release 2 came with SQL/XML support and the function XMLAGG, which replaces SYS_XMLAGG and allows sorting the elements to be aggregated (see line 7).

SELECT d.deptno,
       d.dname,
       RTRIM(
          XMLAGG(
             XMLELEMENT(
                "e", e.ename || ', '
             ) ORDER BY e.ename
          ).EXTRACT(
             '/e/text()'
          ).getStringVal(), ', '
       ) AS ename_list
  FROM dept d
  LEFT JOIN emp e ON e.deptno = d.deptno
 GROUP BY d.deptno, d.dname
 ORDER BY d.deptno, d.dname;

One may argue that using RTRIM to get rid of the last comma is not the way to interact with XML, especially since SQL/XML supports XSLT. But probably no one can deny that writing the appropriate stylesheet is a bit more complex and time-consuming. Nonetheless, here’s an XSLT example:

SELECT d.deptno,
       d.dname,
       XMLTRANSFORM(
          XMLAGG(
             XMLELEMENT(
                "e", e.ename
             ) ORDER BY e.ename
          ), '<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
                 <xsl:output method="text"/>
                 <xsl:template match="/">
                    <xsl:for-each select="*">
                       <xsl:if test="position() != 1">
                          <xsl:value-of select="'', ''"/>
                       </xsl:if>
                       <xsl:value-of select="."/>
                    </xsl:for-each>
                 </xsl:template>
              </xsl:stylesheet>'
       ).getStringVal() AS ename_list
  FROM dept d
  LEFT JOIN emp e ON e.deptno = d.deptno
 GROUP BY d.deptno, d.dname
 ORDER BY d.deptno, d.dname;

Oracle Database 11g Release 2

As mentioned at the beginning, in version 11g Release 2 Oracle finally introduced the aggregate function LISTAGG to conveniently aggregate strings.

SELECT d.deptno,
       d.dname,
       LISTAGG (
          e.ename, ', '
       ) WITHIN GROUP (
          ORDER BY e.ename
       ) AS ename_list
  FROM dept d
  LEFT JOIN emp e ON e.deptno = d.deptno
 GROUP BY d.deptno, d.dname
 ORDER BY d.deptno, d.dname;

Performance Comparison

To compare the runtime performance of the different solution approaches, I created 1 million rows in the dept table and 5 million rows in the emp table using this script and measured the second serial execution of a CREATE TABLE AS SELECT statement for each of the 8 described approaches against my 11.2.0.3 instance.
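
As a sketch, such a measured statement looked like the following for the LISTAGG variant (the target table name is an assumption):

SET TIMING ON

CREATE TABLE csv_listagg AS
SELECT d.deptno,
       d.dname,
       LISTAGG(e.ename, ', ') WITHIN GROUP (ORDER BY e.ename) AS ename_list
  FROM dept d
  LEFT JOIN emp e ON e.deptno = d.deptno
 GROUP BY d.deptno, d.dname;

The following figure summarizes the results.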

[Figure RuntimeCSV: runtime comparison of the eight string aggregation approaches]

Conclusion

A lot of things have changed in the Oracle Database area since Oracle7, even in the niche of string aggregation. I recommend using LISTAGG (8) whenever possible and avoiding SYS_XMLAGG (4) and XSLT (7) for string aggregation. The Collection Type (2) approach is a good alternative if you do not mind creating helper objects; otherwise use XMLAGG (6).

Joining Temporal Intervals

From time to time a customer asks me how to join multiple tables with temporal intervals (e.g. defined by two columns such as valid_from and valid_to per row). The solution is quite simple if you can limit your query to a certain point in time like now, yesterday or similar. Such a time point becomes just an additional filter criterion per temporal table (e.g. t1.some_date between t2.valid_from and t2.valid_to). But in some cases this approach is not feasible, e.g. if you have to provide all relevant time intervals. In this post I’ll explain a solution approach based on the following data model.

[Figure: data model for the temporal join (EMPV, DEPTV, JOBV)]

This model is based on the famous EMP and DEPT tables. I added a reference table for job just to get an additional table to join. You find the SQL script to create and populate the model here.

I’d like to query data from the tables EMPV, DEPTV, JOBV and EMPV (manager). Here is the content of the tables reduced to the data relevant for empno 7788 (SCOTT).

SQL> SELECT * FROM empv WHERE empno = 7788 ORDER BY valid_from;

EMPVID EMPNO ENAME JOBNO  MGR HIREDATE     SAL COMM DEPTNO VALID_FROM  VALID_TO
------ ----- ----- ----- ---- ----------- ---- ---- ------ ----------- -----------
     8  7788 SCOTT     5 7566 19-APR-1987 3000          20 19-APR-1987 31-DEC-1989
    22  7788 Scott     5 7566 19-APR-1987 3000          20 01-JAN-1990 31-MAR-1991
    36  7788 Scott     5 7566 19-APR-1987 3300          20 01-APR-1991 31-DEC-9999

SQL> SELECT * FROM jobv WHERE jobno = 5 ORDER BY valid_from;

JOBVID JOBNO JOB     VALID_FROM  VALID_TO
------ ----- ------- ----------- -----------
     5     5 ANALYST 01-JAN-1980 20-JAN-1990
    10     5 Analyst 21-JAN-1990 31-DEC-9999

SQL> SELECT * FROM deptv WHERE deptno = 20 ORDER BY valid_from;

DEPTVID DEPTNO DNAME    LOC    VALID_FROM  VALID_TO
------- ------ -------- ------ ----------- -----------
      2     20 RESEARCH DALLAS 01-JAN-1980 28-FEB-1990
      6     20 Research DALLAS 01-MAR-1990 31-MAR-1990
     10     20 Research Dallas 01-APR-1990 31-DEC-9999

SQL> SELECT * FROM empv WHERE empno = 7566 ORDER BY valid_from;

EMPVID EMPNO ENAME JOBNO  MGR HIREDATE       SAL COMM DEPTNO VALID_FROM  VALID_TO
------ ----- ----- ----- ---- ----------- ------ ---- ------ ----------- -----------
     4  7566 JONES     4 7839 02-APR-1981   2975          20 02-APR-1981 31-DEC-1989
    18  7566 Jones     4 7839 02-APR-1981   2975          20 01-JAN-1990 31-MAR-1991
    32  7566 Jones     4 7839 02-APR-1981 3272.5          20 01-APR-1991 31-DEC-9999

The following figure visualizes the expected result of a temporal join using the data queried previously.

[Figure: expected temporal join result for empno 7788]

In this case six result records (intervals) are expected. As you can see, the result depends on the number of distinct VALID_FROM values. The driving object is valid from 19-APR-1987 until 31-DEC-9999; VALID_FROM values outside of this validity are irrelevant (e.g. 01-JAN-1980 and 02-APR-1981).

Based on this information we are able to write the query. The inline view g (the UNION of the VALID_FROM values of all three tables) produces a list of all distinct VALID_FROM values, which is used as an additional join criterion for all temporal tables.

SELECT e.empno,
       MIN(g.valid_from) AS valid_from,
       LEAD(MIN(g.valid_from) - 1, 1, DATE '9999-12-31') OVER(
          PARTITION BY e.empno ORDER BY MIN(g.valid_from)
       ) AS valid_to,
       e.ename,
       j.job,
       e.mgr,
       m.ename AS mgr_ename,
       e.hiredate,
       e.sal,
       e.comm,
       e.deptno,
       d.dname,
       d.loc
  FROM empv e
 INNER JOIN (SELECT valid_from FROM empv
             UNION
             SELECT valid_from FROM deptv
             UNION
             SELECT valid_from FROM jobv) g
    ON g.valid_from BETWEEN e.valid_from AND e.valid_to
 INNER JOIN deptv d
    ON d.deptno = e.deptno
       AND g.valid_from BETWEEN d.valid_from AND d.valid_to
 INNER JOIN jobv j
    ON j.jobno = e.jobno
       AND g.valid_from BETWEEN j.valid_from AND j.valid_to
  LEFT JOIN empv m
    ON m.empno = e.mgr
       AND g.valid_from BETWEEN m.valid_from AND m.valid_to
 WHERE e.empno = 7788
 GROUP BY e.empno,
          e.ename,
          j.job,
          e.mgr,
          m.ename,
          e.hiredate,
          e.sal,
          e.comm,
          e.deptno,
          d.dname,
          d.loc
 ORDER BY empno, valid_from;

EMPNO VALID_FROM  VALID_TO    ENAME JOB      MGR MGR_ENAME HIREDATE     SAL COMM DEPTNO DNAME    LOC
----- ----------- ----------- ----- ------- ---- --------- ----------- ---- ---- ------ -------- ------
 7788 19-APR-1987 31-DEC-1989 SCOTT ANALYST 7566 JONES     19-APR-1987 3000          20 RESEARCH DALLAS
 7788 01-JAN-1990 20-JAN-1990 Scott ANALYST 7566 Jones     19-APR-1987 3000          20 RESEARCH DALLAS
 7788 21-JAN-1990 28-FEB-1990 Scott Analyst 7566 Jones     19-APR-1987 3000          20 RESEARCH DALLAS
 7788 01-MAR-1990 31-MAR-1990 Scott Analyst 7566 Jones     19-APR-1987 3000          20 Research DALLAS
 7788 01-APR-1990 31-MAR-1991 Scott Analyst 7566 Jones     19-APR-1987 3000          20 Research Dallas
 7788 01-APR-1991 31-DEC-9999 Scott Analyst 7566 Jones     19-APR-1987 3300          20 Research Dallas

The beauty of this approach is that it works with any granularity and automatically merges identical intervals. In this example I use a granularity of a day, but the approach also works for a granularity of seconds or even fractions of a second, e.g. if you are using a TIMESTAMP data type to define the interval boundaries.
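
With TIMESTAMP-based boundaries only the interval arithmetic changes. A minimal sketch of the amended VALID_TO calculation, assuming a hypothetical temporal table tsv with TIMESTAMP columns valid_from and valid_to and a granularity of one second:

-- sketch: tsv is a hypothetical temporal table with TIMESTAMP boundaries;
-- for sub-second granularity use a correspondingly smaller interval
SELECT t.oid,
       g.valid_from,
       LEAD(g.valid_from - INTERVAL '1' SECOND, 1,
            TIMESTAMP '9999-12-31 23:59:59') OVER(
          PARTITION BY t.oid ORDER BY g.valid_from
       ) AS valid_to
  FROM tsv t
 INNER JOIN (SELECT valid_from FROM tsv) g
    ON g.valid_from BETWEEN t.valid_from AND t.valid_to;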

It’s important to notice that I’ve used inclusive semantics for VALID_TO in this example. If you use exclusive semantics (VALID_TO = VALID_FROM of the subsequent interval) you have to amend the calculation of VALID_TO and the join criteria (BETWEEN is not feasible with exclusive semantics); see the sketch below. Furthermore, this example does not cover gaps in the historization. If you have gaps you need to amend the calculation of the VALID_TO column and ensure that you do not merge gaps. Merging intervals with a simple GROUP BY will produce wrong results if “disconnected” intervals have the same content. These issues are addressed in part 2 of this post.
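
A minimal sketch of the amended join criteria and VALID_TO calculation under exclusive semantics, assuming the tables were populated with VALID_TO equal to the VALID_FROM of the subsequent interval (merging is omitted for brevity):

SELECT e.empno,
       g.valid_from,
       -- exclusive semantics: no "- 1", the next start is the current end
       LEAD(g.valid_from, 1, DATE '9999-12-31') OVER(
          PARTITION BY e.empno ORDER BY g.valid_from
       ) AS valid_to,
       e.ename,
       d.dname
  FROM empv e
 INNER JOIN (SELECT valid_from FROM empv
             UNION
             SELECT valid_from FROM deptv) g
    -- half-open comparison instead of BETWEEN
    ON g.valid_from >= e.valid_from AND g.valid_from < e.valid_to
 INNER JOIN deptv d
    ON d.deptno = e.deptno
       AND g.valid_from >= d.valid_from AND g.valid_from < d.valid_to
 ORDER BY e.empno, g.valid_from;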

Updated on 2012-12-28, emphasized possibility of wrong results and added link to part 2 of this post.


Merging Temporal Intervals with Gaps

$
0
0

In Joining Temporal Intervals I explained how to join multiple temporal tables. The provided solution also merges temporal intervals but, as pointed out in that post, may produce wrong results if the underlying driving table is not gaplessly historized. In this post I’ll explain how to merge temporal intervals correctly for various data constellations.

You find the scripts to create and populate the tables for scenario A and B here.

Scenario A – No overlapping intervals

This scenario handles consistent data. This means no overlapping intervals, no duplicate intervals, no intervals contained within other intervals, and no negative intervals (valid_to < valid_from). Here is the content of the example table t1:

SQL> SELECT * FROM t1;

       VID        OID VALID_FROM  VALID_TO    C1         C2
---------- ---------- ----------- ----------- ---------- ----------
         1          1 01-JAN-2010 31-DEC-2010 A          B1
         2          1 01-JAN-2011 31-MAR-2011 A          B2
         3          1 01-JUN-2011 31-JAN-2012 A          B2
         4          1 01-APR-2012 31-DEC-9999 A          B4
         5          2 01-JAN-2010 31-JUL-2012 B          B1
         6          2 01-AUG-2012 31-DEC-9999 B          B2
        18          4 01-JAN-2010 30-SEP-2011 D          D1
        19          4 01-OCT-2011 30-SEP-2012            D2
        20          4 01-OCT-2012 31-DEC-9999 D          D3

I’d like to write a query which produces all intervals for the columns OID and C1, honoring gaps in the historization. For OID 1 I expect that records 1 and 2 are merged, but records 3 and 4 are not merged because the intervals are not “connected”. For OID 2 I expect to get a single merged interval. For OID 4 I expect to get 3 records, since the records with C1=’D’ are not connected.

So, the following query result is expected:

OID VALID_FROM  VALID_TO    C1
---------- ----------- ----------- ----------
         1 01-JAN-2010 31-MAR-2011 A
         1 01-JUN-2011 31-JAN-2012 A
         1 01-APR-2012 31-DEC-9999 A
         2 01-JAN-2010 31-DEC-9999 B
         4 01-JAN-2010 30-SEP-2011 D
         4 01-OCT-2011 30-SEP-2012
         4 01-OCT-2012 31-DEC-9999 D

The next query produces exactly this result.

WITH
   calc_various AS (
      -- produces column has_gap with the following meaning:
      -- 1: offset > 0 between current and previous record (gap)
      -- 0: offset = 0 between current and previous record (no gap)
      -- produces column new_group with the following meaning:
      -- 1: group-by-columns differ in current and previous record
      -- 0: same group-by-columns in current and previous record
      SELECT oid,
             valid_from,
             valid_to,
             c1,
             c2,
             CASE
                WHEN LAG(valid_to, 1, valid_from - 1) OVER(
                        PARTITION BY oid ORDER BY valid_from
                     ) = valid_from - 1 THEN
                   0
                ELSE
                   1
             END AS has_gap,
             CASE
                WHEN LAG(c1, 1, c1) OVER(
                        PARTITION BY oid ORDER BY valid_from
                     ) = c1 THEN
                   0
                ELSE
                   1
             END AS new_group
        FROM t1
   ),
   calc_group AS (
      -- produces column group_no, records with the same group_no
      -- are mergeable, group_no is calculated per oid 
      SELECT oid,
             valid_from,
             valid_to,
             c1,
             c2,
             SUM(has_gap + new_group) OVER(
                PARTITION BY oid ORDER BY oid, valid_from
             ) AS group_no
        FROM calc_various
   ),
   merged AS (
      -- produces the final merged result
      -- grouping by group_no ensures that gaps are honored
      SELECT oid,
             MIN(valid_from) AS valid_from,
             MAX(valid_to) AS valid_to,
             c1
        FROM calc_group
       GROUP BY oid, c1, group_no
       ORDER BY oid, valid_from
   )
-- main 
SELECT * FROM merged;

The usability of the WITH clause aka subquery factoring clause improved significantly with 11gR2. Since then it is no longer necessary to reference all named queries. The named queries become real transient views, and this simplifies debugging a lot. If you replace the main query (the last line, “SELECT * FROM merged;”) with “SELECT * FROM calc_various ORDER BY oid, valid_from;” the statement produces the following result:

OID VALID_FROM  VALID_TO    C1 C2 HAS_GAP NEW_GROUP
--- ----------- ----------- -- -- ------- ---------
  1 01-JAN-2010 31-DEC-2010 A  B1       0         0
  1 01-JAN-2011 31-MAR-2011 A  B2       0         0
  1 01-JUN-2011 31-JAN-2012 A  B2       1         0
  1 01-APR-2012 31-DEC-9999 A  B4       1         0
  2 01-JAN-2010 31-JUL-2012 B  B1       0         0
  2 01-AUG-2012 31-DEC-9999 B  B2       0         0
  4 01-JAN-2010 30-SEP-2011 D  D1       0         0
  4 01-OCT-2011 30-SEP-2012    D2       0         1
  4 01-OCT-2012 31-DEC-9999 D  D3       0         1

You see that the value 1 for HAS_GAP indicates that the record is not “connected” with the previous record. Additionally the value 1 for the column NEW_GROUP indicates that the records must not be merged even if they are connected.

To simplify the calculation of NEW_GROUP for multiple group-by columns (as used in the named query “merged”), build a concatenated string of all relevant columns so that you deal with a single column, similar to column C1 in this example.
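
A minimal sketch for the two group-by columns c1 and c2, assuming the separator ‘|’ cannot occur in the data:

-- new_group over a concatenation of several group-by columns
SELECT oid,
       valid_from,
       valid_to,
       c1,
       c2,
       CASE
          WHEN LAG(c1 || '|' || c2, 1, c1 || '|' || c2) OVER(
                  PARTITION BY oid ORDER BY valid_from
               ) = c1 || '|' || c2 THEN
             0
          ELSE
             1
       END AS new_group
  FROM t1;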

HAS_GAP and NEW_GROUP are used in the subsequent named query calc_group which produces the following result:

OID VALID_FROM  VALID_TO    C1 C2 GROUP_NO
--- ----------- ----------- -- -- --------
  1 01-JAN-2010 31-DEC-2010 A  B1        0
  1 01-JAN-2011 31-MAR-2011 A  B2        0
  1 01-JUN-2011 31-JAN-2012 A  B2        1
  1 01-APR-2012 31-DEC-9999 A  B4        2
  2 01-JAN-2010 31-JUL-2012 B  B1        0
  2 01-AUG-2012 31-DEC-9999 B  B2        0
  4 01-JAN-2010 30-SEP-2011 D  D1        0
  4 01-OCT-2011 30-SEP-2012    D2        1
  4 01-OCT-2012 31-DEC-9999 D  D3        2

The GROUP_NO is calculated per OID. Technically it’s a running total of HAS_GAP + NEW_GROUP. All intervals with the same GROUP_NO are mergeable. E.g. records 2, 3 and 4 have different GROUP_NO values, which ensures that every single gap is honored for OID 1.

Scenario B – Overlapping intervals

Reality is that we sometimes have to deal with inconsistent data, e.g. duplicate intervals, overlapping intervals or even negative intervals (valid_to < valid_from). I’ve created an example table t2, which is in fact a copy of t1 but includes an additional messy OID 3 with the following data:

SQL> SELECT * FROM t2 WHERE oid >= 3 ORDER BY vid;

       VID        OID VALID_FROM  VALID_TO    C1         C2
---------- ---------- ----------- ----------- ---------- ----------
         7          3 01-JAN-2010 31-DEC-2010 C          B1
         8          3 01-JAN-2010 31-MAR-2010 C          B2
         9          3 01-JUN-2010 31-AUG-2010 C          B3
        10          3 01-OCT-2010 31-DEC-2010 C          B4
        11          3 01-FEB-2011 30-JUN-2011 C          B5
        12          3 01-FEB-2011 30-JUN-2011 C          B6
        13          3 01-JUN-2011 31-AUG-2011 C          B7
        14          3 31-AUG-2011 30-SEP-2011 C          B8
        15          3 01-DEC-2011 31-MAY-2012 C          B9
        16          3 01-DEC-2011 31-MAY-2012 C          B9
        17          3 01-JUN-2012 31-DEC-2012 C          B10

The following figure visualizes the intervals of OID 3. The green patterns are expected to be used and the red patterns are expected to be ignored. The rationale is that the VID (version ID) is typically based on an Oracle sequence, and therefore it’s assumed that higher VIDs are newer and thus more adequate. In real cases you may have additional information such as created_at or modified_on timestamps which helps you identify the record to be used in conflicting situations. Since data is considered inconsistent in this scenario, the cleansing strategy might be very different in real cases. However, cleansing is always the first step, and in this example the highest VID has the highest priority in case of conflicts.

[Figure: intervals of OID 3 – green intervals are used, red intervals are ignored]

In cleansing step 1 we gather all potential interval start points. We do not need all of these values, but since we will merge intervals in the last processing step, we do not have to care about unnecessary intervals right now.

SQL> WITH
  2     o AS (
  3        -- object identifier and their valid_from values
  4        SELECT oid, valid_from FROM t2
  5        UNION
  6        SELECT oid, valid_to + 1 FROM t2
  7         WHERE valid_to != DATE '9999-12-31'
  8     )
  9  -- main
 10  SELECT * FROM o WHERE oid = 3;

       OID VALID_FROM
---------- -----------
         3 01-JAN-2010
         3 01-APR-2010
         3 01-JUN-2010
         3 01-SEP-2010
         3 01-OCT-2010
         3 01-JAN-2011
         3 01-FEB-2011
         3 01-JUN-2011
         3 01-JUL-2011
         3 31-AUG-2011
         3 01-SEP-2011
         3 01-OCT-2011
         3 01-DEC-2011
         3 01-JUN-2012
         3 01-JAN-2013

15 rows selected.

In cleansing step 2 we calculate the relevant VID for every row of the previous result. Additionally, we may inexpensively determine whether an interval is a gap.

SQL> WITH
  2     o AS (
  3        -- object identifier and their valid_from values
  4        SELECT oid, valid_from FROM t2
  5        UNION
  6        SELECT oid, valid_to + 1 FROM t2
  7         WHERE valid_to != DATE '9999-12-31'
  8     ),
  9     v AS (
 10        -- relevant version identifier per valid_from
 11        -- produces column is_gap
 12        SELECT o.oid,
 13               MAX(vid) AS vid,
 14               o.valid_from,
 15               NVL2(MAX(vid), 0, 1) AS is_gap
 16          FROM o
 17          LEFT JOIN t2
 18            ON t2.oid = o.oid
 19               AND o.valid_from BETWEEN t2.valid_from AND t2.valid_to
 20         GROUP BY o.oid, o.valid_from
 21     )
 22  -- main
 23  SELECT * FROM v WHERE oid = 3 ORDER BY valid_from;

       OID        VID VALID_FROM      IS_GAP
---------- ---------- ----------- ----------
         3          8 01-JAN-2010          0
         3          7 01-APR-2010          0
         3          9 01-JUN-2010          0
         3          7 01-SEP-2010          0
         3         10 01-OCT-2010          0
         3            01-JAN-2011          1
         3         12 01-FEB-2011          0
         3         13 01-JUN-2011          0
         3         13 01-JUL-2011          0
         3         14 31-AUG-2011          0
         3         14 01-SEP-2011          0
         3            01-OCT-2011          1
         3         16 01-DEC-2011          0
         3         17 01-JUN-2012          0
         3            01-JAN-2013          1

15 rows selected.

In cleansing step 3 we extend the previous result by the missing columns from table t2 and calculate the NEW_GROUP column with the same logic as in scenario A.

SQL> WITH
  2     o AS (
  3        -- object identifier and their valid_from values
  4        SELECT oid, valid_from FROM t2
  5        UNION
  6        SELECT oid, valid_to + 1 FROM t2
  7         WHERE valid_to != DATE '9999-12-31'
  8     ),
  9     v AS (
 10        -- relevant version identifier per valid_from
 11        -- produces column is_gap
 12        SELECT o.oid,
 13               MAX(vid) AS vid,
 14               o.valid_from,
 15               NVL2(MAX(vid), 0, 1) AS is_gap
 16          FROM o
 17          LEFT JOIN t2
 18            ON t2.oid = o.oid
 19               AND o.valid_from BETWEEN t2.valid_from AND t2.valid_to
 20         GROUP BY o.oid, o.valid_from
 21     ),
 22     combined AS (
 23        -- combines previous intermediate result v with t2
 24        -- produces the valid_to and new_group columns
 25        SELECT t2.vid,
 26               v.oid,
 27               v.valid_from,
 28               LEAD(v.valid_from - 1, 1, DATE '9999-12-31') OVER (
 29                  PARTITION BY v.oid ORDER BY v.valid_from
 30               ) AS valid_to,
 31               t2.c1,
 32               t2.c2,
 33               v.is_gap,
 34               CASE
 35                  WHEN LAG(t2.c1, 1, t2.c1) OVER(
 36                          PARTITION BY t2.oid ORDER BY t2.valid_from
 37                       ) = c1 THEN
 38                     0
 39                  ELSE
 40                     1
 41               END AS new_group
 42          FROM v
 43          LEFT JOIN t2
 44            ON t2.oid = v.oid
 45               AND t2.vid = v.vid
 46               AND v.valid_from BETWEEN t2.valid_from AND t2.valid_to
 47     )
 48  -- main
 49  SELECT * FROM combined WHERE oid = 3 ORDER BY valid_from;

VID OID VALID_FROM  VALID_TO    C1 C2  IS_GAP NEW_GROUP
--- --- ----------- ----------- -- --- ------ ---------
  8   3 01-JAN-2010 31-MAR-2010 C  B2       0         0
  7   3 01-APR-2010 31-MAY-2010 C  B1       0         0
  9   3 01-JUN-2010 31-AUG-2010 C  B3       0         0
  7   3 01-SEP-2010 30-SEP-2010 C  B1       0         0
 10   3 01-OCT-2010 31-DEC-2010 C  B4       0         0
      3 01-JAN-2011 31-JAN-2011             1         1
 12   3 01-FEB-2011 31-MAY-2011 C  B6       0         0
 13   3 01-JUN-2011 30-JUN-2011 C  B7       0         0
 13   3 01-JUL-2011 30-AUG-2011 C  B7       0         0
 14   3 31-AUG-2011 31-AUG-2011 C  B8       0         0
 14   3 01-SEP-2011 30-SEP-2011 C  B8       0         0
      3 01-OCT-2011 30-NOV-2011             1         1
 16   3 01-DEC-2011 31-MAY-2012 C  B9       0         0
 17   3 01-JUN-2012 31-DEC-2012 C  B10      0         0
      3 01-JAN-2013 31-DEC-9999             1         1

15 rows selected.

Now we have cleansed the data and are ready for the final steps “calc_group” and “merged”, which are very similar to scenario A. The relevant difference is the filter “WHERE is_gap = 0” in the named query “merged”, which removes the gap records. Here is the complete statement and the query result:

WITH
   o AS (
      -- object identifier and their valid_from values
      SELECT oid, valid_from FROM t2
      UNION
      SELECT oid, valid_to + 1 FROM t2 
       WHERE valid_to != DATE '9999-12-31'
   ),
   v AS (
      -- relevant version identifier per valid_from 
      -- produces column is_gap
      SELECT o.oid,
             MAX(vid) AS vid,
             o.valid_from,
             NVL2(MAX(vid), 0, 1) AS is_gap
        FROM o
        LEFT JOIN t2
          ON t2.oid = o.oid
             AND o.valid_from BETWEEN t2.valid_from AND t2.valid_to
       GROUP BY o.oid, o.valid_from
   ),
   combined AS (
      -- combines previous intermediate result v with t2
      -- produces the valid_to and new_group columns
      SELECT t2.vid,
             v.oid,
             v.valid_from,
             LEAD(v.valid_from - 1, 1, DATE '9999-12-31') OVER (
                PARTITION BY v.oid ORDER BY v.valid_from
             ) AS valid_to,
             t2.c1,
             t2.c2,
             v.is_gap,
             CASE
                WHEN LAG(t2.c1, 1, t2.c1) OVER(
                        PARTITION BY t2.oid ORDER BY t2.valid_from
                     ) = c1 THEN
                   0
                ELSE
                   1
             END AS new_group
        FROM v
        LEFT JOIN t2
          ON t2.oid = v.oid
             AND t2.vid = v.vid
             AND v.valid_from BETWEEN t2.valid_from AND t2.valid_to 
   ),
   calc_group AS (
      -- produces column group_no, records with the same group_no
      -- are mergeable, group_no is calculated per oid 
      SELECT oid,
             valid_from,
             valid_to,
             c1,
             c2,
             is_gap,
             SUM(is_gap + new_group) OVER(
                PARTITION BY oid ORDER BY oid, valid_from
             ) AS group_no
        FROM combined
   ),
   merged AS (
      -- produces the final merged result
      -- grouping by group_no ensures that gaps are honored
      SELECT oid,
             MIN(valid_from) AS valid_from,
             MAX(valid_to) AS valid_to,
             c1
        FROM calc_group
       WHERE is_gap = 0
       GROUP BY OID, c1, group_no
       ORDER BY OID, valid_from
   )
-- main 
SELECT * FROM merged;

OID VALID_FROM  VALID_TO    C1
---------- ----------- ----------- ----------
         1 01-JAN-2010 31-MAR-2011 A
         1 01-JUN-2011 31-JAN-2012 A
         1 01-APR-2012 31-DEC-9999 A
         2 01-JAN-2010 31-DEC-9999 B
         3 01-JAN-2010 31-DEC-2010 C
         3 01-FEB-2011 30-SEP-2011 C
         3 01-DEC-2011 31-DEC-2012 C
         4 01-JAN-2010 30-SEP-2011 D
         4 01-OCT-2011 30-SEP-2012
         4 01-OCT-2012 31-DEC-9999 D

If you change the filter in the named query “merged” to “WHERE is_gap = 1” you’ll get all gap records, which is a way to query non-existing intervals.

OID VALID_FROM  VALID_TO    C1
---------- ----------- ----------- ----------
         1 01-APR-2011 31-MAY-2011
         1 01-FEB-2012 31-MAR-2012
         3 01-JAN-2011 31-JAN-2011
         3 01-OCT-2011 30-NOV-2011
         3 01-JAN-2013 31-DEC-9999

Conclusion

Merging temporal intervals is challenging, especially if the history has gaps and the data is inconsistent as in scenario B. However, the SQL engine is a powerful tool to clean up data and merge the temporal intervals efficiently in a single SQL statement.

Joining Temporal Intervals Part 2

The solution I’ve provided in Joining Temporal Intervals produces wrong results if one or more temporal tables have gaps in their history or if disconnected intervals have the same content. In this post I’ll address both problems.

Test Data

The example queries are based on the same model as described in Joining Temporal Intervals. For the join of the tables EMPV, DEPTV, JOBV and EMPV (manager) I’ve amended the history to contain some gaps, visible in the following listings.

SQL> SELECT * FROM empv WHERE empno = 7788 ORDER BY valid_from;

EMPVID EMPNO ENAME JOBNO  MGR HIREDATE       SAL COMM DEPTNO VALID_FROM  VALID_TO
------ ----- ----- ----- ---- ----------- ------ ---- ------ ----------- -----------
     8  7788 SCOTT     5 7566 19-APR-1987 3000.0          20 19-APR-1987 31-DEC-1989
    22  7788 Scott     5 7566 19-APR-1987 3000.0          20 01-JAN-1990 31-MAR-1991
    36  7788 Scott     5 7566 19-APR-1987 3300.0          20 01-APR-1991 31-JUL-1991
    43  7788 Scott     5 7566 01-JAN-1992 3500.0          20 01-JAN-1992 31-DEC-9999

SQL> SELECT * FROM jobv WHERE jobno = 5 ORDER BY valid_from;

    JOBVID JOBNO JOB       VALID_FROM  VALID_TO
---------- ----- --------- ----------- -----------
         5     5 ANALYST   01-JAN-1980 20-JAN-1990
        10     5 Analyst   22-JAN-1990 31-DEC-9999

SQL> SELECT * FROM deptv WHERE deptno = 20 ORDER BY valid_from;

   DEPTVID DEPTNO DNAME          LOC           VALID_FROM  VALID_TO
---------- ------ -------------- ------------- ----------- -----------
         2     20 RESEARCH       DALLAS        01-JAN-1980 28-FEB-1990
         6     20 Research       DALLAS        01-MAR-1990 31-MAR-1990
        10     20 Research       Dallas        01-APR-1990 31-DEC-9999

SQL> SELECT * FROM empv WHERE empno = 7566 ORDER BY valid_from;

EMPVID EMPNO ENAME JOBNO  MGR HIREDATE       SAL COMM DEPTNO VALID_FROM  VALID_TO
------ ----- ----- ----- ---- ----------- ------ ---- ------ ----------- -----------
     4  7566 JONES     4 7839 02-APR-1981 2975.0          20 02-APR-1981 31-DEC-1989
    18  7566 Jones     4 7839 02-APR-1981 2975.0          20 01-JAN-1990 31-MAR-1991
    32  7566 Jones     4 7839 02-APR-1981 3272.5          20 01-APR-1991 31-DEC-9999

SQL> SELECT * FROM empv WHERE mgr = 7788 ORDER BY valid_from;

EMPVID EMPNO ENAME JOBNO  MGR HIREDATE       SAL COMM DEPTNO VALID_FROM  VALID_TO
------ ----- ----- ----- ---- ----------- ------ ---- ------ ----------- -----------
    11  7876 ADAMS     1 7788 23-MAY-1987 1100.0          20 23-MAY-1987 31-DEC-1989
    25  7876 Adams     1 7788 23-MAY-1987 1100.0          20 01-JAN-1990 31-MAR-1991
    39  7876 Adams     1 7788 23-MAY-1987 1210.0          20 01-APR-1991 31-DEC-9999

From a business point of view, Scott left the company on 31-JUL-1991 and came back on 01-JAN-1992 with a better salary. It’s important to notice that Scott is Adams’s manager and Adams is therefore leaderless from 01-AUG-1991 until 31-DEC-1991. Additionally I fabricated a gap for JOBNO 5 on 21-JAN-1990.

You find the SQL script to create and populate the model here.

Gap-Aware Temporal Join

The following figure visualizes the expected result of the temporal join. The raw data intervals queried previously are represented in blue and the join result in red. The yellow bars highlight the gaps in the source and result data sets.

[Figure: source intervals (blue), join result (red) and gaps (yellow)]

Here is the query and the join result for EMPNO = 7788. Please note that the column LOC from table DEPTV is not queried, which reduces the number of final result intervals from 7 to 6.

SELECT e.empno,
       g.valid_from,
       LEAST(
          e.valid_to, 
          d.valid_to, 
          j.valid_to, 
          NVL(m.valid_to, e.valid_to),
          LEAD(g.valid_from - 1, 1, e.valid_to) OVER(
             PARTITION BY e.empno ORDER BY g.valid_from
          )
       ) AS valid_to,
       e.ename,
       j.job,
       e.mgr,
       m.ename AS mgr_ename,
       e.hiredate,
       e.sal,
       e.comm,
       e.deptno,
       d.dname
  FROM empv e
 INNER JOIN (SELECT valid_from FROM empv
             UNION
             SELECT valid_from FROM deptv
             UNION
             SELECT valid_from FROM jobv
             UNION
             SELECT valid_to + 1 FROM empv 
              WHERE valid_to != DATE '9999-12-31'
             UNION
             SELECT valid_to + 1 FROM deptv 
              WHERE valid_to != DATE '9999-12-31'
             UNION
             SELECT valid_to + 1 FROM jobv 
              WHERE valid_to != DATE '9999-12-31') g
    ON g.valid_from BETWEEN e.valid_from AND e.valid_to
 INNER JOIN deptv d
    ON d.deptno = e.deptno
       AND g.valid_from BETWEEN d.valid_from AND d.valid_to
 INNER JOIN jobv j
    ON j.jobno = e.jobno
       AND g.valid_from BETWEEN j.valid_from AND j.valid_to
  LEFT JOIN empv m
    ON m.empno = e.mgr
       AND g.valid_from BETWEEN m.valid_from AND m.valid_to
WHERE e.empno = 7788
ORDER BY 1, 2;

EMPNO VALID_FROM  VALID_TO    ENAME JOB     MGR MGR_E HIREDATE       SAL COMM DEPTNO DNAME
----- ----------- ----------- ----- ------- ---- ----- ----------- ------ ---- ------ --------
 7788 19-APR-1987 22-MAY-1987 SCOTT ANALYST 7566 JONES 19-APR-1987 3000.0          20 RESEARCH
 7788 23-MAY-1987 31-DEC-1989 SCOTT ANALYST 7566 JONES 19-APR-1987 3000.0          20 RESEARCH
 7788 01-JAN-1990 20-JAN-1990 Scott ANALYST 7566 Jones 19-APR-1987 3000.0          20 RESEARCH
 7788 22-JAN-1990 28-FEB-1990 Scott Analyst 7566 Jones 19-APR-1987 3000.0          20 RESEARCH
 7788 01-MAR-1990 31-MAR-1990 Scott Analyst 7566 Jones 19-APR-1987 3000.0          20 Research
 7788 01-APR-1990 31-MAR-1991 Scott Analyst 7566 Jones 19-APR-1987 3000.0          20 Research
 7788 01-APR-1991 31-JUL-1991 Scott Analyst 7566 Jones 19-APR-1987 3300.0          20 Research
 7788 01-JAN-1992 31-DEC-9999 Scott Analyst 7566 Jones 01-JAN-1992 3500.0          20 Research

The inline view g produces a list of all distinct VALID_FROM values, which is used as an additional join criterion for all temporal tables. Unlike in Joining Temporal Intervals, all interval endpoints also need to be considered to identify gaps.

The calculation of the VALID_TO column is a bit laborious (see the LEAST expression at the beginning of the query). You need to get the lowest VALID_TO value of all involved intervals, including outer-joined intervals like m. Also the subsequent VALID_FROM has to be considered, since the inline view g provides all VALID_FROM values to be probed and they may be completely independent of the involved intervals.

The remaining part of the query is quite simple. Two pairs of rows in the result set should be merged in a subsequent step: the first two rows, and the rows valid from 01-MAR-1990 and 01-APR-1990.

If you change the filter to “WHERE e.mgr = 7788” you get the following result:

EMPNO VALID_FROM  VALID_TO    ENAME JOB      MGR MGR_E HIREDATE       SAL COMM DEPTNO DNAME
----- ----------- ----------- ----- ------- ---- ----- ----------- ------ ---- ------ --------
 7876 23-MAY-1987 31-DEC-1989 ADAMS CLERK   7788 SCOTT 23-MAY-1987 1100.0          20 RESEARCH
 7876 01-JAN-1990 20-JAN-1990 Adams CLERK   7788 Scott 23-MAY-1987 1100.0          20 RESEARCH
 7876 21-JAN-1990 21-JAN-1990 Adams Clerk   7788 Scott 23-MAY-1987 1100.0          20 RESEARCH
 7876 22-JAN-1990 28-FEB-1990 Adams Clerk   7788 Scott 23-MAY-1987 1100.0          20 RESEARCH
 7876 01-MAR-1990 31-MAR-1990 Adams Clerk   7788 Scott 23-MAY-1987 1100.0          20 Research
 7876 01-APR-1990 31-MAR-1991 Adams Clerk   7788 Scott 23-MAY-1987 1100.0          20 Research
 7876 01-APR-1991 31-JUL-1991 Adams Clerk   7788 Scott 23-MAY-1987 1210.0          20 Research
 7876 01-AUG-1991 31-DEC-1991 Adams Clerk   7788       23-MAY-1987 1210.0          20 Research
 7876 01-JAN-1992 31-DEC-9999 Adams Clerk   7788 Scott 23-MAY-1987 1210.0          20 Research

This result is interesting for several reasons. First, the records valid from 21-JAN-1990 and 22-JAN-1990 should be merged in a subsequent step. Second, the record valid from 01-AUG-1991 until 31-DEC-1991 (without a manager name) represents the time when Scott was not employed by this company. Third, the records valid from 01-APR-1991 and 01-JAN-1992 must not be merged in a subsequent step. They are identical (apart from VALID_FROM and VALID_TO) but the intervals are not connected.

Merging Temporal Intervals

If you look at the result of the previous query you might be tempted to skip the merging step, since there are just a few intervals which need merging. However, in real-life scenarios you might easily end up with daily intervals for large tables, since the inline view g considers the valid_from and valid_to columns of all involved tables. Sooner or later you will think about merging temporal intervals or about other solutions to reduce the result set. If you’re skeptical, then querying for “e.empno = 7369” might give you an idea what I’m talking about (21 intervals before the merge, 6 intervals after the merge).

Since I covered this topic in Merging Temporal Intervals with Gaps, I’ll provide the query to produce the final result and explain only the specifics.

WITH 
   joined AS (
      -- gap-aware temporal join
      -- produces result_cols to calculate new_group in the subsequent query
      SELECT e.empno,
             g.valid_from,
             LEAST(
                e.valid_to, 
                d.valid_to, 
                j.valid_to, 
                NVL(m.valid_to, e.valid_to),
                LEAD(g.valid_from - 1, 1, e.valid_to) OVER(
                   PARTITION BY e.empno ORDER BY g.valid_from
                )
             ) AS valid_to,
             (
                e.ename 
                || ',' || j.job 
                || ',' || e.mgr 
                || ',' || m.ename 
                || ',' || TO_CHAR(e.hiredate,'YYYY-MM-DD') 
                || ',' || e.sal 
                || ',' || e.comm 
                || ',' || e.deptno 
                || ',' || d.dname 
             ) AS result_cols,
             e.ename,
             j.job,
             e.mgr,
             m.ename AS mgr_ename,
             e.hiredate,
             e.sal,
             e.comm,
             e.deptno,
             d.dname             
        FROM empv e
       INNER JOIN (SELECT valid_from FROM empv
                   UNION
                   SELECT valid_from FROM deptv
                   UNION
                   SELECT valid_from FROM jobv
                   UNION
                   SELECT valid_to + 1 FROM empv 
                    WHERE valid_to != DATE '9999-12-31'
                   UNION
                   SELECT valid_to + 1 FROM deptv 
                    WHERE valid_to != DATE '9999-12-31'
                   UNION
                   SELECT valid_to + 1 FROM jobv 
                    WHERE valid_to != DATE '9999-12-31') g
          ON g.valid_from BETWEEN e.valid_from AND e.valid_to
       INNER JOIN deptv d
          ON d.deptno = e.deptno
             AND g.valid_from BETWEEN d.valid_from AND d.valid_to
       INNER JOIN jobv j
          ON j.jobno = e.jobno
             AND g.valid_from BETWEEN j.valid_from AND j.valid_to
        LEFT JOIN empv m
          ON m.empno = e.mgr
             AND g.valid_from BETWEEN m.valid_from AND m.valid_to
   ),
   calc_various AS (
      -- produces columns has_gap, new_group
      SELECT empno,
             valid_from,
             valid_to,
             result_cols,
             ename,
             job,
             mgr,
             mgr_ename,
             hiredate,
             sal,
             comm,
             deptno,
             dname,
             CASE
                WHEN LAG(valid_to, 1, valid_from - 1) OVER(
                        PARTITION BY empno ORDER BY valid_from
                     ) = valid_from - 1 THEN
                   0
                ELSE
                   1
             END AS has_gap,
             CASE 
                WHEN LAG(result_cols, 1, result_cols) OVER (
                        PARTITION BY empno ORDER BY valid_from
                     ) = result_cols THEN
                   0
                ELSE
                   1
             END AS new_group
        FROM joined
   ),
   calc_group AS (
      -- produces column group_no
      SELECT empno,
             valid_from,
             valid_to,
             ename,
             job,
             mgr,
             mgr_ename,
             hiredate,
             sal,
             comm,
             deptno,
             dname,
              SUM(has_gap + new_group) OVER(
                PARTITION BY empno ORDER BY valid_from
             ) AS group_no
        FROM calc_various
   ),
   merged AS (
      -- produces the final merged result
      -- grouping by group_no ensures that gaps are honored
      SELECT empno,
             MIN(valid_from) AS valid_from,
             MAX(valid_to) AS valid_to,
             ename,
             job,
             mgr,
             mgr_ename,
             hiredate,
             sal,
             comm,
             deptno,
             dname
        FROM calc_group
       GROUP BY empno,
                group_no,
                ename,
                job,
                mgr,
                mgr_ename,
                hiredate,
                sal,
                comm,
                deptno,
                dname
       ORDER BY empno,
                valid_from
   )   
-- main
SELECT * FROM merged WHERE empno = 7788;

EMPNO VALID_FROM  VALID_TO    ENAME JOB      MGR MGR_E HIREDATE       SAL COMM DEPTNO DNAME
----- ----------- ----------- ----- ------- ---- ----- ----------- ------ ---- ------ --------
 7788 19-APR-1987 31-DEC-1989 SCOTT ANALYST 7566 JONES 19-APR-1987 3000.0          20 RESEARCH
 7788 01-JAN-1990 20-JAN-1990 Scott ANALYST 7566 Jones 19-APR-1987 3000.0          20 RESEARCH
 7788 22-JAN-1990 28-FEB-1990 Scott Analyst 7566 Jones 19-APR-1987 3000.0          20 RESEARCH
 7788 01-MAR-1990 31-MAR-1991 Scott Analyst 7566 Jones 19-APR-1987 3000.0          20 Research
 7788 01-APR-1991 31-JUL-1991 Scott Analyst 7566 Jones 19-APR-1987 3300.0          20 Research
 7788 01-JAN-1992 31-DEC-9999 Scott Analyst 7566 Jones 01-JAN-1992 3500.0          20 Research

The named query “joined” produces an additional column RESULT_COLS. It’s simply a concatenation of all columns used in the GROUP BY clause of the named query “merged”. RESULT_COLS is used in the named query “calc_various” to calculate the column NEW_GROUP. NEW_GROUP is set to 1 if the value of RESULT_COLS differs between the current and the previous row. NEW_GROUP ensures that Adams’s intervals valid from 01-APR-1991 and valid from 01-JAN-1992 are not merged. See the rows valid from 01-APR-1991 and 01-JAN-1992 in the following listing.

EMPNO VALID_FROM  VALID_TO    ENAME JOB      MGR MGR_E HIREDATE       SAL COMM DEPTNO DNAME
----- ----------- ----------- ----- ------- ---- ----- ----------- ------ ---- ------ --------
 7876 23-MAY-1987 31-DEC-1989 ADAMS CLERK   7788 SCOTT 23-MAY-1987 1100.0          20 RESEARCH
 7876 01-JAN-1990 20-JAN-1990 Adams CLERK   7788 Scott 23-MAY-1987 1100.0          20 RESEARCH
 7876 21-JAN-1990 28-FEB-1990 Adams Clerk   7788 Scott 23-MAY-1987 1100.0          20 RESEARCH
 7876 01-MAR-1990 31-MAR-1991 Adams Clerk   7788 Scott 23-MAY-1987 1100.0          20 Research
 7876 01-APR-1991 31-JUL-1991 Adams Clerk   7788 Scott 23-MAY-1987 1210.0          20 Research
 7876 01-AUG-1991 31-DEC-1991 Adams Clerk   7788       23-MAY-1987 1210.0          20 Research
 7876 01-JAN-1992 31-DEC-9999 Adams Clerk   7788 Scott 23-MAY-1987 1210.0          20 Research

The ORA_HASH function over RESULT_COLS could give me a shorter representation of all result columns. But since hash collisions would go undetected and would lead to wrong results in some rare data constellations, I decided not to use a hash function.
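
For reference, a sketch of the rejected variant; it would replace the CASE expression that computes NEW_GROUP in the named query “calc_various”, and a (rare) collision between two different RESULT_COLS values would silently merge rows that must stay apart:

-- rejected: hash instead of the full concatenation (collisions go undetected)
CASE
   WHEN LAG(ORA_HASH(result_cols), 1, ORA_HASH(result_cols)) OVER(
           PARTITION BY empno ORDER BY valid_from
        ) = ORA_HASH(result_cols) THEN
      0
   ELSE
      1
END AS new_group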

The named queries “calc_various”, “calc_group” and “merged” are based on the query in scenario A of Merging Temporal Intervals with Gaps. The columns HAS_GAP, NEW_GROUP and GROUP_NO are explained in that post.

Conclusion

Joining and merging temporal intervals is indeed very challenging. Even though I showed in this post that it is doable, I recommend choosing a simpler solution whenever feasible, e.g. limiting the query to a certain point in time for all involved temporal tables, since this eliminates the need to merge temporal intervals and even simplifies the gap-aware temporal join.

Loading Historical Data Into Flashback Archive Enabled Tables

Oracle provides via OTN an import solution for FBA (Flashback Data Archive, also known as Total Recall). The solution extends the SCN to TIMESTAMP mapping and provides a wrapper around existing APIs to populate the history. However, issues like using a customized mapping period/precision or ORA-1466 when using the AS OF TIMESTAMP clause are not addressed. I show in this post how to load historical data into flashback archive enabled tables using the standard API. Unfortunately there are still some experimental actions necessary to get a fully functional result, at least with version 11.2.0.3.4.

Test Scenario

There are various reasons why you may want to load historical data into a FBA enabled table, e.g. for testing purposes, to move FBA enabled tables from one database instance to another, or to migrate conventionally historized tables. This example is based on a migration scenario. Table T1 has the following content and shall be migrated to FBA. You find the script to create and populate the table T1 here.

SQL> SELECT * FROM t1;

VID OID CREATED_AT          OUTDATED_AT         C1 C2
--- --- ------------------- ------------------- -- --
  1   1 2012-12-19 13:00:57 2012-12-21 08:31:01 A  A1
  2   1 2012-12-21 08:31:01 2012-12-23 20:58:05 A  A2
  3   1 2012-12-23 20:58:05 2012-12-27 11:40:41 A  A3
  4   1 2012-12-27 11:40:41 9999-12-31 23:59:59 A  A4
  5   2 2012-12-20 13:51:55 9999-12-31 23:59:59 B  B1
  6   4 2012-12-22 11:03:22 2012-12-23 19:36:08 C  C1
  7   4 2012-12-28 14:25:50 2012-12-30 17:10:39 C  C1
  8   4 2012-12-30 17:10:39 2012-12-31 12:05:40 C  C2

The column VID is the version identifier and the primary key. OID is the object identifier, which is unique at every point in time. CREATED_AT and OUTDATED_AT define the interval boundaries. Column C1 and C2 are the payload columns, which may change over time.

The following queries return data valid at 2012-12-23 19:36:08 and now.

SQL> SELECT * FROM t1
  2   WHERE created_at <= TIMESTAMP '2012-12-23 19:36:08'
  3     AND outdated_at > TIMESTAMP '2012-12-23 19:36:08';

VID OID CREATED_AT          OUTDATED_AT         C1 C2
--- --- ------------------- ------------------- -- --
  2   1 2012-12-21 08:31:01 2012-12-23 20:58:05 A  A2
  5   2 2012-12-20 13:51:55 9999-12-31 23:59:59 B  B1

SQL> SELECT * FROM t1 WHERE outdated_at > SYSDATE;

VID OID CREATED_AT          OUTDATED_AT         C1 C2
--- --- ------------------- ------------------- -- --
  4   1 2012-12-27 11:40:41 9999-12-31 23:59:59 A  A4
  5   2 2012-12-20 13:51:55 9999-12-31 23:59:59 B  B1

It’s important to notice that OID 4 (VID 6, valid until exactly 2012-12-23 19:36:08) is not part of the first query result, since OUTDATED_AT has exclusive semantics. I mention this fact because the column ENDSCN in the table SYS_FBA_HIST_<object_id> also uses exclusive semantics, which simplifies the migration process, at least in this area.

From UTC to SCN

Oracle uses its own time standard, the SCN, for FBA. The SCN is initialized during database creation and is valid for a single Oracle instance, even if synchronization mechanisms among database instances exist (see also MOS note 1376995.1). An SCN cannot represent a date-time value before 1988-01-01 00:00:00. The time between two SCNs varies; it may be shorter when the database instance is executing a lot of transactions, and longer in more idle times or when the database instance is shut down. Oracle uses the table SYS.SMON_SCN_TIME to map SCNs to TIMESTAMPs and vice versa and provides the functions SCN_TO_TIMESTAMP and TIMESTAMP_TO_SCN for that purpose.
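
A quick way to see the mapping at work, assuming you are allowed to query V$DATABASE:

-- current SCN and its timestamp mapping
SELECT current_scn,
       scn_to_timestamp(current_scn) AS current_ts
  FROM v$database;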

So what is the first date-time value which may be converted into a SCN?

SQL> SELECT MIN(time_dp) AS min_time_dp,
  2         MIN(scn) AS min_scn,
  3         CAST(scn_to_timestamp(MIN(scn)) AS DATE) AS min_scn_to_ts,
  4         MAX(dbtimezone) AS dbtimezone,
  5         sessiontimezone
  6    FROM sys.smon_scn_time;

MIN_TIME_DP            MIN_SCN MIN_SCN_TO_TS       DBTIMEZONE SESSIONTIMEZONE
------------------- ---------- ------------------- ---------- ---------------
2010-09-05 22:40:05      18669 2010-09-06 00:40:05 +00:00     +01:00

The first value is 2010-09-06 00:40:05. You may notice the two-hour difference to MIN_TIME_DP. Oracle stores the date values in this table in UTC (DBTIMEZONE) and my database server’s time zone is CET (Central European Time), which is UTC+01:00 at the time of the query (see SESSIONTIMEZONE). However, in September daylight saving time was active, and back then the offset was UTC+02:00. That explains the two-hour difference.

Let’s test the boundary values.

SQL> SELECT timestamp_to_scn(TIMESTAMP '2010-09-06 00:40:05') AS SCN FROM dual;

       SCN
----------
     18669

SQL> SELECT timestamp_to_scn(TIMESTAMP '2010-09-06 00:40:04') AS SCN FROM dual;
SELECT timestamp_to_scn(TIMESTAMP '2010-09-06 00:40:04') AS SCN FROM dual
       *
ERROR at line 1:
ORA-08180: no snapshot found based on specified time
ORA-06512: at "SYS.TIMESTAMP_TO_SCN", line 1

As expected 2010-09-06 00:40:05 works and 2010-09-06 00:40:04 raises an ORA-8180 error.

I expect that you get completely different values in your database, since the SMON process deletes “old” values in SYS.SMON_SCN_TIME based on the UNDO and FBA configuration. Since this table is solely used for timestamp to SCN conversion, I assume it is safe to extend it manually. In fact the PL/SQL package DBMS_FDA_MAPPINGS provided by Oracle does exactly that, but it distributes the remaining SCNs between 1988-01-01 and the MIN(time_dp) in SYS.SMON_SCN_TIME uniformly. In my case there are only 18668 SCNs remaining to be assigned to timestamps before 2010-09-06 00:40:05. So if I know that I won’t need timestamps to be mapped to SCNs before, let’s say, 2010-01-01, I may use the remaining values to improve the precision of timestamp to SCN mappings for this reduced period.

In my case I do not need to extend the mapping in SYS.SMON_SCN_TIME, but here is an example of how it can be done:

SQL> INSERT INTO smon_scn_time (
  2     thread,
  3     orig_thread,
  4     time_mp,
  5     time_dp,
  6     scn_wrp,
  7     scn_bas,
  8     scn,
  9     num_mappings
 10  )
 11  SELECT 0 AS thread,
 12         0 AS orig_thread,
 13         (
 14            CAST(
 15               to_timestamp_tz(
 16                  '2010-09-06 00:40:04 CET',
 17                  'YYYY-MM-DD HH24:MI:SS TZR'
 18               ) AT TIME ZONE 'UTC' AS DATE
 19            ) - DATE '1970-01-01'
 20         ) * 60 * 60 * 24 AS time_mp,
 21         CAST(
 22            to_timestamp_tz(
 23               '2010-09-06 00:40:04 CET',
 24               'YYYY-MM-DD HH24:MI:SS TZR'
 25            ) at TIME ZONE 'UTC' AS DATE
 26         ) AS time_dp,
 27         FLOOR((MIN(scn) - 1) / POWER(2, 32)) AS scn_wrp,
 28         MOD(MIN(scn) - 1, POWER(2, 32)) AS scn_bas,
 29         MIN(scn) - 1 AS scn,
 30         0 AS num_mappings
 31    FROM sys.smon_scn_time;

1 row created.

SQL> SELECT timestamp_to_scn(TIMESTAMP '2010-09-06 00:40:04') AS scn FROM dual;

       SCN
----------
     18668

TIME_MP is the number of seconds since 1970-01-01. TIME_DP is the date value of TIME_MP. The calculation of these columns includes a time zone conversion from CET to UTC, which makes it a bit verbose. SCN_WRP counts the number of times SCN_BAS has reached its 32-bit maximum of 4294967295.
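
The relationship between the three SCN columns can be cross-checked directly; the following sketch should return no rows if the mapping rows are consistent:

-- verify scn = scn_wrp * 2^32 + scn_bas for all mapping rows
SELECT scn, scn_wrp, scn_bas
  FROM sys.smon_scn_time
 WHERE scn != scn_wrp * POWER(2, 32) + scn_bas;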

Alternatively you may simply extend the SYS.SMON_SCN_TIME based on SYS.V$LOG_HISTORY.

INSERT INTO smon_scn_time
   (thread,
    orig_thread,
    time_mp,
    time_dp,
    scn_wrp,
    scn_bas,
    scn,
    num_mappings)
   SELECT 0 AS thread,
          0 AS orig_thread,
          (first_time - DATE '1970-01-01') * 60 * 60 * 24 AS time_mp,
          first_time AS time_dp,
          floor(first_change# / power(2, 32)) AS scn_wrp,
          MOD(first_change#, power(2, 32)) AS scn_bas,
          first_change# AS scn,
          0 AS num_mappings
     FROM v$log_history
    WHERE first_time < (SELECT MIN(time_dp)
                          FROM smon_scn_time);

BTW: the TIMESTAMP_TO_SCN function uses its own kind of result cache. You may need to restart the database if you undo changes in SYS.SMON_SCN_TIME.

Next, we should check whether the mapping from timestamp to SCN is precise enough, meaning that no two distinct timestamps map to the same SCN. Here’s the query for T1:

SQL> SELECT COUNT(DISTINCT ts) AS cnt_ts,
  2         COUNT(DISTINCT timestamp_to_scn(ts)) AS cnt_ts_to_scn
  3    FROM (SELECT created_at ts FROM t1
  4          UNION
  5          SELECT outdated_at FROM t1
  6          MINUS
  7          SELECT TIMESTAMP '9999-12-31 23:59:59' FROM dual);

    CNT_TS CNT_TS_TO_SCN
---------- -------------
        10            10

T1 is ready to be migrated to a FBA enabled table if CNT_TS and CNT_TS_TO_SCN are equal. If the values are different you have basically two options: a) amend T1 (e.g. change timestamps or merge intervals) or b) amend SYS.SMON_SCN_TIME as explained above.

Migration

The following script creates a flashback archive and the FBA enabled table T2. At the end a small PL/SQL block is executed to create views for the 3 SYS_FBA_…_<object_id> tables.

-- create FBA
CREATE FLASHBACK ARCHIVE fba TABLESPACE USERS QUOTA 10M RETENTION 10 YEAR;

-- create FBA enabled table t2
CREATE TABLE t2 (
   oid         NUMBER(4,0)  NOT NULL PRIMARY KEY,
   c1          VARCHAR2(10) NOT NULL,
   c2          VARCHAR2(10) NOT NULL
) FLASHBACK ARCHIVE fba;

-- enforce visibility of SYS_FBA tables
BEGIN
   dbms_flashback_archive.disassociate_fba(owner_name => USER, table_name => 'T2');
   dbms_flashback_archive.reassociate_fba(owner_name => USER, table_name => 'T2');
END;
/

-- create views on SYS_FBA tables
DECLARE
   PROCEDURE create_view(in_view_name  IN VARCHAR2,
                         in_table_name IN VARCHAR2) IS
   BEGIN
      EXECUTE IMMEDIATE 'CREATE OR REPLACE VIEW ' || in_view_name ||
                        ' AS SELECT * FROM ' || in_table_name;
   END create_view;
BEGIN
   FOR l_rec IN (SELECT object_id
                   FROM user_objects
                  WHERE object_name = 'T2')
   LOOP
      create_view('T2_DDL_COLMAP','SYS_FBA_DDL_COLMAP_' || l_rec.object_id);
      create_view('T2_HIST', 'SYS_FBA_HIST_' || l_rec.object_id);
      create_view('T2_TCRV', 'SYS_FBA_TCRV_' || l_rec.object_id);
   END LOOP;
END;
/

Flashback Query (since Oracle9i) and Flashback Data Archive (since Oracle 11g) are tightly coupled. The SYS_FBA_…<object_id> tables are created with a delay by the FBDA background process. DML on and queries against table T2 are possible anyway with the help of UNDO and Flashback Query. The PL/SQL block calling DISASSOCIATE_FBA and REASSOCIATE_FBA enforces the creation of the SYS_FBA…<object_id> tables. This step is necessary to create the views T2_DDL_COLMAP, T2_HIST and T2_TCRV. These views simplify the access to the underlying tables in my SQL scripts.

The next script copies data from table T1 to T2. Basically that’s what the PL/SQL package DBMS_FDA_IMPORT provided by Oracle does.

-- migrate current rows
INSERT INTO t2 (OID, c1, c2)
   SELECT OID, c1, c2
     FROM t1
    WHERE outdated_at > SYSDATE;
COMMIT;

-- enable DML on FBA tables
BEGIN
   dbms_flashback_archive.disassociate_fba(owner_name => USER, table_name => 'T2');
END;
/

-- migrate T1 rows into T2
INSERT INTO t2_hist (RID, STARTSCN, ENDSCN, XID, OPERATION, OID, C1, C2)
-- outdated INSERTs (simulating INSERT/DELETE logic)
SELECT NULL AS rid,
       timestamp_to_scn(created_at) AS startscn,
       timestamp_to_scn(outdated_at) AS endscn,
       NULL AS XID,
       'I' AS operation,
       OID,
       c1,
       c2
  FROM t1
 WHERE outdated_at < SYSDATE
-- current INSERTs (workaround for ORA-55622 on insert into T2_TCRV)
UNION ALL
SELECT t2.rowid AS rid,
       timestamp_to_scn(t1.created_at) AS startscn,
       h.startscn AS endscn,
       NULL AS XID,
       'I' AS operation,
       t2.OID,
       t2.c1,
       t2.c2
  FROM t1 t1
 INNER JOIN t2
    ON t2.oid = t1.oid
 INNER JOIN t2_tcrv h
    ON h.RID = t2.rowid
 WHERE t1.outdated_at > SYSDATE;

COMMIT;

-- disable DML on FBA tables
BEGIN
   dbms_flashback_archive.reassociate_fba(owner_name => USER, table_name => 't2');
END;
/

The following queries return data valid at 2012-12-23 19:36:08 and now (as for T1 above).

SQL> SELECT * FROM t2 AS OF SCN timestamp_to_scn(TIMESTAMP '2012-12-23 19:36:08');

OID C1 C2
--- -- --
  1 A  A2
  2 B  B1

SQL> SELECT * FROM t2;

OID C1 C2
--- -- --
  1 A  A4
  2 B  B1

As you see, the results are identical to those of T1. The script here compares the result for every single point in time in T1 with T2. I’ve run it without detecting differences.
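
Such a comparison boils down to a set difference per point in time. A minimal sketch for the timestamp used above; an empty result in this direction, and with the two operands swapped, means the contents match:

-- rows valid in T1 but missing in T2 at the given point in time
SELECT oid, c1, c2
  FROM t1
 WHERE created_at <= TIMESTAMP '2012-12-23 19:36:08'
   AND outdated_at > TIMESTAMP '2012-12-23 19:36:08'
MINUS
SELECT oid, c1, c2
  FROM t2 AS OF SCN timestamp_to_scn(TIMESTAMP '2012-12-23 19:36:08');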

Migration Issue 1 – ORA-1466 When Using AS OF TIMESTAMP Clause

In the examples above I avoided the use of the AS OF TIMESTAMP clause and used the AS OF SCN clause instead. The reason becomes apparent when executing the following query:

SQL> SELECT * FROM t2 AS OF TIMESTAMP TIMESTAMP '2012-12-23 19:36:08';
SELECT * FROM t2 AS OF TIMESTAMP TIMESTAMP '2012-12-23 19:36:08'
              *
ERROR at line 1:
ORA-01466: unable to read data - table definition has changed

Ok, we have not changed the table definition, but was the table definition for T2 valid on 2012-12-23 19:36:08?

SQL> SELECT column_name,
  2         startscn,
  3         endscn,
  4         CAST(scn_to_timestamp(startscn) AS DATE) AS start_ts
  5    FROM t2_ddl_colmap;

COLUMN_NAME       STARTSCN     ENDSCN START_TS
----------- -------------- ---------- -------------------
OID           161382016632            2013-01-03 00:17:42
C1            161382016632            2013-01-03 00:17:42
C2            161382016632            2013-01-03 00:17:42

Since I’ve created the table on 2013-01-03 00:17:42 Oracle assumes that no columns exist before this point in time. However, the AS OF SCN clause is not that picky.

To fix the problem we need to update T2_DDL_COLMAP but unfortunately DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA does not allow us to change the content of the DDL_COLMAP directly (it is possible indirectly by altering the HIST table, but this is not helpful in this case).

I’ve written a PL/SQL package TVD_FBA_HELPER which updates SYS.TAB$ behind the scenes to overcome this restriction. Please consult Oracle Support on how to proceed if you plan to use it in production environments. The package is provided as is and of course you use it at your own risk.

Here is the script to fix validity of the DDL_COLMAP:

-- enable DML FBA table
BEGIN
   tvd_fba_helper.disassociate_col_map(in_owner_name => USER, in_table_name => 'T2');
END;
/

-- enforce ddl colmap consistency (valid for first entries)
UPDATE t2_ddl_colmap
   SET startscn =
       (SELECT MIN(startscn)
          FROM t2_hist);
COMMIT;

-- disable DML on FBA tables
BEGIN
   tvd_fba_helper.reassociate_col_map(in_owner_name => USER, in_table_name => 'T2');
END;
/

Now the AS OF TIMESTAMP clause works as well:

SQL> SELECT * FROM t2 AS OF TIMESTAMP TIMESTAMP '2012-12-23 19:36:08';

OID C1 C2
--- -- --
  1 A  A2
  2 B  B1

Migration Issue 2 – Different Number of Intervals

If you query the full history of T2 you get 10 rows, but T1 contains 8 rows only.

SQL> SELECT ROWNUM AS vid, OID, created_at, outdated_at, c1, c2
  2    FROM (SELECT OID,
  3                 TO_DATE(TO_CHAR(versions_starttime, 'YYYY-MM-DD HH24:MI:SS'),
  4                         'YYYY-MM-DD HH24:MI:SS') AS created_at,
  5                 NVL(TO_DATE(TO_CHAR(versions_endtime, 'YYYY-MM-DD HH24:MI:SS'),
  6                             'YYYY-MM-DD HH24:MI:SS'),
  7                     TIMESTAMP '9999-12-31 23:59:59') AS outdated_at,
  8                 c1,
  9                 c2
 10            FROM t2 VERSIONS BETWEEN TIMESTAMP TIMESTAMP '2012-12-19 13:00:57'
 11                                     AND SYSTIMESTAMP
 12           ORDER BY 1, 2);

VID OID CREATED_AT          OUTDATED_AT         C1 C2
--- --- ------------------- ------------------- -- --
  1   1 2012-12-19 13:00:53 2012-12-21 08:20:19 A  A1
  2   1 2012-12-21 08:20:19 2012-12-23 20:58:03 A  A2
  3   1 2012-12-23 20:58:03 2012-12-27 11:40:40 A  A3
  4   1 2012-12-27 11:40:40 2013-01-03 00:50:19 A  A4
  5   1 2013-01-03 00:50:19 9999-12-31 23:59:59 A  A4
  6   2 2012-12-20 13:51:54 2013-01-03 00:50:19 B  B1
  7   2 2013-01-03 00:50:19 9999-12-31 23:59:59 B  B1
  8   4 2012-12-22 10:26:35 2012-12-23 19:36:07 C  C1
  9   4 2012-12-28 14:25:50 2012-12-30 17:10:38 C  C1
 10   4 2012-12-30 17:10:38 2012-12-31 12:05:39 C  C2

The two row pairs for OID 1 and OID 2 (VID 4/5 and VID 6/7 in the listing above) could be merged. The content is not really wrong, it’s just different from T1. The reason is that DBMS_FLASHBACK_ARCHIVE.DISASSOCIATE_FBA does not allow us to modify T2_TCRV (the SYS_FBA_TCRV_<object_id> table). This table contains validity information for the current rows in T2.

SQL> SELECT rid,
  2         startscn,
  3         CAST(scn_to_timestamp(startscn) AS DATE) AS start_ts,
  4         endscn,
  5         op
  6    FROM t2_tcrv;

RID                      STARTSCN START_TS                ENDSCN O
------------------ -------------- ------------------- ---------- -
AAAZ1PAAIAAAAb0AAA   161382018095 2013-01-03 00:50:19            I
AAAZ1PAAIAAAAb0AAB   161382018095 2013-01-03 00:50:19            I

That’s why I had to insert the current rows into T2_HIST as well.
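
A minimal sketch of such an insert, assuming the usual SYS_FBA_HIST_<object_id> layout (RID, STARTSCN, ENDSCN, XID, OPERATION plus the base table columns) and a hypothetical staging table T2_STAGE holding the historical start timestamp per row:

-- insert the current rows into the history table as well, so that their
-- original STARTSCN survives the later merge into the TCRV table
-- (t2_stage and its created_at column are hypothetical; 'I' is assumed to
-- mark an insert operation)
INSERT INTO t2_hist (rid, startscn, endscn, xid, operation, oid, c1, c2)
SELECT t.rowid, timestamp_to_scn(s.created_at), NULL, NULL, 'I', t.oid, t.c1, t.c2
  FROM t2 t
  JOIN t2_stage s ON s.oid = t.oid;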

But with the help of the PL/SQL package TVD_FBA_HELPER, the data can be fixed as follows:

-- enable DML on FBA tables
BEGIN
   dbms_flashback_archive.disassociate_fba(owner_name => USER, table_name => 'T2');
   tvd_fba_helper.disassociate_tcrv(in_owner_name => USER, in_table_name => 'T2');
END;
/

-- extend the beginning of validity in the TCRV table and fix the HIST table accordingly
MERGE INTO t2_tcrv t
USING (SELECT rid, startscn FROM t2_hist b) s
   ON (s.rid = t.rid)
 WHEN MATCHED THEN
    UPDATE SET t.startscn = s.startscn;
DELETE FROM t2_hist WHERE rid IN (SELECT rid FROM t2_tcrv);
COMMIT;

-- disable DML on FBA tables
BEGIN
   tvd_fba_helper.reassociate_tcrv(in_owner_name => USER, in_table_name => 'T2');
   dbms_flashback_archive.reassociate_fba(owner_name => USER, table_name => 'T2');
END;
/

Now the query returns 8 rows:

SQL> SELECT ROWNUM AS vid, OID, created_at, outdated_at, c1, c2
  2    FROM (SELECT OID,
  3                 TO_DATE(TO_CHAR(versions_starttime, 'YYYY-MM-DD HH24:MI:SS'),
  4                         'YYYY-MM-DD HH24:MI:SS') AS created_at,
  5                 NVL(TO_DATE(TO_CHAR(versions_endtime, 'YYYY-MM-DD HH24:MI:SS'),
  6                             'YYYY-MM-DD HH24:MI:SS'),
  7                     TIMESTAMP '9999-12-31 23:59:59') AS outdated_at,
  8                 c1,
  9                 c2
 10            FROM t2 VERSIONS BETWEEN TIMESTAMP TIMESTAMP '2012-12-19 13:00:57'
 11                                     AND SYSTIMESTAMP
 12           ORDER BY 1, 2);

VID OID CREATED_AT          OUTDATED_AT         C1 C2
--- --- ------------------- ------------------- -- --
  1   1 2012-12-19 13:00:53 2012-12-21 08:20:19 A  A1
  2   1 2012-12-21 08:20:19 2012-12-23 20:58:03 A  A2
  3   1 2012-12-23 20:58:03 2012-12-27 11:40:40 A  A3
  4   1 2012-12-27 11:40:40 9999-12-31 23:59:59 A  A4
  5   2 2012-12-20 13:51:54 9999-12-31 23:59:59 B  B1
  6   4 2012-12-22 10:26:35 2012-12-23 19:36:07 C  C1
  7   4 2012-12-28 14:25:50 2012-12-30 17:10:38 C  C1
  8   4 2012-12-30 17:10:38 2012-12-31 12:05:39 C  C2

The timestamps are slightly different from the ones in T1, but that’s expected behaviour since the precision of the TIMESTAMP_TO_SCN conversion is limited to around 3 seconds.
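
You can see this limited precision with a simple round trip through the mapping functions – a sketch, the exact drift depends on the content of SMON_SCN_TIME:

-- the mapped timestamp typically differs from SYSTIMESTAMP by a few seconds
SELECT timestamp_to_scn(SYSTIMESTAMP) AS scn,
       scn_to_timestamp(timestamp_to_scn(SYSTIMESTAMP)) AS mapped_ts,
       SYSTIMESTAMP AS real_ts
  FROM dual;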

Conclusion

Loading historical data into FBA-enabled tables requires a strategy to populate historical mappings in SYS.SMON_SCN_TIME. Afterwards you may load the associated SYS_FBA_HIST_<object_id> table with the help of the Oracle-supplied PL/SQL package DBMS_FLASHBACK_ARCHIVE and its procedures DISASSOCIATE_FBA and REASSOCIATE_FBA. I recommend this approach for 11gR2 database environments.

The solutions to fix migration issue 1 (ORA-1466 when using the AS OF TIMESTAMP clause) and migration issue 2 (different number of intervals) are considered experimental. I suggest contacting Oracle Support to discuss how to proceed if you need a solution in this area.

Updated on 2014-04-10: changed the calculation of SCN_WRP and SCN_BAS in Extend SMON_SCN_TIME (1) and Extend SMON_SCN_TIME (2), and changed the link to the new version of TVD_FBA_HELPER.

Trivadis PL/SQL & SQL CodeAnalyzer Released

A month ago I gave a talk about “Extending the Oracle Data Dictionary for Fine-Grained PL/SQL and SQL Analysis” at the ODTUG Kscope13 conference in New Orleans. Oracle data dictionary views such as DBA_IDENTIFIERS or DBA_DEPENDENCIES are in many cases sufficient to analyse static PL/SQL and SQL code within the Oracle database. But what if more detailed analyses are required, such as the use of tables or columns in PL/SQL package units, in SQL statements or in SQL statement clauses? Wouldn’t a DBA_OBJECT_USAGE view – showing DML and query operations on tables/views per database object – be a helpful tool?

TVDCA – the Trivadis PL/SQL and SQL CodeAnalyzer – is such a tool and helps you overcome several analysis restrictions in an Oracle 10g, 11g or 12c database. At Kscope13 some of my attentive session attendees got a USB stick with TVDCA 0.4.1 Beta. In the meantime I have been busy fixing bugs, and I am now proud to present an updated trial/preview version, free of charge, in the download section of this blog.

The following query might give you an idea of the functionality of TVDCA:

SQL> SELECT object_name, procedure_name, operation, table_name, column_name
  2    FROM tvd_object_col_usage_v
  3   WHERE owner = 'TVDCA'
  4         AND object_type = 'PACKAGE BODY';

OBJECT_NAME    PROCEDURE_NAME OPERATION TABLE_NAME           COLUMN_NAME
-------------- -------------- --------- -------------------- ----------------
TVD_COLDEP_PKG GET_DEP        SELECT    DBA_DEPENDENCIES     NAME
TVD_COLDEP_PKG GET_DEP        SELECT    DBA_DEPENDENCIES     OWNER
TVD_COLDEP_PKG GET_DEP        SELECT    DBA_DEPENDENCIES     REFERENCED_NAME
TVD_COLDEP_PKG GET_DEP        SELECT    DBA_DEPENDENCIES     REFERENCED_OWNER
TVD_COLDEP_PKG GET_DEP        SELECT    TVD_PARSED_OBJECTS_V OBJECT_NAME
TVD_COLDEP_PKG GET_DEP        SELECT    TVD_PARSED_OBJECTS_V OBJECT_TYPE
TVD_COLDEP_PKG GET_DEP        SELECT    TVD_PARSED_OBJECTS_V OWNER
TVD_COLDEP_PKG PROCESS_VIEW   SELECT    DBA_TAB_COLUMNS      COLUMN_ID
TVD_COLDEP_PKG PROCESS_VIEW   SELECT    DBA_TAB_COLUMNS      OWNER
TVD_COLDEP_PKG PROCESS_VIEW   SELECT    DBA_TAB_COLUMNS      TABLE_NAME

Please have a look at my slides or at the information in the download section if you are interested in learning more about TVDCA.

Trivadis PL/SQL & SQL CodeChecker Released

In August 2009 Trivadis – the company I work for – released the first version of its PL/SQL & SQL Coding Guidelines. Back then we did our PL/SQL assessments based on interviews and checked the code against our guidelines using Code Xpert, SQL*Plus scripts and some manual/visual checks. You can imagine that this approach had some shortcomings, especially when it came to repeating the process after corrections.

Back then the idea was born to build a tool that runs the checks fully automatically and can be made part of our continuous integration environment.

Today I’m proud to release the first public beta version of TVDCC – the Trivadis PL/SQL & SQL CodeChecker. TVDCC is a file-based command-line utility and does not require a connection to an Oracle database at any time. Simply run

tvdcc path=.

to scan the current directory including all subdirectories for SQL*Plus files and to create HTML and Excel reports.

See my download area for more information and to grab your copy of TVDCC. Any feedback is highly appreciated.
