Using subqueries in Oracle Data Integrator (ODI) interfaces for complex data integration requirements

By Uli Bethke

Oracle Data Integrator does not have any built-in functionality for subqueries in interfaces. This is one of the reasons why people claim that ODI has shortcomings when dealing with complex transformations.

Shortcomings of existing workarounds

In the ODI community various workarounds have been suggested to address this. A discussion thread on OTN summarises these efforts:

1) Use of WHERE EXISTS in the predicate
2) Use of a View that contains the complex subquery transformation logic as the source datastore.
3) Use of a series of temporary (yellow) interfaces.

While all of these are valid workarounds, each approach has drawbacks of its own.

The use of WHERE EXISTS only addresses a small subset of requirements.

Using views has one big disadvantage: you lose data lineage. With views you can no longer trace mappings from source to target, and as a result you lose one of the advantages that an ETL tool offers over scripting. In theory performance should not suffer, as Oracle uses predicate pushing and view merging. Tom Kyte wrote an excellent article about this for Oracle Magazine a while back if you require more information (look for the section on Views and Merging/Pushing at the bottom of the article).

The third option is a series of temporary interfaces. This has the disadvantage of physically materialising the result set for each subquery, rather than processing everything in memory. Depending on the circumstances, you may experience a degradation in performance.

Proposed workaround

I propose another workaround here that addresses all of the above shortcomings. We will use a combination of temp (yellow) interfaces and a custom knowledge module. Each subquery in our complex transformation requirement is assigned to a temporary interface. I have written a custom integration knowledge module that stores the SQL of the subquery for each temporary interface in a table. We then stick all of our temp interfaces with the subqueries into a package and let the knowledge module combine the individual subqueries into one complex query before loading the data into our target table.

Take the following as an example. We have a requirement to rank the top 10 purchasing customers and insert them into a target table. In SQL this can be done by using the RANK() analytic function in a subquery and then filtering on that subquery. In our solution this translates into two interfaces: one temp interface for the subquery with the analytic function, whose SQL our custom KM stores in a table, and a second (non-temp) interface that filters on the top 10 purchasing customers, combines the two queries, executes the resulting complex query, and inserts the result set into the target table.

You can download the custom subquery knowledge module and the other scripts from here.

Note: This is just a prototype at the moment to demonstrate that it is possible to execute complex transformations in ODI. At the moment the solution is specific to Oracle. I intend to rewrite parts of the knowledge module using Jython arrays to store the subqueries over the next couple of weeks to make the solution technology-agnostic.


The subquery knowledge module

Before I give a working example, I want to give a brief overview of the custom knowledge module that is our workhorse.

Step 1: Create a table that stores the SQL for the subqueries

Nothing spectacular here. We create a table that will hold the SQL for our subqueries. The data type for the SQL query column is a CLOB (this will become relevant in a later step). We also have a session number column to store the ODI session number for each subquery. This will allow us to run the knowledge module concurrently.
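The DDL itself is not shown in this copy. A minimal sketch of such a store table, using Python's sqlite3 in place of Oracle (so TEXT stands in for CLOB; all table and column names are my assumptions, not the author's originals):

```python
import sqlite3

# Illustrative only: the article targets Oracle, where sql_txt would be a CLOB.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE subquery_store (
        session_no   INTEGER,  -- ODI session number: allows concurrent runs
        interface_nm TEXT,     -- temp interface that produced the subquery
        sql_txt      TEXT      -- the subquery's SQL (CLOB in Oracle)
    )
""")
conn.execute(
    "INSERT INTO subquery_store VALUES (?, ?, ?)",
    (1001, "SQ1", "SELECT cust_id FROM sales"),
)
stored = conn.execute(
    "SELECT sql_txt FROM subquery_store WHERE session_no = 1001"
).fetchone()[0]
```

Keying on the session number is what makes concurrent sessions safe: each run only ever reads back its own subqueries.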

[Screenshot 1]

Step 2: Store SQL pieces for subquery in Jython variables

In step two we store the SQL pieces for the subquery in Jython variables (note that Jython is selected as the technology). We retrieve the individual SQL components via the ODI substitution API.

[Screenshot 2]

Note: Make sure that you use the tab key consistently for indentation; whitespace is significant in Jython.

Note also that I have built a simple check for the presence of analytic functions into the code. If we find an analytic function in the source-to-target column mappings we do not generate the GROUP BY SQL piece, as otherwise an error would be thrown. This simple check is missing from all of the out-of-the-box knowledge modules. As a result, analytic functions such as SUM() OVER, MAX() OVER etc. cannot be used in interfaces. I consider this a bug in the getGrpBy() API substitution method in ODI.
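The step's logic can be sketched in plain Python. In the real KM the pieces come from substitution methods such as odiRef.getColList() and odiRef.getGrpBy(); the analytic-function check below is a simplified stand-in for the author's check:

```python
def build_subquery(select_list, from_clause, where_clause, group_by_cols):
    # Skip the GROUP BY piece when any mapping contains an analytic
    # (windowed) function, since GROUP BY plus OVER() would raise an error.
    has_analytic = any("OVER (" in expr.upper() or "OVER(" in expr.upper()
                       for expr in select_list)
    sql = "SELECT " + ", ".join(select_list) + " FROM " + from_clause
    if where_clause:
        sql += " WHERE " + where_clause
    if group_by_cols and not has_analytic:
        sql += " GROUP BY " + ", ".join(group_by_cols)
    return sql

# A plain aggregate keeps its GROUP BY; an analytic mapping suppresses it.
plain = build_subquery(["CUST_ID", "SUM(AMOUNT_SOLD)"], "SALES", "", ["CUST_ID"])
analytic = build_subquery(
    ["CUST_ID", "SUM(AMOUNT_SOLD) OVER (PARTITION BY CUST_ID)"],
    "SALES", "", ["CUST_ID"])
```

A substring test on "OVER" is obviously cruder than parsing the expression, but it illustrates why the out-of-the-box modules fail: they emit the GROUP BY unconditionally.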

Note: There is a clever workaround for the above described in Metalink note 807527.1

Step 3: Debug subquery

In this step we build a debugging mechanism into the knowledge module. We use the Jython raise statement to throw an exception and print the value of the v_query variable to the Operator. This will allow us to quickly review the value of the generated SQL in the Operator module and, if necessary, debug it.

[Screenshot 3]

Note that you have to select the Ignore Errors checkbox. Otherwise execution of the interface will terminate at this step (after all we are forcefully raising an error).
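The idea can be sketched as follows (the real step is a Jython task inside the KM; with Ignore Errors set, the session continues past the deliberately raised exception):

```python
v_query = "SELECT CUST_ID, TIME_ID, AMOUNT_SOLD FROM SALES"  # assembled earlier

def debug_step(sql_text):
    # Deliberately fail so the generated SQL shows up in the Operator log.
    raise Exception("Generated subquery:\n" + sql_text)

try:
    debug_step(v_query)          # with Ignore Errors set, ODI carries on
except Exception as exc:
    operator_message = str(exc)  # what you would read in the Operator
```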

Step 4: Store SQL for subquery in table

In this step we use Jython to store the SQL for the subquery, together with the ODI session number, in a table. We call a stored procedure from Jython to accomplish this.

[Screenshot 4]

Note that you have to indent with the tab key exactly as in the screenshot, as otherwise an error will be thrown. Have a look at Metalink note 424207.1 for more information on calling stored procedures from Jython.

Code for the stored procedure load_sql_query is as follows:
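The original listing is not reproduced in this copy. Under the assumption that load_sql_query simply writes the subquery text and session number into the store table, a Python/sqlite3 stand-in might look like this (the real version is an Oracle stored procedure; only its name comes from the text, the body is my assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subquery_store "
             "(session_no INTEGER, interface_nm TEXT, sql_txt TEXT)")

def load_sql_query(db, session_no, interface_nm, sql_txt):
    # Assumed behaviour: persist the subquery SQL keyed by session number.
    db.execute("INSERT INTO subquery_store VALUES (?, ?, ?)",
               (session_no, interface_nm, sql_txt))

load_sql_query(conn, 1001, "SQ1", "SELECT cust_id FROM sales")
row_count = conn.execute("SELECT COUNT(*) FROM subquery_store").fetchone()[0]
```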

Step 5: Create complex query from individual subqueries

We only execute this step once we have run all of our subqueries. This step takes all of the subqueries and creates our final complex query. The condition for executing it is that we are dealing with a non-temporary target table, i.e. a normal (non-temp) interface.

Command on Source

[Screenshot 5]

The getTargetTable method returns 'T' for a non-temp target table and NULL for a temp target table.

Command on Target

[Screenshot 6]

Note: We only execute this step if the variable table_type indicates a non-temp table, that is, equals 'T'.

If we are dealing with a non-temp table and this is our final step in our complex transformation we generate the final complex query from the underlying subqueries by executing a stored procedure.

There may be a more elegant way of doing this but this seems to work for the moment. Originally I had thought of using the model clause for this but then noticed that this does not support CLOBs.
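The substitution can be sketched as a simple text rewrite: every reference to a temp interface's target table is replaced by its stored subquery as an inline view. This naive string replace is for illustration only; the author's stored procedure works on CLOBs in the database:

```python
def combine_subqueries(final_sql, subqueries):
    # Replace each temp-table name with "(its subquery) alias" so the final
    # statement runs entirely in memory, with no materialised temp tables.
    for table_name, subquery_sql in subqueries.items():
        final_sql = final_sql.replace(
            table_name, "(" + subquery_sql + ") " + table_name, 1)
    return final_sql

# Hypothetical example: SQ1 is the temp interface's target table name.
subqueries = {"SQ1": "SELECT CUST_ID, RN FROM SALES"}
final_sql = combine_subqueries("SELECT CUST_ID FROM SQ1 WHERE RN = 1", subqueries)
```

Keeping the temp-table name as the inline view's alias means the outer query's column references need no rewriting at all.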

Step 6: Execute complex query and insert transformed data into target table

In this step we grab the transformed and substituted complex query from our table and execute it.

Command on Source

[Screenshot 7]

In the Command on Source we retrieve the complex SQL query from our database table.

As we are dealing with a CLOB here, we first need to make some changes to the odiparams.bat file. Alternatively, you can use version 5 or 6 of the JDBC driver (Metalink note 423909.1). I have outlined in a previous post how to achieve this.

If we don’t want to install a new JDBC driver we need to add the following line to the odiparams.bat

set ODI_ADDITIONAL_JAVA_OPTIONS= "-Doracledatabasemetadata.get_lob_precision=false"

You will need to restart the Designer module for this to take effect.

Command on Target

[Screenshot 7a]

Note that we need to select the Ignore Errors checkbox; otherwise the KM will fail at this step when it runs for the subquery (temp) interfaces. A better approach may be to use Jython and pass the #query variable into a stored procedure that executes it as dynamic SQL.

Currently I use an INSERT statement for loading the target table. In a future version of this KM I also intend to add a MERGE statement as an option.
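Fetching the substituted query back and loading the target can be sketched like so (sqlite3 again stands in for Oracle, and all table names are illustrative; the KM issues the INSERT dynamically):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subquery_store (session_no INTEGER, sql_txt TEXT)")
conn.execute("CREATE TABLE src (cust_id INTEGER)")
conn.execute("CREATE TABLE target (cust_id INTEGER)")
conn.execute("INSERT INTO src VALUES (7)")
# The complex query as it would sit in the store after substitution:
conn.execute("INSERT INTO subquery_store VALUES (1001, 'SELECT cust_id FROM src')")

complex_sql = conn.execute(
    "SELECT sql_txt FROM subquery_store WHERE session_no = 1001").fetchone()[0]
conn.execute("INSERT INTO target " + complex_sql)  # INSERT ... SELECT load
loaded = conn.execute("SELECT cust_id FROM target").fetchone()[0]
```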

Step 7: Write complex query to Operator module

In a final step we write the complex query to the Operator. We can then easily debug any issues with it.

Command on Source

[Screenshot 8]

Command on Target

[Screenshot 9]

The subquery knowledge module in action – Two examples

Let’s now have a look at our new Knowledge Module in action. For this purpose we will use tables from the SH sample schema.

Example 1: Subquery with analytic function

In this example we want to retrieve the date on which the amount sold for a customer was the greatest. One way of achieving this is to use the ROW_NUMBER() analytic function in a subquery to rank the amount sold and then filter the Top 1 record from this subquery. So the query would look similar to the following
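The original SQL listing is not reproduced in this copy. A hedged reconstruction, runnable here against a few sample rows via sqlite3 (which also supports window functions), with column names following the SH sample schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (cust_id INTEGER, time_id TEXT, amount_sold REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    (1, "2009-01-01", 100.0),
    (1, "2009-02-01", 250.0),   # customer 1's biggest sale
    (2, "2009-01-15", 80.0),
])
# Rank each customer's sales and keep only the top-ranked date.
query = """
    SELECT cust_id, time_id, amount_sold
    FROM (SELECT cust_id, time_id, amount_sold,
                 ROW_NUMBER() OVER (PARTITION BY cust_id
                                    ORDER BY amount_sold DESC, time_id DESC) AS rn
          FROM sales)
    WHERE rn = 1
"""
top_dates = sorted(conn.execute(query).fetchall())
```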

We will now generate this query with our new knowledge module.
First of all we will generate a temporary interface for the subquery with the analytic function in it.

[Screenshot 10]

Select the Staging Area Different From Target checkbox and from the dropdown select the schema you want to execute this in.

In the Target Datastore click on Untitled. Type SQ1 as the Name for the target datastore and select the Data schema radio button.

[Screenshot 11]

Drag the sales table from the SH model to the Sources area (you first need to reverse engineer this from the SH schema).

[Screenshot 12]

Drag and drop the columns CUST_ID, TIME_ID and AMOUNT_SOLD from the source datastore to the target datastore.

[Screenshot 13]

Right-click inside the Target Datastore area and select Add a column.

[Screenshot 14]

Name the new column rn and type in the value for the analytic function:

row_number() OVER (PARTITION BY SALES.CUST_ID ORDER BY SALES.AMOUNT_SOLD DESC, SALES.TIME_ID DESC)

[Screenshot 15]

Go to the Flow tab and select the custom subquery knowledge module.

[Screenshot 16]

That’s it, we have created the yellow interface for our subquery.

[Screenshot 17]

In the next step we need to create the interface that takes the sales_amount yellow interface as its source and loads the target table.

Create an interface, name it sales_amount_filter, drag and drop the sales_amount interface to the Sources area, and create a filter as per screenshot below.

[Screenshot 18]

We need to create a table for our target datastore. Use the script below to create this table, reverse engineer it and drag and drop it to the target datastore area.
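The create script is not reproduced in this copy. Given the mapped columns, it presumably looks something like the sketch below (the table name and types are my assumptions; Oracle would use NUMBER and DATE):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed target table: the top-selling date per customer.
conn.execute("""
    CREATE TABLE top_sales_amount (
        cust_id     INTEGER,
        time_id     TEXT,
        amount_sold REAL
    )
""")
columns = [row[1] for row in conn.execute("PRAGMA table_info(top_sales_amount)")]
```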

Next move on to the Flow tab and select our custom subquery knowledge module.

We now need to bring the two interfaces together in a package and then we are ready to load our target.

[Screenshot 19]

Execute the package and then switch to the Operator module. You should get something similar to the screenshot below.

[Screenshot 20]

Let’s have a look at the execution of the individual steps. We’re particularly interested in the subqueries and the final complex query. So let’s have a look at the debug steps where these will show up.

Below we see the query for the temp interface that contains the analytic function.

[Screenshot 21]

Next we’ll have a look at the query that is generated for the second (non-temp) interface.

[Screenshot 22]

As you can see from the figure above, the query selects from the SQ1 target table of the temporary interface.

We substitute SQ1 in step 12 of the package with the SQL query of the analytic function and then use this complex query to insert into the target table.

[Screenshot 23]

Example 2: Joining two subqueries

In the next example we take this a step further and join another subquery to the package from example 1.

We first create another subquery using a temp interface. As you can see from the figure below we aggregate the amount_sold by cust_id.

[Screenshot 24]

We then take this temp interface and the temp interface from example 1 and join these together on the cust_id in yet another interface.

[Screenshot 25]

The script for the target table sum_sales_amount is as follows
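The script is likewise missing from this copy. Given that the second subquery sums amount_sold per cust_id and is joined to the first, a plausible shape is the following (all names and types are my assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed target for example 2: the per-customer total joined to the
# top-selling date from example 1.
conn.execute("""
    CREATE TABLE sum_sales_amount (
        cust_id          INTEGER,
        time_id          TEXT,  -- date of the customer's biggest sale
        amount_sold      REAL,  -- amount sold on that date
        sum_amount_sold  REAL   -- customer's total amount sold
    )
""")
columns = [row[1] for row in conn.execute("PRAGMA table_info(sum_sales_amount)")]
```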

Next we stick the three interfaces into a package.

[Screenshot 26]

... and execute it.

From the Operator module we can see the generated SQL.

[Screenshot 27]


Conclusion

In this article I have shown how you can execute complex queries and subqueries in ODI without losing data lineage and without suffering performance degradation. I have not yet used the knowledge module in a production environment, so I am sure there is still room for improvement. From now on, however, I will use and further develop this approach to meet complex transformation requirements in ODI.

Any input is highly welcome. Also let me know of any bugs you come across, or if you find any of the above unclear.

In order to master scripting in ODI I recommend the following books.

Java BeanShell

Scripting in Java: Languages, Frameworks, and Patterns

Jython

The Definitive Guide to Jython: Python for the Java Platform.

Jython Essentials (O'Reilly Scripting)