SQL quiz revisited: ROW_NUMBER, LAST_VALUE, FIRST_VALUE and the SQL Knight Values problem.

Uli Bethke analytic functions, Oracle, SQL for Analysis

Thank you all for your entries to the recent quiz. The guys from Packt should shortly be sending out the e-books to the winners.

Today I will walk you through some of the possible solutions and briefly touch upon their performance.

Last week's contest was based on a classic SQL problem. Anthony Molinaro, the author of the SQL Cookbook, has named this the Knight Values problem because it is analogous to a knight's path in the game of chess: "You determine the result the way a knight determines a new location: by jumping to a row then turning and jumping to a different column". In our example we first have to find the highest amount sold for a customer and from there get the corresponding time_id. Another complicating factor is that we want the query to be deterministic, i.e. we want to get back exactly one record per customer. So what happens if there is a tie on amount_sold, e.g. the same amount_sold exists on both 5/11/2009 and 7/11/2009? If we come across such a tie, we would like to select the record with the highest time_id.

Note: You can only make this query deterministic if there is a unique key on the table. Otherwise you may end up with a tie. There is a way around this, though: randomly select one of the tied records, or perform a SELECT DISTINCT. I will give two examples further down.

Of course you can do this in classic SQL using a subquery:
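A sketch of this approach, assuming a customer table with columns cust_id, time_id and amount_sold (the table and column names used throughout the quiz):

```sql
-- Classic subquery approach: the correlated subquery finds each
-- customer's highest amount_sold; the outer query jumps to the
-- matching row(s) and breaks any tie by taking the highest time_id.
SELECT cust_id,
       MAX(time_id) AS time_id,  -- tie-break: latest date wins
       amount_sold
FROM   customer c
WHERE  amount_sold = (SELECT MAX(amount_sold)
                      FROM   customer
                      WHERE  cust_id = c.cust_id)
GROUP  BY cust_id, amount_sold;
```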

A query such as the following would not be deterministic, as it does not resolve the potential tie outlined above.
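For example, a query along these lines (again assuming the customer table from the quiz) returns every tied row instead of exactly one record per customer:

```sql
-- Non-deterministic: if two rows share the top amount_sold for a
-- customer, both rows come back.
SELECT c.cust_id, c.time_id, c.amount_sold
FROM   customer c
WHERE  c.amount_sold = (SELECT MAX(amount_sold)
                        FROM   customer
                        WHERE  cust_id = c.cust_id);
```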

You can clearly see this if you insert the following record into our customer table:
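Something along these lines; the literal values are illustrative and simply duplicate an existing customer's top amount_sold on a later date:

```sql
-- Creates a tie: same cust_id and amount_sold as an existing row,
-- but a different time_id.
INSERT INTO customer (cust_id, time_id, amount_sold)
VALUES (1, DATE '2009-11-07', 1782.72);
COMMIT;
```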

An alternative to the above is the use of analytic functions. You can use either ROW_NUMBER(), LAST_VALUE/FIRST_VALUE, or KEEP FIRST/LAST with DENSE_RANK.

Let's first have a look at LAST_VALUE.

So what are we doing here? We first break up the customer table by cust_id and order the result set by amount_sold. We then grab the last value in this ordered result set, which gives us the time_id for the maximum amount_sold for each customer. If you want to avoid a tie between identical amount_sold values for a customer on two different days, you need to include a unique value in the ORDER BY clause; in our case we include the time_id. If you are using LAST_VALUE you need to include the windowing clause ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING. As per the documentation: "If you omit the windowing_clause of the analytic_clause, it defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. This default sometimes returns an unexpected value, because the last value in the window is at the bottom of the window, which is not fixed. It keeps changing as the current row changes. For expected results, specify the windowing_clause as RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. Alternatively, you can specify the windowing_clause as RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING."
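Put together, the LAST_VALUE solution might look like the sketch below (assumed customer table as before; the DISTINCT collapses the duplicate rows the analytic function produces for each customer):

```sql
SELECT DISTINCT
       cust_id,
       LAST_VALUE(amount_sold) OVER (
         PARTITION BY cust_id
         ORDER BY amount_sold, time_id  -- time_id resolves ties
         ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS amount_sold,
       LAST_VALUE(time_id) OVER (
         PARTITION BY cust_id
         ORDER BY amount_sold, time_id
         ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS time_id
FROM   customer;
```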

We could of course use the FIRST_VALUE function instead, where the default windowing behaviour works to our advantage and we can omit the windowing clause altogether, making the query easier to read. We have to change the query slightly, though:
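With FIRST_VALUE the sort is simply flipped to descending, so the row we want sits at the top of the window and the default frame already covers it. A sketch:

```sql
SELECT DISTINCT
       cust_id,
       FIRST_VALUE(amount_sold) OVER (
         PARTITION BY cust_id
         ORDER BY amount_sold DESC, time_id DESC) AS amount_sold,
       FIRST_VALUE(time_id) OVER (
         PARTITION BY cust_id
         ORDER BY amount_sold DESC, time_id DESC) AS time_id
FROM   customer;
```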

One reader sent in a query to avoid the subquery altogether. Well, in Teradata you could have done this using analytic functions in the predicate clause.
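In Teradata the QUALIFY clause lets you filter on an analytic function directly, so no inline view is needed. A sketch (Teradata syntax; this does not run in Oracle):

```sql
SELECT cust_id, time_id, amount_sold
FROM   customer
QUALIFY ROW_NUMBER() OVER (PARTITION BY cust_id
                           ORDER BY amount_sold DESC, time_id DESC) = 1;
```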

Another way of solving this with analytic functions is to use KEEP FIRST/LAST with DENSE_RANK.
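A sketch of the KEEP (DENSE_RANK LAST) variant: the aggregate is evaluated only over the rows ranked last by amount_sold, and MAX(time_id) then resolves any tie deterministically:

```sql
SELECT cust_id,
       MAX(amount_sold) AS amount_sold,
       MAX(time_id) KEEP (DENSE_RANK LAST ORDER BY amount_sold) AS time_id
FROM   customer
GROUP  BY cust_id;
```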

If you don't have a unique key in your dataset, you can still make the result deterministic by randomly selecting one of the tied records. This is achieved via ROW_NUMBER(). The solution below is my favourite, as it is the most readable of the lot and always returns exactly one record per customer.
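A sketch of the ROW_NUMBER() solution; dropping time_id from the ORDER BY would pick one of the tied rows arbitrarily, which is the trick for tables without a unique key:

```sql
SELECT cust_id, time_id, amount_sold
FROM  (SELECT cust_id, time_id, amount_sold,
              ROW_NUMBER() OVER (PARTITION BY cust_id
                                 ORDER BY amount_sold DESC,
                                          time_id DESC) AS rn
       FROM   customer)
WHERE  rn = 1;  -- keep only the top-ranked row per customer
```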

A couple of words on performance.

The pure SQL approach will perform a full scan of the customer table twice, generating twice the amount of logical I/O. The analytic functions, on the other hand, will use a lot more memory, or worse, temp disk space, for sorting based on the PARTITION BY and ORDER BY clauses. By creating an index on the columns used in the partition and order by clauses (such as the one below) you can get a WINDOW NOSORT step in the explain plan. As a result the query will have to do no sorting at all and should, generally speaking, perform better than the pure SQL. However, like everything in life, it all comes down to your specific situation.
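An index of this shape (the index name is illustrative) covers both the PARTITION BY and the ORDER BY columns:

```sql
-- Leading column matches PARTITION BY, trailing columns match ORDER BY,
-- so the window function can read rows in pre-sorted order.
CREATE INDEX customer_cust_amt_time_idx
    ON customer (cust_id, amount_sold, time_id);
```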

If you want to read up on analytic functions I recommend the following books:
SQL Cookbook. This is my number one.
The Art of SQL.
SQL Hacks.
SQL Tuning.
Advanced SQL Functions in Oracle 10g. This one has more examples on analytic functions in Oracle.