Pythian Blog: Technical Track

Slow query to V$ views after DBRU Patching

One of our clients patched their EBS database from 19.13 to DBRU 19.17 and ran into an issue with one of the seeded programs. The program took longer to run, and because of its incompatibility rules with other programs, it caused delays across concurrent processing.

The Query

The problematic query responsible for sluggish performance was joining two V$ views: V$SQLAREA and V$SESSION. Here’s an example SQL monitor report:

==========================================
| Elapsed |   Cpu   | PL/SQL  |  Other   |
| Time(s) | Time(s) | Time(s) | Waits(s) |
==========================================
|      31 |      31 |    0.00 |     0.33 |
==========================================
 
SQL Plan Monitoring Details (Plan Hash Value=3702960425)
=====================================================================================================================================================
| Id |          Operation           |          Name           |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail |
|    |                              |                         | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |
=====================================================================================================================================================
|  0 | INSERT STATEMENT             |                         |         |      |           |        |     1 |          |          |                 |
|  1 |   LOAD TABLE CONVENTIONAL    | V_SQLAREA_TEMP          |         |      |           |        |     1 |          |          |                 |
|  2 |    NESTED LOOPS              |                         |       1 |      |        26 |     +6 |     1 |        0 |          |                 |
|  3 |     NESTED LOOPS             |                         |       1 |      |        26 |     +6 |     1 |      14M |          |                 |
|  4 |      MERGE JOIN CARTESIAN    |                         |       1 |      |        26 |     +6 |     1 |      14M |          |                 |
|  5 |       FIXED TABLE FULL       | X$KGLCURSOR_CHILD_SQLID |       1 |      |        26 |     +6 |     1 |    39414 |          |                 |
|  6 |       BUFFER SORT            |                         |     136 |      |        26 |     +6 | 39414 |      14M |          |                 |
|  7 |        FIXED TABLE FULL      | X$KSLWT                 |     136 |      |         1 |     +6 |     1 |      346 |          |                 |
|  8 |      FIXED TABLE FIXED INDEX | X$KSLED (ind:2)         |       1 |      |        30 |     +2 |   14M |      14M |          |                 |
|  9 |     FIXED TABLE FIXED INDEX  | X$KSUSE (ind:1)         |       1 |      |        30 |     +1 |   14M |        0 |          |                 |
=====================================================================================================================================================

The left row source of the Merge Join Cartesian at step 4 – X$KGLCURSOR_CHILD_SQLID – produced 39,414 rows, although the estimate was just 1. As often happens with such misestimates, Oracle chose a Merge Join Cartesian, which exploded the data set to 14M rows – all of which were then thrown away in the final nested loops join. As a result, it took 31 seconds of CPU time to complete the query inefficiently.

The Issue

Missing table statistics lead to misestimates, so the first thing you'd want to do is gather statistics on the underlying fixed tables. Most fixed tables allow this, but not X$KGLCURSOR_CHILD_SQLID – Oracle does not allow gathering statistics on this particular X$ view:
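The failing call might look like this (a sketch; run as SYS from SQL*Plus, with the view names taken from the plan above):

```sql
-- Gathering stats on most fixed tables works...
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SYS', tabname => 'X$KSLWT');

-- ...but this one raises ORA-20000:
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SYS', tabname => 'X$KGLCURSOR_CHILD_SQLID');
```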

ERROR at line 1:
ORA-20000: Unable to analyze TABLE "SYS"."X$KGLCURSOR_CHILD_SQLID",
insufficient privileges or does not exist
ORA-06512: at "SYS.DBMS_STATS", line 24270
ORA-06512: at "SYS.DBMS_STATS", line 24332
ORA-06512: at line 1

Oracle says this is expected behavior: Gather fixed table statistics fails with ORA-20000 ORA-06512 (Doc ID 2454793.1).

The Plan

Checking AWR for past executions of this query confirmed that the issue started after applying DBRU 19.17. But why? Let's check the execution plan from memory, with details:
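One way to pull the plan with its query-block aliases from the cursor cache is DBMS_XPLAN (a sketch; the SQL ID is a placeholder to substitute):

```sql
-- '+ALIAS' adds the Query Block Name / Object Alias section;
-- '+NOTE' shows whether a SQL Profile was used
SELECT *
FROM   table(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id => '&sql_id',
         format => 'TYPICAL +ALIAS +NOTE'));
```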

-------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                    | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT           |                         |       |       |     1 (100)|
|   1 |  LOAD TABLE CONVENTIONAL   | V_SQLAREA_TEMP          |       |       |            |
|   2 |   NESTED LOOPS             |                         |     1 | 19885 |     0   (0)|
|   3 |    NESTED LOOPS            |                         |     1 | 19863 |     0   (0)|
|   4 |     MERGE JOIN CARTESIAN   |                         |     1 | 19859 |     0   (0)|
|*  5 |      FIXED TABLE FULL      | X$KGLCURSOR_CHILD_SQLID |     1 | 19851 |     0   (0)|
|   6 |      BUFFER SORT           |                         |   136 |  1088 |     0   (0)|
|   7 |       FIXED TABLE FULL     | X$KSLWT                 |   136 |  1088 |     0   (0)|
|*  8 |     FIXED TABLE FIXED INDEX| X$KSLED (ind:2)         |     1 |     4 |     0   (0)|
|*  9 |    FIXED TABLE FIXED INDEX | X$KSUSE (ind:1)         |     1 |    22 |     0   (0)|
-------------------------------------------------------------------------------------------
 
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
 
   1 - SEL$E94F3EA2
   5 - SEL$E94F3EA2 / X$KGLCURSOR_CHILD_SQLID@SEL$5
   7 - SEL$E94F3EA2 / W@SEL$9
   8 - SEL$E94F3EA2 / E@SEL$9
   9 - SEL$E94F3EA2 / S@SEL$9
... 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   5 - filter((INTERNAL_FUNCTION("CON_ID") AND "KGLOBT02"<>0 AND
              "INST_ID"=USERENV('INSTANCE')))
   8 - filter("W"."KSLWTEVT"="E"."INDX")
   9 - filter(("S"."INDX"="W"."KSLWTSID" AND (("S"."KSUSEPSI"="KGLOBT03" AND
              "S"."KSUSEPSI" IS NOT NULL) OR ("S"."KSUSESQI"="KGLOBT03" AND "S"."KSUSESQI" IS
              NOT NULL)) AND BITAND("S"."KSUSEFLG",1)<>0 AND BITAND("S"."KSSPAFLG",1)<>0 AND
              "S"."KSUUDSES"="FND_GLOBAL"."SESSION_ID"() AND INTERNAL_FUNCTION("S"."CON_ID") AND
              "S"."INST_ID"=USERENV('INSTANCE')))
 
Note
-----
   - SQL profile SYS_SQLPROF_XXXXXXXX used for this statement

Note that the query had a SQL Profile applied. So, what’s inside it?

SQL> @sql_profile_hints SYS_SQLPROF_XXXXXXXX

HINT
-------------------------------------------------------------------------------------------------------------------------------------------------
OPT_ESTIMATE(@"SEL$66C44981", JOIN, ("W"@"SEL$7", "S"@"SEL$7", "X$KGLCURSOR_CHILD_SQLID"@"SEL$4"), SCALE_ROWS=0.07086870307)
OPT_ESTIMATE(@"SEL$66C44981", JOIN, ("W"@"SEL$7", "E"@"SEL$7", "S"@"SEL$7", "X$KGLCURSOR_CHILD_SQLID"@"SEL$4"), SCALE_ROWS=0.07086870307)
OPT_ESTIMATE(@"SEL$66C44981", TABLE, "S"@"SEL$7", SCALE_ROWS=0.04348783125)
OPT_ESTIMATE(@"SEL$66C44981", TABLE, "X$KGLCURSOR_CHILD_SQLID"@"SEL$4", SCALE_ROWS=598440)

The SQL Profile is a typical SQL Tuning Advisor profile with OPT_ESTIMATE hints trying to correct cardinality misestimates.
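The sql_profile_hints script above is a local helper; without it, the stored hints can be read straight from the dictionary with a query along these lines (a sketch that assumes SYS access; these internal tables are undocumented and may change between releases):

```sql
-- List the outline hints stored inside a SQL Profile
SELECT extractvalue(value(h), '.') AS hint
FROM   sys.sqlobj$ ob,
       sys.sqlobj$data od,
       table(xmlsequence(
         extract(xmltype(od.comp_data), '/outline_data/hint'))) h
WHERE  ob.name      = 'SYS_SQLPROF_XXXXXXXX'
AND    od.signature = ob.signature
AND    od.category  = ob.category
AND    od.obj_type  = ob.obj_type
AND    od.plan_id   = ob.plan_id;
```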

But compare the query block names in the SQL Profile and in the execution plan – they differ. The execution plan references SEL$9, while the SQL Profile uses SEL$7, and that mismatch makes the OPT_ESTIMATE hints invalid: they no longer resolve to any query block, so the optimizer silently ignores them. Apparently, the V$ view definitions changed significantly with the patch, which changed the generated query block names as well. That's why the existing SQL Profile stopped working; the query ran longer, delayed the program's start, and eventually held up concurrent processing.

The Fix

Ideally, you would gather statistics on the underlying X$ views, but Oracle's internal restrictions make that impossible here. Instead, you can use a SQL Patch, a SQL Plan Baseline, or a new SQL Profile to correct the execution plan and restore performance.
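For instance, a SQL Patch can re-inject a corrective hint under the new query block names (a sketch: the SQL ID and patch name are hypothetical, the SCALE_ROWS value is carried over from the old profile, and DBMS_SQLDIAG.CREATE_SQL_PATCH is available as a public function from 12.2 onward):

```sql
DECLARE
  l_patch VARCHAR2(128);
BEGIN
  l_patch := DBMS_SQLDIAG.CREATE_SQL_PATCH(
    sql_id    => '&sql_id',   -- the problematic statement's SQL ID
    -- same scaling the old profile applied, retargeted at the
    -- query block names from the current plan (SEL$E94F3EA2 / SEL$5)
    hint_text => 'OPT_ESTIMATE(@"SEL$E94F3EA2", TABLE, "X$KGLCURSOR_CHILD_SQLID"@"SEL$5", SCALE_ROWS=598440)',
    name      => 'v_sqlarea_temp_patch');   -- hypothetical patch name
END;
/
```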
