
Using the Oracle Wait Interface to Troubleshoot I/O Issues

Or, Why I Like Broccoli and Wait Events

There’s been a fair bit of discussion with respect to whose vegetables are the freshest in Ottawa. From one DBA to another, I’m here to tell you that St. Laurent Fruit and Vegetable (beside the Mideast Meat Market) is probably the place to go for the best prices on… well… fruits and vegetables. They also carry lots of bags, and they’re strategically placed so you’ll never need old men to help you get more :-)

What’s more interesting about grocery shopping is that it lets you daydream and go over all the good work you’ve done over the last week (assuming you shop weekly and have done a certain amount of good work). Last week was particularly tough. One of our more stable clients had some serious “mystery” production problems. To paraphrase, their app that had been running “OK” started running “sucky”. I knew that nothing had changed from our end, and so, being the intrepid super-sleuths that we are, we determined that the problems were related to I/O. How? Here’s how (forgive me, but I haven’t gone into massive amounts of detail on this one; it’s more of a general framework).

Step 1

Run a handy query originally written by Steve Adams. I use this as my very first step in troubleshooting anything that resembles an Oracle performance issue, and work from there.

-----------------------------------------------------------
--
-- Script:      waiters.sql
-- Purpose:     to count the waiters for each event type
-- For:         8.0 and higher
--
-- Copyright:   (c) 2000 Ixora Pty Ltd
-- Author:      Steve Adams
--
-----------------------------------------------------------
select /*+ ordered */
  substr(n.name, 1, 29)  event,
  t0,
  t1,
  t2,
  t3,
  t4,
  t5,
  t6,
  t7,
  t8,
  t9
from
  sys.v_$event_name  n,
  (select event e0, count(*)  t0 from sys.v_$session_wait group by event),
  (select event e1, count(*)  t1 from sys.v_$session_wait group by event),
  (select event e2, count(*)  t2 from sys.v_$session_wait group by event),
  (select event e3, count(*)  t3 from sys.v_$session_wait group by event),
  (select event e4, count(*)  t4 from sys.v_$session_wait group by event),
  (select event e5, count(*)  t5 from sys.v_$session_wait group by event),
  (select event e6, count(*)  t6 from sys.v_$session_wait group by event),
  (select event e7, count(*)  t7 from sys.v_$session_wait group by event),
  (select event e8, count(*)  t8 from sys.v_$session_wait group by event),
  (select event e9, count(*)  t9 from sys.v_$session_wait group by event)
where
  n.name != 'Null event' and
  n.name != 'null event' and
  n.name != 'rdbms ipc message' and
  n.name != 'pipe get' and
  n.name != 'virtual circuit status' and
  n.name not like '%timer%' and
  n.name not like '%slave wait' and
  n.name not like 'SQL*Net message from %' and
  n.name not like 'io done' and
  n.name != 'queue messages' and
  e0 (+) = n.name and
  e1 (+) = n.name and
  e2 (+) = n.name and
  e3 (+) = n.name and
  e4 (+) = n.name and
  e5 (+) = n.name and
  e6 (+) = n.name and
  e7 (+) = n.name and
  e8 (+) = n.name and
  e9 (+) = n.name and
  nvl(t0, 0) + nvl(t1, 0) + nvl(t2, 0) + nvl(t3, 0) + nvl(t4, 0) +
  nvl(t5, 0) + nvl(t6, 0) + nvl(t7, 0) + nvl(t8, 0) + nvl(t9, 0) > 0
order by
  nvl(t0, 0) + nvl(t1, 0) + nvl(t2, 0) + nvl(t3, 0) + nvl(t4, 0) +
  nvl(t5, 0) + nvl(t6, 0) + nvl(t7, 0) + nvl(t8, 0) + nvl(t9, 0)
/

It’s really just a nifty “group by” on v$session_wait, so if you really want, you can go straight there (there’s a bare-bones version right after the output). This query showed me the following:

EVENT                           T0  T1  T2  T3  T4  T5  T6  T7  T8  T9
SQL*Net more data from client    1   1   1   1   1   1   1   1   1   1
latch free                       2   2   2   2   2   2   2   2   2   2
db file sequential read          2   2   2   2   2   2   2   2   2   2
db file scattered read           3   3   3   3   3   3   3   3   3   3
enqueue                          8   8   8   8   8   8   8   8   8   8
buffer busy waits               15  15  15  15  15  15  15  15  15  15
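
If you’d rather skip the ten-sample version, a single bare-bones sample straight from v$session_wait looks something like this (a sketch; the exclusion list is just an illustrative subset of the idle events the full script filters out):

-- One sample of current waiters, with the obvious idle events filtered out
select event, count(*) waiters
from v$session_wait
where event not in ('rdbms ipc message', 'pipe get', 'virtual circuit status')
  and event not like '%timer%'
  and event not like 'SQL*Net message from %'
group by event
order by 2;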

OK, so now I knew we definitely had a problem with enqueues and buffer busy waits. When I ran this a few times, I also encountered significant latch free waits. So what’s the deal? Poking around the web, you’ll come up with a few standard answers for each of these, and there are some good papers out there on resolving latch free waits.

Looking in v$session_wait told me the latch I was waiting on was the shared pool latch, with the misses coming from kghalo (heap allocation). This was expected: our client was known for not binding any SQL, so we were hard parsing everything. This wasn’t unusual.

For the ‘latch free’ event, the P2 column in v$session_wait is the latch# being waited on, which you can look up in v$latchname. In addition, to see where the latch misses are coming from, you can run this:

select * from v$latch_misses order by 5 desc;
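
If you want to resolve the latch yourself rather than eyeball it, a quick sketch of that P2-to-name lookup (nothing more than a join to v$latchname) looks like this:

-- Map each 'latch free' waiter's P2 to the latch name
select w.sid, w.p2 latch#, l.name latch_name
from v$session_wait w, v$latchname l
where w.event = 'latch free'
  and l.latch# = w.p2;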

Step 2

What was unusual, however, was the sheer number of enqueue and buffer busy waits. Linking the sid from v$session_wait to v$session and then to v$sqlarea, a trend emerged: all the buffer busy waits were against the same object (in memory), and the enqueues were on that same object. How did I know this? Well, the waiting queries were one indication, but I also looked at v$session_wait to get the information I needed.
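
For what it’s worth, the linkage I mean is nothing fancier than this (a sketch; the event list is just the two we cared about here):

-- Tie each waiting session back to the SQL it is currently running
select w.sid, w.event, s.username, a.sql_text
from v$session_wait w, v$session s, v$sqlarea a
where w.sid = s.sid
  and s.sql_address = a.address
  and s.sql_hash_value = a.hash_value
  and w.event in ('buffer busy waits', 'enqueue');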

When you look at buffer busy waits, the P1 column gives you the file# and the P2 column gives you the block#; P3 gives you the reason code for the wait. Run the following to get the object you’re waiting on.

select * from dba_extents where file_id = &your_file and &your_block between block_id and block_id + blocks - 1;
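
The &your_file and &your_block values come straight from the wait interface; something like this hands you all three parameters for the current waiters:

-- File#, block# and reason code for current buffer busy waiters
select sid, p1 file#, p2 block#, p3 reason_code
from v$session_wait
where event = 'buffer busy waits';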

Typically, you’re hitting hot blocks, and the standard techniques for spreading out hot blocks apply here.

Step 3

Troubleshooting enqueue waits uses a similar strategy. For TM (table) enqueues, the P2 column gives you the object_id of the object involved, and from there you can figure out what’s going on.

select * from dba_objects where object_id = &your_p2_object;
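
Pre-10g you can also pull the lock type and requested mode out of P1 with the usual bitand arithmetic. Treat this as a sketch and double-check the constants against your release’s documentation:

-- Decode the two-character enqueue name and the requested mode from P1
select sid,
       chr(bitand(p1, -16777216) / 16777215) ||
       chr(bitand(p1, 16711680) / 65535)  lock_type,
       bitand(p1, 65535)                  lock_mode
from v$session_wait
where event = 'enqueue';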

In 10g, the interface is a little more useful, giving you the actual type of enqueue and the reason. On 8.1.7.3, we’re not so fortunate. Either way, we managed to isolate a number of badly behaving queries this way, and it was obvious that we were hitting an I/O bottleneck on one table in the database. My recommendation to the client was to check the disks. After they assured us that everything was okay, I went ahead and looked for query plan flips, etc.

At the end of three days of query tuning, one of the client’s sysadmins finally discovered that a battery had failed on the disk controller, leaving the write cache disabled. I’m sure there must have been a way to see this, but I just don’t know how.

To summarize, folks: always check your batteries. But if you don’t, and your DBA starts lifting the hood and poking around, they’re sure to find tons of other stuff that can be fixed, stuff that only came to light because the hardware didn’t behave nicely. Kinda like a nice stress test, eh?

What my team and I discovered was that the object in question needed some serious love, since the client does plan on growing their operation. Because of all the queries we fixed, their app is that much faster. So in the long run, this was good for us. Just like broccoli.

I’m off to dinner.
