Oracle SQL Trace: Is it Safe for Production Use?
by Jared Still on Oct 17, 2023 12:11:42 PM
Perhaps you have been approached by a client or manager and tasked with troubleshooting one or more slow-running SQL statements.
The request may have been even broader: an application is slow, it has been determined that the problem must be the database, and now it is on the DBA's desk. And you are the DBA.
When trying to solve such problems, it is not too unusual to start with an AWR report, examine the execution plans, and drill down in ASH to determine where the problem lies.
While some good information may have been found, it may not be quite enough information to determine the cause of the application's slowness.
While ASH, AWR, and execution plans may be good at showing you where there may be some problems, they are not always enough to show you just where a problem lies.
The most accurate way to see where time is spent during a database session is to invoke SQL Trace.
There are multiple methods for enabling SQL tracing:
- alter session set events '10046 trace name context forever, level [8|12]';
- sys.dbms_system.set_ev(sid(n), serial(n), 10046, 8, '')
- alter session set sql_trace=true;
- dbms_monitor
The final method using dbms_monitor is considered the best choice.
The alter session method requires that the user who is enabling tracing has the alter session privilege. The sys.dbms_system.set_ev method requires DBA privileges. The alter session set sql_trace=true method is not useful, as the trace file will not contain any wait time values.
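As a minimal sketch of the dbms_monitor approach (the SID and serial# shown are placeholders you would look up in v$session):

```sql
-- Find the target session's identifiers (illustrative query)
select sid, serial# from v$session where username = 'EVS';

-- Roughly equivalent to event 10046 level 8: wait events, no bind values
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => true, binds => false);

-- Roughly equivalent to level 12: wait events and bind values
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => true, binds => true);

-- Disable tracing when finished
exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
```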
When requesting to run SQL Trace, the client or responsible user may object to using SQL Trace due to the additional overhead that tracing may incur.
Of course, there must be some overhead when tracing is enabled.
The question is this: Is the overhead more than the users can bear?
The answer to the question may depend on several factors regarding the issue in question:
- severity of the issue
- how badly it is impacting users
- the urgency of resolving the issue
The answer to these questions helps to determine if SQL Trace will impose an unbearable burden on the user of affected applications.
So, just how much perceived overhead is caused by SQL Trace?
The answer is, as it is with many things: It depends.
If the issue is causing much interruption of user's work, they may not mind if tracing is enabled, even if they think it may have some negative impact.
In other cases, the impact may be less severe, and users (and managers) are leery of anything that may cause further delays.
We can consider the results of tests run with varying parameters and find out the impact of enabling Oracle tracing.
Test Configuration
The way to determine if the overhead is acceptable is to do some testing.
sqlrun is a tool I developed for running SQL statements against a database using one or more sessions. It is highly configurable; the following are some of the parameters and configuration possibilities:
- number of sessions
- think time between executions
- connection timing
- connect all simultaneously
- connect as quickly as possible, in succession
- interval between connections
- Multiple SQL statements can be run
- randomize the frequency of statements run
- Placeholder values (bind variables) can be supplied from a text file.
- DML can be used
- PL/SQL blocks can be used
Testing will involve two servers: the database server and the test software on another server.
Network latency between the client and server is < 1 ms.
Two sets of tests will be run:
- no think time: the test SQL statements are run repeatedly in succession, as quickly as the client can submit them
- 0.5 seconds think time: each client will pause for 0.5 seconds between executions
Each of those preceding tests will also run with multiple trace levels:
- no tracing
- trace level 8 (wait events included)
- trace level 12 (wait events and bind values included)
There will be 50 clients per test.
All of the code and trace files used for this article are found here: Oracle Trace Overhead
Further details are found in the README.md in the github repo.
Test Environment
The test environment is as follows:
- Database Server:
- ora192rac01 (one node of a 2-node RAC)
- allocated 4 vCPUs
- 16 G RAM
- 1G network
- Client
- sqlrun, an Oracle Linux 7 VM
- 3 vCPUs
- 8G RAM
The Oracle database is 19.12, and the Oracle clients are 19.16. The test software uses Perl 5, with the DBI and DBD::Oracle modules.
Compiling Test Results
The mrskew utility is a tool created by Method R (Cary Millsap and Jeff Holt).
It is used to generate metrics from Oracle SQL Trace files.
This testing makes use of the mrskew utility, and the cull-snmfc.rc file to skip 'SQL*Net message from client' events >= 1 second.
# cull-snmfc.rc
# Jared Still 2023
# jkstill@gmail.com
# exclude snmfc (SQL*Net message from client) if >= 1 second
--init='
=encoding utf8
'
--where1='($name =~ q{message from client} and $af < 1) or ! ( $name =~ q{message from client})'
Using this rc file: mrskew --rc=cull-snmfc.rc.
This is equivalent to: mrskew --where1='($name =~ q{message from client} and $af < 1) or ! ( $name =~ q{message from client})' on the command line.
If you are a user of the Method R Workbench, you may find this rc file useful.
EVS Schema
EVS is the Electric Vehicles Sighting Schema.
The data was obtained from the Electric Vehicle Population data set.
See create-csv.sh
A subset of cities.csv and ev-models.csv will be used as placeholder values for the bind variables used in the test SQL files.
The Test Transaction
The following SQL scripts will make up a transaction:
SQL/Oracle/ev-cities.sql
select county,city,state
from cities
where
county = :1
and city = :2
and state = :3
SQL/Oracle/ev-insert.sql
insert into ev_sightings(make,model,county,city,state,date_sighted)
values (:1, :2, :3, :4, :5, sysdate)
SQL/Oracle/ev-location-select.sql
select count(*) ev_count
from ev_locations el
join cities ci on ci.city = el.city
and ci.county = el.county
and ci.state = el.state
and el.make = :1
and el.model = :2
and el.county = :3
and el.city = :4
and el.state = :5
join ev_models m on m.make = el.make
and m.model = el.model
SQL/Oracle/ev-select.sql
select make, model
from ev_models
where make = :1
and model = :2
Bind Values
The values for the SQL placeholders are found in these three files:
- SQL/Oracle/cities.csv
- SQL/Oracle/ev-models.csv
- SQL/Oracle/ev-sightings.csv
sqlrun-trace-overhead.sh
This script is used to call sqlrun.pl.
It accepts up to two parameters:
- no-trace
- trace [8|12]
sqlrun.pl will start 50 clients that run for 10 minutes.
The parameter --exe-delay was set to 0 for tests with no think time and '0.5' for tests that allowed think time.
#!/usr/bin/env bash
stMkdir () {
mkdir -p "$@"
[[ $? -ne 0 ]] && {
echo
echo "failed to mkdir -p $*"
echo
exit 1
}
}
# convert to lower case
typeset -l rcMode=$1
typeset -l traceLevel=$2
set -u
[[ -z $rcMode ]] && {
echo
echo include 'trace' or 'no-trace' on the command line
echo
echo "eg: $0 [trace|no-trace]"
echo
exit 1
}
# another method to convert to lower case
#rcMode=${rcMode@L}
echo rcMode: $rcMode
declare traceArgs
case $rcMode in
trace)
[[ -z "$traceLevel" ]] && { echo "please set trace level. eg $0 trace 8"; exit 1;}
traceArgs=" --trace --trace-level $traceLevel ";;
no-trace)
traceLevel=0
traceArgs='';;
*) echo
echo "arguments are [trace|no-trace] - case is unimportant"
echo
exit 1;;
esac
db='ora192rac01/pdb1.jks.com'
#db='lestrade/orcl.jks.com'
username='evs'
password='evs'
baseDir=/mnt/vboxshare/trace-overhead
stMkdir -p $baseDir
ln -s $baseDir .
timestamp=$(date +%Y%m%d%H%M%S)
traceDir=$baseDir/trace/${rcMode}-${traceLevel}-${timestamp}
rcLogDir=$baseDir/trc-ovrhd
rcLogFile=$rcLogDir/xact-count-${rcMode}-${traceLevel}-${timestamp}.log
traceFileID="TRC-OVRHD-$traceLevel-$timestamp"
[[ -n $traceArgs ]] && { traceArgs="$traceArgs --tracefile-id $traceFileID"; }
[[ $rcMode == 'trace' ]] && { stMkdir -p $traceDir; }
stMkdir -p $rcLogDir
./sqlrun.pl \
--exe-mode sequential \
--connect-mode flood \
--tx-behavior commit \
--max-sessions 50 \
--exe-delay 0 \
--db "$db" \
--username $username \
--password "$password" \
--runtime 600 \
--tracefile-id $traceFileID \
--xact-tally \
--xact-tally-file $rcLogFile \
--pause-at-exit \
--sqldir $(pwd)/SQL $traceArgs
# do not continue until all sqlrun have exited
while :
do
echo checking for perl sqlrun to exit completely
chk=$(ps -flu$(id -un) | grep "[p]erl.*sqlrun")
[[ -z $chk ]] && { break; }
sleep 2
done
# cheating a bit as I know where the trace files are on the server
# ora192rac01:/u01/app/oracle/diag/rdbms/cdb/cdb1/trace/
[[ -n $traceArgs ]] && {
# get the trace files and remove them
# space considerations require removing the trace files after retrieval
rsync -av --remove-source-files oracle@ora192rac01:/u01/app/oracle/diag/rdbms/cdb/cdb1/trace/*${traceFileID}.trc ${traceDir}/
# remove the .trm files
ssh oracle@ora192rac01 rm /u01/app/oracle/diag/rdbms/cdb/cdb1/trace/*${traceFileID}.trm
echo Trace files are in $traceDir/
echo
}
echo RC Log is $rcLogFile
echo
overhead.sh
The script overhead.sh was used to allow for the unattended running of tests.
#!/usr/bin/env bash
# run these several times
# pause-at-exit will timeout in 20 seconds for unattended running
for i in {1..3}
do
./sqlrun-trace-overhead.sh no-trace
./sqlrun-trace-overhead.sh trace 8
./sqlrun-trace-overhead.sh trace 12
done
The Results
The results are interesting.
First, let's consider the tests that used a 0.5-second think time.
The number of transactions per client are recorded in a log at the end of each run.
The results are stored in directories named for the tests.
Log results are summarized via overhead-xact-sums.sh
overhead-xact-sums.sh
#!/usr/bin/env bash
#for rcfile in trace-overhead-no-think-time/trc-ovrhd/*
for dir in trace-overhead-.5-sec-think-time trace-overhead-no-think-time
do
echo
echo "dir: $dir"
echo
for traceLevel in 0 8 12
do
testNumber=0
echo " Trace Level: $traceLevel"
for rcfile in $dir/trc-ovrhd/*-$traceLevel-*.log
do
(( testNumber++ ))
basefile=$(basename $rcfile)
xactCount=$(awk '{ x+=$2 }END{printf("%10d\n",x)}' $rcfile)
printf " Test: %1d Transactions: %8d\n" $testNumber $xactCount
done
echo
done
done
echo
0.5 Seconds Think Time
With 50 clients running for 10 minutes, with 0.5 seconds of think time between transactions, we should expect something near 60,000 total transactions.
( 50 sessions * 600 seconds ) / 0.5 seconds think time = 60,000
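That arithmetic can be checked with a quick awk one-liner:

```shell
# Expected upper bound on transactions: total session-seconds divided by think time
awk 'BEGIN { print (50 * 600) / 0.5 }'   # prints 60000
```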
The number of transactions for all tests with 0.5 seconds of think time was between 59177 and 59476 transactions, which is fairly close to the estimate.
The estimate of 60,000 did not account for any overhead and was optimistic. It was not expected that 60k transactions would be reached.
At this rate, ~ 100 transactions per second are being performed on the database.
Trace Levels and Transaction Counts
| Level | Test #1 | Test #2 | Test #3 |
|-------|---------|---------|---------|
| 0     | 59386   | 59454   | 59476   |
| 8     | 59415   | 59365   | 59334   |
| 12    | 59411   | 59177   | 59200   |
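The spread across all nine runs is tiny; comparing the lowest and highest counts:

```shell
# Worst-case relative difference across the nine runs: (max - min) / max
awk 'BEGIN { printf "%.2f%%\n", 100 * (59476 - 59177) / 59476 }'   # prints 0.50%
```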
The difference between tracing and not tracing would not be discernible by users.
We can see where the time was spent via Level 8 and Level 12 tracing, with a report for one set of results from each.
Our built-in think time of 0.5 seconds has resulted in rather skewed results.
$ mrskew --rc=cull-snmfc.rc trace-overhead-.5-sec-think-time/trace/trace-8-20230920190529/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
------------------------------ ------------- ------ ------- -------- -------- --------
SQL*Net message from client 29,805.268316 99.9% 74,678 0.399117 0.000096 0.659325
log file sync 23.068209 0.1% 15,299 0.001508 0.000002 0.096168
EXEC 8.353714 0.0% 60,899 0.000137 0.000000 0.034094
enq: TX - index contention 2.494885 0.0% 99 0.025201 0.000067 0.037637
buffer busy waits 1.395004 0.0% 1,916 0.000728 0.000000 0.014468
reliable message 1.127232 0.0% 150 0.007515 0.000394 0.017183
FETCH 0.518977 0.0% 44,852 0.000012 0.000000 0.000942
enq: SQ - contention 0.287349 0.0% 159 0.001807 0.000011 0.004690
latch: cache buffers chains 0.267296 0.0% 89 0.003003 0.000000 0.013789
DLM cross inst call completion 0.191174 0.0% 268 0.000713 0.000000 0.014315
32 others 1.042712 0.0% 92,879 0.000011 0.000000 0.025532
------------------------------ ------------- ------ ------- -------- -------- --------
TOTAL (42) 29,844.014868 100.0% 291,288 0.102455 0.000000 0.659325
Because the 0.5-second think time falls below the 1-second threshold built into cull-snmfc.rc, the threshold was changed from 1 to 0.5 seconds for the following report, so that client think time is excluded.
$ mrskew --rc=cull-snmfc.rc trace-overhead-.5-sec-think-time/trace/trace-8-20230920190529/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
------------------------------ --------- ------ ------- -------- -------- --------
log file sync 23.068209 46.9% 15,299 0.001508 0.000002 0.096168
SQL*Net message from client 10.448160 21.2% 15,313 0.000682 0.000096 0.041637
EXEC 8.353714 17.0% 60,899 0.000137 0.000000 0.034094
enq: TX - index contention 2.494885 5.1% 99 0.025201 0.000067 0.037637
buffer busy waits 1.395004 2.8% 1,916 0.000728 0.000000 0.014468
reliable message 1.127232 2.3% 150 0.007515 0.000394 0.017183
FETCH 0.518977 1.1% 44,852 0.000012 0.000000 0.000942
enq: SQ - contention 0.287349 0.6% 159 0.001807 0.000011 0.004690
latch: cache buffers chains 0.267296 0.5% 89 0.003003 0.000000 0.013789
DLM cross inst call completion 0.191174 0.4% 268 0.000713 0.000000 0.014315
32 others 1.042712 2.1% 92,879 0.000011 0.000000 0.025532
------------------------------ --------- ------ ------- -------- -------- --------
TOTAL (42) 49.194712 100.0% 231,923 0.000212 0.000000 0.096168
Even though 50 clients ran for 600 seconds each, little work was done due to the 0.5-second think time built into the test.
Only 8.35 seconds were spent EXECuting ~60k database calls.
The rest is database overhead, mostly due to log file sync and normal client network traffic.
Here is the report for the Level 12 trace:
$ mrskew --rc=cull-snmfc.rc trace-overhead-.5-sec-think-time/trace/trace-12-20230920191552/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
------------------------------ --------- ------ ------- -------- -------- --------
log file sync 51.099850 64.7% 15,173 0.003368 0.000011 0.675610
SQL*Net message from client 11.407758 14.5% 15,293 0.000746 0.000099 0.234836
EXEC 8.363528 10.6% 60,893 0.000137 0.000000 0.039199
enq: TX - index contention 3.069491 3.9% 137 0.022405 0.000105 0.040529
buffer busy waits 1.493510 1.9% 1,826 0.000818 0.000001 0.031146
reliable message 0.653565 0.8% 148 0.004416 0.000215 0.036889
FETCH 0.535304 0.7% 44,868 0.000012 0.000000 0.000895
latch: cache buffers chains 0.320590 0.4% 109 0.002941 0.000001 0.021072
DLM cross inst call completion 0.286908 0.4% 320 0.000897 0.000000 0.035637
enq: SQ - contention 0.254433 0.3% 165 0.001542 0.000105 0.003494
30 others 1.444645 1.8% 93,031 0.000016 0.000000 0.039940
------------------------------ --------- ------ ------- -------- -------- --------
TOTAL (40) 78.929582 100.0% 231,963 0.000340 0.000000 0.675610
In this case, using Level 12 added very little overhead: compared to Level 8, the number of EXEC calls differed by only 6, and the time spent on EXEC calls was nearly identical (8.364 vs. 8.354 seconds).
Next, a rather high load was put on the database to see how the cost of tracing might escalate.
0 Seconds Think Time
Let's consider the tests that were run with no think time.
The first thing I noticed is that the number of transactions per test is about 116 times more than tests with 0.5 seconds of think time.
The Oracle database was being pushed quite hard by these tests, maxing out the capacity of the server.
When tracing was enabled, whether at Level 8 or Level 12 (with bind values), there was quite a large drop in the number of transactions performed.
Trace Levels and Transaction Counts
| Level | Test #1   | Test #2   | Test #3   |
|-------|-----------|-----------|-----------|
| 0     | 7,157,228 | 6,758,097 | 6,948,090 |
| 8     | 4,529,157 | 4,195,232 | 4,509,073 |
| 12    | 4,509,640 | 4,126,749 | 4,532,872 |
The number of transactions decreased by nearly 40% (about 37% on average) whenever tracing was enabled.
The high cost of enabling Oracle tracing should not be a surprise in this set of tests. As with previous tests, the tracing level made little difference in the output.
There was no spare capacity on the server, so any extra tasks, such as writing trace files, would come at the expense of other processes.
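Using the averages from the table above, the decrease for Level 8 works out as follows:

```shell
# Average transaction counts, no tracing vs. level 8, from the table above
awk 'BEGIN {
  base = (7157228 + 6758097 + 6948090) / 3   # level 0 (no tracing)
  lvl8 = (4529157 + 4195232 + 4509073) / 3   # level 8
  printf "decrease: %.1f%%\n", 100 * (1 - lvl8 / base)
}'   # prints decrease: 36.6%
```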
Does this mean that if a database system is overloaded, Oracle tracing should not be used?
No. What it does mean is that you should be careful about how tracing is used.
This test intentionally overloaded the database server and then added more work by enabling Oracle trace on all 50 sessions.
In real life, it would be much better to choose only a few sessions to trace on such a busy database, perhaps even just one session, without bind values, so a start can be made on learning where the performance problems lie.
Why start with level 8 (no bind values)?
Because you probably do not know just how many placeholders appear in the SQL.
In the previous test, only a few placeholders appear in the SQL.
Each set of bind values causes (( 5 * number of placeholders ) + 2) lines to be written to the trace file.
For a set of 4 bind values, that would be 22 lines.
If there were 200 bind values, there would be 1002 more lines written to the trace file, which would significantly affect the time required to write the trace file.
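The bind-line arithmetic above can be expressed as a tiny shell function (the (5n + 2) formula is from the observation above):

```shell
# Trace lines written per set of bind values: (5 * placeholders) + 2
bind_lines() { echo $(( 5 * $1 + 2 )); }
bind_lines 4     # prints 22
bind_lines 200   # prints 1002
```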
Once you know it is safe to do so, you can dump bind values to trace if needed.
Here is a mrskew report for one set of the Level 8 tests:
$ mrskew --rc=cull-snmfc.rc trace-overhead-no-think-time/trace/trace-8-20230920123320/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
----------------------------- ------------- ------ ---------- -------- -------- --------
SQL*Net message from client 12,581.726556 47.9% 5,661,876 0.002222 0.000068 0.201148
log file sync 9,838.477278 37.4% 980,400 0.010035 0.000167 8.109759
buffer busy waits 1,360.903841 5.2% 276,513 0.004922 0.000000 3.682007
enq: TX - index contention 504.928985 1.9% 69,517 0.007263 0.000005 0.708188
library cache: mutex X 401.022833 1.5% 16,665 0.024064 0.000002 0.315565
EXEC 400.345960 1.5% 4,530,663 0.000088 0.000000 0.046535
latch: ges resource hash list 351.866071 1.3% 93,299 0.003771 0.000000 0.113945
latch: cache buffers chains 193.510957 0.7% 38,729 0.004997 0.000001 0.090246
latch: enqueue hash chains 188.572075 0.7% 55,370 0.003406 0.000001 0.123281
latch free 118.285486 0.4% 34,727 0.003406 0.000000 0.128214
43 others 352.240751 1.3% 10,231,718 0.000034 0.000000 3.680735
----------------------------- ------------- ------ ---------- -------- -------- --------
TOTAL (53) 26,291.880793 100.0% 21,989,477 0.001196 0.000000 8.109759
And here is Level 12:
$ mrskew --rc=cull-snmfc.rc trace-overhead-no-think-time/trace/trace-12-20230920131238/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
----------------------------- ------------- ------ ---------- -------- -------- --------
SQL*Net message from client 12,514.788817 48.6% 5,637,406 0.002220 0.000067 0.235348
log file sync 9,648.342333 37.5% 981,468 0.009831 0.000001 2.341050
buffer busy waits 1,210.492085 4.7% 257,119 0.004708 0.000000 0.154998
enq: TX - index contention 551.819642 2.1% 72,614 0.007599 0.000006 0.614945
EXEC 385.670903 1.5% 4,511,120 0.000085 0.000000 0.039221
library cache: mutex X 345.251766 1.3% 16,277 0.021211 0.000004 0.269818
latch: ges resource hash list 342.435248 1.3% 92,393 0.003706 0.000000 0.118340
latch: enqueue hash chains 183.029179 0.7% 54,804 0.003340 0.000000 0.096840
latch: cache buffers chains 171.021518 0.7% 35,512 0.004816 0.000000 0.106109
latch free 119.340511 0.5% 35,118 0.003398 0.000001 0.071759
42 others 258.500239 1.0% 10,189,551 0.000025 0.000000 0.639226
----------------------------- ------------- ------ ---------- -------- -------- --------
TOTAL (52) 25,730.692241 100.0% 21,883,382 0.001176 0.000000 2.341050
The dominant wait in each of these tests is SQL*Net message from client, simply due to the large number of calls that SELECT or INSERT a single row.
The Level 12 trace has only about 4% more overhead than the Level 8 trace. More on this later.
6 Milliseconds Think Time
Let's choose a value for think time that allows sufficient time for writing the trace file.
How many transactions per second could the database maintain in the previous test with 0 seconds think time?
Test #1 had the highest number of transactions at 7,157,228 total transactions.
Each of the 50 clients ran for 600 seconds, or very close to it.
This works out to approximately 4.2 ms per transaction without tracing:
1000 ms / ( 7,157,228 transactions / ( 50 sessions * 600 seconds ) ) = 4.19156 ms per transaction
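The same calculation, sketched with awk:

```shell
# Average milliseconds per transaction at peak throughput, no tracing
awk 'BEGIN { printf "%.2f ms per transaction\n", 1000 / (7157228 / (50 * 600)) }'   # prints 4.19 ms per transaction
```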
How much time is required to write to the trace file?
We can get that by running strace on a test session running a level 12 trace.
The lib/Sqlrun.pm Perl module was modified to wait for user input after the database connection was made, and tracing was enabled, but before the testing was started.
Doing it this way allowed checking /proc/PID/fd of the test session to see the File Descriptor and name of each file opened:
l-wx------ 1 root root 64 Sep 18 11:40 1 -> /dev/null
lrwx------ 1 root root 64 Sep 18 11:40 10 -> socket:[113916942]
l-wx------ 1 root root 64 Sep 18 11:40 11 -> /u01/app/oracle/diag/rdbms/cdb/cdb2/trace/cdb2_ora_6237_TRC-OVRHD-12-20230918144006.trc
l-wx------ 1 root root 64 Sep 18 11:40 12 -> /u01/app/oracle/diag/rdbms/cdb/cdb2/trace/cdb2_ora_6237_TRC-OVRHD-12-20230918144006.trm
l-wx------ 1 root root 64 Sep 18 11:40 2 -> /dev/null
lrwx------ 1 root root 64 Sep 18 11:40 21 -> socket:[113914997]
lr-x------ 1 root root 64 Sep 18 11:40 3 -> /dev/null
lrwx------ 1 root root 64 Sep 18 11:40 4 -> anon_inode:[eventpoll]
lr-x------ 1 root root 64 Sep 18 11:40 5 -> /proc/6237/fd
lrwx------ 1 root root 64 Sep 18 11:40 6 -> socket:[113916934]
lr-x------ 1 root root 64 Sep 18 11:40 7 -> /u01/app/oracle/product/19.0.0/dbhome_1/rdbms/mesg/oraus.msb
lrwx------ 1 root root 64 Sep 18 11:40 8 -> anon_inode:[eventpoll]
lrwx------ 1 root root 64 Sep 18 11:40 9 -> socket:[113916941]
The two files of interest are:
/u01/app/oracle/diag/rdbms/cdb/cdb2/trace/cdb2_ora_6237_TRC-OVRHD-12-20230918144006.tr[cm]
These are FD 11 and 12.
So now strace is started as root for PID 6237:
strace -uoracle -p $pid -T -ttt -f -o trace/pid-6237.strace
Back to sqlrun: ENTER is pressed, and now just wait for the test to finish.
Get the trace file and sum up the time spent writing to file descriptors 11 and 12.
As it turns out, knowing the file descriptors was not actually necessary, as the only OS files written to were the Oracle trace files.
We can see that because the number of all writes matches the number of writes to FD 11 and 12:
$ grep -E 'write\((11|12),' trace/pid-6237.strace | wc -l
1991736
$ grep -E 'write\(' trace/pid-6237.strace | wc -l
1991736
The write times are seen in this summary of the strace output, produced by strace-breakdown.pl:
$ ./strace-breakdown.pl < trace/pid-6237.strace
Total Counted Time: 1165.91173999967
Total Elapsed Time: 1218.10785794258
Unaccounted for Time: 52.1961179429084
Call Count Elapsed Min Max Avg
gettid 2 0.000006 0.000003 0.000003 0.000003
brk 2 0.000008 0.000004 0.000004 0.000004
getrlimit 4 0.000012 0.000003 0.000003 0.000003
mprotect 2 0.000013 0.000006 0.000007 0.000007
uname 3 0.000015 0.000004 0.000006 0.000005
setsockopt 5 0.000019 0.000003 0.000005 0.000004
getsockopt 6 0.000021 0.000003 0.000005 0.000004
epoll_ctl 7 0.000025 0.000003 0.000005 0.000004
chown 8 0.000054 0.000005 0.000010 0.000007
rt_sigaction 22 0.000074 0.000003 0.000011 0.000003
fcntl 22 0.000074 0.000003 0.000007 0.000003
rt_sigprocmask 20 0.000077 0.000003 0.000020 0.000004
fstat 1 0.000089 0.000089 0.000089 0.000089
geteuid 42 0.000126 0.000003 0.000003 0.000003
lstat 21 0.000135 0.000003 0.000031 0.000006
open 34 0.000174 0.000003 0.000014 0.000005
stat 56 0.000207 0.000003 0.000008 0.000004
close 32 0.000378 0.000003 0.000068 0.000012
semctl 15 0.001109 0.000015 0.000260 0.000074
munmap 24 0.001127 0.000008 0.000099 0.000047
mmap 104 0.001809 0.000005 0.000156 0.000017
shmdt 5 0.002062 0.000007 0.001878 0.000412
recvmsg 3220 0.021909 0.000003 0.000257 0.000007
sendmsg 2529 0.090821 0.000008 0.000918 0.000036
epoll_wait 2035 0.167436 0.000003 0.001321 0.000082
ioctl 534 0.359921 0.000009 0.065599 0.000674
getrusage 108898 0.726873 0.000002 0.006818 0.000007
semop 13702 0.877790 0.000003 0.001226 0.000064
lseek 969033 5.195529 0.000002 0.007086 0.000005
write 1991736 12.496773 0.000002 0.010083 0.000006
semtimedop 14161 15.281141 0.000004 0.109176 0.001079
read 53844 1130.685933 0.000003 20.464132 0.020999
On average, each write to the trace file consumes 6 microseconds, with a maximum time of 10 milliseconds.
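That average comes straight from the strace summary above (12.496773 seconds across 1,991,736 writes):

```shell
# Mean time per write() call to the trace files, in microseconds
awk 'BEGIN { printf "%.1f\n", 1e6 * 12.496773 / 1991736 }'   # prints 6.3
```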
A think time of 6 ms is roughly 1.5x the average transaction time, and should allow for maximizing the number of transactions without pushing the server so hard that run queues grow too long and resource starvation sets in.
So the same tests were run again, but this time with --exe-delay 0.006.
Here we can see how the database fared at this rate, without and with tracing.
Trace Levels and Transaction Counts
| Level | Test #1   | Test #2   | Test #3   | Avg       | Per Second | Pct Decrease |
|-------|-----------|-----------|-----------|-----------|------------|--------------|
| 0     | 3,884,741 | 3,758,124 | 3,533,573 | 3,725,479 | 124.2      | 0            |
| 8     | 3,342,845 | 3,356,797 | 3,176,763 | 3,292,135 | 109.7      | 11.6         |
| 12    | 3,234,030 | 3,190,312 | 3,000,312 | 3,141,551 | 104.7      | 15.7         |
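The Avg, Per Second, and Pct Decrease columns can be reproduced from the raw counts; for example, the Level 8 row:

```shell
# Level 8 row: average, per-client-per-second rate, and percent decrease vs. level 0
awk 'BEGIN {
  base = (3884741 + 3758124 + 3533573) / 3
  lvl8 = (3342845 + 3356797 + 3176763) / 3
  printf "avg: %d  per-second: %.1f  decrease: %.1f%%\n", lvl8, lvl8 / (50 * 600), 100 * (1 - lvl8 / base)
}'
```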
While the peak transaction count of 3,884,741 is only about 54% of the transaction rate for the 0-second think time test, this test is a much more reasonable approximation of a rather busy database.
Setting a 6 ms think time leaves headroom for some overhead, such as backups of the database and archived logs, and other normal processing.
With Level 8 tracing, will users notice the 11.6% change in response time? It may not be all that noticeable.
Even with Level 12 tracing, an overhead of 15.7% may be tolerable for a period of time.
Again, let's see a summary of the trace files from Level 8 and Level 12 tests.
Level 8:
$ mrskew --rc=cull-snmfc.rc trace-overhead-6ms-think-time/trace/trace-8-20230922134443/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
----------------------------- ------------- ------ ---------- -------- -------- --------
SQL*Net message from client 23,349.402048 80.6% 4,178,816 0.005588 0.000076 0.897703
log file sync 4,770.140432 16.5% 781,599 0.006103 0.000001 1.467388
EXEC 348.967653 1.2% 3,344,203 0.000104 0.000000 0.037502
enq: TX - index contention 302.733657 1.0% 13,056 0.023187 0.000007 0.768567
db file sequential read 40.958026 0.1% 7,495 0.005465 0.000194 1.456375
buffer busy waits 39.544771 0.1% 43,642 0.000906 0.000000 0.037842
FETCH 29.626422 0.1% 2,507,379 0.000012 0.000000 0.002080
latch: ges resource hash list 12.839064 0.0% 12,882 0.000997 0.000000 0.019123
read by other session 11.840345 0.0% 318 0.037234 0.000109 0.725882
library cache: mutex X 9.860030 0.0% 2,148 0.004590 0.000003 0.037839
41 others 38.610023 0.1% 5,045,359 0.000008 0.000000 0.264603
----------------------------- ------------- ------ ---------- -------- -------- --------
TOTAL (51) 28,954.522471 100.0% 15,936,897 0.001817 0.000000 1.467388
Level 12:
$ mrskew --rc=cull-snmfc.rc trace-overhead-6ms-think-time/trace/trace-12-20230922135525/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
----------------------------- ------------- ------ ---------- -------- -------- --------
SQL*Net message from client 22,734.891709 78.5% 4,042,810 0.005624 0.000075 0.096900
log file sync 5,495.080378 19.0% 753,598 0.007292 0.000001 3.784414
EXEC 338.382339 1.2% 3,235,392 0.000105 0.000000 0.039067
enq: TX - index contention 214.584393 0.7% 11,310 0.018973 0.000008 0.686723
buffer busy waits 45.639044 0.2% 42,989 0.001062 0.000000 0.240321
db file sequential read 31.671235 0.1% 5,896 0.005372 0.000205 0.730405
FETCH 27.233206 0.1% 2,425,754 0.000011 0.000000 0.003481
latch: ges resource hash list 14.484754 0.0% 13,369 0.001083 0.000001 0.023182
log file switch completion 12.803378 0.0% 217 0.059002 0.000987 0.241770
library cache: mutex X 10.028905 0.0% 2,106 0.004762 0.000002 0.046082
41 others 47.762738 0.2% 4,882,286 0.000010 0.000000 0.478157
----------------------------- ------------- ------ ---------- -------- -------- --------
TOTAL (51) 28,972.562079 100.0% 15,415,727 0.001879 0.000000 3.784414
Again, the modest number of SQL placeholders used did not really cause much of a time penalty when a Level 12 trace was run.
Comparing the EXEC counts for each shows the performance for Level 12 tracing degraded by only 3.3% as compared to Level 8.
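Checking that figure against the two EXEC lines above:

```shell
# Relative difference in EXEC call counts: level 8 (3,344,203) vs. level 12 (3,235,392)
awk 'BEGIN { printf "%.1f%%\n", 100 * (3344203 - 3235392) / 3344203 }'   # prints 3.3%
```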
In Conclusion
Is there any reason to be afraid of enabling Oracle tracing?
No, not really.
The key to successfully using Oracle tracing in a production environment is to first make sure you know the database where tracing is to be enabled.
If the system is quite busy, it may be necessary first to trace a single session to get a measurement of the overhead.
If you need bind values included, you can try a Level 12 trace and see if the number of bind values results in excessively large trace files.
Once you know what level of tracing is safe to use, you are well on your way to understanding the SQL performance problem that just landed on your desk.
- Awr Data Mining (2)
- Azure Sql (2)
- Azure Sql Data Sync (2)
- Bash (2)
- Business (2)
- Business Intelligence (2)
- Caching (2)
- Cassandra Nodetool (2)
- Cdap (2)
- Certification (2)
- Cloning (2)
- Cloud Cost Optimization (2)
- Cloud Data Fusion (2)
- Cloud Hosting (2)
- Cloud Infrastructure (2)
- Cloud Shell (2)
- Cloud Sql (2)
- Cloudscape (2)
- Cluster Level Consistency (2)
- Conferences (2)
- Consul-Template (2)
- Containerization (2)
- Containers (2)
- Cosmosdb (2)
- Costs (2)
- Cql (2)
- Cqlsh (2)
- Cyber Security (2)
- Data Discovery (2)
- Data Migration (2)
- Data Quality (2)
- Data Streaming (2)
- Data Warehouse (2)
- Database Consulting (2)
- Database Migrations (2)
- Dataguard (2)
- Datapump (2)
- Ddl (2)
- Debezium (2)
- Dictionary Views (2)
- Dms (2)
- Docker-Composer (2)
- Dr (2)
- Duplicate (2)
- Ecc (2)
- Elastic (2)
- Elastic Stack (2)
- Em12C (2)
- Encryption (2)
- Enterprise Data Platform (EDP) (2)
- Enterprise Manager (2)
- Etl (2)
- Events (2)
- Exachk (2)
- Filter Driver (2)
- Flume (2)
- Full Text Search (2)
- Galera (2)
- Gemini (2)
- General Purpose Ssd (2)
- Gh-Ost (2)
- Gke (2)
- Google Workspace (2)
- Hanganalyze (2)
- Hdfs (2)
- Health Check (2)
- Historical Trends (2)
- Incremental (2)
- Infiniband (2)
- Infrastructure As Code (2)
- Innodb Cluster (2)
- Innodb File Structure (2)
- Innodb Group Replication (2)
- Install (2)
- Internals (2)
- Java Web Start (2)
- Kibana (2)
- Log (2)
- Log4J (2)
- Logs (2)
- Memory (2)
- Merge Replication (2)
- Metrics (2)
- Mutex (2)
- MySQLShell (2)
- NLP (2)
- Neo4J (2)
- Node.Js (2)
- Nosql (2)
- November 11Th 2015 (2)
- Ntp (2)
- Oci Iam (2)
- Oem12C (2)
- Omspatcher (2)
- Opatchauto (2)
- Open Source Database (2)
- Operational Excellence (2)
- Oracle 11G (2)
- Oracle Datase (2)
- Oracle Extended Manager (Oem) (2)
- Oracle Flashback (2)
- Oracle Forms (2)
- Oracle Installation (2)
- Oracle Io Testing (2)
- Pdb (2)
- Podcast (2)
- Puppet (2)
- Pythian Europe (2)
- R12.2 (2)
- Redshift (2)
- Remote DBA (2)
- Remote Sre (2)
- SAP (2)
- SAP HANA Cloud (2)
- Sap Migration (2)
- Scale (2)
- Schema (2)
- September 30Th 2015 (2)
- September 3Rd 2015 (2)
- Shell (2)
- Simon Pane (2)
- Single Sign-On (2)
- Sql Server On Gke (2)
- Sqlplus (2)
- Sre (2)
- Ssis Catalog Error (2)
- Ssisdb (2)
- Standby (2)
- Statspack Mining (2)
- Systemstate Dump (2)
- Tablespace (2)
- Technical Training (2)
- Tempdb (2)
- Tfa (2)
- Throughput (2)
- Tls (2)
- Tombstones (2)
- Transactional Replication (2)
- User Groups (2)
- Vagrant (2)
- Variables (2)
- Virtual Machine (2)
- Virtual Machines (2)
- Virtualbox (2)
- Web Application Firewall (2)
- Webinars (2)
- X5 (2)
- scalability (2)
- //Build2019 (1)
- 11G (1)
- 12.1 (1)
- 12Cr1 (1)
- 12Cr2 (1)
- 18C Grid Installation (1)
- 2022 (1)
- 2022 Snowflake Summit (1)
- AI Platform (1)
- AI Summit (1)
- Actifio (1)
- Active Directory (1)
- Adaptive Hash Index (1)
- Adf Custom Email (1)
- Adobe Flash (1)
- Adrci (1)
- Advanced Data Services (1)
- Afd (1)
- After Logon Trigger (1)
- Ahf (1)
- Aix (1)
- Akka (1)
- Alloydb (1)
- Alter Table (1)
- Always On (1)
- Always On Listener (1)
- Alwayson With Gke (1)
- Amazon (1)
- Amazon Athena (1)
- Amazon Aurora Backtrack (1)
- Amazon Efs (1)
- Amazon Redshift (1)
- Amazon Sagemaker (1)
- Amazon Vpc Flow Logs (1)
- Amdu (1)
- Analysis (1)
- Analytical Models (1)
- Analyzing Bigquery Via Sheets (1)
- Anisble (1)
- Annual Mysql Community Dinner (1)
- Anthos (1)
- Apache (1)
- Apache Nifi (1)
- Apache Spark (1)
- Application Migration (1)
- Architect (1)
- Architecture (1)
- Ash (1)
- Asmlib (1)
- Atlas CLI (1)
- Audit In Postgres (1)
- Audit In Postgresql (1)
- Auto Failover (1)
- Auto Increment (1)
- Auto Index (1)
- Autoconfig (1)
- Automated Reports (1)
- Automl (1)
- Autostart (1)
- Awr Mining (1)
- Aws Glue (1)
- Aws Lake Formation (1)
- Aws Lambda (1)
- Azure Analysis Services (1)
- Azure Blob Storage (1)
- Azure Cognitive Search (1)
- Azure Data (1)
- Azure Data Lake (1)
- Azure Data Lake Analytics (1)
- Azure Data Lake Store (1)
- Azure Data Migration Service (1)
- Azure Dma (1)
- Azure Dms (1)
- Azure Document Intelligence (1)
- Azure Integration Runtime (1)
- Azure OpenAI (1)
- Azure Sql Data Warehouse (1)
- Azure Sql Dw (1)
- Azure Sql Managed Instance (1)
- Azure Vm (1)
- Backup For Sql Server (1)
- Bacpac (1)
- Bag (1)
- Bare Metal Solution (1)
- Batch Operation (1)
- Batches In Cassandra (1)
- Beats (1)
- Best Practice (1)
- Bi Publisher (1)
- Binary Logging (1)
- Bind Variables (1)
- Bitnami (1)
- Blob Storage Endpoint (1)
- Blockchain (1)
- Browsers (1)
- Btp Architecture (1)
- Btp Components (1)
- Buffer Pool (1)
- Bug (1)
- Bugs (1)
- Build 2019 Updates (1)
- Build Cassandra (1)
- Bundle Patch (1)
- Bushy Join (1)
- Business Continuity (1)
- Business Insights (1)
- Business Process Modelling (1)
- Business Reputation (1)
- CAPEX (1)
- Capacity Planning (1)
- Career (1)
- Career Development (1)
- Cassandra-Cli (1)
- Catcon.Pm (1)
- Catctl.Pl (1)
- Catupgrd.Sql (1)
- Cbo (1)
- Cdb Duplication (1)
- Certificate (1)
- Certificate Management (1)
- Chaos Engineering (1)
- Cheatsheet (1)
- Checkactivefilesandexecutables (1)
- Chmod (1)
- Chown (1)
- Chrome Enterprise (1)
- Chrome Security (1)
- Cl-Series (1)
- Cleanup (1)
- Cloud Browser (1)
- Cloud Build (1)
- Cloud Consulting (1)
- Cloud Data Warehouse (1)
- Cloud Database Management (1)
- Cloud Dataproc (1)
- Cloud Foundry (1)
- Cloud Manager (1)
- Cloud Migations (1)
- Cloud Networking (1)
- Cloud SQL Replica (1)
- Cloud Scheduler (1)
- Cloud Services (1)
- Cloud Strategies (1)
- Cloudformation (1)
- Cluster Resource (1)
- Cmo (1)
- Cockroach Db (1)
- Coding Benchmarks (1)
- Colab (1)
- Collectd (1)
- Columnar (1)
- Communication Plans (1)
- Community (1)
- Compact Storage (1)
- Compaction (1)
- Compliance (1)
- Compression (1)
- Compute Instances (1)
- Compute Node (1)
- Concurrent Manager (1)
- Concurrent Processing (1)
- Configuration (1)
- Consistency Level (1)
- Consolidation (1)
- Conversational AI (1)
- Covid-19 (1)
- Cpu Patching (1)
- Cqlsstablewriter (1)
- Crash (1)
- Create Catalog Error (1)
- Create_File_Dest (1)
- Credentials (1)
- Cross Platform (1)
- CrowdStrike (1)
- Crsctl (1)
- Custom Instance Images (1)
- Cve-2022-21500 (1)
- Cvu (1)
- Cypher Queries (1)
- DBSAT 3 (1)
- Dacpac (1)
- Dag (1)
- Data Analysis (1)
- Data Analytics Platform (1)
- Data Box (1)
- Data Classification (1)
- Data Cleansing (1)
- Data Encryption (1)
- Data Engineering (1)
- Data Estate (1)
- Data Flow Management (1)
- Data Insights (1)
- Data Integrity (1)
- Data Lake (1)
- Data Leader (1)
- Data Lifecycle Management (1)
- Data Lineage (1)
- Data Masking (1)
- Data Mesh (1)
- Data Migration Assistant (1)
- Data Migration Service (1)
- Data Mining (1)
- Data Modeling (1)
- Data Monetization (1)
- Data Policy (1)
- Data Profiling (1)
- Data Protection (1)
- Data Retention (1)
- Data Safe (1)
- Data Sheets (1)
- Data Summit (1)
- Data Vault (1)
- Data Warehouse Modernization (1)
- Database Auditing (1)
- Database Consultant (1)
- Database Link (1)
- Database Modernization (1)
- Database Provisioning (1)
- Database Provisioning Failed (1)
- Database Replication (1)
- Database Scaling (1)
- Database Schemas (1)
- Database Security (1)
- Databricks (1)
- Datadog (1)
- Datafile (1)
- Datapatch (1)
- Dataprivacy (1)
- Datascape 59 (1)
- Datasets (1)
- Datastax Cassandra (1)
- Datastax Opscenter (1)
- Datasync Error (1)
- Db_Create_File_Dest (1)
- Dbaas (1)
- Dbatools (1)
- Dbcc Checkident (1)
- Dbms_Cloud (1)
- Dbms_File_Transfer (1)
- Dbms_Metadata (1)
- Dbms_Service (1)
- Dbms_Stats (1)
- Dbupgrade (1)
- Deep Learning (1)
- Delivery (1)
- Devd (1)
- Dgbroker (1)
- Dialogflow (1)
- Dict0Dict (1)
- Did You Know (1)
- Direct Path Read Temp (1)
- Disk Groups (1)
- Disk Management (1)
- Diskgroup (1)
- Dispatchers (1)
- Distributed Ag (1)
- Distribution Agent (1)
- Documentation (1)
- Download (1)
- Dp Agent (1)
- Duet AI (1)
- Duplication (1)
- Dynamic Sampling (1)
- Dynamic Tasks (1)
- E-Business Suite Cpu Patching (1)
- E-Business Suite Patching (1)
- Ebs Sso (1)
- Ec2 (1)
- Edb Postgresql Advanced Server (1)
- Edb Postgresql Password Verify Function (1)
- Editions (1)
- Edp (1)
- El Carro (1)
- Elassandra (1)
- Elk Stack (1)
- Em13Cr2 (1)
- Emcli (1)
- End of Life (1)
- Engineering (1)
- Enqueue (1)
- Enterprise (1)
- Enterprise Architecture (1)
- Enterprise Command Centers (1)
- Enterprise Manager Command Line Interface (Em Cli (1)
- Enterprise Plus (1)
- Episode 58 (1)
- Error Handling (1)
- Exacc (1)
- Exacheck (1)
- Exacs (1)
- Exadata Asr (1)
- Execution (1)
- Executive Sponsor (1)
- Expenditure (1)
- Export Sccm Collection To Csv (1)
- External Persistent Volumes (1)
- Fail (1)
- Failed Upgrade (1)
- Failover In Postgresql (1)
- Fall 2021 (1)
- Fast Recovery Area (1)
- FinOps Strategy (1)
- Flash Recovery Area (1)
- Flashback (1)
- Fnd (1)
- Fndsm (1)
- Force_Matching_Signature (1)
- Fra Full (1)
- Framework (1)
- Freebsd (1)
- Fsync (1)
- Function-Based Index (1)
- GCVE Architecture (1)
- GPQA (1)
- Gaming (1)
- Garbagecollect (1)
- Gcp Compute (1)
- Gcp-Spanner (1)
- Geography (1)
- Geth (1)
- Getmospatch (1)
- Git (1)
- Global Analytics (1)
- Gmail (1)
- Gmail Security (1)
- Google Analytics (1)
- Google Cloud Architecture Framework (1)
- Google Cloud Data Services (1)
- Google Cloud Partner (1)
- Google Cloud Spanner (1)
- Google Cloud VMware Engine (1)
- Google Compute Engine (1)
- Google Dataflow (1)
- Google Datalab (1)
- Google Grab And Go (1)
- Google Sheets (1)
- Gp2 (1)
- Graph Algorithms (1)
- Graph Databases (1)
- Graph Inferences (1)
- Graph Theory (1)
- GraphQL (1)
- Graphical User Interface (Gui) (1)
- Grid (1)
- Grid Infrastructure (1)
- Griddisk Resize (1)
- Grp (1)
- Guaranteed Restore Point (1)
- Guid Mismatch (1)
- HR Technology (1)
- HRM (1)
- Ha (1)
- Hang (1)
- Hashicorp (1)
- Hbase (1)
- Hcc (1)
- Hdinsight (1)
- Healthcheck (1)
- Hemantgiri S. Goswami (1)
- Hortonworks (1)
- How To Install Ssrs (1)
- Hr (1)
- Httpchk (1)
- Https (1)
- Huge Pages (1)
- HumanEval (1)
- Hung Database (1)
- Hybrid Columnar Compression (1)
- Hyper-V (1)
- Hyperscale (1)
- Hypothesis Driven Development (1)
- Ibm (1)
- Identity Management (1)
- Idm (1)
- Ilom (1)
- Imageinfo (1)
- Impdp (1)
- In Place Upgrade (1)
- Incident Response (1)
- Indempotent (1)
- Indexing In Mongodb (1)
- Influxdb (1)
- Information (1)
- Infrastructure As A Code (1)
- Injection (1)
- Innobackupex (1)
- Innodb Concurrency (1)
- Innodb Flush Method (1)
- Insights (1)
- Installing (1)
- Instance Cloning (1)
- Integration Services (1)
- Integrations (1)
- Interactive_Timeout (1)
- Interval Partitioning (1)
- Invisible Indexes (1)
- Io1 (1)
- IoT (1)
- Iops (1)
- Iphone (1)
- Ipv6 (1)
- Iscsi (1)
- Iscsi-Initiator-Utils (1)
- Iscsiadm (1)
- Issues (1)
- It Industry (1)
- It Teams (1)
- JMX Metrics (1)
- Jared Still (1)
- Javascript (1)
- Jdbc (1)
- Jinja2 (1)
- Jmx (1)
- Jmx Monitoring (1)
- Jvm (1)
- Jython (1)
- K8S (1)
- Kernel (1)
- Key Btp Components (1)
- Kfed (1)
- Kill Sessions (1)
- Knapsack (1)
- Kubeflow (1)
- LMSYS Chatbot Arena (1)
- Large Pages (1)
- Latency (1)
- Latest News (1)
- Leadership (1)
- Leap Second (1)
- Limits (1)
- Line 1 (1)
- Linkcolumn (1)
- Linux Host Monitoring (1)
- Linux Storage Appliance (1)
- Listener (1)
- Loadavg (1)
- Lock_Sga (1)
- Locks (1)
- Log File Switch (Archiving Needed) (1)
- Logfile (1)
- Looker (1)
- Lvm (1)
- MMLU (1)
- Managed Instance (1)
- Managed Services (1)
- Management (1)
- Management Servers (1)
- Marketing (1)
- Marketing Analytics (1)
- Martech (1)
- Masking (1)
- Megha Bedi (1)
- Metadata (1)
- Method-R Workbench (1)
- Metric (1)
- Metric Extensions (1)
- Michelle Gutzait (1)
- Microservices (1)
- Microsoft Azure Sql Database (1)
- Microsoft Build (1)
- Microsoft Build 2019 (1)
- Microsoft Ignite (1)
- Microsoft Inspire 2019 (1)
- Migrate (1)
- Migrating Ssis Catalog (1)
- Migrating To Azure Sql (1)
- Migration Checklist (1)
- Mirroring (1)
- Mismatch (1)
- Model Governance (1)
- Monetization (1)
- MongoDB Atlas (1)
- MongoDB Compass (1)
- Ms Excel (1)
- Msdtc (1)
- Msdtc In Always On (1)
- Msdtc In Cluster (1)
- Multi-IP (1)
- Multicast (1)
- Multipath (1)
- My.Cnf (1)
- MySQL Shell Logical Backup (1)
- MySQLDump (1)
- Mysql Enterprise (1)
- Mysql Plugin For Oracle Enterprise Manager (1)
- Mysql Replication Filters (1)
- Mysql Server (1)
- Mysql-Python (1)
- Nagios (1)
- Ndb (1)
- Net_Read_Timeout (1)
- Net_Write_Timeout (1)
- Netcat (1)
- Newsroom (1)
- Nfs (1)
- Nifi (1)
- Node (1)
- November 10Th 2015 (1)
- November 6Th 2015 (1)
- Null Columns (1)
- Nullipotent (1)
- OPEX (1)
- ORAPKI (1)
- O_Direct (1)
- Oacore (1)
- October 21St 2015 (1)
- October 6Th 2015 (1)
- October 8Th 2015 (1)
- Oda (1)
- Odbcs (1)
- Odbs (1)
- Odi (1)
- Oel (1)
- Ohs (1)
- Olvm (1)
- On-Prem To Azure Sql (1)
- On-Premises (1)
- Onclick (1)
- Open.Canada.Ca (1)
- Openstack (1)
- Operating System Monitoring (1)
- Oplog (1)
- Opsworks (1)
- Optimization (1)
- Optimizer (1)
- Ora-01852 (1)
- Ora-7445 (1)
- Oracle 19 (1)
- Oracle 20C (1)
- Oracle Cursor (1)
- Oracle Database 12.2 (1)
- Oracle Database Appliance (1)
- Oracle Database Se2 (1)
- Oracle Database Standard Edition 2 (1)
- Oracle Database Upgrade (1)
- Oracle Database@Google Cloud (1)
- Oracle Exadata Smart Scan (1)
- Oracle Licensing (1)
- Oracle Linux Virtualization Manager (1)
- Oracle Oda (1)
- Oracle Openworld (1)
- Oracle Parallelism (1)
- Oracle Rdbms (1)
- Oracle Real Application Clusters (1)
- Oracle Reports (1)
- Oracle Security (1)
- Oracle Wallet (1)
- Orasrp (1)
- Organizational Change (1)
- Orion (1)
- Os (1)
- Osbws_Install.Jar (1)
- Oui Gui (1)
- Output (1)
- Owox (1)
- Paas (1)
- Package Deployment Wizard Error (1)
- Parallel Execution (1)
- Parallel Query (1)
- Parallel Query Downgrade (1)
- Partitioning (1)
- Partitions (1)
- Password (1)
- Password Change (1)
- Password Recovery (1)
- Password Verify Function In Postgresql (1)
- Patches (1)
- Patchmgr (1)
- Pdb Duplication (1)
- Penalty (1)
- Perfomrance (1)
- Performance Schema (1)
- Pg 15 (1)
- Pg_Rewind (1)
- Pga (1)
- Pipeline Debugging (1)
- Pivot (1)
- Planning (1)
- Plsql (1)
- Policy (1)
- Polybase (1)
- Post-Acquisition (1)
- Post-Covid It (1)
- Postgresql Complex Password (1)
- Postgresql With Repmgr Integration (1)
- Power Bi (1)
- Pq (1)
- Preliminar Connection (1)
- Preliminary Connection (1)
- Privatecloud (1)
- Process Mining (1)
- Production (1)
- Productivity (1)
- Profile In Edb Postgresql (1)
- Programming (1)
- Prompt Engineering (1)
- Provisioned Iops (1)
- Provisiones Iops (1)
- Proxy Monitoring (1)
- Psu (1)
- Public Cloud (1)
- Pubsub (1)
- Purge (1)
- Purge Thread (1)
- Pythian Blackbird Acquisition (1)
- Pythian Goodies (1)
- Pythian News (1)
- Python Pandas (1)
- Query Performance (1)
- Quicksight (1)
- Quota Limits (1)
- R12 R12.2 Cp Concurrent Processing Abort (1)
- R12.1.3 (1)
- REF! (1)
- Ram Cache (1)
- Rbac (1)
- Rdb (1)
- Rds_File_Util (1)
- Read Free Replication (1)
- Read Latency (1)
- Read Only (1)
- Read Replica (1)
- Reboot (1)
- Recruiting (1)
- Redo Size (1)
- Relational Database Management System (1)
- Release (1)
- Release Automation (1)
- Repair (1)
- Replication Compatibility (1)
- Replication Error (1)
- Repmgr (1)
- Repmgrd (1)
- Reporting Services 2019 (1)
- Resiliency Planning (1)
- Resource Manager (1)
- Resources (1)
- Restore (1)
- Restore Point (1)
- Retail (1)
- Rhel (1)
- Risk (1)
- Risk Management (1)
- Rocksrb (1)
- Role In Postgresql (1)
- Rollback (1)
- Rolling Patch (1)
- Row0Purge (1)
- Rpm (1)
- Rule "Existing Clustered Or Clustered-Prepared In (1)
- Running Discovery On Remote Machine (1)
- SQL Optimization (1)
- SQL Tracing (1)
- SSRS Administration (1)
- SaaS (1)
- Sap Assessment (1)
- Sap Assessment Report (1)
- Sap Backup Restore (1)
- Sap Btp Architecture (1)
- Sap Btp Benefits (1)
- Sap Btp Model (1)
- Sap Btp Services (1)
- Sap Homogenous System Copy Method (1)
- Sap Landscape Copy (1)
- Sap Migration Assessment (1)
- Sap On Mssql (1)
- Sap System Copy (1)
- Sar (1)
- Scaling Ir (1)
- Sccm (1)
- Sccm Powershell (1)
- Scheduler (1)
- Scheduler_Job (1)
- Schedulers (1)
- Scheduling (1)
- Scott Mccormick (1)
- Scripts (1)
- Sdp (1)
- Secrets (1)
- Securing Sql Server (1)
- Security Compliance (1)
- Sed (Stream Editor) (1)
- Self Hosted Ir (1)
- Semaphore (1)
- Seps (1)
- September 11Th 2015 (1)
- Serverless Computing (1)
- Serverless Framework (1)
- Service Broker (1)
- Service Bus (1)
- Shared Connections (1)
- Shared Storage (1)
- Shellshock (1)
- Signals (1)
- Silent (1)
- Slave (1)
- Slob (1)
- Smart Scan (1)
- Smtp (1)
- Snapshot (1)
- Snowday Fall 2021 (1)
- Socat (1)
- Software Development (1)
- Software Engineering (1)
- Solutions Architecture (1)
- Spanner-Backups (1)
- Sphinx (1)
- Split Brain In Postgresql (1)
- Spm (1)
- Sql Agent (1)
- Sql Backup To Url Error (1)
- Sql Cluster Installer Hang (1)
- Sql Database (1)
- Sql Developer (1)
- Sql On Linux (1)
- Sql Server 2014 (1)
- Sql Server 2016 (1)
- Sql Server Agent On Linux (1)
- Sql Server Backups (1)
- Sql Server Denali Is Required To Install Integrat (1)
- Sql Server Health Check (1)
- Sql Server Troubleshooting On Linux (1)
- Sql Server Version (1)
- Sql Setup (1)
- Sql Vm (1)
- Sql2K19Ongke (1)
- Sqldatabase Serverless (1)
- Ssh User Equivalence (1)
- Ssis Denali Error (1)
- Ssis Install Error E Xisting Clustered Or Cluster (1)
- Ssis Package Deployment Error (1)
- Ssisdb Master Key (1)
- Ssisdb Restore Error (1)
- Sso (1)
- Ssrs 2019 (1)
- Sstable2Json (1)
- Sstableloader (1)
- Sstablesimpleunsortedwriter (1)
- Stack Dump (1)
- Standard Edition (1)
- Startup Process (1)
- Statistics (1)
- Statspack (1)
- Statspack Data Mining (1)
- Statspack Erroneously Reporting (1)
- Statspack Issues (1)
- Storage (1)
- Stored Procedure (1)
- Strategies (1)
- Streaming (1)
- Sunos (1)
- Swap (1)
- Swapping (1)
- Switch (1)
- Syft (1)
- Synapse (1)
- Sync Failed There Is Not Enough Space On The Disk (1)
- Sys Schema (1)
- System Function (1)
- Systems Administration (1)
- T-Sql (1)
- Table Optimization (1)
- Tablespace Growth (1)
- Tablespaces (1)
- Tags (1)
- Tar (1)
- Tde (1)
- Team Management (1)
- Tech Debt (1)
- Technology (1)
- Telegraf (1)
- Tempdb Encryption (1)
- Templates (1)
- Temporary Tablespace (1)
- Tenserflow (1)
- Teradata (1)
- Testing New Cassandra Builds (1)
- There Is Not Enough Space On The Disk (1)
- Thick Data (1)
- Third-Party Data (1)
- Thrift (1)
- Thrift Data (1)
- Tidb (1)
- Time Series (1)
- Time-Drift (1)
- Tkprof (1)
- Tmux (1)
- Tns (1)
- Trace (1)
- Tracefile (1)
- Training (1)
- Transaction Log (1)
- Transactions (1)
- Transformation Navigator (1)
- Transparent Data Encryption (1)
- Trigger (1)
- Triggers On Memory-Optimized Tables Must Use With (1)
- Troubleshooting (1)
- Tungsten (1)
- Tvdxtat (1)
- Twitter (1)
- U-Sql (1)
- UNDO Tablespace (1)
- Upgrade Issues (1)
- Uptime (1)
- Uptrade (1)
- Url Backup Error (1)
- Usability (1)
- Use Cases (1)
- User (1)
- User Defined Compactions (1)
- Utilization (1)
- Utl_Smtp (1)
- VDI Jump Host (1)
- Validate Structure (1)
- Validate_Credentials (1)
- Value (1)
- Velocity (1)
- Vertex AI (1)
- Vertica (1)
- Vertical Slicing (1)
- Videos (1)
- Virtual Private Cloud (1)
- Virtualization (1)
- Vision (1)
- Vpn (1)
- Wait_Timeout (1)
- Wallet (1)
- Webhook (1)
- Weblogic Connection Filters (1)
- Webscale Database (1)
- Windows 10 (1)
- Windows Powershell (1)
- WiredTiger (1)
- With Native_Compilation (1)
- Word (1)
- Workshop (1)
- Workspace Security (1)
- Xbstream (1)
- Xml Publisher (1)
- Zabbix (1)
- dbms_Monitor (1)
- postgresql 16 (1)
- sqltrace (1)
- tracing (1)
- vSphere (1)
- xml (1)