Mitigating Long Distance Network Latencies with Oracle Client Result Cache
by Jared Still on Jan 4, 2024 1:52:47 PM
Recently, Pythian was engaged to investigate the performance of an application that was reported as performing poorly for remote clients.
At times, users would wait 10-15 seconds for some screens to populate.
This may not seem like a long time at first.
Imagine though you are at the front desk of a medical practice office, admitting patients.
Those few seconds add up quickly, particularly when there are multiple folks queued up.
SQL Tracing
To find out where the time was being spent by this application, SQL tracing was enabled on multiple clients for a period of time.
Then, the trace files were gathered, and we began analyzing the data.
Trace data generated by the application was analyzed with the Method-R Workbench tool, mrskew.
There are some free trace file analysis tools available. However, none of them can perform an analysis of the type done by mrskew.
The --where='$dur<1' option is typically used to filter out calls lasting 1 second or more, which in many cases consist of 'SQL*Net message from client' (SNMFC) calls.
The SNMFC wait frequently accumulates lengthy wait times, as this is the event seen in trace data when the database is waiting for input from a user.
Tests performed later on are also analyzed with mrskew.
Following are some results from tracing the application.
Before getting into the analysis, it seemed a good idea to remember that this trace data is from tracing real applications used by real users.
While most of the time a user will require more than 1 second between operations that cause the app to communicate with the database, there may be occasions where the user manages to do so in less than 1 second.
It is conceivable, depending on how the app works internally.
A histogram of the SNMFC times was created to get an idea of what should be considered the minimum time for user interaction, should it be less than 1 second.
$mrskew --name='message from client' --rc=p10.rc trace*/*.trc
RANGE {min ≤ e < max} DURATION % CALLS MEAN MIN MAX
----------------------------- -------------- ------ --------- ------------ ------------ ------------
1. 0.000000 0.000001
2. 0.000001 0.000010
3. 0.000010 0.000100
4. 0.000100 0.001000 73.820458 0.0% 176,259 0.000419 0.000242 0.000999
5. 0.001000 0.010000 3,485.986173 0.6% 714,287 0.004880 0.001000 0.009999
6. 0.010000 0.100000 4,669.279702 0.8% 246,886 0.018913 0.010000 0.099993
7. 0.100000 1.000000 10,562.221749 1.7% 25,251 0.418289 0.100008 0.999999
8. 1.000000 10.000000 76,174.497704 12.3% 24,593 3.097406 1.000018 9.999422
9. 10.000000 100.000000 291,379.130585 46.9% 8,548 34.087404 10.000591 99.936869
10. 100.000000 1,000.000000 229,895.396405 37.0% 1,874 122.676305 100.258822 736.313515
11. 1,000.000000 +∞ 5,558.747266 0.9% 3 1,852.915755 1,520.688313 2,459.756552
----------------------------- -------------- ------ --------- ------------ ------------ ------------
TOTAL (11) 621,799.080042 100.0% 1,197,701 0.519161 0.000242 2,459.756552
The amount of time occupied by the 0.1 - 1.0 second bucket is only 1.7% of the total wait time.
It is possible that other calls could exceed 1 second as well. It is not unusual to see any of PARSE, EXEC or FETCH exceed that time.
Now we can get an idea of the maximum time per call, so that we exclude as few as possible of the calls that we do want to influence the output.
The following summarizes all database calls and system calls from the collection of trace files.
$ mrskew --top=0 --name=:all --sort=7nd trace*/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
-------------------------------- -------------- ------ --------- -------- -------- ------------
SQL*Net message from client 621,799.080042 99.7% 1,197,701 0.519161 0.000242 2,459.756552
FETCH 583.095959 0.1% 1,092,840 0.000534 0.000000 6.866308
PARSE 898.293746 0.1% 871,812 0.001030 0.000000 1.354290
SQL*Net more data from client 30.262520 0.0% 4,999 0.006054 0.000019 0.378024
EXEC 257.789553 0.0% 892,535 0.000289 0.000000 0.352359
SQL*Net more data to client 22.384297 0.0% 55,271 0.000405 0.000038 0.196321
cursor: pin S wait on X 0.253378 0.0% 14 0.018098 0.001094 0.168761
LOBWRITE 1.603060 0.0% 600 0.002672 0.000000 0.146709
LOBREAD 2.676483 0.0% 1,787 0.001498 0.000000 0.106229
log file sync 23.853369 0.0% 16,178 0.001474 0.000003 0.106004
db file parallel read 17.625797 0.0% 4,949 0.003561 0.000024 0.099710
db file sequential read 45.240606 0.0% 134,649 0.000336 0.000009 0.069494
PGA memory operation 21.200406 0.0% 849,033 0.000025 0.000008 0.023866
CLOSE 12.400357 0.0% 886,240 0.000014 0.000000 0.022191
enq: KO - fast object checkpoint 0.043178 0.0% 4 0.010795 0.002090 0.019107
direct path read 7.410740 0.0% 21,469 0.000345 0.000010 0.016980
LOBGETLEN 0.583349 0.0% 11,843 0.000049 0.000000 0.012174
SQL*Net break/reset to client 0.049506 0.0% 24 0.002063 0.000003 0.012116
db file scattered read 0.972713 0.0% 3,443 0.000283 0.000018 0.010542
latch: shared pool 0.571746 0.0% 1,994 0.000287 0.000001 0.009978
row cache mutex 0.020823 0.0% 69 0.000302 0.000009 0.009531
library cache: mutex X 0.015331 0.0% 27 0.000568 0.000008 0.009250
direct path sync 0.195964 0.0% 119 0.001647 0.000605 0.007678
cursor: pin S 0.002302 0.0% 2 0.001151 0.001128 0.001174
reliable message 0.001594 0.0% 4 0.000399 0.000137 0.001095
read by other session 0.003495 0.0% 15 0.000233 0.000045 0.000931
row cache lock 0.000987 0.0% 2 0.000494 0.000141 0.000846
latch: cache buffers lru chain 0.000833 0.0% 1 0.000833 0.000833 0.000833
SQL*Net message to client 2.414208 0.0% 1,197,706 0.000002 0.000001 0.000396
Disk file operations I/O 0.056115 0.0% 787 0.000071 0.000036 0.000256
direct path write 0.153630 0.0% 4,206 0.000037 0.000028 0.000254
latch: In memory undo latch 0.000151 0.0% 1 0.000151 0.000151 0.000151
LOBPGSIZE 0.004503 0.0% 163 0.000028 0.000008 0.000149
latch: cache buffers chains 0.000054 0.0% 1 0.000054 0.000054 0.000054
library cache: bucket mutex X 0.000103 0.0% 3 0.000034 0.000009 0.000052
asynch descriptor resize 0.000942 0.0% 52 0.000018 0.000016 0.000031
cursor: mutex X 0.000024 0.0% 1 0.000024 0.000024 0.000024
ERROR 0.000000 0.0% 12 0.000000 0.000000 0.000000
XCTEND 0.000000 0.0% 795,957 0.000000 0.000000 0.000000
-------------------------------- -------------- ------ --------- -------- -------- ------------
TOTAL (40) 623,728.261864 100.0% 8,046,513 0.077515 0.000000 2,459.756552
Right away, we see that the maximum FETCH time is 6.8 seconds, and the maximum PARSE time is 1.35 seconds.
Now, let's get some idea of how prevalent those lengthy times really are.
First, get a histogram of all FETCH calls.
$ mrskew --name='FETCH' --rc=p10.rc trace-2-methodr.trc trace*/*.trc
RANGE {min ≤ e < max} DURATION % CALLS MEAN MIN MAX
----------------------------- ---------- ------ --------- -------- -------- --------
1. 0.000000 0.000001 0.000000 0.0% 89,648 0.000000 0.000000 0.000000
2. 0.000001 0.000010 0.140694 0.0% 16,614 0.000008 0.000001 0.000010
3. 0.000010 0.000100 28.348351 4.8% 754,422 0.000038 0.000011 0.000100
4. 0.000100 0.001000 58.279688 10.0% 168,938 0.000345 0.000101 0.000999
5. 0.001000 0.010000 272.538200 46.5% 65,797 0.004142 0.001000 0.009999
6. 0.010000 0.100000 45.677704 7.8% 2,010 0.022725 0.010006 0.099719
7. 0.100000 1.000000 132.073228 22.6% 529 0.249666 0.100394 0.650443
8. 1.000000 10.000000 48.537364 8.3% 33 1.470829 1.024624 6.866308
9. 10.000000 100.000000
10. 100.000000 1,000.000000
11. 1,000.000000 +∞
----------------------------- ---------- ------ --------- -------- -------- --------
TOTAL (11) 585.595229 100.0% 1,097,991 0.000533 0.000000 6.866308
The calls that are >= 0.1 seconds make up ~ 31% of all FETCH time, which is a significant amount.
A cutoff somewhere between 0.5 and 1.0 seconds would be good for eliminating SNMFC calls that are actually time spent waiting on user input.
Something around 0.7 seconds would be reasonable.
Note: $af is mrskew's 'accounted for' time.
$ mrskew --name='FETCH' --where='$af >= 0.7' --rc=p10.rc trace-2-methodr.trc trace*/*.trc
RANGE {min ≤ e < max} DURATION % CALLS MEAN MIN MAX
----------------------------- --------- ------ ----- -------- -------- --------
1. 0.000000 0.000001
2. 0.000001 0.000010
3. 0.000010 0.000100
4. 0.000100 0.001000
5. 0.001000 0.010000
6. 0.010000 0.100000
7. 0.100000 1.000000
8. 1.000000 10.000000 48.537364 100.0% 33 1.470829 1.024624 6.866308
9. 10.000000 100.000000
10. 100.000000 1,000.000000
11. 1,000.000000 +∞
----------------------------- --------- ------ ----- -------- -------- --------
TOTAL (11) 48.537364 100.0% 33 1.470829 1.024624 6.866308
The check for FETCH calls >= 0.7 seconds shows none in the 0.7 - 1.0 range.
There are, however, quite a few in the 1.0 - 10.0 range.
Even if all FETCH calls >= 0.7 seconds are excluded from the results, the difference in measurement will be minimal.
The entire 583.095959 seconds of FETCH time accounted for only 0.1% of the total time, so we can lose 48.5 seconds of FETCH without any significant difference in the results.
Now for PARSE:
$ mrskew --name='PARSE' --where='$af >= 0.7' --rc=p10.rc trace-2-methodr.trc trace*/*.trc
RANGE {min ≤ e < max} DURATION % CALLS MEAN MIN MAX
----------------------------- --------- ------ ----- -------- -------- --------
1. 0.000000 0.000001
2. 0.000001 0.000010
3. 0.000010 0.000100
4. 0.000100 0.001000
5. 0.001000 0.010000
6. 0.010000 0.100000
7. 0.100000 1.000000 26.019180 95.1% 35 0.743405 0.700742 0.989590
8. 1.000000 10.000000 1.354290 4.9% 1 1.354290 1.354290 1.354290
9. 10.000000 100.000000
10. 100.000000 1,000.000000
11. 1,000.000000 +∞
----------------------------- --------- ------ ----- -------- -------- --------
TOTAL (11) 27.373470 100.0% 36 0.760374 0.700742 1.354290
Again, even though 27.37 seconds of PARSE time will be excluded from the report for all trace files when accounted-for time ($af) is limited to 0.7 seconds, that amount of time is not significant in the report.
Now, to get an overview of where this app spends its time from a performance analysis perspective. This command is similar to the previous one shown, but now, any SNMFC >= 0.7 seconds will not be included in the report.
$ mrskew --where='$af < 0.7' trace*/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
----------------------------- ------------- ------ --------- -------- -------- --------
SQL*Net message from client 14,151.167569 88.4% 1,157,586 0.012225 0.000242 0.699830
PARSE 870.920276 5.4% 871,776 0.000999 0.000000 0.699497
FETCH 534.558595 3.3% 1,092,807 0.000489 0.000000 0.650443
EXEC 257.789553 1.6% 892,535 0.000289 0.000000 0.352359
db file sequential read 45.240606 0.3% 134,649 0.000336 0.000009 0.069494
SQL*Net more data from client 30.262520 0.2% 4,999 0.006054 0.000019 0.378024
log file sync 23.853369 0.1% 16,178 0.001474 0.000003 0.106004
SQL*Net more data to client 22.384297 0.1% 55,271 0.000405 0.000038 0.196321
PGA memory operation 21.200406 0.1% 849,033 0.000025 0.000008 0.023866
db file parallel read 17.625797 0.1% 4,949 0.003561 0.000024 0.099710
29 others 29.435569 0.2% 2,926,546 0.000010 0.000000 0.168761
----------------------------- ------------- ------ --------- -------- -------- --------
TOTAL (39) 16,004.438557 100.0% 8,006,329 0.001999 0.000000 0.699830
This app spends about 88% of its time waiting on SQL*Net traffic. This is rather a lot of time.
What is happening is that a request is made to the database, and the database responds by sending data to the client.
The database instance then waits for the application to ack the response.
This is time that could well be spent by the application doing work rather than spending so much time acknowledging packets.
Frequently, this type of behavior occurs when an application requests a single row at a time from the database.
Maybe you are familiar with the term 'row-by-row' processing, as popularized by Tom Kyte.
The term, not the practice; Tom Kyte is definitely against row-by-row processing.
If you are not familiar with Tom Kyte, he was the originator of the very popular Oracle site, Ask Tom.
You may have experienced the impact of row-by-row processing yourself by experimenting with the 'arraysize' setting in sqlplus.
If not, here is a simple test that will show the significance of this concept.
The test script:
--j.sql
set term off
set timing on
spool j.log
select * from all_objects;
spool off
set term on
host tail -4 j.log
The test session:
SQL# set arraysize 100
SQL# @j
68109 rows selected.
Elapsed: 00:00:13.32
SQL# set arraysize 1
SQL# @j
68109 rows selected.
Elapsed: 00:00:23.27
An additional 10 seconds were required for the same query when arraysize was changed from 100 to 1.
And that is with the client located physically next to the database. The waits are greatly magnified with additional network latency.
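A rough sketch of what that magnification looks like at scale: multiplying the ~1.16 million short SQL*Net round trips seen earlier in the trace data by an assumed extra ~6 ms of round-trip latency (the latency simulated later in this article) gives the added wall-clock cost.

```shell
# Rough cost of network latency on a chatty application: number of
# short SQL*Net round trips observed in the traces, times an assumed
# extra ~6 ms of round-trip latency.
awk 'BEGIN {
  trips = 1157586   # SNMFC calls < 0.7s from the earlier mrskew report
  extra = 0.006     # assumed additional round-trip latency in seconds
  printf "%.1f extra seconds (%.1f hours) of waiting\n", trips * extra, trips * extra / 3600
}'
```

Nearly two hours of additional waiting, spread across all users, from just 6 ms per round trip.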
We can use mrskew to find out just how many rows are returned at a time with this application.
$ mrskew --name=:dbcall --group='$r' --group-label='rows' trace*/*.trc
rows DURATION % CALLS MEAN MIN MAX
----------- ------------ ------ --------- -------- -------- --------
0 1,124.512143 64.0% 3,669,419 0.000306 0.000000 1.354290
1 532.790386 30.3% 697,012 0.000764 0.000000 2.764739
2 16.039232 0.9% 122,839 0.000131 0.000000 0.468920
200 14.690736 0.8% 2,946 0.004987 0.000000 0.483858
19 7.215690 0.4% 212 0.034036 0.000000 6.866308
120 6.254791 0.4% 1,786 0.003502 0.000059 0.011138
7 4.293085 0.2% 2,530 0.001697 0.000000 0.062482
6 3.065380 0.2% 4,424 0.000693 0.000000 0.084583
18 2.895639 0.2% 745 0.003887 0.000000 0.048260
3 2.243636 0.1% 2,060 0.001089 0.000000 0.200572
187 others 42.446292 2.4% 49,816 0.000852 0.000000 0.648143
----------- ------------ ------ --------- -------- -------- --------
TOTAL (197) 1,756.447010 100.0% 4,553,789 0.000386 0.000000 6.866308
94.4% of the time spent retrieving data has been spent retrieving zero or one rows.
That is quite significant. It would be great if that could be reduced.
One way might be by increasing the arraysize or equivalent parameter, if available.
In this case, that parameter, or something like it, is not available.
The following questions are raised by this result:
- Are those 1-row packets being sent because the application has submitted an SQL query that is expected to return only 1 row?
- Or is the app configured to retrieve one row at a time, regardless of the number of rows that may be returned?
The fact that some FETCH calls return 200 rows suggests that there may be many queries that simply return 0 or 1 row.
Now we can look for the SQL statements consuming the most time in terms of SNMFC and drill down further from there.
Note: some SQL is intentionally obfuscated
$ mrskew --name='message from client' --where='$af < 0.7' --group='$sqlid . ":" . substr($sql,0,60)' trace*/*.trc
$sqlid . ":" . substr($sql,0,60) DURATION % CALLS MEAN MIN MAX
-------------------------------------------------------------------------- ------------- ------ --------- -------- -------- --------
8j53dscbsbqmb:SELECT H.* FROM HOLIDAY H WHERE ( H."Clinic" = 0 OR H."Clini 1,270.682768 9.0% 41,910 0.030319 0.001445 0.698968
02xpam19gsy6q:SELECT "User", "Key", "Value" FROM DSETTING WHERE "Key" = :p 986.568287 7.0% 1,819 0.542368 0.001488 0.697195
4nf825y4j3sqp:SELECT "Access" FROM ATACCESS WHERE "AtTab" = 94 AND "SLevel 898.422290 6.3% 16,704 0.053785 0.001834 0.695050
204trnv7gujxn:SELECT COUNT(1) FROM AP_NOTE WHERE "AP_NOTE_AppId" = :1 AND 702.007396 5.0% 53,989 0.013003 0.001076 0.698291
0sh4pkr39vszs:SELECT N.*, U."User", U."Name", U."First", U."Middle", AU."U 366.166472 2.6% 58,694 0.006239 0.000847 0.330926
a326j107p6xwr:SELECT * FROM XXXXXXXXXXXXXXXXXXX 313.086716 2.2% 711 0.440347 0.001365 0.699289
62cj7g3wth5cm:SELECT APPTIND_ORREFSTS_F(:1) FROM DUAL 206.103814 1.5% 27,075 0.007612 0.002896 0.695297
9kryabxshpaaf:SELECT XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX 202.924836 1.4% 919 0.220810 0.000707 0.696312
#0:#0 187.300853 1.3% 24,270 0.007717 0.000308 0.695110
duk76j6wh17dd:SELECT COUNT(1) FROM NPATTACH WHERE "Id" = :1 AND "Type" = 1 136.644058 1.0% 26,289 0.005198 0.001111 0.327277
68,713 others 8,881.260079 62.8% 905,206 0.009811 0.000242 0.699830
-------------------------------------------------------------------------- ------------- ------ --------- -------- -------- --------
TOTAL (68,723) 14,151.167569 100.0% 1,157,586 0.012225 0.000242 0.699830
The SQL statement 8j53dscbsbqmb is responsible for 9% of SNMFC time across 368 trace files.
As 9% is an average, it will not apply equally to all users.
If we can significantly reduce the SNMFC time for 8j53dscbsbqmb, some users may not even notice, while others may experience a great improvement.
It really depends on which part of the app the user is working with.
The trace files were collected at several different times.
One set of trace files indicates that users were spending 27% of application use time waiting on SNMFC for this one SQL statement.
These users are quite likely to notice any performance improvement made for reducing SNMFC for sqlid 8j53dscbsbqmb.
Client Result Cache
Client Result Cache was introduced in Oracle 11.2.0.1 and is available in all editions of Oracle.
The purpose of CRC is to cache values in the client after they have been retrieved from the database.
This feature may not be appropriate for volatile data, but for data that only rarely changes, the savings for remote clients could be significant.
The premise is to dedicate a few megabytes of memory to each client for caching SQL results.
Subsequent queries may be satisfied from the cache, rather than making a SQL*Net round trip.
The requirements for enabling CRC may be seen here: Configuring the Client Result Cache
In general, CRC is fairly easy to enable. The following parameters must be set in the database instance:
1. client_result_cache_size
- must be set to 32768 or greater
- set to 1048576 for testing
- changing the value requires restarting the database instance
2. client_result_cache_lag
- set to 60000 for testing
- changing the value requires restarting the database instance
3. compatible
- must be 11.2.0.0.0 or higher
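A sketch of the instance-level settings, using the test values listed above. Both parameters require an instance restart, hence scope=spfile:

```sql
alter system set client_result_cache_size = 1048576 scope=spfile;
alter system set client_result_cache_lag = 60000 scope=spfile;
-- restart the instance for these to take effect
```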
The use of the result cache can be controlled by the following:
- result_cache_mode
- set to MANUAL or FORCE at the session/system level
- default is MANUAL
- affects all tables
- table annotation
- annotate individual tables with MANUAL or FORCE
- SQL: alter table table_name result_cache (mode [manual|force])
The testing done here will be controlled by table annotation.
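As a sketch of the annotation step, applied to the two tables used in this test (syntax per the result_cache clause shown above):

```sql
-- Annotate the lookup tables so clients with CRC configured will cache them
alter table holiday result_cache (mode force);
alter table chair result_cache (mode force);

-- Revert to the default behavior
alter table holiday result_cache (mode manual);
alter table chair result_cache (mode manual);
```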
There is a limitation on the use of Client Result Cache: the application must be one that is built with the Oracle Call Interface (OCI).
Fortunately, the DBD::Oracle module that is installed with Oracle is built with OCI. This means the Perl that is included with Oracle can be used to run the tests. The client application is also built using OCI, so any positive test results can be used to configure CRC for the client application.
Test Configuration
sqlrun is a tool I developed for running SQL statements against a database using 1+ sessions. It is highly configurable; the following are some of the parameters and configuration possibilities:
- number of sessions
- think time between executions
- connection timing
- connect all simultaneously
- connect as quickly as possible, in succession
- interval between connections
- Multiple SQL statements can be run
- randomize the frequency of statements that run
- Placeholder values (bind variables) can be supplied from a text file.
- DML can be used
- PL/SQL blocks can be used
All of the code and trace files used for this article are found here: pythian blog - Oracle Client Result Cache
The sqlrun code as used in this article has been heavily modified to include some features useful for this test, namely the --pause-at-exit, --xact-tally, --xact-tally-file, and --client-result-cache-trace options.
Though initially developed for use with Oracle databases, sqlrun can also work with MySQL and PostgreSQL.
Further details are found in the README.md in the GitHub repo.
The following Bash script is used as a driver:
#!/usr/bin/env bash
# convert to lower case
typeset -l rcMode=$1
set -u
[[ -z $rcMode ]] && {
echo
echo include 'force' or 'manual' on the command line
echo
echo eg: $0 force
echo
exit 1
}
# another method to convert to lower case
#rcMode=${rcMode@L}
echo rcMode: $rcMode
case $rcMode in
force|manual) ;;
*) echo
echo "arguments are [force|manual] - case is unimportant"
echo
exit 1;;
esac
db='lestrade/orcl.jks.com'
username='jkstill'
password='grok'
# annotate the tables appropriately
unset SQLPATH ORACLE_PATH
sqlplus -L /nolog <<-EOF
connect $username/$password@$db
@@result-cache-dental-config/${rcMode}.sql
exit
EOF
timestamp=$(date +%Y%m%d%H%M%S)
traceDir=trace/${rcMode}-${timestamp}
rcLogDir=rclog
rcLogFile=$rcLogDir/rc-${rcMode}-${timestamp}.log
traceFileID="RC-${timestamp}"
mkdir -p $traceDir
mkdir -p $rcLogDir
./sqlrun.pl \
--exe-mode sequential \
--connect-mode flood \
--tx-behavior commit \
--max-sessions 20 \
--exe-delay 0 \
--db "$db" \
--username $username \
--password "$password" \
--runtime 1200 \
--tracefile-id $traceFileID \
--trace \
--xact-tally \
--xact-tally-file $rcLogFile \
--pause-at-exit \
--sqldir $(pwd)/SQL
# cheating a bit, as I know where the trace files are on the server
# lestrade.jks.com:/opt/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_24103_RC-20230703142522.trc
scp -p oracle@lestrade.jks.com:/opt/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_*_${traceFileID}*.trc $traceDir
echo
echo Trace files are in $traceDir/
echo RC Log is $rcLogFile
echo
There are only two tables involved, CHAIR and HOLIDAY.
CHAIR refers to Dental chairs, which will change infrequently.
HOLIDAY refers to annual holidays. This data normally changes only once per year.
The total number of rows for both tables is low:
CHAIR: 122 HOLIDAY: 602
Here is the SQL to be tested:
SELECT H.* FROM HOLIDAY H WHERE ( H.CLINIC = 0 OR H.CLINIC = (SELECT C.CLINIC FROM CHAIR C WHERE C.CHAIR = :1)) AND H.STARTDATE <= :2 AND H.ENDDATE >= :2 AND H.PARTIALDAY = 0 ORDER BY H.ID DESC
sqlrun will execute this statement in 20 sessions, for 20 minutes, as quickly as they can be run.
Bind variable values are read from a text file.
Testing
The testing will also use the mrskew option --where='$af<1' to get test results, just as was done against the trace data from the application.
As discussed previously, the 0.7-second value was used to discern between application-induced SNMFC and user-induced SNMFC.
In automated testing, there would normally be no such waits, but as seen later, there are two possible causes of lengthy SNMFC waits in this testing.
The standard value of 1 second will be used, as there are no 'users'. There may be some lengthy SNMFC values caused by Client Result Cache, and one caused by the test harness.
Each test will consist of 20 clients, each running for 20 minutes, with sql trace enabled.
The tracing is at level 12, so it includes waits as well as bind variable values.
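The article does not show how tracing was enabled; one common way to get level-12 trace data (waits and binds) for a session is via DBMS_MONITOR, or the equivalent 10046 event. A sketch, for the current session:

```sql
-- waits + binds corresponds to event 10046 at level 12
exec dbms_monitor.session_trace_enable(waits => true, binds => true)

-- or, equivalently:
alter session set events '10046 trace name context forever, level 12';
```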
A total of 4 tests will be run:
- No additional network latency (client and database are co-located)
- without Client Result Cache
- with Client Result Cache
- Network latency of ~6ms added to simulate client 100 miles distant from the database server.
- without Client Result Cache
- with Client Result Cache
The driver script sqlrun-rc.sh will call sqlplus and run a script to set table annotations to FORCE or MANUAL for the tests.
- FORCE: the client will use the client result cache
- MANUAL: the client will not use the client result cache
Test Environment
The test environment is as follows:
- Database Server: Lestrade
- i5 with single socket and 4 cores
- 32G RAM
- 1G network
- Client 1: Poirot
- VM with 3 vCPUs
- 8G RAM
- Client 2: sqlrun
- VM with 3 vCPUs
- 8G RAM
The Oracle database is 19.3, and the Oracle clients are 19.16. The test software is Perl 5, with the DBI and DBD::Oracle modules.
Without Result Cache, no additional network latency
This is a large set of trace files, and it does take some time to process on this VM.
$ du -sh trace/manual-20230830153724
15G trace/manual-20230830153724
$ time mrskew --where='$af < 1' trace/manual-20230830153724/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
--------------------------- ------------- ------ ---------- -------- -------- --------
SQL*Net message from client 18,681.266376 88.2% 18,253,315 0.001023 0.000169 0.063483
FETCH 1,529.259314 7.2% 18,253,235 0.000084 0.000000 0.001037
EXEC 902.449553 4.3% 18,253,275 0.000049 0.000000 0.005300
log file sync 29.196260 0.1% 742 0.039348 0.005203 0.919955
SQL*Net message to client 19.594399 0.1% 18,253,335 0.000001 0.000000 0.011659
resmgr:cpu quantum 13.547097 0.1% 17,186 0.000788 0.000010 0.050825
cursor: pin S 6.535397 0.0% 4,852 0.001347 0.001009 0.024545
ADR block file read 0.244540 0.0% 40 0.006114 0.000068 0.010877
library cache: mutex X 0.051236 0.0% 14 0.003660 0.000004 0.045247
buffer busy waits 0.010421 0.0% 22 0.000474 0.000002 0.002212
11 others 0.016493 0.0% 402 0.000041 0.000000 0.004231
--------------------------- ------------- ------ ---------- -------- -------- --------
TOTAL (21) 21,182.171086 100.0% 73,036,418 0.000290 0.000000 0.919955
real 2m26.069s
user 2m7.695s
sys 0m17.999s
Client Result Cache Stats, no additional network latency
These statistics were collected after the sessions had completed the test but before disconnecting from the database.
These results just serve as a control: as the 'Find Count' column shows, there was no caching.
SYS@lestrade/orcl.jks.com AS SYSDBA> @crc-stats
Block Block Create Create Delete Delete Hash
Count Count Block Count Count Count Count Find Bucket Invalidation
USERNAME SID Current Max Size Failure Success Invalid Valid Count Count Count
--------------- ------- ------ ------ ------- ------- ------- ------ -------- ------ ------------
JKSTILL 37 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 39 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 43 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 44 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 46 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 50 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 135 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 142 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 152 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 160 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 175 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 280 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 284 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 288 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 296 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 299 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 393 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 396 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 407 128 4096 256 0 0 0 0 0 1024 0
JKSTILL 410 128 4096 256 0 0 0 0 0 1024 0
SYS 173 128 4096 256 0 0 0 0 0 1024 0
21 rows selected.
The script, crc-stats.sql, is used to collect the CRC statistics.
-- crc-stats.sql
-- Client Result Cache Statistics
set linesize 200 trimspool on
set pagesize 100
col username format a20
col sid format 9999
col serial# format 999999
col name format a35 head 'RC Name'
col machine format a30
col osuser format a15
col block_count_current_value format 99999 head 'Block|Count|Current'
col block_count_max_value format 99999 head 'Block|Count|Max'
col block_size_value format 99999 head 'Block|Size'
col create_count_failure_value format 99999 head 'Create|Count|Failure'
col create_count_success_value format 99999 head 'Create|Count|Success'
col delete_count_invalid_value format 99999 head 'Delete|Count|Invalid'
col delete_count_valid_value format 99999 head 'Delete|Count|Valid'
col find_count_value format 9999999 head 'Find|Count'
col hash_bucket_count_value format 99999 head 'Hash|Bucket|Count'
col invalidation_count_value format 99999 head 'Invalidation|Count'
with rs as (
select *
from (
select cache_id, name, value from client_result_cache_stats$
)
pivot
(
--max(name) "NAME"
max(value) "VALUE"
for name in (
'Block Count Current' block_count_current
,'Block Count Max' block_count_max
,'Block Size' block_size
,'Create Count Failure' create_count_failure
,'Create Count Success' create_count_success
,'Delete Count Invalid' delete_count_invalid
,'Delete Count Valid' delete_count_valid
,'Find Count' find_count
,'Hash Bucket Count' hash_bucket_count
,'Invalidation Count' invalidation_count
)
)
)
select
s.username
, s.sid
, s.serial#
, s.machine
, s.osuser
, block_count_current_value
, block_count_max_value
, block_size_value
, create_count_failure_value
, create_count_success_value
, delete_count_invalid_value
, delete_count_valid_value
, find_count_value
, hash_bucket_count_value
, invalidation_count_value
from gv$session_client_result_cache rc
join rs on rs.cache_id = rc.cache_id
join gv$session s on s.sid = rc.sid and s.serial# = rc.serial#
order by username, sid
/
The RC log
Each client kept track of the number of transactions performed, and wrote them out to a log file.
The results when no Client Result Cache is used:
RC-20230830153724-20230703153724: 914535
RC-20230830153724-20230703153724: 917220
RC-20230830153724-20230703153724: 926713
RC-20230830153724-20230703153724: 912156
RC-20230830153724-20230703153724: 902782
RC-20230830153724-20230703153724: 903334
RC-20230830153724-20230703153724: 932879
RC-20230830153724-20230703153724: 926055
RC-20230830153724-20230703153724: 903622
RC-20230830153724-20230703153724: 908904
RC-20230830153724-20230703153724: 900843
RC-20230830153724-20230703153724: 921625
RC-20230830153724-20230703153724: 902627
RC-20230830153724-20230703153724: 910225
RC-20230830153724-20230703153724: 913760
RC-20230830153724-20230703153724: 907505
RC-20230830153724-20230703153724: 898638
RC-20230830153724-20230703153724: 916200
RC-20230830153724-20230703153724: 912823
RC-20230830153724-20230703153724: 920769
The total number of transactions is 18,253,215
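That total can be checked by summing the second field of the log. The log path here is an assumption, inferred from the driver script's rclog/rc-${rcMode}-${timestamp}.log naming convention:

```shell
# Sum the per-client transaction counts from the RC log; each line
# looks like "RC-20230830153724-20230703153724: 914535", so split on
# ": " and accumulate the count field.
awk -F': ' '{ total += $2 } END { print total }' rclog/rc-manual-20230830153724.log
```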
With Result Cache
Calculating the results from the trace files is much faster when Client Result Cache is used.
This is due to the much smaller size of the trace files:
$ du -sh trace/force-20230830160541
54M trace/force-20230830160541
$ time mrskew --where='$dur < 1' trace/force-20230830160541/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
------------------------------------------- --------- ------ ------- -------- -------- --------
SQL*Net message from client 47.964188 71.3% 59,741 0.000803 0.000174 0.045564
log file sync 16.691914 24.8% 460 0.036287 0.005590 0.265968
EXEC 1.809924 2.7% 60,081 0.000030 0.000000 0.035868
FETCH 0.380603 0.6% 60,041 0.000006 0.000000 0.000844
enq: CN - race with txn 0.060869 0.1% 16 0.003804 0.000678 0.007635
buffer busy waits 0.059058 0.1% 58 0.001018 0.000001 0.010705
latch free 0.051608 0.1% 96 0.000538 0.000001 0.001328
latch: Change Notification Hash table latch 0.051364 0.1% 17 0.003021 0.000559 0.006974
SQL*Net message to client 0.050983 0.1% 60,141 0.000001 0.000000 0.002998
enq: RC - Result Cache: Contention 0.047657 0.1% 398 0.000120 0.000002 0.002926
17 others 0.086466 0.1% 644 0.000134 0.000000 0.011544
------------------------------------------- --------- ------ ------- -------- -------- --------
TOTAL (27) 67.254634 100.0% 241,693 0.000278 0.000000 0.265968
real 0m0.558s
user 0m0.377s
sys 0m0.138s
When comparing some of the statistics reported by mrskew, you may wonder whether these two sets of trace files used the same test criteria.
The test criteria were the same for both tests:
runtime: 20 minutes
clients: 20
The only difference is whether or not Client Result Cache was being used.
Keep in mind that mrskew, or any tool, can only report on what is found in the trace file.
When Client Result Cache is used, the client retrieves each row from the database only once. Once all rows have been retrieved, there is no need to get more data from the database. All further executions of the test queries are satisfied from the local client cache.
Without CRC, the duration of all waits was 21,182.171086 seconds.
With CRC enabled, the duration of all waits was 67.254634 seconds.
Let's consider the time spent by each of the clients.
During the test without CRC, a total of 21,182.171086 seconds was spent in dbcalls or syscalls by the 20 clients.
That is 1,059.1085543 seconds per client on average, waiting on the database or the server:
21,182.171086/20 = 1,059.1085543
That means the applications spent about 88.26% of their time waiting:
1,059.1085543 / 1200 * 100 = 88.2590461916667
Each client had about 141 seconds available to do whatever work needed to be done.
In this case, the only work is to fetch more data, as the test SQL statements are repeatedly executed using a list of bind values for the SQL placeholders.
What about the tests with Client Result Cache enabled?
A total of 67 seconds was spent waiting on dbcalls and syscalls by all 20 clients.
That works out to just 3.3627317 seconds per client over the course of 20 minutes.
67.254634 / 20 = 3.3627317
Rather than just 141 seconds of local processing time available to each client, with CRC enabled, there were about 1196 seconds of local processing time available per client.
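The arithmetic above can be reproduced with a quick awk calculation; the figures are copied from the mrskew totals:

```shell
# Per-client wait time, and percent of the 1200-second runtime, with and without CRC
awk 'BEGIN {
  clients = 20; runtime = 1200
  no_crc = 21182.171086; crc = 67.254634
  printf "no CRC:   %.1f s per client (%.1f%% of runtime)\n", no_crc/clients, no_crc/clients/runtime*100
  printf "with CRC: %.2f s per client (%.2f%% of runtime)\n", crc/clients, crc/clients/runtime*100
}'
```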
During this test, the only other processing to do is to run the test queries again.
As seen in the RC logs, these clients processed many more transactions than the clients that did not use CRC.
Client Result Cache Stats
These statistics were collected after the sessions had completed the test but before disconnecting from the database.
Take a look at the 'Find Count' column.
Each of the 20 clients satisfied queries from the local cache over 6 million times.
SYS@lestrade/orcl.jks.com AS SYSDBA> @crc-stats
Block Block Create Create Delete Delete Hash
Count Count Block Count Count Count Count Find Bucket Invalidation
USERNAME SID Current Max Size Failure Success Invalid Valid Count Count Count
---------- ----- ------- ------ ------ ------- ------- ------- ------ -------- ------ ------------
JKSTILL 32 3072 4096 256 0 2982 0 0 6319978 1024 0
JKSTILL 33 3072 4096 256 0 2982 0 0 6238234 1024 0
JKSTILL 39 3072 4096 256 0 2982 0 0 6293694 1024 0
JKSTILL 44 3072 4096 256 0 2982 0 0 6257028 1024 0
JKSTILL 47 3072 4096 256 0 2982 0 0 6237394 1024 0
JKSTILL 50 3072 4096 256 0 2982 0 0 6282581 1024 0
JKSTILL 135 3072 4096 256 0 2982 0 0 6254996 1024 0
JKSTILL 152 3072 4096 256 0 2982 0 0 6254007 1024 0
JKSTILL 160 3072 4096 256 0 2982 0 0 6251309 1024 0
JKSTILL 169 3072 4096 256 0 2982 0 0 6311561 1024 0
JKSTILL 175 3072 4096 256 0 2982 0 0 6332953 1024 0
JKSTILL 280 3072 4096 256 0 2982 0 0 6250219 1024 0
JKSTILL 282 3072 4096 256 0 2982 0 0 6285713 1024 0
JKSTILL 284 3072 4096 256 0 2982 0 0 6305114 1024 0
JKSTILL 288 3072 4096 256 0 2982 0 0 6283873 1024 0
JKSTILL 290 3072 4096 256 0 2982 0 0 6281226 1024 0
JKSTILL 393 3072 4096 256 0 2982 0 0 6253926 1024 0
JKSTILL 396 3072 4096 256 0 2982 0 0 6245423 1024 0
JKSTILL 404 3072 4096 256 0 2982 0 0 6288019 1024 0
JKSTILL 422 3072 4096 256 0 2982 0 0 6272817 1024 0
SYS 173 128 4096 256 0 0 0 0 0 1024 0
21 rows selected.
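The crc-stats script itself is not shown. A minimal query of the underlying dictionary table might look like the following; the pivot to the wide layout above, and the join to session information, are assumptions about what the script does:

```sql
-- One row per (client cache, statistic); the wide report above is a pivot of this
SELECT cache_id, name, value
FROM   client_result_cache_stats$
ORDER  BY cache_id, stat_id;
```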
The RC log
Each client kept track of the number of transactions performed and wrote them out to a log file.
The results when Client Result Cache is used:
RC-20230830160541-20230703160541: 6623000
RC-20230830160541-20230703160541: 6554524
RC-20230830160541-20230703160541: 6599641
RC-20230830160541-20230703160541: 6532840
RC-20230830160541-20230703160541: 6557806
RC-20230830160541-20230703160541: 6584949
RC-20230830160541-20230703160541: 6581603
RC-20230830160541-20230703160541: 6557942
RC-20230830160541-20230703160541: 6551515
RC-20230830160541-20230703160541: 6611476
RC-20230830160541-20230703160541: 6549693
RC-20230830160541-20230703160541: 6573529
RC-20230830160541-20230703160541: 6559698
RC-20230830160541-20230703160541: 6586878
RC-20230830160541-20230703160541: 6590372
RC-20230830160541-20230703160541: 6544219
RC-20230830160541-20230703160541: 6612925
RC-20230830160541-20230703160541: 6593495
RC-20230830160541-20230703160541: 6561719
RC-20230830160541-20230703160541: 6639638
The total number of transactions is 131,567,462
This is 7.2 times more transactions than were accomplished without Client Result Cache.
60 Second SQL*Net Message From Client
Now, we will specifically look for SNMFC events of about 60 seconds.
$ time mrskew --where='$af >= 1 and $dur < 65' --group='qq{$basename:$line}' --group-label='FILE:LINE' trace/force-20230830160541/*.trc
FILE:LINE DURATION % CALLS MEAN MIN MAX
--------------------------------------------------------- ------------- ------ ----- --------- --------- ---------
orcl_ora_30561_RC-20230830160541-20230703160541.trc:58081 60.002586 0.3% 1 60.002586 60.002586 60.002586
orcl_ora_30574_RC-20230830160541-20230703160541.trc:57602 60.002578 0.3% 1 60.002578 60.002578 60.002578
orcl_ora_30572_RC-20230830160541-20230703160541.trc:57550 60.002518 0.3% 1 60.002518 60.002518 60.002518
orcl_ora_30559_RC-20230830160541-20230703160541.trc:58275 60.002420 0.3% 1 60.002420 60.002420 60.002420
orcl_ora_30572_RC-20230830160541-20230703160541.trc:58278 60.002338 0.3% 1 60.002338 60.002338 60.002338
orcl_ora_30559_RC-20230830160541-20230703160541.trc:57791 60.002131 0.3% 1 60.002131 60.002131 60.002131
orcl_ora_30566_RC-20230830160541-20230703160541.trc:53907 60.002048 0.3% 1 60.002048 60.002048 60.002048
orcl_ora_30566_RC-20230830160541-20230703160541.trc:54640 60.002029 0.3% 1 60.002029 60.002029 60.002029
orcl_ora_30537_RC-20230830160541-20230703160541.trc:55696 60.002002 0.3% 1 60.002002 60.002002 60.002002
orcl_ora_30563_RC-20230830160541-20230703160541.trc:55373 60.001989 0.3% 1 60.001989 60.001989 60.001989
370 others 22,200.553583 97.4% 370 60.001496 60.001281 60.001951
--------------------------------------------------------- ------------- ------ ----- --------- --------- ---------
TOTAL (380) 22,800.576222 100.0% 380 60.001516 60.001281 60.002586
real 0m0.312s
user 0m0.282s
sys 0m0.029s
There is a reason for checking the range of 1-65 seconds. The value of client_result_cache_lag was set to 60000, which is measured in milliseconds.
Every 60 seconds, the Oracle client checks for any updates to the tables where results are being cached at the client. During this time, the CRC stats are also updated in the instance.
Consider just one of the trace files:
$ mrskew --top=0 --sort=1a --where='$af >= 1 and $dur < 65' --group='qq{$basename:$line}' --group-label='FILE:LINE' trace/force-20230830160541/orcl_ora_30561_RC-20230830160541-20230703160541.trc
FILE:LINE DURATION % CALLS MEAN MIN MAX
--------------------------------------------------------- ------------ ------ ----- --------- --------- ---------
orcl_ora_30561_RC-20230830160541-20230703160541.trc:53959 60.001474 5.3% 1 60.001474 60.001474 60.001474
orcl_ora_30561_RC-20230830160541-20230703160541.trc:54208 60.001473 5.3% 1 60.001473 60.001473 60.001473
orcl_ora_30561_RC-20230830160541-20230703160541.trc:54450 60.001613 5.3% 1 60.001613 60.001613 60.001613
orcl_ora_30561_RC-20230830160541-20230703160541.trc:54692 60.001452 5.3% 1 60.001452 60.001452 60.001452
orcl_ora_30561_RC-20230830160541-20230703160541.trc:54934 60.001459 5.3% 1 60.001459 60.001459 60.001459
orcl_ora_30561_RC-20230830160541-20230703160541.trc:55176 60.001465 5.3% 1 60.001465 60.001465 60.001465
orcl_ora_30561_RC-20230830160541-20230703160541.trc:55418 60.001504 5.3% 1 60.001504 60.001504 60.001504
orcl_ora_30561_RC-20230830160541-20230703160541.trc:55660 60.001584 5.3% 1 60.001584 60.001584 60.001584
orcl_ora_30561_RC-20230830160541-20230703160541.trc:55902 60.001390 5.3% 1 60.001390 60.001390 60.001390
orcl_ora_30561_RC-20230830160541-20230703160541.trc:56144 60.001480 5.3% 1 60.001480 60.001480 60.001480
orcl_ora_30561_RC-20230830160541-20230703160541.trc:56386 60.001525 5.3% 1 60.001525 60.001525 60.001525
orcl_ora_30561_RC-20230830160541-20230703160541.trc:56628 60.001404 5.3% 1 60.001404 60.001404 60.001404
orcl_ora_30561_RC-20230830160541-20230703160541.trc:56870 60.001481 5.3% 1 60.001481 60.001481 60.001481
orcl_ora_30561_RC-20230830160541-20230703160541.trc:57112 60.001385 5.3% 1 60.001385 60.001385 60.001385
orcl_ora_30561_RC-20230830160541-20230703160541.trc:57354 60.001557 5.3% 1 60.001557 60.001557 60.001557
orcl_ora_30561_RC-20230830160541-20230703160541.trc:57596 60.001628 5.3% 1 60.001628 60.001628 60.001628
orcl_ora_30561_RC-20230830160541-20230703160541.trc:57838 60.001434 5.3% 1 60.001434 60.001434 60.001434
orcl_ora_30561_RC-20230830160541-20230703160541.trc:58081 60.002586 5.3% 1 60.002586 60.002586 60.002586
orcl_ora_30561_RC-20230830160541-20230703160541.trc:58323 60.001456 5.3% 1 60.001456 60.001456 60.001456
--------------------------------------------------------- ------------ ------ ----- --------- --------- ---------
TOTAL (19) 1,140.029350 100.0% 19 60.001545 60.001385 60.002586
If we pick one of the SNMFC lines and examine the lines that follow it, we can see the client consulting the database to determine whether the cache needs to be synced.
In the interest of brevity, about 200 of the ~230 lines have been elided from the following output:
$ mrskew --sort=1a --top=0 --alldep --name=:all --where='$line >= 55660 and $line <= 55660+230' --group='qq{$line:$text}' --group-label='LINE:TEXT' trace/force-20230830160541/orcl_ora_30561_RC-20230830160541-20230703160541.trc
LINE:TEXT DURATION % CALLS MEAN MIN MAX
------------------------------------------------------------------------------------------------------------------------------------------ --------- ------ ----- --------- --------- ---------
55660:WAIT #140661336887528: nam='SQL*Net message from client' ela= 60001584 driver id=1952673792 #bytes=1 p3=0 obj#=-1 tim=14522473254855 60.001584 99.9% 1 60.001584 60.001584 60.001584
55661:BINDS #140661335857648:
55662:
55663: Bind#0
55664: oacdty=01 mxl=128(49) mxlc=00 mal=00 scl=00 pre=00
55665: oacflg=00 fl2=0000 frm=01 csi=873 siz=152 off=0
55666: kxsbbbfp=7fee4504b8e8 bln=128 avl=10 flg=05
55667: value="Block Size"
...
55683:BINDS #140661335857648:
55684:
55685: Bind#0
55686: oacdty=01 mxl=128(49) mxlc=00 mal=00 scl=00 pre=00
55687: oacflg=00 fl2=0000 frm=01 csi=873 siz=152 off=0
55688: kxsbbbfp=7fee4504b8e8 bln=128 avl=15 flg=05
55689: value="Block Count Max"
...
55859:BINDS #140661335857648:
55860:
55861: Bind#0
55862: oacdty=01 mxl=128(49) mxlc=00 mal=00 scl=00 pre=00
55863: oacflg=00 fl2=0000 frm=01 csi=873 siz=152 off=0
55864: kxsbbbfp=7fee4504b8e8 bln=128 avl=18 flg=05
55865: value="Delete Count Valid"
55866: Bind#1
55867: oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
55868: oacflg=00 fl2=0000 frm=00 csi=00 siz=0 off=128
55869: kxsbbbfp=7fee4504b968 bln=22 avl=01 flg=01
55870: value=0
55871: Bind#2
55872: oacdty=02 mxl=22(03) mxlc=00 mal=00 scl=00 pre=00
55873: oacflg=10 fl2=0000 frm=00 csi=00 siz=24 off=0
55874: kxsbbbfp=7fee4a41ae68 bln=22 avl=03 flg=09
55875: value=5497
55876: Bind#3
55877: oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
55878: oacflg=00 fl2=0000 frm=00 csi=00 siz=24 off=0
55879: kxsbbbfp=7fee4504b8b8 bln=22 avl=02 flg=05
55880: value=10
55881:EXEC #140661335857648:c=964,e=964,p=0,cr=15,cu=11,mis=0,r=10,dep=1,og=4,plh=1807565214,tim=14522473255889 0.000964 0.0% 1 0.000964 0.000964 0.000964
55882:WAIT #140661335857648: nam='log file sync' ela= 40174 buffer#=158 sync scn=38725244627216 p3=0 obj#=-1 tim=14522473296109 0.040174 0.1% 1 0.040174 0.040174 0.040174
55883:BINDS #140661336887528:
55884:
55885: Bind#0
55886: oacdty=01 mxl=32(30) mxlc=00 mal=00 scl=00 pre=00
55887: oacflg=05 fl2=1000000 frm=01 csi=873 siz=160 off=0
55888: kxsbbbfp=7fee45046890 bln=32 avl=04 flg=05
55889: value="1742"
55890: Bind#1
------------------------------------------------------------------------------------------------------------------------------------------ --------- ------ ----- --------- --------- ---------
TOTAL (231) 60.042722 100.0% 3 20.014241 0.000964 60.001584
Simulated Network Latency
In many cases, a client may be separated from the server by many miles.
If the client machine is 100 miles away from the database server, we can calculate the time required for a round trip.
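As a rough sketch, assuming signals travel at about two-thirds the speed of light in fiber and ignoring routing and equipment delays:

```shell
# Best-case propagation round trip for a client 100 miles from the server
awk 'BEGIN {
  km = 100 * 1.609          # miles to kilometres
  fiber_kms = 200000        # ~ 2/3 c in fiber, in km/s
  printf "best-case RTT: %.2f ms\n", 2 * km / fiber_kms * 1000
}'
```

Real-world round trips are typically several times this propagation floor once switching, queuing, and indirect cable paths are included, which is why ~6 ms is a reasonable simulation target for a 100-mile client.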
Traffic Control
The tc (traffic control) utility can be used to induce latency in a network.
Using this, we can simulate a client that is 100 miles from the database server.
A different VM, 'sqlrun', is being used to run the tests.
According to a network latency calculator, setting the latency to 6 ms should be about right.
The tc utility was used to simulate a network latency of ~6 ms, with ±1 ms of jitter (25% correlation).
[root@sqlrun ~]# tc qdisc add dev enp0s3 root netem delay 6ms 1ms 25%
[root@sqlrun ~]# tc qdisc
qdisc netem 8002: dev enp0s3 root refcnt 2 limit 1000 delay 6.0ms 1.0ms 25%
qdisc pfifo_fast 0: dev virbr0-nic root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
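When testing is complete, the induced delay can be removed; the device name here is from this test setup:

```shell
# Delete the netem qdisc to restore normal latency on the interface
tc qdisc del dev enp0s3 root netem
```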
As this is a VM that is running on a server about 30 inches from the database server, ping would normally be < 1 ms.
Here, we see the average at 8.292ms, which, for testing purposes, is close enough to our estimate.
(Lestrade is the database server)
[sqlrun]$ ping -c 100 lestrade
PING lestrade.jks.com (192.168.1.116) 56(84) bytes of data.
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=1 ttl=64 time=6.94 ms
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=2 ttl=64 time=6.82 ms
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=3 ttl=64 time=7.33 ms
...
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=98 ttl=64 time=6.92 ms
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=99 ttl=64 time=7.36 ms
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=100 ttl=64 time=6.91 ms
--- lestrade.jks.com ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99181ms
rtt min/avg/max/mdev = 5.304/8.292/25.075/4.329 ms
For reference, ping from a different server:
$ ping -c 100 lestrade
PING lestrade.jks.com (192.168.1.116) 56(84) bytes of data.
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=1 ttl=64 time=0.229 ms
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=2 ttl=64 time=0.263 ms
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=3 ttl=64 time=0.254 ms
...
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=98 ttl=64 time=0.290 ms
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=99 ttl=64 time=0.293 ms
64 bytes from lestrade.jks.com (192.168.1.116): icmp_seq=100 ttl=64 time=0.308 ms
--- lestrade.jks.com ping statistics ---
100 packets transmitted, 100 received, 0% packet loss, time 99270ms
rtt min/avg/max/mdev = 0.209/0.279/0.374/0.032 ms
The same script sqlrun-rc.sh was used to run the tests.
The only difference is that sqlrun is now running on a different client machine.
Simulated Latency without Client Result Cache
Let's have a look at the results.
$ time mrskew --where='$af < 1' manual-20230831121326/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
--------------------------- ------------- ------ ---------- -------- -------- --------
SQL*Net message from client 23,189.706550 97.5% 2,756,398 0.008413 0.005198 0.256000
FETCH 317.882828 1.3% 2,756,318 0.000115 0.000000 0.000912
EXEC 203.297906 0.9% 2,756,358 0.000074 0.000000 0.004975
log file sync 61.155541 0.3% 820 0.074580 0.008886 0.802599
SQL*Net message to client 1.553686 0.0% 2,756,418 0.000001 0.000000 0.001248
cursor: pin S 0.203710 0.0% 187 0.001089 0.001039 0.001466
buffer busy waits 0.052867 0.0% 104 0.000508 0.000001 0.006587
latch: shared pool 0.034064 0.0% 8 0.004258 0.000006 0.006685
ADR block file read 0.019283 0.0% 40 0.000482 0.000062 0.001759
library cache: mutex X 0.015052 0.0% 28 0.000538 0.000001 0.010585
10 others 0.031471 0.0% 403 0.000078 0.000000 0.004870
--------------------------- ------------- ------ ---------- -------- -------- --------
TOTAL (20) 23,773.952958 100.0% 11,027,082 0.002156 0.000000 0.802599
real 0m21.298s
user 0m19.132s
sys 0m2.125s
You may have noticed that mrskew needed only a little over 21 seconds to process the trace files from this test, while earlier, the same analysis required 2+ minutes.
The reason for that is that the trace files are much smaller:
$ du -sh manual-20230831121326/
2.3G manual-20230831121326/
The trace files from the very first test were from the 'local' client, with a < 1 ms network latency. These totaled about 15G.
As the network latency is now 8.4 ms on average, the percentage of SNMFC time is much higher than it was for the 'local' client. Due to the 8 ms network latency, the 'remote' client accomplished much less work.
With the induced latency, this client spent roughly 8x more time waiting on the network than in the previous tests, and so SNMFC consumed 97.5% of the time.
The RC log
Each process on the 'remote' client performed only about 137k transactions.
RC-20230831121326-20230704121326: 137741
RC-20230831121326-20230704121326: 137806
RC-20230831121326-20230704121326: 137757
RC-20230831121326-20230704121326: 137850
RC-20230831121326-20230704121326: 137775
RC-20230831121326-20230704121326: 137981
RC-20230831121326-20230704121326: 137760
RC-20230831121326-20230704121326: 137887
RC-20230831121326-20230704121326: 137773
RC-20230831121326-20230704121326: 137768
RC-20230831121326-20230704121326: 137695
RC-20230831121326-20230704121326: 137898
RC-20230831121326-20230704121326: 138031
RC-20230831121326-20230704121326: 137848
RC-20230831121326-20230704121326: 137878
RC-20230831121326-20230704121326: 137884
RC-20230831121326-20230704121326: 137528
RC-20230831121326-20230704121326: 137767
RC-20230831121326-20230704121326: 137788
RC-20230831121326-20230704121326: 137883
The total number of transactions was ~ 2.75M.
$ awk '{ x=x+$2 }END{print x}' rclog-network_latency/rc-manual-20230831121326.log
2756298
Compare that to the 18.2M performed in the first test on the 'local' client.
Simulated Latency with Client Result Cache
Now we get the results that we hope will justify all of this testing, running the test on a 'remote' client with Client Result Cache enabled.
At first glance, there does not appear to be much improvement.
After all, SNMFC has only dropped by 3.1 percentage points, from 97.5% to 94.4%.
$ time mrskew --where='$dur < 1' force-20230831123418/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
---------------------------------- ---------- ------ ------- -------- -------- --------
SQL*Net message from client 459.712597 94.4% 59,740 0.007695 0.005252 0.073854
log file sync 21.464715 4.4% 441 0.048673 0.011675 0.152602
EXEC 4.516092 0.9% 60,080 0.000075 0.000000 0.033100
FETCH 0.849419 0.2% 60,040 0.000014 0.000000 0.000795
enq: RC - Result Cache: Contention 0.077254 0.0% 533 0.000145 0.000007 0.000486
ADR block file read 0.058727 0.0% 40 0.001468 0.000070 0.012409
enq: CN - race with txn 0.051463 0.0% 15 0.003431 0.000611 0.006732
cursor: pin S wait on X 0.034593 0.0% 6 0.005766 0.004267 0.006668
SQL*Net message to client 0.028325 0.0% 60,140 0.000000 0.000000 0.000025
buffer busy waits 0.019479 0.0% 80 0.000243 0.000001 0.005459
18 others 0.042739 0.0% 587 0.000073 0.000000 0.002882
---------------------------------- ---------- ------ ------- -------- -------- --------
TOTAL (28) 486.855403 100.0% 241,702 0.002014 0.000000 0.152602
real 0m0.451s
user 0m0.439s
sys 0m0.012s
As was done previously with the 'local' client, we can compare the actual time available to the app in both scenarios.
Both tests were with 20 clients, each running 1200 seconds.
The total time available in each test is 24,000 seconds. The time accounting is summarized in the following table:
Note: 'Excluded Time' here refers to time not included in the report; i.e., any wait >= 1 second was excluded.
Test      Total Time   Accounted for Time   Excluded Time
-------   ----------   ------------------   -------------
No CRC        24,000             23,773.0           227.0
Use CRC       24,000                486.9        23,513.1
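Expressed as a share of the 24,000 seconds of total client time, a quick check of the figures above:

```shell
# Fraction of total client time accounted for by calls < 1 second
awk 'BEGIN {
  total = 24000
  printf "no CRC:  %.1f%% of client time in db/sys calls\n", 23773.0 / total * 100
  printf "use CRC: %.1f%% of client time in db/sys calls\n", 486.9   / total * 100
}'
```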
In the case of the tests that were not using CRC, most of the Excluded Time can be found by looking for $af values between 1 and 59 seconds inclusive.
$ time mrskew --where='$af >=1 and $af <= 59' manual-20230831121326/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
--------------------------- ---------- ------ ----- --------- --------- ---------
SQL*Net message from client 206.082333 100.0% 20 10.304117 10.284974 10.313397
--------------------------- ---------- ------ ----- --------- --------- ---------
TOTAL (1) 206.082333 100.0% 20 10.304117 10.284974 10.313397
real 0m12.751s
user 0m12.198s
sys 0m0.532s
This leaves approximately 21 seconds unaccounted for across the 20 tests. We will not be considering unaccounted time further in this article other than to acknowledge that it happens.
The test on the 'remote' client with CRC enabled looks a bit different: its trace files contain no calls in this time range:
$ time mrskew --where='$af >=1 and $af <= 59' force-20230831123418/*.trc
mrskew: no calls match where '(($af >=1 and $af <= 59) and ($dep==$depmin)) and ($nam=~/.+/i)' is true
real 0m0.312s
user 0m0.294s
sys 0m0.018s
The 'remote' CRC trace files do, however, contain a large number of SNMFC calls in the 60-second range:
$ time mrskew --where='$af >59' force-20230831123418/*.trc
CALL-NAME DURATION % CALLS MEAN MIN MAX
--------------------------- ------------- ------ ----- --------- --------- ---------
SQL*Net message from client 24,332.499100 100.0% 400 60.831248 60.006347 76.714749
--------------------------- ------------- ------ ----- --------- --------- ---------
TOTAL (1) 24,332.499100 100.0% 400 60.831248 60.006347 76.714749
real 0m0.312s
user 0m0.295s
sys 0m0.017s
As seen previously, these ~ 60-second waits are a result of the Client Result Cache algorithm waking up every 60 seconds on the client, checking the status of the tables being cached, and then updating the CRC statistics.
During those 60 seconds, each client process is busy getting results from the local client cache.
This can be seen in the RC log, which counts the number of transactions per client; each client performed about 6.77M transactions on average:
RC-20230831123418-20230704123418: 6909541
RC-20230831123418-20230704123418: 7240289
RC-20230831123418-20230704123418: 6694473
RC-20230831123418-20230704123418: 6891337
RC-20230831123418-20230704123418: 6686918
RC-20230831123418-20230704123418: 6374032
RC-20230831123418-20230704123418: 6700690
RC-20230831123418-20230704123418: 6490638
RC-20230831123418-20230704123418: 6802760
RC-20230831123418-20230704123418: 6797183
RC-20230831123418-20230704123418: 6570896
RC-20230831123418-20230704123418: 6839648
RC-20230831123418-20230704123418: 6706639
RC-20230831123418-20230704123418: 6746755
RC-20230831123418-20230704123418: 6732575
RC-20230831123418-20230704123418: 6842443
RC-20230831123418-20230704123418: 6690335
RC-20230831123418-20230704123418: 6496897
RC-20230831123418-20230704123418: 7352803
RC-20230831123418-20230704123418: 6888591
The total:
$ awk '{ x=x+$2 }END{print x}' rclog-network_latency/rc-force-20230831123418.log
135455443
Cache vs No Cache for 'remote' clients:
Mode         Total Transactions
----------   ------------------
No Cache              2,756,298
With Cache          135,455,443
With caching, the 'remote' clients processed 49x more transactions.
Conclusion
What might that translate into for a real application?
The SNMFC time for sql_id 8j53dscbsbqmb was 1,270 seconds, or 9% of the total across all sessions traced.
If the same 49x reduction could be made in SNMFC for this sql_id, only about 26 seconds would be spent on SNMFC for this query.
As this is an average, any change will affect some users more than others. Those users working in the part of the application that uses this query will likely notice the difference.
It would be great if I could report the time savings from the perspective of the application that was experiencing these performance issues. Unfortunately, those results are not available at this time.
There are likely other queries working with static or nearly static data, where the tables may be annotated for use with Client Result Cache.
Searching for those may be the topic of another blog post.
If you would like to explore this topic further and feel you could use some assistance, please contact Pythian.
- Cloud Scheduler (1)
- Cloud Services (1)
- Cloud Strategies (1)
- Cloudformation (1)
- Cluster Resource (1)
- Cmo (1)
- Cockroach Db (1)
- Coding Benchmarks (1)
- Colab (1)
- Collectd (1)
- Columnar (1)
- Communication Plans (1)
- Community (1)
- Compact Storage (1)
- Compaction (1)
- Compliance (1)
- Compression (1)
- Compute Instances (1)
- Compute Node (1)
- Concurrent Manager (1)
- Concurrent Processing (1)
- Configuration (1)
- Consistency Level (1)
- Consolidation (1)
- Conversational AI (1)
- Covid-19 (1)
- Cpu Patching (1)
- Cqlsstablewriter (1)
- Crash (1)
- Create Catalog Error (1)
- Create_File_Dest (1)
- Credentials (1)
- Cross Platform (1)
- CrowdStrike (1)
- Crsctl (1)
- Custom Instance Images (1)
- Cve-2022-21500 (1)
- Cvu (1)
- Cypher Queries (1)
- DBSAT 3 (1)
- Dacpac (1)
- Dag (1)
- Data Analysis (1)
- Data Analytics Platform (1)
- Data Box (1)
- Data Classification (1)
- Data Cleansing (1)
- Data Encryption (1)
- Data Engineering (1)
- Data Estate (1)
- Data Flow Management (1)
- Data Insights (1)
- Data Integrity (1)
- Data Lake (1)
- Data Leader (1)
- Data Lifecycle Management (1)
- Data Lineage (1)
- Data Masking (1)
- Data Mesh (1)
- Data Migration Assistant (1)
- Data Migration Service (1)
- Data Mining (1)
- Data Modeling (1)
- Data Monetization (1)
- Data Policy (1)
- Data Profiling (1)
- Data Protection (1)
- Data Retention (1)
- Data Safe (1)
- Data Sheets (1)
- Data Summit (1)
- Data Vault (1)
- Data Warehouse Modernization (1)
- Database Auditing (1)
- Database Consultant (1)
- Database Link (1)
- Database Modernization (1)
- Database Provisioning (1)
- Database Provisioning Failed (1)
- Database Replication (1)
- Database Scaling (1)
- Database Schemas (1)
- Database Security (1)
- Databricks (1)
- Datadog (1)
- Datafile (1)
- Datapatch (1)
- Dataprivacy (1)
- Datascape 59 (1)
- Datasets (1)
- Datastax Cassandra (1)
- Datastax Opscenter (1)
- Datasync Error (1)
- Db_Create_File_Dest (1)
- Dbaas (1)
- Dbatools (1)
- Dbcc Checkident (1)
- Dbms_Cloud (1)
- Dbms_File_Transfer (1)
- Dbms_Metadata (1)
- Dbms_Service (1)
- Dbms_Stats (1)
- Dbupgrade (1)
- Deep Learning (1)
- Delivery (1)
- Devd (1)
- Dgbroker (1)
- Dialogflow (1)
- Dict0Dict (1)
- Did You Know (1)
- Direct Path Read Temp (1)
- Disk Groups (1)
- Disk Management (1)
- Diskgroup (1)
- Dispatchers (1)
- Distributed Ag (1)
- Distribution Agent (1)
- Documentation (1)
- Download (1)
- Dp Agent (1)
- Duet AI (1)
- Duplication (1)
- Dynamic Sampling (1)
- Dynamic Tasks (1)
- E-Business Suite Cpu Patching (1)
- E-Business Suite Patching (1)
- Ebs Sso (1)
- Ec2 (1)
- Edb Postgresql Advanced Server (1)
- Edb Postgresql Password Verify Function (1)
- Editions (1)
- Edp (1)
- El Carro (1)
- Elassandra (1)
- Elk Stack (1)
- Em13Cr2 (1)
- Emcli (1)
- End of Life (1)
- Engineering (1)
- Enqueue (1)
- Enterprise (1)
- Enterprise Architecture (1)
- Enterprise Command Centers (1)
- Enterprise Manager Command Line Interface (Em Cli (1)
- Enterprise Plus (1)
- Episode 58 (1)
- Error Handling (1)
- Exacc (1)
- Exacheck (1)
- Exacs (1)
- Exadata Asr (1)
- Execution (1)
- Executive Sponsor (1)
- Expenditure (1)
- Export Sccm Collection To Csv (1)
- External Persistent Volumes (1)
- Fail (1)
- Failed Upgrade (1)
- Failover In Postgresql (1)
- Fall 2021 (1)
- Fast Recovery Area (1)
- FinOps Strategy (1)
- Flash Recovery Area (1)
- Flashback (1)
- Fnd (1)
- Fndsm (1)
- Force_Matching_Signature (1)
- Fra Full (1)
- Framework (1)
- Freebsd (1)
- Fsync (1)
- Function-Based Index (1)
- GCVE Architecture (1)
- GPQA (1)
- Gaming (1)
- Garbagecollect (1)
- Gcp Compute (1)
- Gcp-Spanner (1)
- Geography (1)
- Geth (1)
- Getmospatch (1)
- Git (1)
- Global Analytics (1)
- Gmail (1)
- Gmail Security (1)
- Google Analytics (1)
- Google Cloud Architecture Framework (1)
- Google Cloud Data Services (1)
- Google Cloud Partner (1)
- Google Cloud Spanner (1)
- Google Cloud VMware Engine (1)
- Google Compute Engine (1)
- Google Dataflow (1)
- Google Datalab (1)
- Google Grab And Go (1)
- Google Sheets (1)
- Gp2 (1)
- Graph Algorithms (1)
- Graph Databases (1)
- Graph Inferences (1)
- Graph Theory (1)
- GraphQL (1)
- Graphical User Interface (Gui) (1)
- Grid (1)
- Grid Infrastructure (1)
- Griddisk Resize (1)
- Grp (1)
- Guaranteed Restore Point (1)
- Guid Mismatch (1)
- HR Technology (1)
- HRM (1)
- Ha (1)
- Hang (1)
- Hashicorp (1)
- Hbase (1)
- Hcc (1)
- Hdinsight (1)
- Healthcheck (1)
- Hemantgiri S. Goswami (1)
- Hortonworks (1)
- How To Install Ssrs (1)
- Hr (1)
- Httpchk (1)
- Https (1)
- Huge Pages (1)
- HumanEval (1)
- Hung Database (1)
- Hybrid Columnar Compression (1)
- Hyper-V (1)
- Hyperscale (1)
- Hypothesis Driven Development (1)
- Ibm (1)
- Identity Management (1)
- Idm (1)
- Ilom (1)
- Imageinfo (1)
- Impdp (1)
- In Place Upgrade (1)
- Incident Response (1)
- Indempotent (1)
- Indexing In Mongodb (1)
- Influxdb (1)
- Information (1)
- Infrastructure As A Code (1)
- Injection (1)
- Innobackupex (1)
- Innodb Concurrency (1)
- Innodb Flush Method (1)
- Insights (1)
- Installing (1)
- Instance Cloning (1)
- Integration Services (1)
- Integrations (1)
- Interactive_Timeout (1)
- Interval Partitioning (1)
- Invisible Indexes (1)
- Io1 (1)
- IoT (1)
- Iops (1)
- Iphone (1)
- Ipv6 (1)
- Iscsi (1)
- Iscsi-Initiator-Utils (1)
- Iscsiadm (1)
- Issues (1)
- It Industry (1)
- It Teams (1)
- JMX Metrics (1)
- Jared Still (1)
- Javascript (1)
- Jdbc (1)
- Jinja2 (1)
- Jmx (1)
- Jmx Monitoring (1)
- Jvm (1)
- Jython (1)
- K8S (1)
- Kernel (1)
- Key Btp Components (1)
- Kfed (1)
- Kill Sessions (1)
- Knapsack (1)
- Kubeflow (1)
- LMSYS Chatbot Arena (1)
- Large Pages (1)
- Latency (1)
- Latest News (1)
- Leadership (1)
- Leap Second (1)
- Limits (1)
- Line 1 (1)
- Linkcolumn (1)
- Linux Host Monitoring (1)
- Linux Storage Appliance (1)
- Listener (1)
- Loadavg (1)
- Lock_Sga (1)
- Locks (1)
- Log File Switch (Archiving Needed) (1)
- Logfile (1)
- Looker (1)
- Lvm (1)
- MMLU (1)
- Managed Instance (1)
- Managed Services (1)
- Management (1)
- Management Servers (1)
- Marketing (1)
- Marketing Analytics (1)
- Martech (1)
- Masking (1)
- Megha Bedi (1)
- Metadata (1)
- Method-R Workbench (1)
- Metric (1)
- Metric Extensions (1)
- Michelle Gutzait (1)
- Microservices (1)
- Microsoft Azure Sql Database (1)
- Microsoft Build (1)
- Microsoft Build 2019 (1)
- Microsoft Ignite (1)
- Microsoft Inspire 2019 (1)
- Migrate (1)
- Migrating Ssis Catalog (1)
- Migrating To Azure Sql (1)
- Migration Checklist (1)
- Mirroring (1)
- Mismatch (1)
- Model Governance (1)
- Monetization (1)
- MongoDB Atlas (1)
- MongoDB Compass (1)
- Ms Excel (1)
- Msdtc (1)
- Msdtc In Always On (1)
- Msdtc In Cluster (1)
- Multi-IP (1)
- Multicast (1)
- Multipath (1)
- My.Cnf (1)
- MySQL Shell Logical Backup (1)
- MySQLDump (1)
- Mysql Enterprise (1)
- Mysql Plugin For Oracle Enterprise Manager (1)
- Mysql Replication Filters (1)
- Mysql Server (1)
- Mysql-Python (1)
- Nagios (1)
- Ndb (1)
- Net_Read_Timeout (1)
- Net_Write_Timeout (1)
- Netcat (1)
- Newsroom (1)
- Nfs (1)
- Nifi (1)
- Node (1)
- November 10Th 2015 (1)
- November 6Th 2015 (1)
- Null Columns (1)
- Nullipotent (1)
- OPEX (1)
- ORAPKI (1)
- O_Direct (1)
- Oacore (1)
- October 21St 2015 (1)
- October 6Th 2015 (1)
- October 8Th 2015 (1)
- Oda (1)
- Odbcs (1)
- Odbs (1)
- Odi (1)
- Oel (1)
- Ohs (1)
- Olvm (1)
- On-Prem To Azure Sql (1)
- On-Premises (1)
- Onclick (1)
- Open.Canada.Ca (1)
- Openstack (1)
- Operating System Monitoring (1)
- Oplog (1)
- Opsworks (1)
- Optimization (1)
- Optimizer (1)
- Ora-01852 (1)
- Ora-7445 (1)
- Oracle 19 (1)
- Oracle 20C (1)
- Oracle Cursor (1)
- Oracle Database 12.2 (1)
- Oracle Database Appliance (1)
- Oracle Database Se2 (1)
- Oracle Database Standard Edition 2 (1)
- Oracle Database Upgrade (1)
- Oracle Database@Google Cloud (1)
- Oracle Exadata Smart Scan (1)
- Oracle Licensing (1)
- Oracle Linux Virtualization Manager (1)
- Oracle Oda (1)
- Oracle Openworld (1)
- Oracle Parallelism (1)
- Oracle Rdbms (1)
- Oracle Real Application Clusters (1)
- Oracle Reports (1)
- Oracle Security (1)
- Oracle Wallet (1)
- Orasrp (1)
- Organizational Change (1)
- Orion (1)
- Os (1)
- Osbws_Install.Jar (1)
- Oui Gui (1)
- Output (1)
- Owox (1)
- Paas (1)
- Package Deployment Wizard Error (1)
- Parallel Execution (1)
- Parallel Query (1)
- Parallel Query Downgrade (1)
- Partitioning (1)
- Partitions (1)
- Password (1)
- Password Change (1)
- Password Recovery (1)
- Password Verify Function In Postgresql (1)
- Patches (1)
- Patchmgr (1)
- Pdb Duplication (1)
- Penalty (1)
- Perfomrance (1)
- Performance Schema (1)
- Pg 15 (1)
- Pg_Rewind (1)
- Pga (1)
- Pipeline Debugging (1)
- Pivot (1)
- Planning (1)
- Plsql (1)
- Policy (1)
- Polybase (1)
- Post-Acquisition (1)
- Post-Covid It (1)
- Postgresql Complex Password (1)
- Postgresql With Repmgr Integration (1)
- Power Bi (1)
- Pq (1)
- Preliminar Connection (1)
- Preliminary Connection (1)
- Privatecloud (1)
- Process Mining (1)
- Production (1)
- Productivity (1)
- Profile In Edb Postgresql (1)
- Programming (1)
- Prompt Engineering (1)
- Provisioned Iops (1)
- Provisiones Iops (1)
- Proxy Monitoring (1)
- Psu (1)
- Public Cloud (1)
- Pubsub (1)
- Purge (1)
- Purge Thread (1)
- Pythian Blackbird Acquisition (1)
- Pythian Goodies (1)
- Pythian News (1)
- Python Pandas (1)
- Query Performance (1)
- Quicksight (1)
- Quota Limits (1)
- R12 R12.2 Cp Concurrent Processing Abort (1)
- R12.1.3 (1)
- REF! (1)
- Ram Cache (1)
- Rbac (1)
- Rdb (1)
- Rds_File_Util (1)
- Read Free Replication (1)
- Read Latency (1)
- Read Only (1)
- Read Replica (1)
- Reboot (1)
- Recruiting (1)
- Redo Size (1)
- Relational Database Management System (1)
- Release (1)
- Release Automation (1)
- Repair (1)
- Replication Compatibility (1)
- Replication Error (1)
- Repmgr (1)
- Repmgrd (1)
- Reporting Services 2019 (1)
- Resiliency Planning (1)
- Resource Manager (1)
- Resources (1)
- Restore (1)
- Restore Point (1)
- Retail (1)
- Rhel (1)
- Risk (1)
- Risk Management (1)
- Rocksrb (1)
- Role In Postgresql (1)
- Rollback (1)
- Rolling Patch (1)
- Row0Purge (1)
- Rpm (1)
- Rule "Existing Clustered Or Clustered-Prepared In (1)
- Running Discovery On Remote Machine (1)
- SQL Optimization (1)
- SQL Tracing (1)
- SSRS Administration (1)
- SaaS (1)
- Sap Assessment (1)
- Sap Assessment Report (1)
- Sap Backup Restore (1)
- Sap Btp Architecture (1)
- Sap Btp Benefits (1)
- Sap Btp Model (1)
- Sap Btp Services (1)
- Sap Homogenous System Copy Method (1)
- Sap Landscape Copy (1)
- Sap Migration Assessment (1)
- Sap On Mssql (1)
- Sap System Copy (1)
- Sar (1)
- Scaling Ir (1)
- Sccm (1)
- Sccm Powershell (1)
- Scheduler (1)
- Scheduler_Job (1)
- Schedulers (1)
- Scheduling (1)
- Scott Mccormick (1)
- Scripts (1)
- Sdp (1)
- Secrets (1)
- Securing Sql Server (1)
- Security Compliance (1)
- Sed (Stream Editor) (1)
- Self Hosted Ir (1)
- Semaphore (1)
- Seps (1)
- September 11Th 2015 (1)
- Serverless Computing (1)
- Serverless Framework (1)
- Service Broker (1)
- Service Bus (1)
- Shared Connections (1)
- Shared Storage (1)
- Shellshock (1)
- Signals (1)
- Silent (1)
- Slave (1)
- Slob (1)
- Smart Scan (1)
- Smtp (1)
- Snapshot (1)
- Snowday Fall 2021 (1)
- Socat (1)
- Software Development (1)
- Software Engineering (1)
- Solutions Architecture (1)
- Spanner-Backups (1)
- Sphinx (1)
- Split Brain In Postgresql (1)
- Spm (1)
- Sql Agent (1)
- Sql Backup To Url Error (1)
- Sql Cluster Installer Hang (1)
- Sql Database (1)
- Sql Developer (1)
- Sql On Linux (1)
- Sql Server 2014 (1)
- Sql Server 2016 (1)
- Sql Server Agent On Linux (1)
- Sql Server Backups (1)
- Sql Server Denali Is Required To Install Integrat (1)
- Sql Server Health Check (1)
- Sql Server Troubleshooting On Linux (1)
- Sql Server Version (1)
- Sql Setup (1)
- Sql Vm (1)
- Sql2K19Ongke (1)
- Sqldatabase Serverless (1)
- Ssh User Equivalence (1)
- Ssis Denali Error (1)
- Ssis Install Error E Xisting Clustered Or Cluster (1)
- Ssis Package Deployment Error (1)
- Ssisdb Master Key (1)
- Ssisdb Restore Error (1)
- Sso (1)
- Ssrs 2019 (1)
- Sstable2Json (1)
- Sstableloader (1)
- Sstablesimpleunsortedwriter (1)
- Stack Dump (1)
- Standard Edition (1)
- Startup Process (1)
- Statistics (1)
- Statspack (1)
- Statspack Data Mining (1)
- Statspack Erroneously Reporting (1)
- Statspack Issues (1)
- Storage (1)
- Stored Procedure (1)
- Strategies (1)
- Streaming (1)
- Sunos (1)
- Swap (1)
- Swapping (1)
- Switch (1)
- Syft (1)
- Synapse (1)
- Sync Failed There Is Not Enough Space On The Disk (1)
- Sys Schema (1)
- System Function (1)
- Systems Administration (1)
- T-Sql (1)
- Table Optimization (1)
- Tablespace Growth (1)
- Tablespaces (1)
- Tags (1)
- Tar (1)
- Tde (1)
- Team Management (1)
- Tech Debt (1)
- Technology (1)
- Telegraf (1)
- Tempdb Encryption (1)
- Templates (1)
- Temporary Tablespace (1)
- Tenserflow (1)
- Teradata (1)
- Testing New Cassandra Builds (1)
- There Is Not Enough Space On The Disk (1)
- Thick Data (1)
- Third-Party Data (1)
- Thrift (1)
- Thrift Data (1)
- Tidb (1)
- Time Series (1)
- Time-Drift (1)
- Tkprof (1)
- Tmux (1)
- Tns (1)
- Trace (1)
- Tracefile (1)
- Training (1)
- Transaction Log (1)
- Transactions (1)
- Transformation Navigator (1)
- Transparent Data Encryption (1)
- Trigger (1)
- Triggers On Memory-Optimized Tables Must Use With (1)
- Troubleshooting (1)
- Tungsten (1)
- Tvdxtat (1)
- Twitter (1)
- U-Sql (1)
- UNDO Tablespace (1)
- Upgrade Issues (1)
- Uptime (1)
- Uptrade (1)
- Url Backup Error (1)
- Usability (1)
- Use Cases (1)
- User (1)
- User Defined Compactions (1)
- Utilization (1)
- Utl_Smtp (1)
- VDI Jump Host (1)
- Validate Structure (1)
- Validate_Credentials (1)
- Value (1)
- Velocity (1)
- Vertex AI (1)
- Vertica (1)
- Vertical Slicing (1)
- Videos (1)
- Virtual Private Cloud (1)
- Virtualization (1)
- Vision (1)
- Vpn (1)
- Wait_Timeout (1)
- Wallet (1)
- Webhook (1)
- Weblogic Connection Filters (1)
- Webscale Database (1)
- Windows 10 (1)
- Windows Powershell (1)
- WiredTiger (1)
- With Native_Compilation (1)
- Word (1)
- Workshop (1)
- Workspace Security (1)
- Xbstream (1)
- Xml Publisher (1)
- Zabbix (1)
- dbms_Monitor (1)
- postgresql 16 (1)
- sqltrace (1)
- tracing (1)
- vSphere (1)
- xml (1)
- October 2024 (2)
- September 2024 (7)
- August 2024 (4)
- July 2024 (2)
- June 2024 (6)
- May 2024 (3)
- April 2024 (2)
- February 2024 (2)
- January 2024 (11)
- December 2023 (10)
- November 2023 (11)
- October 2023 (10)
- September 2023 (8)
- August 2023 (7)
- July 2023 (2)
- June 2023 (13)
- May 2023 (4)
- April 2023 (6)
- March 2023 (10)
- February 2023 (6)
- January 2023 (5)
- December 2022 (10)
- November 2022 (10)
- October 2022 (10)
- September 2022 (13)
- August 2022 (16)
- July 2022 (12)
- June 2022 (13)
- May 2022 (11)
- April 2022 (4)
- March 2022 (5)
- February 2022 (4)
- January 2022 (14)
- December 2021 (16)
- November 2021 (11)
- October 2021 (7)
- September 2021 (11)
- August 2021 (6)
- July 2021 (9)
- June 2021 (4)
- May 2021 (8)
- April 2021 (16)
- March 2021 (16)
- February 2021 (6)
- January 2021 (12)
- December 2020 (12)
- November 2020 (17)
- October 2020 (11)
- September 2020 (10)
- August 2020 (11)
- July 2020 (13)
- June 2020 (6)
- May 2020 (9)
- April 2020 (18)
- March 2020 (21)
- February 2020 (13)
- January 2020 (15)
- December 2019 (10)
- November 2019 (11)
- October 2019 (12)
- September 2019 (16)
- August 2019 (15)
- July 2019 (10)
- June 2019 (16)
- May 2019 (20)
- April 2019 (21)
- March 2019 (14)
- February 2019 (18)
- January 2019 (18)
- December 2018 (5)
- November 2018 (16)
- October 2018 (12)
- September 2018 (20)
- August 2018 (27)
- July 2018 (31)
- June 2018 (34)
- May 2018 (28)
- April 2018 (27)
- March 2018 (17)
- February 2018 (8)
- January 2018 (20)
- December 2017 (14)
- November 2017 (4)
- October 2017 (1)
- September 2017 (3)
- August 2017 (5)
- July 2017 (4)
- June 2017 (2)
- May 2017 (7)
- April 2017 (7)
- March 2017 (8)
- February 2017 (8)
- January 2017 (5)
- December 2016 (3)
- November 2016 (4)
- October 2016 (8)
- September 2016 (9)
- August 2016 (10)
- July 2016 (9)
- June 2016 (8)
- May 2016 (13)
- April 2016 (16)
- March 2016 (13)
- February 2016 (11)
- January 2016 (6)
- December 2015 (11)
- November 2015 (11)
- October 2015 (5)
- September 2015 (16)
- August 2015 (4)
- July 2015 (1)
- June 2015 (3)
- May 2015 (6)
- April 2015 (5)
- March 2015 (5)
- February 2015 (4)
- January 2015 (3)
- December 2014 (7)
- October 2014 (4)
- September 2014 (6)
- August 2014 (6)
- July 2014 (16)
- June 2014 (7)
- May 2014 (6)
- April 2014 (5)
- March 2014 (4)
- February 2014 (10)
- January 2014 (6)
- December 2013 (8)
- November 2013 (12)
- October 2013 (9)
- September 2013 (6)
- August 2013 (7)
- July 2013 (9)
- June 2013 (7)
- May 2013 (7)
- April 2013 (4)
- March 2013 (7)
- February 2013 (4)
- January 2013 (4)
- December 2012 (6)
- November 2012 (8)
- October 2012 (9)
- September 2012 (3)
- August 2012 (5)
- July 2012 (5)
- June 2012 (7)
- May 2012 (11)
- April 2012 (1)
- March 2012 (8)
- February 2012 (1)
- January 2012 (6)
- December 2011 (8)
- November 2011 (5)
- October 2011 (9)
- September 2011 (6)
- August 2011 (4)
- July 2011 (1)
- June 2011 (1)
- May 2011 (5)
- April 2011 (2)
- February 2011 (2)
- January 2011 (2)
- December 2010 (1)
- November 2010 (7)
- October 2010 (3)
- September 2010 (8)
- August 2010 (2)
- July 2010 (4)
- June 2010 (7)
- May 2010 (2)
- April 2010 (1)
- March 2010 (3)
- February 2010 (3)
- January 2010 (2)
- November 2009 (6)
- October 2009 (6)
- August 2009 (3)
- July 2009 (3)
- June 2009 (3)
- May 2009 (2)
- April 2009 (8)
- March 2009 (6)
- February 2009 (4)
- January 2009 (3)
- November 2008 (3)
- October 2008 (7)
- September 2008 (6)
- August 2008 (9)
- July 2008 (9)
- June 2008 (9)
- May 2008 (9)
- April 2008 (8)
- March 2008 (4)
- February 2008 (3)
- January 2008 (3)
- December 2007 (2)
- November 2007 (7)
- October 2007 (1)
- August 2007 (4)
- July 2007 (3)
- June 2007 (8)
- May 2007 (4)
- April 2007 (2)
- March 2007 (2)
- February 2007 (5)
- January 2007 (8)
- December 2006 (1)
- November 2006 (3)
- October 2006 (4)
- September 2006 (3)
- July 2006 (1)
- May 2006 (2)
- April 2006 (1)
- July 2005 (1)
Comments (6)