Oracle Database Migrated to Cloud Starts Experiencing "log file sync" Waits
by Jure Bratina on Nov 24, 2023 9:50:26 AM
A client was migrating an Oracle 11.2.0.4 database from their data center to one of the major cloud providers' infrastructure. The migration itself was achieved seamlessly by a switchover to a new primary located in the cloud. Even though the database was version 11.2.0.4, the findings and conclusions are also relevant for newer versions. After the migration, wait times for the "log file sync" wait event increased, with occasional spikes where individual waits exceeded 1 second. This post illustrates how I proceeded with the investigation, which unraveled a chain of events causing the problem, and how the issue was permanently fixed by finding and addressing the root cause. To keep the post from getting too long, some details not directly relevant to the topic are omitted.
Data Guard setup before the switchover
The client's setup consisted of 6 databases: 1 primary (OP_A) and 2 standbys (OP_B, OP_C) in the on-premises data center, and 3 standbys in the cloud environment (CL_A, CL_B, and CL_C). The primary database OP_A sends redo to OP_B and OP_C in SYNC AFFIRM mode, and to CL_A and CL_B in ASYNC NOAFFIRM mode. To save on network bandwidth, redo to CL_C is sent from CL_B as a "manually" configured cascaded standby, via the log_archive_dest_N parameter.
Desired Data Guard setup after the switchover
After the switchover, CL_A is the new primary, and redo shipping is configured as per the diagram below. Since the OP_B and OP_C databases were planned to be decommissioned in the near future, and to save on network bandwidth, the OP_A standby sends redo to them as cascaded standbys by manually configuring the log_archive_dest_N parameters.
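For illustration, a cascading destination of this kind is defined on the standby that relays the redo. The following is a minimal sketch only; the destination number and the TNS alias op_b are assumptions for illustration, not the client's actual settings:

-- Hypothetical sketch, run on the OP_A standby: relay the redo it
-- receives from the primary on to the OP_B cascaded standby.
-- VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) makes the destination
-- active only while OP_A is in the standby role.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=op_b ASYNC NOAFFIRM
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE)
   DB_UNIQUE_NAME=op_b' SCOPE=BOTH;

ALTER SYSTEM SET log_archive_dest_state_2 = 'ENABLE' SCOPE=BOTH;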
Switchover and the appearance of "log file sync" waits
The switchover went without problems, and the application servers connected to the new primary, CL_A. Checking activity across the whole database with Tanel Poder's snapper script, there were 2.6 sessions on CPU, plus a number of foreground sessions waiting for LGWR to complete its writes ("log file sync"), along with LGWR-related waits:
Session level breakdown...
Sampling SID all with interval 5 seconds, taking 1 snapshots...
---------------------------------------------------------------------------------------------------------------
ActSes %Thread | INST | SQL_ID | SQL_CHILD | EVENT | WAIT_CLASS
---------------------------------------------------------------------------------------------------------------
1.00 (100%) | 1 | 2x9k3ppn9r4x9 | 0 | ON CPU | ON CPU
.83 (83%) | 1 | 5zgkp42j5nqp3 | 0 | ON CPU | ON CPU
.43 (43%) | 1 | | | log file sync | Commit
.40 (40%) | 1 | | 0 | LNS wait on SENDREQ | Network
.27 (27%) | 1 | | 0 | LGWR-LNS wait on channel | Other
.23 (23%) | 1 | | 0 | log file parallel write | System I/O
.13 (13%) | 1 | 1nwzhp0605cv8 | 1 | enq: TX - row lock contention | Application
.13 (13%) | 1 | 5h867tp05vb0s | 3 | ON CPU | ON CPU
.10 (10%) | 1 | 3mjwqby797aqh | 2 | ON CPU | ON CPU
.10 (10%) | 1 | 7kdrp5k54xgxb | 6 | ON CPU | ON CPU
--------------------------------------------------------------------------------------------------------
ActSes %Thread | INST | EVENT | P1 | WAIT_CLASS
--------------------------------------------------------------------------------------------------------
2.63 (263%) | 1 | ON CPU | | ON CPU
.40 (40%) | 1 | LNS wait on SENDREQ | 0 | Network
.27 (27%) | 1 | LGWR-LNS wait on channel | 0 | Other
.23 (23%) | 1 | log file parallel write | 2 | System I/O
.13 (13%) | 1 | enq: TX - row lock contention | 1415053316 | Application
.03 (3%) | 1 | log file sync | 4134 | Commit
.03 (3%) | 1 | log file sync | 6835 | Commit
.03 (3%) | 1 | log file sync | 11589 | Commit
.03 (3%) | 1 | log file sync | 5504 | Commit
.03 (3%) | 1 | log file sync | 10542 | Commit
While the database was being monitored, performance was acceptable. However, in the days following the switchover, there were occasional increases in application response time. Live monitoring and analyzing AWR data revealed that wait times for the "log file sync" wait event were occasionally spiking. The following graphs show that the cumulative and average "log file sync" wait time increased significantly after the switchover in June. At the same time, the number of waits didn't change, so the transaction activity was comparable to that before the switchover.
Note: Unless specifically stated, all of the graphs based on AWR data represent delta values between 15-minute AWR snapshots. The vertical dotted red line indicates the time of the switchover.
"log file sync" wait event trend
Here's how the "log file sync" wait profile and histogram looked for two comparable 15-minute periods. Both AWR snapshots start at the same time of day, but one is from a Thursday before the switchover and the other from a Thursday after it:
-- Before the switchover:

                                                                  Avg
                                             %Time Total Wait    wait    Waits   % DB
Event                               Waits    -outs   Time (s)    (ms)     /txn   time
-------------------------- -------------- ------- ---------- ------- -------- ------
log file sync                     377,173        0      4,297      11      1.1   39.9

                                        % of Waits
                           -----------------------------------------------
                    Total
Event               Waits   <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
------------------ ------  ----- ----- ----- ----- ----- ----- ----- -----
log file sync      376.2K     .0    .7  13.1  30.7  38.1  13.8   3.7
-- After the switchover:

                                                                  Avg
                                             %Time Total Wait    wait    Waits   % DB
Event                               Waits    -outs   Time (s)    (ms)     /txn   time
-------------------------- -------------- ------- ---------- ------- -------- ------
log file sync                     348,922        0    164,141     470      1.1   85.7

                                        % of Waits
                           -----------------------------------------------
                    Total
Event               Waits   <1ms  <2ms  <4ms  <8ms <16ms <32ms  <=1s   >1s
------------------ ------  ----- ----- ----- ----- ----- ----- ----- -----
log file sync      347.2K     .0   6.0  29.2   5.5   4.2   5.8  42.1   7.2

                                   Waits
                                     4s
Event                              to 2m   <2s   <4s   <8s  <16s  <32s  < 1m  < 2m  >=2m
-------------------------- ------ ----- ----- ----- ----- ----- ----- ----- ----- -----
log file sync               23.5K  93.2   1.7   5.1
What stands out is the 42-fold increase in the average wait time, from 11 ms to 470 ms. The last line of the above output shows waits taking between 4s and 8s to complete; these are big outliers, several orders of magnitude larger than we'd usually expect a "log file sync" to last. To be precise, approx. 1,200 waits (5.1% of 23.5K) took between 4s and 8s, which is 0.34% (1,200/348,922*100) of all the "log file sync" waits. These abnormally long log writes were also reported by the log writer in its trace file on the primary database:
*** 2023-06-06 22:28:03.682
Warning: log write elapsed time 524ms, size 3000KB
*** 2023-06-06 22:28:35.013
Warning: log write elapsed time 505ms, size 2157KB
*** 2023-06-06 22:28:35.973
Warning: log write elapsed time 958ms, size 5316KB
*** 2023-06-06 22:28:37.151
Warning: log write elapsed time 1177ms, size 7342KB
*** 2023-06-06 22:28:45.087
Warning: log write elapsed time 588ms, size 2355KB
Cross-checking the number of transactions (commits and rollbacks), which are directly related to the "log file sync" wait, their volume was comparable to the pre-switchover period, though there were some unusual spikes in the amount of redo written per second:
The "log file sync" wait accounts for the time a foreground session waits for the log writer process to write the log buffer to the log file; the write itself is accounted for by the "log file parallel write" event. However, cumulative waits on that event actually decreased after the switchover, meaning that writing to the local redo log files shouldn't be the cause of the high "log file sync" waits:
"log file parallel write" wait event trend
There could be various reasons for high waits on "log file sync", such as CPU starvation, but that wasn't the case here. Instead, the wait events shown above ("LNS wait on SENDREQ", "LGWR-LNS wait on channel") indicate waiting related to redo transfer to the standby(s). Redo shipping to the CL_B standby was defined as SYNC AFFIRM, meaning the redo has to be shipped to the standby and written to its standby redo logs before the primary can declare the transaction committed. All of this time is accounted for in the primary's "log file sync" wait. So if there's a problem on the network between CL_A and CL_B, or if the CL_B standby has I/O issues, transaction times on CL_A are affected.
Side note: Here's a very interesting and informative video that I highly recommend watching (skip to 50:13 for a graphical representation of the concept), where Toon Koppelaars from Oracle's Real World Performance group explains and illustrates how outliers on critical background processes can affect the "log file sync" waits that foreground processes experience.
How to proceed?
The easy and "obvious" quick fix would be to change the log transfer method from SYNC AFFIRM to ASYNC NOAFFIRM (we were on 11g, so FASTSYNC (SYNC NOAFFIRM) wasn't an option); however, the root cause would still remain unknown. The next task was thus to identify whether the issue was related to delays in network transfer between CL_A and CL_B, or to an I/O problem on CL_B.
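For reference, that quick fix would have amounted to something like the following on the CL_A primary (a sketch only; the destination number and the TNS alias cl_b are assumptions, and this was not the approach ultimately taken):

-- Hypothetical: downgrade the CL_A -> CL_B destination so that commits
-- no longer wait for the standby's write acknowledgment.
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=cl_b ASYNC NOAFFIRM
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=cl_b' SCOPE=BOTH;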
I didn't have an easy way to test network latency between CL_A and CL_B. The MOS note "Assessing and Tuning Network Performance for Data Guard and RMAN (Doc ID 2064368.1)" contains a Java utility that could be used for this purpose, but I'd have needed client approval to run it. Instead, focusing first on the I/O side, I found a correlation between the increase in "log file sync" on the CL_A primary and a heavy increase in I/O load on the CL_B standby. This was a strong clue that I/O performance on the standby should be the next thing to investigate.
Investigating I/O on the standby
Following is the sar data from CL_B's server (standby) for June 6th. Note how the I/O queue size, I/O response time, and I/O wait time spike at ~2:45 PM and from 10 PM till midnight. This correlates perfectly with the periods of increased "log file sync" waits on the CL_A primary, as shown in the graph above:
$ sar -b -f /var/log/sa/sa06 -s 00:00:00 -e 23:59:00
Linux 3.10.0-1160.83.1.el7.x86_64 (redact_CL_B) 06/06/2023 _x86_64_ (16 CPU)
12:00:01 AM tps rtps wtps bread/s bwrtn/s
02:20:01 PM 645.89 145.75 500.14 16649.63 34315.73
02:30:01 PM 790.17 296.57 493.60 19101.55 34043.26
02:40:01 PM 1069.04 702.48 366.56 177197.58 58505.98
02:50:01 PM 1255.85 899.30 356.55 173529.92 50034.31
03:00:01 PM 729.93 136.45 593.48 12611.13 28222.12
.
.
09:50:01 PM 534.14 108.30 425.84 7700.36 19885.35
10:00:01 PM 476.89 68.75 408.14 7283.37 14896.25
10:10:01 PM 1088.93 714.19 374.74 221526.13 25700.94
10:20:01 PM 1055.80 945.11 110.68 248119.71 20580.45
10:30:01 PM 946.64 847.82 98.82 247968.53 13827.08
10:40:01 PM 886.32 812.83 73.49 252727.91 14138.72
.
.
-- Device dev253-1 I/O statistics:
DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
.
.
14:30:01 dev253-1 731.86 18794.59 15336.18 46.64 1.46 1.99 0.51 37.52
14:40:01 dev253-1 972.70 176551.16 29011.34 211.33 55.78 57.34 0.90 87.19
14:50:01 dev253-1 1182.31 173112.07 24480.63 167.12 52.18 44.13 0.70 82.95
15:00:01 dev253-1 681.79 12348.01 11630.85 35.17 0.93 1.36 0.55 37.40
15:10:01 dev253-1 760.70 27010.68 15151.62 55.43 4.72 6.20 0.56 42.81
15:20:01 dev253-1 676.66 19957.70 11678.80 46.75 2.90 4.29 0.58 39.37
.
.
21:40:01 dev253-1 500.03 7858.04 10875.67 37.47 0.41 0.82 0.64 31.82
21:50:01 dev253-1 507.36 7696.20 10264.35 35.40 0.35 0.69 0.56 28.57
22:00:01 dev253-1 464.65 7283.26 10370.20 37.99 0.33 0.70 0.58 27.05
22:10:01 dev253-1 1058.28 221517.54 14310.22 222.84 31.05 29.34 0.70 73.96
22:20:01 dev253-1 1019.13 247850.70 8587.06 251.62 54.53 53.50 0.98 100.00
22:30:01 dev253-1 917.35 246849.62 8993.50 278.89 67.72 73.81 1.09 100.00
22:40:01 dev253-1 861.16 252280.65 4078.80 297.69 72.40 84.08 1.16 100.00
22:50:02 dev253-1 753.36 252828.30 3637.83 340.43 83.21 110.44 1.33 100.00
23:00:01 dev253-1 738.69 253587.31 2798.34 347.08 77.39 104.77 1.35 100.00
23:10:01 dev253-1 657.82 253893.35 2477.08 389.73 82.68 125.68 1.52 100.00
23:20:02 dev253-1 653.83 252839.67 3621.79 392.25 87.38 133.63 1.53 100.00
23:30:01 dev253-1 655.74 252010.94 4382.86 391.00 83.17 126.84 1.52 100.00
23:40:01 dev253-1 713.32 252081.54 4271.87 359.38 84.93 119.08 1.40 100.00
23:50:01 dev253-1 2102.19 173289.72 32952.97 98.11 33.45 15.90 0.43 91.28
Notice how svctm stays relatively low the whole time (max. 1.53 ms), while await exceeds 100 ms during the peaks. In Linux, svctm represents the time the I/O device took to complete the actual I/O operation, while await is svctm plus the time the request spent in the queue (avgqu-sz). This indicates that most of the time, I/O requests were sitting in the queue waiting to be issued to the disk: the high response times weren't due to the disks being unable to service individual requests, but to requests queuing up in front of them.
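As a sanity check, the three columns are mutually consistent via Little's Law (queue length = arrival rate x time in system). Taking the 22:30:01 sample: 917.35 tps x 0.07381 s await = 67.7, which matches the reported avgqu-sz of 67.72.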
CL_B standby server I/O spikes
Cross-checking the same sar statistics on CL_A's server (primary), there wasn't any relevant load during those periods. At 10 PM, the Oracle standard maintenance window runs and statistics are gathered, but otherwise the workload on the CL_A database was steady. Also, even before the switchover, the OP_A database had been shipping its redo to the OP_B and OP_C standbys in SYNC AFFIRM mode, and the issue wasn't present.
Why is there an increase in I/O requests on CL_B?
The next task was to find out why there was a sudden increase in I/O requests on the CL_B standby. In-memory ASH for June 6th had unfortunately already aged out, so I checked a similar high-I/O event from June 7th instead. It showed that Oracle was waiting on reads of database blocks that needed to be changed as part of recovery:
SQL> select to_char(sample_time, 'dd.mm.yyyy hh24:mi:ss') sample_time, user_id, session_state, event, module, action, sql_id, program
from v$active_session_history
where sample_id = 509365;
SAMPLE_TIME USER_ID SESSION EVENT MODULE PROGRAM
------------------- ---------- ------- ------------------------------------------ --------------------------------- ---------------------------------
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR01)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR0D)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR03)
07.06.2023 14:44:31 1 WAITING Standby redo I/O oracle@redact_CL_A (TNS V1-V3) oracle@redact_CL_A (TNS V1-V3)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR04)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR05)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR06)
07.06.2023 14:44:31 0 WAITING db file async I/O submit oracle@redact_CL_B (DBW0)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR07)
07.06.2023 14:44:31 0 WAITING db file async I/O submit oracle@redact_CL_B (DBW1)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR08)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR09)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR0A)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR0B)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR0C)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR0G)
07.06.2023 14:44:31 0 WAITING parallel recovery change buffer free oracle@redact_CL_B (PR00)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR0F)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR0E)
07.06.2023 14:44:31 0 WAITING db file parallel read oracle@redact_CL_B (PR02)
20 rows selected.
SQL>
This led me to the Oracle support note "Redo I/O Is Higher On Active Data Guard Physical Standby Than Primary (Doc ID 2249431.1)", which suggests that the waits seen on the standby could be related to the DB_LOST_WRITE_PROTECT parameter being set to TYPICAL. And indeed, all of the databases in the DG configuration had DB_LOST_WRITE_PROTECT=TYPICAL. The question remained, though: why did the issue start to happen only after the switchover, since the setting was in place before the switchover and the workload didn't change?
The DB_LOST_WRITE_PROTECT parameter
The next step was to verify how DB_LOST_WRITE_PROTECT=TYPICAL affects the behavior of the primary and the standby. The "DB_LOST_WRITE_PROTECT" section of the MOS note "Best Practices for Corruption Detection, Prevention, and Automatic Repair - in a Data Guard Configuration (Doc ID 1302539.1)" essentially explains that physical reads on the primary database cause increased/additional redo logging (block read redo, BRR), and that redo apply on the standby might be impacted by the corresponding additional data block read I/Os. The additional redo is logged only for physical reads on the primary. After enabling DB_LOST_WRITE_PROTECT, a physical standby will see an increase in physical reads, especially for data that is updated very infrequently but read semi-frequently (forcing a physical read) on the primary database. Note "Redo I/O Is Higher On Active Data Guard Physical Standby Than Primary (Doc ID 2249431.1)" further explains that these BRR records are generated only for physical reads into the buffer cache, not for direct path reads.
Based on the above, an increase in the amount of physical reads on the primary can drive up the standby's I/O volume. And as stated, only physical reads into the buffer cache count; direct path reads don't. This, in turn, can affect the primary's commit times, which, among other things, are sensitive to the standby's I/O performance when redo is shipped in SYNC AFFIRM mode. The note mentions an increase in the "redo size for lost write detection" statistic, and indeed, after the switchover from OP_A to CL_A it increased, as did the amount of "physical reads cache", i.e. reads into the buffer cache. The spikes of the two statistics correlate:
"redo size for lost write detection" and "physical reads cache" correlation
At the time, I didn't dig into why the amount of cached reads increased. A quick check didn't show any significant shift from buffered reads to direct path reads; however, focusing on the period when the stats gathering job was running, the job now completed in 30 minutes instead of more than an hour. The time the stats job ran was also one of the periods when the primary experienced high "log file sync" waits. The new system had more CPU power and a faster I/O subsystem, so the most probable explanation is that the same job completed faster while also generating more "redo size for lost write detection" redo, as shown in the above graphs.
At this point, another quick fix would have been to set DB_LOST_WRITE_PROTECT=NONE on the CL_B standby, though the root cause would still have remained unknown.
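That change would have been a one-liner on the standby (again hypothetical, and not the route taken):

-- Hypothetical: disable lost write detection on the CL_B standby.
ALTER SYSTEM SET db_lost_write_protect = NONE SCOPE=BOTH;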
Finding the root cause and fixing the problem
As mentioned, among the whole day's activities, the stats gathering job window was when the "log file sync" issue was most prominent, so I focused the research on those periods. Since the job was interfering with a critical application workload, the client moved its start time from 10 PM to 7 PM. This didn't fix the issue; it just moved the "log file sync" problem three hours earlier. I mention it so it's clear why the analysis below focuses on 7 PM rather than 10 PM.
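On 11g, moving an automated maintenance window (and the stats job that runs inside it) is done through DBMS_SCHEDULER. A minimal sketch, assuming the default THURSDAY_WINDOW is the one being moved:

BEGIN
  -- Shift the window start from 22:00 to 19:00; duration unchanged.
  DBMS_SCHEDULER.SET_ATTRIBUTE(
    name      => 'SYS.THURSDAY_WINDOW',
    attribute => 'repeat_interval',
    value     => 'FREQ=WEEKLY;BYDAY=THU;BYHOUR=19;BYMINUTE=0;BYSECOND=0');
END;
/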
The main clue is in the graphs below, obtained by visualizing sar data. The stats gathering job on CL_A ran from 7 PM till approx. 7:28 PM. The I/O spike on the OP_A standby also lasted only from 7 PM till approx. 7:30 PM; however, the spikes on the CL_B and CL_C standbys lasted from 7 PM till 9 PM, i.e. 1.5 hours longer. Note also how the blocks/s figures of the CL_B and CL_C standbys match perfectly; it looks as if the amount of I/O was being artificially capped at approx. 250,000 blocks/s. The OP_A standby in the on-premises data center was capable of much more I/O bandwidth than the CL_B and CL_C standbys in the cloud; in fact, 6.4x more. I also highlighted a spike at ~2:30 PM so it can be visually compared to the situation on the OP_A standby:
- CL_B standby I/O (cloud):
- CL_C standby I/O (cloud):
- OP_A standby I/O (on premises):
Note that even though the CL_C standby's I/O subsystem was suffering from the same issue as CL_B's, it didn't have an impact on the primary, since redo to it was shipped asynchronously.
As we didn't have access to the cloud provider's console, my suggestion to the client was to check whether there was an I/O cap on the CL_B standby's storage volumes. Indeed, the client confirmed that the server hosting the CL_A primary had storage volumes capable of far more IOPS than the ones attached to the CL_B and CL_C standbys, so the IOPS on those two standbys were being capped during periods of higher I/O load. The client reconfigured the storage volumes on both standbys to match the characteristics of the CL_A primary's. After the change was implemented on June 13th, "log file sync" wait times dropped back to the few-milliseconds range. As a curiosity, I included the "redo size for lost write detection" statistic, which still shows spikes, but since the standby can now handle the load, it's no longer a problem for the primary. The displayed values are based on 15-minute AWR snapshots:
I/O characteristics derived from sar on the CL_B and CL_C standbys also confirmed that the storage subsystem could now handle more IOPS. The spikes at ~2:30 PM and 7 PM are much shorter and comparable in shape to the ones on the OP_A server shown above. The maximum number of blocks/s is also much higher than before the change:
CL_B server's I/O characteristics after providing a storage volume capable of more IOPS
Summary
The problem with high "log file sync" waits was caused by a combination of factors. After the switchover, the primary database in the cloud issued more buffered reads in a shorter time period than the on-premises database had, presumably due to faster CPUs and a faster I/O subsystem. Because of DB_LOST_WRITE_PROTECT=TYPICAL, the standbys had to read and verify the affected data blocks from their disks. This maxed out the capped IOPS on the CL_B and CL_C standby hosts and slowed down all of the standbys' I/O operations, including writes to the standby redo logs and redo apply. That, in turn, "backfired" on the primary and affected its commit times, since its redo shipping to CL_B was set to SYNC AFFIRM.
This post illustrates how following a trail of clues led to the actual root cause of increased commit times on the primary database. Setting the redo transfer mode from CL_A to CL_B to ASYNC NOAFFIRM and/or setting DB_LOST_WRITE_PROTECT=NONE could have temporarily alleviated the problem, but the root cause wouldn't have been found and fixed, which could have caused problems later had a switchover to CL_B occurred.
Lessons learned
The post hopefully also illustrates that having a standby destination with SYNC AFFIRM redo transfer and (optionally) DB_LOST_WRITE_PROTECT=TYPICAL requires a standby storage subsystem performant enough to keep up with the primary. The I/O capabilities of a primary and of standbys meant as a disaster recovery solution should match anyway: otherwise, in the event of a switchover or failover to the standby, we might be in for an unpleasant surprise in the form of major I/O slowdowns and delays. And if the switchover/failover happened because of an emergency and the failure of the primary, that would hardly be the right time to start researching why the new primary performs poorly on I/O operations.