
Mystery Solved: Replication lag in InnoDB

While running a backup with XtraBackup against a slave server, we noticed that replication was lagging significantly. The root cause wasn't clear, but we could see that DML statements coming from replication were hanging for a long time. Replication wasn't always hanging, but it happened so frequently that a 24-hour backup caused replication to lag by 11 hours.

The first hypothesis was that all the writes generated by replication (relay log, binary log, redo log, etc.) were creating too much I/O contention while XtraBackup was reading the files from disk. However, the redo log wasn't hitting 75% of its capacity, which meant that InnoDB wasn't doing aggressive flushing: some other contention was causing replication to stall.

After various tests, we found that disabling innodb_auto_lru_dump solved the issue. It wasn't entirely clear what the relationship between the LRU dump and replication lag during backup was, but it was very easy to reproduce: enabling the LRU dump at runtime immediately caused replication to lag, and disabling it restored replication to normal. Also, with innodb_auto_lru_dump enabled, we noticed that from time to time even the simple command "SHOW ENGINE INNODB STATUS" hung for 2-3 minutes.

To attempt to reproduce the issue outside this production environment, we ran various benchmarks using sysbench, with and without the auto LRU dump. The sbtest table (~20GB on disk) was created using the following command:

sysbench --test=oltp --mysql-table-engine=innodb --mysql-user=root --oltp-table-size=100000000 prepare

The InnoDB settings were:

innodb_buffer_pool_size = 10G
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 0
innodb_flush_method=O_DIRECT
innodb_log_file_size=128M
innodb_file_per_table

The various benchmarks were run using:
- a read-only workload vs. a read-write workload;
- a small buffer pool vs. a large buffer pool (from 2G to 30G);
- a small number of threads vs. a large number of threads.

None of the above benchmarks showed any significant difference with the auto LRU dump enabled or disabled. Perhaps these workloads were not really reproducing the environment where we were hitting issues with the auto LRU dump. We therefore started a new series of benchmarks with only one thread doing mainly writes: this is the workload we expect on a slave used only for replication and backups. The sysbench workload was modified to perform more writes than reads, yet the result of the benchmark didn't change much: enabling or disabling the LRU dump produced no significant change in performance.

The problem with this benchmark was that it was generating too many writes and filling the redo log. InnoDB was then doing aggressive flushing, and this bottleneck was hiding any effect caused by the LRU dump. To prevent the redo log from filling too quickly, we changed the workload to read a lot of pages, reduced the buffer pool from 30G to 4G, and ran each test after restarting mysqld and prewarming the buffer pool with:

select sql_no_cache count(*), sum(length(c)) FROM sbtest where id between 1 and 20000000;

The benchmark itself was:

sysbench --num-threads=1 --test=oltp --mysql-user=root --oltp-table-size=100000000 --oltp-index-updates=10 --oltp-non-index-updates=10 --oltp-point-selects=1 --max-requests=1000 run

innodb_auto_lru_dump=0: transactions: (7.26 per sec.)
innodb_auto_lru_dump=1: transactions: (6.93 per sec.)

This was not a huge difference, but we finally saw some effect of the auto LRU dump.
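Throughout these tests, the LRU dump was switched on and off at runtime rather than by editing my.cnf and restarting. A minimal sketch of the toggle (as we understand the Percona Server implementation, the value is the number of seconds between automatic dumps, with 0 disabling the feature; check the documentation for your version):

-- Disable the automatic LRU dump at runtime
SET GLOBAL innodb_auto_lru_dump = 0;

-- Re-enable it; a value of 1 requests a dump every second
SET GLOBAL innodb_auto_lru_dump = 1;

-- Verify the current setting
SHOW GLOBAL VARIABLES LIKE 'innodb_auto_lru_dump';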
It became apparent that the number of transactions per second in the above benchmark was really low because the number of random reads from disk was the bottleneck. To remove this bottleneck, we removed innodb_flush_method=O_DIRECT (therefore using the default flush method) and then ran the following to load the whole table into the OS cache (not into the buffer pool):

dd if=sbtest/sbtest.ibd of=/dev/null bs=1M

To prevent the redo log from filling up, we also changed innodb_log_file_size from 128M to 1G. With these changes - always using a buffer pool of 4G, restarting mysqld before each test, and prewarming the buffer pool with "select sql_no_cache count(*), sum(length(c)) FROM sbtest where id between 1 and 20000000" - we reran the same test, changing the number of requests.

10K transactions:

sysbench --num-threads=1 --test=oltp --mysql-user=root --oltp-table-size=100000000 --oltp-index-updates=10 --oltp-non-index-updates=10 --oltp-point-selects=1 --max-requests=10000 run

innodb_auto_lru_dump=0: transactions: (243.22 per sec.)
innodb_auto_lru_dump=1: transactions: (230.62 per sec.)

50K transactions:

sysbench --num-threads=1 --test=oltp --mysql-user=root --oltp-table-size=100000000 --oltp-index-updates=10 --oltp-non-index-updates=10 --oltp-point-selects=1 --max-requests=50000 run

innodb_auto_lru_dump=0: transactions: (194.31 per sec.)
innodb_auto_lru_dump=1: transactions: (175.69 per sec.)

With innodb_auto_lru_dump=1, performance drops by 5-10%!

After this, we wanted to run a completely different test with no writes, only reads. innodb_auto_lru_dump hadn't shown any difference when sysbench was executed with a read-only workload, and we believe the reason is simply that sysbench wasn't changing many pages in the buffer pool. The easiest way to churn pages in the buffer pool is to perform a full scan of a large table with a small buffer pool. We set innodb_flush_method=O_DIRECT again, since otherwise reads from the OS cache were too fast and we couldn't detect any effect of innodb_auto_lru_dump. With innodb_buffer_pool_size=4G, and restarting mysqld before each test, this was the result of a full table scan.

With innodb_auto_lru_dump=0:

mysql> select sql_no_cache count(*), sum(length(c)) FROM sbtest;
+-----------+----------------+
| count(*)  | sum(length(c)) |
+-----------+----------------+
| 100000000 |      145342938 |
+-----------+----------------+
1 row in set (3 min 27.22 sec)

With innodb_auto_lru_dump=1:

mysql> select sql_no_cache count(*), sum(length(c)) FROM sbtest;
+-----------+----------------+
| count(*)  | sum(length(c)) |
+-----------+----------------+
| 100000000 |      145342938 |
+-----------+----------------+
1 row in set (3 min 38.43 sec)

Again, innodb_auto_lru_dump=1 affects performance, increasing the execution time by ~5%.

It is also important to note that innodb_auto_lru_dump seems to affect performance only in some specific workload scenarios; in fact, the majority of the benchmarks we ran showed no performance effect from innodb_auto_lru_dump at all.
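For anyone who wants to repeat the comparison, each data point above came from the same cycle: restart mysqld, set the LRU dump variable, prewarm the buffer pool, and run sysbench. A rough shell sketch of that loop (the init script path and credentials are assumptions; adjust them for your environment):

#!/bin/sh
# Hypothetical wrapper around the benchmark cycle described above.
for lru in 0 1; do
    /etc/init.d/mysql restart                # assumed init script path
    mysql -uroot -e "SET GLOBAL innodb_auto_lru_dump = $lru"
    # Prewarm the buffer pool with the same range scan used above
    mysql -uroot sbtest -e "select sql_no_cache count(*), sum(length(c)) from sbtest where id between 1 and 20000000"
    sysbench --num-threads=1 --test=oltp --mysql-user=root \
        --oltp-table-size=100000000 --oltp-index-updates=10 \
        --oltp-non-index-updates=10 --oltp-point-selects=1 \
        --max-requests=50000 run
done

Restarting before each run guarantees a cold buffer pool, so the prewarm query, rather than leftover pages, determines the starting state of each test.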
