Bug#1093243: Upgrade to 6.1.123 kernel causes mariadb hangs
On 1/24/25 16:30, Xan Charbonnet wrote:
> On 1/24/25 04:33, Pavel Begunkov wrote:
>> Thanks for narrowing it down. Xan, can you try this change please?
>> Waiters can miss wake ups without it, which seems to match the description.
>>
>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>> index 9b58ba4616d40..e5a8ee944ef59 100644
>> --- a/io_uring/io_uring.c
>> +++ b/io_uring/io_uring.c
>> @@ -592,8 +592,10 @@ static inline void __io_cq_unlock_post_flush(struct io_ring_ctx *ctx)
>>          io_commit_cqring(ctx);
>>          spin_unlock(&ctx->completion_lock);
>>          io_commit_cqring_flush(ctx);
>> -        if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN))
>> +        if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN)) {
>> +                smp_mb();
>>                  __io_cqring_wake(ctx);
>> +        }
>>  }
>>
>>  void io_cq_unlock_post(struct io_ring_ctx *ctx)
>
> Thanks Pavel! Early results look very good for this change. I'm now
> running 6.1.120 with your added smp_mb() call. The backup process which
> had been quickly triggering the issue has been running longer than it
> ever did when it would ultimately fail. So that's great!
>
> One sour note: overnight, replication hung on this machine, which is
> another failure that started happening with the jump from 6.1.119 to
> 6.1.123. The machine was running 6.1.124 with the
> __io_cq_unlock_post_flush function removed completely. That's the kernel
> we had celebrated yesterday for running the backup process successfully.
> So, we might have two separate issues to deal with, unfortunately.

Possible, but it could also be a side effect of reverting the patch. As
usual, in most cases patches are ported either because they fix something
or because other fixes depend on them, and it's not yet apparent to me
what happened with this one.

> This morning, I found that replication had hung and was behind by some
> 35,000 seconds. I attached gdb and then detached it, which got things
> moving again (which goes the extra mile to prove that this is a very
> closely related issue). Then it hung up again at about 25,000 seconds
> behind. At that point I rebooted into the new kernel, the 6.1.120 kernel
> with the added smp_mb() call. The lag is now all the way down to 5,000
> seconds without hanging again.
>
> It looks like there are 5 io_uring-related patches in 6.1.122 and
> another one in 6.1.123. My guess is the replication is hitting a problem
> with one of those. Unfortunately, a replication hang is much harder for
> me to reproduce than the issue with the backup procedure, which always
> failed within 15 minutes.
>
> It certainly looks to me like the patched 6.1.120 does not have the hang
> (but it's hard to be 100% certain). Perhaps the next step is to apply
> the extra smp_mb() call to 6.1.123 and see if I can get replication to
> hang.

Sounds like it works as expected with mb(), at least for now. I agree, it
makes sense to continue testing with the patch, and I'll send it to
stable in the meantime. Thanks for testing!

--
Pavel Begunkov
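The failure mode Pavel describes ("waiters can miss wake ups") is the
classic store/load reordering race between a completion producer and a
task about to sleep. The following is a minimal user-space sketch of that
race, not the actual io_uring code: the names cq_tail, waiters and wake
are invented for the illustration, and the C11 seq_cst fences stand in
for the kernel's smp_mb().

/* Minimal user-space analogue of the missed-wakeup race (not io_uring code).
 * The producer plays the role of __io_cq_unlock_post_flush(); the consumer
 * plays the role of a task about to sleep waiting for completions. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_uint cq_tail;   /* completions published so far (invented name) */
static atomic_uint waiters;   /* has the consumer announced it will sleep?    */
static atomic_uint wake;      /* stand-in for the waitqueue wake-up           */

static void *producer(void *arg)
{
    (void)arg;
    /* 1. Publish a completion (like committing the CQ tail). */
    atomic_store_explicit(&cq_tail, 1, memory_order_relaxed);

    /* 2. Full barrier: the store above must be visible before we look at
     *    'waiters'. This is the role of the added smp_mb(). Without it the
     *    load below can effectively be ordered first, observe no waiters,
     *    and the wake-up is skipped even though the consumer is about to
     *    sleep on a stale view of cq_tail. */
    atomic_thread_fence(memory_order_seq_cst);

    /* 3. Only bother waking if someone said they would sleep. */
    if (atomic_load_explicit(&waiters, memory_order_relaxed))
        atomic_store_explicit(&wake, 1, memory_order_relaxed);
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    /* 1. Announce the intent to sleep (like adding to the waitqueue). */
    atomic_store_explicit(&waiters, 1, memory_order_relaxed);

    /* 2. Matching full barrier on the waiting side. */
    atomic_thread_fence(memory_order_seq_cst);

    /* 3. Re-check the condition; only "sleep" if there is still nothing.
     *    With both fences in place at least one side must see the other's
     *    store, so we never park with a completion already pending. */
    if (!atomic_load_explicit(&cq_tail, memory_order_relaxed))
        while (!atomic_load_explicit(&wake, memory_order_relaxed))
            ;  /* the kernel would schedule() here instead of spinning */
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    puts("no missed wake-up");
    return 0;
}

In the kernel the waiting side sleeps interruptibly rather than spinning,
so delivering any signal (including the signal traffic generated by
attaching gdb) wakes the task and forces the condition to be re-checked;
that is presumably why Xan's attach/detach trick got replication moving
again.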
Bug#1093243: Upgrade to 6.1.123 kernel causes mariadb hangs
On 1/23/25 20:49, Salvatore Bonaccorso wrote:
> Hi Xan,
>
> On Thu, Jan 23, 2025 at 02:31:34PM -0600, Xan Charbonnet wrote:
>> I rented a Linode and have been trying to load it down with sysbench
>> activity while doing a mariabackup and a mysqldump, also while spinning
>> up the CPU with zstd benchmarks. So far I've had no luck triggering the
>> fault.
>>
>> I've also been doing some kernel compilation. I followed this guide:
>> https://www.dwarmstrong.org/kernel/ (except that I used make -j24 to
>> build in parallel and used make localmodconfig to compile only the
>> modules I need).
>>
>> I've built the following kernels:
>> 6.1.123 (equivalent to linux-image-6.1.0-29-amd64)
>> 6.1.122
>> 6.1.121
>> 6.1.120
>>
>> So far they have all exhibited the behavior. Next up is 6.1.119, which
>> is equivalent to linux-image-6.1.0-28-amd64. My expectation is that the
>> fault will not appear for this kernel. It looks like the issue is here
>> somewhere:
>> https://www.kernel.org/pub/linux/kernel/v6.x/ChangeLog-6.1.120
>>
>> I have to work on some other things, and it'll take a while to prove
>> the negative (that is, to know that the failure isn't happening). I'll
>> post back with the 6.1.119 results when I have them.
>
> Additionally, please try 6.1.120 with commit 3ab9326f93ec ("io_uring:
> wake up optimisations") reverted (it landed in 6.1.120). If that solves
> the problem, maybe we are missing some prerequisites in the 6.1.y series
> here?

I'm not sure why the commit was backported (need to look it up), but from
a quick look it does seem to miss a barrier present in the original
patch.

--
Pavel Begunkov
Bug#1093243: Upgrade to 6.1.123 kernel causes mariadb hangs
On 1/24/25 05:24, Salvatore Bonaccorso wrote:
> Hi Pavel, hi Jens,
>
> On Thu, Jan 23, 2025 at 11:20:40PM +, Pavel Begunkov wrote:
>> On 1/23/25 20:49, Salvatore Bonaccorso wrote:
>>> Hi Xan,
>>>
>>> On Thu, Jan 23, 2025 at 02:31:34PM -0600, Xan Charbonnet wrote:
>>>> I rented a Linode and have been trying to load it down with sysbench
>>>> activity while doing a mariabackup and a mysqldump, also while
>>>> spinning up the CPU with zstd benchmarks. So far I've had no luck
>>>> triggering the fault.
>>>>
>>>> I've also been doing some kernel compilation. I followed this guide:
>>>> https://www.dwarmstrong.org/kernel/ (except that I used make -j24 to
>>>> build in parallel and used make localmodconfig to compile only the
>>>> modules I need).
>>>>
>>>> I've built the following kernels:
>>>> 6.1.123 (equivalent to linux-image-6.1.0-29-amd64)
>>>> 6.1.122
>>>> 6.1.121
>>>> 6.1.120
>>>>
>>>> So far they have all exhibited the behavior. Next up is 6.1.119,
>>>> which is equivalent to linux-image-6.1.0-28-amd64. My expectation is
>>>> that the fault will not appear for this kernel. It looks like the
>>>> issue is here somewhere:
>>>> https://www.kernel.org/pub/linux/kernel/v6.x/ChangeLog-6.1.120
>>>>
>>>> I have to work on some other things, and it'll take a while to prove
>>>> the negative (that is, to know that the failure isn't happening).
>>>> I'll post back with the 6.1.119 results when I have them.
>>>
>>> Additionally, please try 6.1.120 with commit 3ab9326f93ec ("io_uring:
>>> wake up optimisations") reverted (it landed in 6.1.120). If that
>>> solves the problem, maybe we are missing some prerequisites in the
>>> 6.1.y series here?
>>
>> I'm not sure why the commit was backported (need to look it up), but
>> from a quick look it does seem to miss a barrier present in the
>> original patch.
>
> Ack, this was here for reference:
> https://lore.kernel.org/stable/57b048be-31d4-4380-8296-56afc8862...@kernel.dk/
>
> Xan Charbonnet was able to confirm in https://bugs.debian.org/1093243#99
> that reverting the commit does indeed fix the mariadb-related hangs.

Thanks for narrowing it down. Xan, can you try this change please?
Waiters can miss wake ups without it, which seems to match the
description.

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 9b58ba4616d40..e5a8ee944ef59 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -592,8 +592,10 @@ static inline void __io_cq_unlock_post_flush(struct io_ring_ctx *ctx)
        io_commit_cqring(ctx);
        spin_unlock(&ctx->completion_lock);
        io_commit_cqring_flush(ctx);
-       if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN))
+       if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN)) {
+               smp_mb();
                __io_cqring_wake(ctx);
+       }
 }

 void io_cq_unlock_post(struct io_ring_ctx *ctx)

--
Pavel Begunkov
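Reconstructed purely from the hunk above, the changed region of
__io_cq_unlock_post_flush() reads as follows with the proposed fix
applied; the comments are editorial annotation, not part of the patch,
and anything earlier in the function is outside the shown context.

        io_commit_cqring(ctx);                  /* publish new CQEs: advance the shared CQ tail */
        spin_unlock(&ctx->completion_lock);
        io_commit_cqring_flush(ctx);
        if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN)) {
                smp_mb();                       /* order the CQ tail update above before the
                                                 * sleeper check inside the wake helper below */
                __io_cqring_wake(ctx);
        }
}

The point of the placement is that the barrier sits between publishing
the completions and deciding whether anyone needs to be woken.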
Bug#1093243: Upgrade to 6.1.123 kernel causes mariadb hangs
On 1/26/25 22:48, Xan Charbonnet wrote:
> Since applying the final patch on Friday, I have seen no problems with
> either the backup snapshot or catching up with replication. It sure
> seems like things are all fixed. I haven't yet tried it on our
> production Galera cluster, but I expect to on Monday.

Great to hear that, thanks for the update. And I sent the fix; hopefully
it'll be merged for the nearest stable release.

> Here are Debian packages containing the modified kernel. Use at your
> own risk, of course. Any feedback about how this works or doesn't work
> would be very helpful.
>
> https://charbonnet.com/linux-image-6.1.0-29-with-proposed-1093243-fix_amd64.deb
> https://charbonnet.com/linux-image-6.1.0-30-with-proposed-1093243-fix_amd64.deb
>
> On 1/24/25 14:51, Jens Axboe wrote:
>> On 1/24/25 1:33 PM, Salvatore Bonaccorso wrote:
>>> Hi Pavel,
>>>
>>> On Fri, Jan 24, 2025 at 06:40:51PM +, Pavel Begunkov wrote:
>>>> On 1/24/25 16:30, Xan Charbonnet wrote:
>>>>> On 1/24/25 04:33, Pavel Begunkov wrote:
>>>>>> Thanks for narrowing it down. Xan, can you try this change please?
>>>>>> Waiters can miss wake ups without it, which seems to match the
>>>>>> description.
>>>>>>
>>>>>> diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
>>>>>> index 9b58ba4616d40..e5a8ee944ef59 100644
>>>>>> --- a/io_uring/io_uring.c
>>>>>> +++ b/io_uring/io_uring.c
>>>>>> @@ -592,8 +592,10 @@ static inline void __io_cq_unlock_post_flush(struct io_ring_ctx *ctx)
>>>>>>          io_commit_cqring(ctx);
>>>>>>          spin_unlock(&ctx->completion_lock);
>>>>>>          io_commit_cqring_flush(ctx);
>>>>>> -        if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN))
>>>>>> +        if (!(ctx->flags & IORING_SETUP_DEFER_TASKRUN)) {
>>>>>> +                smp_mb();
>>>>>>                  __io_cqring_wake(ctx);
>>>>>> +        }
>>>>>>  }
>>>>>>
>>>>>>  void io_cq_unlock_post(struct io_ring_ctx *ctx)
>>>>>
>>>>> Thanks Pavel! Early results look very good for this change. I'm now
>>>>> running 6.1.120 with your added smp_mb() call. The backup process
>>>>> which had been quickly triggering the issue has been running longer
>>>>> than it ever did when it would ultimately fail. So that's great!
>>>>>
>>>>> One sour note: overnight, replication hung on this machine, which is
>>>>> another failure that started happening with the jump from 6.1.119 to
>>>>> 6.1.123. The machine was running 6.1.124 with the
>>>>> __io_cq_unlock_post_flush function removed completely. That's the
>>>>> kernel we had celebrated yesterday for running the backup process
>>>>> successfully. So, we might have two separate issues to deal with,
>>>>> unfortunately.
>>>>
>>>> Possible, but it could also be a side effect of reverting the patch.
>>>> As usual, in most cases patches are ported either because they fix
>>>> something or because other fixes depend on them, and it's not yet
>>>> apparent to me what happened with this one.
>>>
>>> I researched the lists a bit, and there was the inclusion request on
>>> the stable list itself. Looking into the io-uring list I found
>>> https://lore.kernel.org/io-uring/CADZouDRFJ9jtXHqkX-PTKeT=gxswdmc42zesakr34psug9t...@mail.gmail.com/
>>> which I think is what triggered the commit's later inclusion in
>>> 6.1.120.
>>
>> Yep indeed, was just looking for the backstory and that is why it got
>> backported. Just missed the fact that it should've been an
>> io_cqring_wake() rather than __io_cqring_wake()...

--
Pavel Begunkov
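Jens's point is that the stable backport called the underscored helper,
which assumes its caller has already issued the memory barrier, instead
of the wrapper that supplies it. A paraphrased sketch of that
relationship follows; the helper body is summarized from the thread's
description, not copied verbatim from io_uring/io_uring.h.

/* Sketch, not verbatim kernel code: __io_cqring_wake() only checks the CQ
 * waitqueue for sleepers and wakes them, relying on the caller to have
 * issued a prior smp_mb(); io_cqring_wake() is the wrapper that supplies
 * the barrier itself. Calling the underscored variant without a barrier,
 * as the 6.1.120 backport did, is exactly the bug fixed above. */
static inline void io_cqring_wake(struct io_ring_ctx *ctx)
{
        smp_mb();               /* pairs with the barrier on the waiting side */
        __io_cqring_wake(ctx);  /* check ctx->cq_wait for sleepers and wake them */
}

Whether the barrier comes from this wrapper or from the smp_mb() added at
the call site in Pavel's stable patch, the net effect is the same: a full
barrier between publishing the CQ update and checking for sleepers.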
Bug#1093243: Upgrade to 6.1.123 kernel causes mariadb hangs
On 1/27/25 16:38, Xan Charbonnet wrote:
> The MariaDB developers are wondering whether another corruption bug,
> MDEV-35334 ( https://jira.mariadb.org/browse/MDEV-35334 ), might be
> related. The symptom was described as: the first byte of a .ibd file is
> changed from 0 to 1, or the first 4 bytes are changed from 0 0 0 0 to
> 1 0 0 0. Is it possible that an io_uring issue might be causing that as
> well? Thanks.

The hang bug is just that: waiters not waking up. The completions users
get back should still be correct when they get them, and it's not even
close to code that might corrupt data. I believe someone mentioned
corruption reports from killing the hung task; I'd assume it should
tolerate even sigkills (?). It's much more likely that this is some other
kernel or even io_uring issue, or that the db doesn't handle something
right since the update.

For that other report, did they update the kernel? I don't see a dmesg
log in the report; that could also be useful to have in case some
subsystem complained.

--
Pavel Begunkov