On Jun 24, 2024 / 01:43, h...@infradead.org wrote:
> On Mon, Jun 24, 2024 at 04:58:29AM +0000, Shinichiro Kawasaki wrote:
> > Based on this guess, I guess a change below may avoid the failure.
> > 
> > Christoph, may I ask you to see if this change avoids the failure you 
> > observe?
> 
> Still fails in exactly the same way with that patch.

Hmm, sorry for wasting your time. My guess was wrong.

I took another look at the test script and the dm-dust code, and now I think the dd
command is expected to succeed. The added bad blocks have the default wr_fail_cnt
value of 0, so the dd command should not see a write error. (Bryan, if this
understanding is wrong, please let me know.)
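
For reference, a sketch of how a write failure *would* be provoked: dm-dust's
addbadblock message takes an optional write failure count, and only a nonzero
count makes writes to that block fail. Block numbers below are illustrative,
and the optional count argument assumes a kernel whose dm-dust supports it:

```
# Default wr_fail_cnt of 0: reads to block 60 fail once enabled,
# but writes to it still succeed (and clear the bad block).
dmsetup message dust1 0 addbadblock 60
# wr_fail_cnt of 3: the next 3 writes to block 72 will fail.
dmsetup message dust1 0 addbadblock 72 3
dmsetup message dust1 0 enable
```

Since dm/002 never passes a count, the writes issued by dd should all succeed.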

So the error log that Christoph observes indicates that the dd command failed,
and that failure is unexpected. I cannot think of a cause for it.

Christoph, may I ask you to share the kernel messages during the test run?
I would also like to check the dd command output. The one-liner patch below
to blktests will create results/vdb/dm/002.full with the dd output.

diff --git a/tests/dm/002 b/tests/dm/002
index 6635c43..ad3b6c0 100755
--- a/tests/dm/002
+++ b/tests/dm/002
@@ -30,7 +30,7 @@ test_device() {
        dmsetup message dust1 0 addbadblock 72
        dmsetup message dust1 0 countbadblocks
        dmsetup message dust1 0 enable
-       dd if=/dev/zero of=/dev/mapper/dust1 bs=512 count=128 oflag=direct >/dev/null 2>&1 || return $?
+       dd if=/dev/zero of=/dev/mapper/dust1 bs=512 count=128 oflag=direct >"$FULL" 2>&1 || return $?
        sync
        dmsetup message dust1 0 countbadblocks
        sync
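
To illustrate why the redirection change is enough to capture the diagnostics:
dd writes its transfer summary and any error message to stderr, so
`>"$FULL" 2>&1` lands both streams in the .full file. A minimal sketch using a
throwaway regular file in place of /dev/mapper/dust1 (oflag=direct dropped,
since plain files on some filesystems do not support it):

```shell
# Capture dd's stdout and stderr in a scratch file, as the patch does.
FULL=$(mktemp)
out=$(mktemp)
dd if=/dev/zero of="$out" bs=512 count=128 >"$FULL" 2>&1 || echo "dd exit status: $?"
# The summary ("128+0 records in", "128+0 records out", ...) is now in $FULL.
cat "$FULL"
rm -f "$out" "$FULL"
```

With this in place, a failing dd in dm/002 would leave its error message in
results/vdb/dm/002.full instead of vanishing into /dev/null.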
