On 29.04.2017 21:14, Eric Blake wrote:
> Use blkdebug's new geometry constraints to emulate setups that
> have needed past regression fixes: write zeroes asserting
> when running through a loopback block device with max-transfer
> smaller than cluster size, and discard rounding away portions
> of requests not aligned to preferred boundaries. Also, add
> coverage that the block layer is honoring max transfer limits.
>
> For now, a single iotest performs all actions, with the idea
> that we can add future blkdebug constraint test cases in the
> same file; but it can be split into multiple iotests if we find
> reason to run one portion of the test in more setups than what
> are possible in the other.
>
> For reference, the final portion of the test (checking whether
> discard passes as much as possible to the lowest layers of the
> stack) works as follows:
>
> qemu-io: discard 30M at 80000001, passed to blkdebug
> blkdebug: discard 511 bytes at 80000001, -ENOTSUP (smaller than blkdebug's 512 align)
> blkdebug: discard 14371328 bytes at 80000512, passed to qcow2
> qcow2: discard 739840 bytes at 80000512, -ENOTSUP (smaller than qcow2's 1M align)
> qcow2: discard 13M bytes at 77M, succeeds
> blkdebug: discard 15M bytes at 90M, passed to qcow2
> qcow2: discard 15M bytes at 90M, succeeds
> blkdebug: discard 1356800 bytes at 105M, passed to qcow2
> qcow2: discard 1M at 105M, succeeds
> qcow2: discard 308224 bytes at 106M, -ENOTSUP (smaller than qcow2's 1M align)
> blkdebug: discard 1 byte at 111457280, -ENOTSUP (smaller than blkdebug's 512 align)
>
> Signed-off-by: Eric Blake <ebl...@redhat.com>
> Reviewed-by: Max Reitz <mre...@redhat.com>
>
> ---
> v11: rebase to context
> v10: no change, rebase to context
> v9: no change
> v7-v8: not submitted (earlier half of series sent for 2.9)
> v6: rebase to master by renumbering s/175/177/
> v5: rebase to master by renumbering s/173/175/
> v4: clean up some comments, nicer backing file creation, more commit message
> v3: make comments tied more to test at hand, rather than the
>     particular hardware that led to the earlier patches being tested
> v2: new patch
> ---
>  tests/qemu-iotests/177     | 114 +++++++++++++++++++++++++++++++++++++++++++++
>  tests/qemu-iotests/177.out |  49 +++++++++++++++++++
>  tests/qemu-iotests/group   |   1 +
>  3 files changed, 164 insertions(+)
>  create mode 100755 tests/qemu-iotests/177
>  create mode 100644 tests/qemu-iotests/177.out
>
> diff --git a/tests/qemu-iotests/177 b/tests/qemu-iotests/177
> new file mode 100755
> index 0000000..e4ddec7
> --- /dev/null
> +++ b/tests/qemu-iotests/177
[...]
> +echo
> +echo "== verify image content =="
> +
> +function verify_io()
> +{
> +    if ($QEMU_IMG info -f "$IMGFMT" "$TEST_IMG" |
> +        grep "compat: 0.10" > /dev/null); then
> +        # For v2 images, discarded clusters are read from the backing file
> +        discarded=11
> +    else
> +        # Discarded clusters are zeroed for v3 or later
> +        discarded=0
> +    fi
> +
> +    echo read -P 22 0 1000
> +    echo read -P 33 1000 128k
> +    echo read -P 22 132072 7871512
> +    echo read -P 0 8003584 2093056
> +    echo read -P 22 10096640 23457792
> +    echo read -P 0 32M 32M
> +    echo read -P 22 64M 13M
> +    echo read -P $discarded 77M 29M
> +    echo read -P 22 106M 22M
> +}
> +
> +verify_io | $QEMU_IO "$TEST_IMG" | _filter_qemu_io

This conflicts with Fam's image locking series that has been introduced in
the meantime (and unfortunately I'm the one who has to base his block queue
on Kevin's...).

I suppose it's because the qemu_io process is launched before the qemu_img
info process. Simply adding an -r to the qemu_io command fixes this,
however. I'll do so in my branch, assuming you're OK with that. :-)

Max
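For reference (a sketch only, not taken from Max's branch), the fix described
above presumably amounts to a one-flag change to the last line of the quoted
hunk:

    -verify_io | $QEMU_IO "$TEST_IMG" | _filter_qemu_io
    +verify_io | $QEMU_IO -r "$TEST_IMG" | _filter_qemu_io

Presumably, with -r, qemu-io opens the image read-only, so under the new
image locking it takes a shared rather than an exclusive lock, and the
$QEMU_IMG info invocation inside verify_io can still open the same image.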
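A side note on the discard trace in the commit message, for anyone checking
the numbers: the fragments blkdebug fails with -ENOTSUP are exactly the head
and tail left over after rounding the 30M request to blkdebug's 512-byte
alignment. A quick sketch of the arithmetic (shell; the variable names are
mine, not part of the test):

    offset=80000001
    length=$((30 * 1024 * 1024))            # 31457280 bytes
    head=$(( (512 - offset % 512) % 512 ))  # 511 bytes -> -ENOTSUP
    tail=$(( (offset + length) % 512 ))     # 1 byte -> -ENOTSUP
    echo $(( length - head - tail ))        # 31456768 bytes passed down to qcow2

qcow2 then applies the same rounding against its 1M alignment to each
fragment it receives, so the clusters it actually discards cover exactly
77M..106M, which is the 29M region that "read -P $discarded 77M 29M"
verifies in the quoted hunk.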