01.08.2019 20:35, Max Reitz wrote:
> On 01.08.19 19:25, Vladimir Sementsov-Ogievskiy wrote:
>> 01.08.2019 20:06, Max Reitz wrote:
>>> On 01.08.19 18:03, Vladimir Sementsov-Ogievskiy wrote:
>>>> 01.08.2019 18:12, Max Reitz wrote:
>>>>> Perform two guest writes to not yet backed up areas of an image, where
>>>>> the former touches an inner area of the latter.
>>>>>
>>>>> Before HEAD^, copy offloading broke this in two ways:
>>>>> (1) The output differs from the reference output (what the source was
>>>>>     before the guest writes).
>>>>> (2) But you will not see that in the failing output, because the job
>>>>>     offset is reported as being greater than the job length.  This is
>>>>>     because one cluster is copied twice, and thus accounted for twice,
>>>>>     but of course the job length does not increase.
>>>>>
>>>>> Signed-off-by: Max Reitz <mre...@redhat.com>
>>>>> ---
>>>>>  tests/qemu-iotests/056     | 34 ++++++++++++++++++++++++++++++++++
>>>>>  tests/qemu-iotests/056.out |  4 ++--
>>>>>  2 files changed, 36 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/tests/qemu-iotests/056 b/tests/qemu-iotests/056
>>>>> index f40fc11a09..d7198507f5 100755
>>>>> --- a/tests/qemu-iotests/056
>>>>> +++ b/tests/qemu-iotests/056
>>>>> @@ -133,6 +133,7 @@ class BackupTest(iotests.QMPTestCase):
>>>>>          self.vm = iotests.VM()
>>>>>          self.test_img = img_create('test')
>>>>>          self.dest_img = img_create('dest')
>>>>> +        self.ref_img = img_create('ref')
>>>>>          self.vm.add_drive(self.test_img)
>>>>>          self.vm.launch()
>>>>>
>>>>> @@ -140,6 +141,7 @@ class BackupTest(iotests.QMPTestCase):
>>>>>          self.vm.shutdown()
>>>>>          try_remove(self.test_img)
>>>>>          try_remove(self.dest_img)
>>>>> +        try_remove(self.ref_img)
>>>>>
>>>>>      def hmp_io_writes(self, drive, patterns):
>>>>>          for pattern in patterns:
>>>>> @@ -177,6 +179,38 @@ class BackupTest(iotests.QMPTestCase):
>>>>>              self.assert_qmp(event, 'data/error', qerror)
>>>>>          return False
>>>>>
>>>>> +    def test_overlapping_writes(self):
>>>>> +        # Write something to back up
>>>>> +        self.hmp_io_writes('drive0', [('42', '0M', '2M')])
>>>>> +
>>>>> +        # Create a reference backup
>>>>> +        self.qmp_backup_and_wait(device='drive0', format=iotests.imgfmt,
>>>>> +                                 sync='full', target=self.ref_img)
>>>>> +
>>>>> +        # Now to the test backup: We simulate the following guest
>>>>> +        # writes:
>>>>> +        # (1) [1M + 64k, 1M + 128k): Afterwards, everything in that
>>>>> +        #     area should be in the target image, and we must not copy
>>>>> +        #     it again (because the source image has changed now)
>>>>> +        #     (64k is the job's cluster size)
>>>>> +        # (2) [1M, 2M): The backup job must not get overeager.  It
>>>>> +        #     must copy [1M, 1M + 64k) and [1M + 128k, 2M) separately,
>>>>> +        #     but not the area in between.
>>>>> +
>>>>> +        self.qmp_backup(device='drive0', format=iotests.imgfmt,
>>>>> +                        sync='full', target=self.dest_img, speed=1)
>>>>> +
>>>>> +        self.hmp_io_writes('drive0', [('23', '%ik' % (1024 + 64), '64k'),
>>>>> +                                      ('66', '1M', '1M')])
>>>>> +
>>>>> +        # Let the job complete
>>>>> +        res = self.vm.qmp('block-job-set-speed', device='drive0', speed=0)
>>>>> +        self.assert_qmp(res, 'return', {})
>>>>> +        self.qmp_backup_wait('drive0')
>>>>> +
>>>>> +        self.assertTrue(iotests.compare_images(self.ref_img, self.dest_img),
>>>>> +                        'target image does not match reference image')
>>>>> +
>>>>>      def test_dismiss_false(self):
>>>>>          res = self.vm.qmp('query-block-jobs')
>>>>>          self.assert_qmp(res, 'return', [])
>>>>> diff --git a/tests/qemu-iotests/056.out b/tests/qemu-iotests/056.out
>>>>> index dae404e278..36376bed87 100644
>>>>> --- a/tests/qemu-iotests/056.out
>>>>> +++ b/tests/qemu-iotests/056.out
>>>>> @@ -1,5 +1,5 @@
>>>>> -.........
>>>>> +..........
>>>>>  ----------------------------------------------------------------------
>>>>> -Ran 9 tests
>>>>> +Ran 10 tests
>>>>>
>>>>>  OK
>>>>>
>>>>
>>>> Failed for me:
>>>> -..........
>>>> +qemu-img: Could not open '/work/src/qemu/master/tests/qemu-iotests/scratch/dest.qcow2': Failed to get shared "write" lock
>>>> +Is another process using the image [/work/src/qemu/master/tests/qemu-iotests/scratch/dest.qcow2]?
>>>> +......F...
>>>> +======================================================================
>>>> +FAIL: test_overlapping_writes (__main__.BackupTest)
>>>> +----------------------------------------------------------------------
>>>> +Traceback (most recent call last):
>>>> +  File "056", line 212, in test_overlapping_writes
>>>> +    'target image does not match reference image')
>>>> +AssertionError: False is not true : target image does not match reference image
>>>> +
>>>>  ----------------------------------------------------------------------
>>>>  Ran 10 tests
>>>>
>>>> -OK
>>>> +FAILED (failures=1)
>>>
>>> Hm.  I hoped seeing BLOCK_JOB_COMPLETED would be enough.
>>
>> The problem is higher up: "Failed to get shared "write" lock".  Because
>> of this, iotests.compare_images can't work.  So, because of locking, we
>> need to shut down qemu before starting qemu-img.  And it's done this way
>> in test_complete_top() (I just looked at it, as it's the only other user
>> of compare_images in 056).
>
> The destination image shouldn’t be in use by qemu after the block job is
> done, though.  Therefore, there shouldn’t be a lock issue.  That’s what
> I meant.
>
Ah, I understand.  Hmm, then it's strange that I see that error...
Clearly, in job_finalize_single(), job_clean() is called first, which
should call backup_clean() to unref the target, and only then is
job_event_completed() called...

>
>>>> So, with this applied:
>>>>
>>>> @@ -207,6 +207,7 @@ class BackupTest(iotests.QMPTestCase):
>>>>          res = self.vm.qmp('block-job-set-speed', device='drive0', speed=0)
>>>>          self.assert_qmp(res, 'return', {})
>>>>          self.qmp_backup_wait('drive0')
>>>> +        self.vm.shutdown()
>>>>
>>>>          self.assertTrue(iotests.compare_images(self.ref_img, self.dest_img),
>>>>                          'target image does not match reference image')
>>>
>>> I’d personally prefer auto_dismiss=False and then block-job-dismiss.
>>> Although I can’t give a reason why.
>>>
>>>> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
>>>> Tested-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
>>>
>>> In any case, thanks!
>>>
>>> Max
>>>
>>
>>
>
>

-- 
Best regards,
Vladimir
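[Editor's note] The accounting the commit message describes, which the two overlapping guest writes exercise (copy each not-yet-backed-up 64k cluster exactly once, never re-copying the cluster an earlier write already pushed to the target), can be sketched in plain Python. This is an illustration only, not qemu code: the 64k cluster size and the two write ranges come from test_overlapping_writes above, while all names here are made up for the sketch.

```python
# Illustrative sketch only -- not qemu code.  The 64k cluster size and
# the guest-write ranges come from test_overlapping_writes; all names
# are hypothetical.

CLUSTER = 64 * 1024
MiB = 1024 * 1024

def clusters(start, end):
    """Indices of the 64k clusters covering the byte range [start, end)."""
    return set(range(start // CLUSTER, -(-end // CLUSTER)))

job_length = 2 * MiB   # fixed once the backup job starts
copied = set()         # clusters already present in the target
offset = 0             # progress the job reports

def copy_before_write(start, end):
    """Copy (and account for) only clusters not yet in the target."""
    global offset
    for c in sorted(clusters(start, end) - copied):
        copied.add(c)
        offset += CLUSTER

copy_before_write(MiB + 64 * 1024, MiB + 128 * 1024)  # guest write (1)
copy_before_write(MiB, 2 * MiB)                       # guest write (2)

# Cluster 17 (the [1M + 64k, 1M + 128k) area) was copied exactly once,
# so the reported offset can never exceed the job length.
assert offset <= job_length
```

Before HEAD^, write (2) would effectively re-copy the cluster that write (1) had already put into the target, incrementing the offset a second time while the length stayed fixed, which is exactly how the job offset ended up greater than the job length in symptom (2) of the commit message.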