On 02/28/2017 01:33 PM, Vladimir Sementsov-Ogievskiy wrote:
> Currently, backup to an nbd target is broken, as nbd doesn't have a
> .bdrv_get_info realization.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsement...@virtuozzo.com>
> ---
>
> v4: use error_report()
>     add article
>
> v3: fix compilation (I feel like an idiot)
>     adjust wording (Fam)
>
> v2: add WARNING
>
> ===
>
> Since commit
>
> commit 4c9bca7e39a6e07ad02c1dcde3478363344ec60b
> Author: John Snow <js...@redhat.com>
> Date:   Thu Feb 25 15:58:30 2016 -0500
>
>     block/backup: avoid copying less than full target clusters
>
> backup to an nbd target has been broken; we get "Couldn't determine the
> cluster size of the target image".
>
> The proposed NBD protocol extension NBD_OPT_INFO should finally solve
> this problem, but until it is implemented we need to allow backup to an
> nbd target for backward compatibility.
Looks like my patches for NBD_OPT_INFO did not get included in a pull
request for 2.9, and therefore missed soft freeze:
https://lists.gnu.org/archive/html/qemu-devel/2017-02/msg04528.html

In particular, there was confusion on whether the NBD protocol should be
advertising the preferred block size for I/O (as in struct
stat.st_blksize) vs. the optimum size (SCSI documents this as an
optional parameter, but if it is set, transactions larger than the
optimum may take longer than ordinary ones). I'm also not sure where
qcow2's cluster size should fit into this (it behaves more like a block
size, in that anything smaller requires a read-modify-write, and
therefore feels more like what NBD has currently documented as the
preferred size). qemu's BlockLimits structure may need to track both
numbers separately (right now, it appears to only be tracking the SCSI
sense, although that is not documented well), and the NBD protocol
extension proposal may need a tweak to expose more than just
min/preferred/max values.

Even though my NBD patches will miss 2.9, I think yours qualifies as a
bug fix and can therefore be included under the soft freeze rules.

> Furthermore, is it entirely ok to disallow backup if bds lacks
> .bdrv_get_info?
> Which behavior should be default: to fail backup or to use the default
> cluster size?

Avoiding the risk of corrupted data is important.
Reviewed-by: Eric Blake <ebl...@redhat.com>

>  block/backup.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/block/backup.c b/block/backup.c
> index ea38733849..ea160e9e82 100644
> --- a/block/backup.c
> +++ b/block/backup.c
> @@ -24,6 +24,7 @@
>  #include "qemu/cutils.h"
>  #include "sysemu/block-backend.h"
>  #include "qemu/bitmap.h"
> +#include "qemu/error-report.h"
>
>  #define BACKUP_CLUSTER_SIZE_DEFAULT (1 << 16)
>  #define SLICE_TIME 100000000ULL /* ns */
> @@ -638,7 +639,16 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>       * backup cluster size is smaller than the target cluster size. Even for
>       * targets with a backing file, try to avoid COW if possible. */
>      ret = bdrv_get_info(target, &bdi);
> -    if (ret < 0 && !target->backing) {
> +    if (ret == -ENOTSUP) {
> +        /* Cluster size is not defined */
> +        error_report("WARNING: The target block device doesn't provide "
> +                     "information about the block size and it doesn't have a "
> +                     "backing file. The default block size of %u bytes is "
> +                     "used. If the actual block size of the target exceeds "
> +                     "this default, the backup may be unusable",
> +                     BACKUP_CLUSTER_SIZE_DEFAULT);
> +        job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
> +    } else if (ret < 0 && !target->backing) {
>          error_setg_errno(errp, -ret,
>              "Couldn't determine the cluster size of the target image, "
>              "which has no backing file");

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org