On Fri, Nov 15, 2024 at 15:21, Frantisek Rysanek <frantisek.rysa...@post.cz>
wrote:

> > And, I have an idea: rather than refer to driver=qcow2 and
> > file.filename, how about referring to the loopback device (NBD) that
> > you already have, courtesy of qemu-nbd? Would that perhaps circumvent
> > the file lock? ;-)
> >
> > -blockdev node-name=xy,driver=raw,file.driver=host_device,\
> > file.filename=/dev/loop0,file.locking=off
> >
> > -device virtio-scsi-pci -device scsi-hd,drive=xy
> >
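(For context, a minimal sketch of the full setup this suggests; note
that qemu-nbd attaches to /dev/nbdN devices, so the sketch assumes
/dev/nbd0 and an illustrative mount point rather than the /dev/loop0
of the quote:)

modprobe nbd max_part=8                    # NBD client module; expose partition nodes
qemu-nbd --connect=/dev/nbd0 file.qcow2    # image becomes a host block device
mount /dev/nbd0p1 /mnt/shared              # host-side mount of the first partition
qemu-system-x86_64 \
  -blockdev node-name=xy,driver=raw,file.driver=host_device,file.filename=/dev/nbd0,file.locking=off \
  -device virtio-scsi-pci -device scsi-hd,drive=xy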
>
> I mean: the QEMU device emulation would not run on top of the qcow2
> file directly (and hence the underlying filesystem and its locking
> feature), but would instead share a block device with your host-side
> mount. It would thus also plug directly into any block-level
> buffering going on on the host side.
>
> On the guest, I'm wondering if you should mount the partition with
> -o direct. That should prevent any write-back buffering in the guest,
> though as you say, you won't be writing from the guest anyway.
> On the other hand, if you change the FS on the host side while the
> QEMU guest instance is already running, the guest probably will not
> learn about those changes unless you umount and remount, again with
> "-o direct" (to avoid local read caching in the guest).
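(Illustratively, the umount/remount cycle being described; device node
and mount point are placeholders:)

# inside the guest, after the host side has modified the filesystem
umount /mnt/shared
echo 3 > /proc/sys/vm/drop_caches    # optionally drop clean caches too
mount -o ro /dev/sda1 /mnt/shared    # a fresh mount re-reads from the device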
>
> Even if this crazy stuff works in the end, I'm wondering if it's all
> worth the implied pitfalls :-)
> Apparently you still need to keep stuff in sync in some way...
>
> Frank
>
>
After reading page 17 of
https://vmsplice.net/~stefan/qemu-block-layer-features-and-concepts.pdf,
I'm almost there with:

qemu -snapshot \
-blockdev driver=file,node-name=file-driver,filename=file.qcow2,locking=off \
-blockdev driver=qcow2,node-name=qcow-driver,file=file-driver \
-device ide-hd,drive=qcow-driver \
-hdb file2.qcow2

The difference lies in the fact that it's not `hda` but `hdc`: on the
guest side, this disk shows up second, after the one passed via `-hdb`.
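If the ordering matters, one untested variant would be to pin the disk
to the first IDE slot explicitly instead of relying on auto-assignment
(bus= and unit= are standard ide-hd properties on the default pc
machine):

qemu -snapshot \
  -blockdev driver=file,node-name=file-driver,filename=file.qcow2,locking=off \
  -blockdev driver=qcow2,node-name=qcow-driver,file=file-driver \
  -device ide-hd,drive=qcow-driver,bus=ide.0,unit=0 \
  -hdb file2.qcow2

Addressed at ide.0,unit=0 the disk should come back as the first one,
while -hdb stays at ide.0,unit=1.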
