[Qemu-discuss] Is locking option available now?

2017-10-24 Thread Han Han
I created a backing file with 'locking=off', but it seems locking is still
enabled:
On qemu-2.10:
# qemu-img create -b 'json:{"file": {"driver": "file", "filename":
"/var/lib/libvirt/images/V.qcow2", "locking": "off"}}'
/var/lib/libvirt/images/a.qcow2 -f qcow2
Formatting '/var/lib/libvirt/images/a.qcow2', fmt=qcow2 size=8589934592
backing_file=json:{"file": {"driver": "file",, "filename":
"/var/lib/libvirt/images/V.qcow2",, "locking": "off"}} cluster_size=65536
lazy_refcounts=off refcount_bits=16

Then checking if locking works:
# qemu-kvm /var/lib/libvirt/images/a.qcow2

VNC server running on ::1:5900

# qemu-img info /var/lib/libvirt/images/a.qcow2
qemu-img: Could not open '/var/lib/libvirt/images/a.qcow2': Failed to get
shared "write" lock
Is another process using the image?

It looks like "locking=off" did not work. So is this option available now? In
which version will this feature be supported?
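
For comparison, here is an untested sketch of runtime alternatives that should
avoid the lock on qemu 2.10, assuming the -U/--force-share flag and the
file.locking open option behave as I expect (neither is persisted in the image
the way the backing-file JSON is):

  # qemu-img info -U /var/lib/libvirt/images/a.qcow2
  # qemu-kvm -drive file=/var/lib/libvirt/images/a.qcow2,format=qcow2,file.locking=off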

-- 
Han Han
Quality Engineer
Redhat.

Email: h...@redhat.com
Phone: +861065339333


Re: [Qemu-discuss] [Qemu-block] Question regarding qemuimg check

2017-10-24 Thread Stefan Hajnoczi
On Mon, Oct 23, 2017 at 03:38:40PM +0300, Ala Hino wrote:
> I have a question regarding qemu-img check. We use qemu-img check in order to
> get the end offset of an image; we need that offset to reduce the size of the
> image to its optimal value.
> 
> In BZ 1502488, we are encountering a use case where a leaked cluster error
> occurs when executing qemu-img check. The root cause is that the qemu-kvm
> process was killed while writing to a VM. In this case, executing qemu-img
> check ends with the leaked cluster error. Below is the error:
> 
> 2017-10-16 10:09:32,950+0530 DEBUG (tasks/0) [root] /usr/bin/taskset
> --cpu-list 0-3 /usr/bin/qemu-img check --output json -f qcow2
> /rhev/data-center/mnt/blockSD/8257cf14-d88d-4e4e-998c-9f8976dac2a2/images/7455de38-1df1-4acd-b07c-9dc2138aafb3/be4a4d85-d7e6-4725-b7f5-90c9d935c336
> (cwd None) (commands:69)
> 2017-10-16 10:09:33,576+0530 ERROR (tasks/0)
> [storage.TaskManager.Task]
> (Task='59404af6-b400-4e08-9691-9a64cdf00374') Unexpected error
> (task:872)
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 879, in _run
> return fn(*args, **kargs)
>   File "/usr/share/vdsm/storage/task.py", line 333, in run
> return self.cmd(*self.argslist, **self.argsdict)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
> line 79, in wrapper
> return method(self, *args, **kwargs)
>   File "/usr/share/vdsm/storage/sp.py", line 1892, in finalizeMerge
> merge.finalize(subchainInfo)
>   File "/usr/share/vdsm/storage/merge.py", line 271, in finalize
> optimal_size = subchain.base_vol.optimal_size()
>   File "/usr/share/vdsm/storage/blockVolume.py", line 440, in optimal_size
> check = qemuimg.check(self.getVolumePath(), qemuimg.FORMAT.QCOW2)
>   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 156, in check
> out = _run_cmd(cmd)
>   File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 416, in 
> _run_cmd
> raise QImgError(cmd, rc, out, err)
> QImgError: cmd=['/usr/bin/qemu-img', 'check', '--output', 'json',
> '-f', 'qcow2', 
> '/rhev/data-center/mnt/blockSD/8257cf14-d88d-4e4e-998c-9f8976dac2a2/images/7455de38-1df1-4acd-b07c-9dc2138aafb3/be4a4d85-d7e6-4725-b7f5-90c9d935c336'],
> ecode=3, stdout={
> "image-end-offset": 7188578304,
> "total-clusters": 180224,
> "check-errors": 0,
> "leaks": 200,
> "leaks-fixed": 0,
> "allocated-clusters": 109461,
> "filename":
> "/rhev/data-center/mnt/blockSD/8257cf14-d88d-4e4e-998c-9f8976dac2a2/images/7455de38-1df1-4acd-b07c-9dc2138aafb3/be4a4d85-d7e6-4725-b7f5-90c9d935c336",
> "format": "qcow2",
> "fragmented-clusters": 16741
> }
> , stderr=Leaked cluster 109202 refcount=1 reference=0
> 
> 
> Based on the error info, "This means waste of disk space, but no harm to
> data", is it OK to handle the error and continue in the flow as usual?

It may be best to fix the image file so that the same leak errors do not
appear again later:

  $ qemu-img check -f qcow2 -r leaks path/to/image.qcow2

> When hitting this behavior, the return code is 3. Are there other use
> cases, in addition to cluster leaks, where 3 is returned as the error code?
> Meaning, can we rely on that return code to determine that it is a leaked
> cluster failure?

3 means leaks only; this is documented in the qemu-img man page:

  In case the image does not have any inconsistencies, check exits with 0.
  Other exit codes indicate the kind of inconsistency found or if another
  error occurred. The following table summarizes all exit codes of the
  check subcommand:

  0   Check completed, the image is (now) consistent

  1   Check not completed because of internal errors

  2   Check completed, image is corrupted

  3   Check completed, image has leaked clusters, but is not corrupted

  63  Checks are not supported by the image format

> If we would like to ignore the cluster leaks, is there a way to call
> qemu-img check (with some parameter, maybe?) that will not raise the error?

Yes, see the qemu-img check repair command-line I posted above.

> Finally, are we doing the right thing to get the image offset in order to
> reduce its size to optimal?
> 
> (If you wonder why we need to reduce the image size, this is because during
> snapshot merge, we extend the image size to accumulate the data of the top
> and the base images.)

Yes, qemu-img check is the way to do this.

You're probably hoping there is another command, like "qemu-img info", that
displays the end-of-image offset, but unfortunately you can only collect this
information by performing a check (it scans the entire image and can
therefore produce the end-of-image offset).
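
For example, a rough sketch of how a caller could collect the offset while
treating leaks as non-fatal (untested; the path is a placeholder):

  $ qemu-img check --output json -f qcow2 /path/to/image.qcow2 > check.json
  $ echo $?   # 0 = clean, 3 = leaked clusters only; the JSON is usable either way
  $ grep image-end-offset check.json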

Stefan



Re: [Qemu-discuss] [Qemu-block] Question regarding qemuimg check

2017-10-24 Thread Ala Hino
Thanks for the detailed reply, Stefan.

Can we always run qemu-img check with the -r leaks option, even if there are
no leaks?

On Tue, Oct 24, 2017 at 3:06 PM, Stefan Hajnoczi  wrote:

> [...]

Re: [Qemu-discuss] from git source build instructions still correct?

2017-10-24 Thread Dennis Luehring

On 23.10.2017 at 11:11, Peter Maydell wrote:

The instructions are right, but it looks like unfortunately our
git server at git.qemu.org is having a problem currently. Hopefully
we'll get that fixed soon...


fixed - thanks




[Qemu-discuss] Allow certain CPU cores for VMs

2017-10-24 Thread Martin Snowman
Hi,

Is it possible to configure QEMU or libvirt to exclude certain cores
from VM deployment?
Let's say we have 24 cores on the hypervisor and I want to allocate the first
four cores to openvswitch and use the next 20 cores for QEMU VMs.

Is this configuration possible? What's the best approach?

Example of current Qemu VM:
/usr/bin/qemu-system-x86_64 -name one-76 -S -machine
pc-i440fx-wily,accel=kvm,usb=off -m 12288 -realtime mlock=off -smp
8,sockets=8,cores=1,threads=1 -uuid e1729682-f965-4810-aa3c-c2b7e5e717cb
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-one-76/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=gluster://:24007//76/disk.0,format=qcow2,if=none,id=drive-virtio-disk0,cache=none
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=//76/disk.1,format=raw,if=none,id=drive-ide0-0-0,readonly=on
-device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev
tap,fd=30,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=02:00:0a:00:65:0b,bus=pci.0,addr=0x3
-netdev tap,fd=44,id=hostnet1 -device
rtl8139,netdev=hostnet1,id=net1,mac=02:00:0a:00:6e:6f,bus=pci.0,addr=0x4
-vnc 0.0.0.0:76 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on

Thanks in advance for any replies.

BR,
Martin


Re: [Qemu-discuss] Allow certain CPU cores for VMs

2017-10-24 Thread Aleksei
libvirt has a CPU pinning feature:
https://libvirt.org/formatdomain.html#elementsCPUAllocation

AFAIK this functionality is not available in QEMU itself, but it can be
accomplished with utilities like taskset.

You might also want to instruct the kernel not to do any work on the
cores you dedicate to VMs - look into the kernel arguments isolcpus,
nohz_full and rcu_nocbs.
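
For your 24-core example, a minimal sketch of the relevant domain XML might
look like this (assuming cores 0-3 stay reserved for openvswitch; the exact
pinning depends on your topology):

<vcpu placement='static' cpuset='4-23'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <!-- one vcpupin per vCPU, all within cores 4-23 -->
  <emulatorpin cpuset='4-23'/>
</cputune>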



On 24/10/17 16:39, Martin Snowman wrote:

[...]


--
Regards, Aleksei


Re: [Qemu-discuss] Coldfire 5282 Support

2017-10-24 Thread William Mahoney

Quick question. On the MCF5282 there is a huge memory-mapped I/O region starting at 
0x4000 and going for 1A. All of the I/O is relative to this starting 
point, so when my callback for an I/O write happens, for example, I get the 
offset into the area. Fine.

In this area is the Fast Ethernet Controller (FEC) at offset 1000. The support 
for the FEC is already done in the hardware for 68K in general, so that’s 
great, only the address is different. Easily solved. But… It essentially 
creates a “hole” in the region at location 1000. The same is true of the 
timers, since that was done for the 5208. 

1) If I define the large region and THEN define the small one (since FEC 
support is already done), will the “more recent” region get the I/O requests 
and I’m good to go? And if so, will my “below the FEC” part and the “above the 
FEC” part still have the correct offsets?

2) If that is not the case, do I define the "below the FEC" part in one I/O space 
and the "above the FEC" part in another I/O space, and then adjust the offsets for 
"above" accordingly?


Or a different way of saying this is “Is there any priority for overlapping 
defined I/O spaces, and if so is it LIFO?”

Thanks!

Bill





Re: [Qemu-discuss] Coldfire 5282 Support

2017-10-24 Thread Peter Maydell
On 24 October 2017 at 21:34, William Mahoney  wrote:
> Quick question. On the MCF5282 there is a huge memory mapped IO starting at 
> 0x4000 and going for 1A. All of the IO is relative to this starting 
> point, so when my call back for an I/O write happens, for example, I get the 
> offset into the area. Fine.

What's actually in this region that wants the offset from the
base of it? Often "all the IO is in this window" designs are
really just "there are lots of different IO devices which
are at different places within this range". That is, is
there actually any behaviour needed for "in the IO range
but not actually a device" ?

> 1) If I define the large region and THEN define the small one (since FEC 
> support is already done), will the “more recent” region get the I/O requests 
> and I’m good to go? And if so, will my “below the FEC” part and the “above 
> the FEC” part still have the correct offsets?

You can do this sort of thing, but you need to define
the region priorities (using memory_region_add_subregion_overlap()
for at least one of them). See docs/devel/memory.txt and
in particular the section on overlapping regions.
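
Roughly like this (an illustrative, untested sketch; the base address, window
size and names below are placeholders, not the real MCF5282 values):

#include "qemu/osdep.h"
#include "exec/memory.h"

/* Map the big catch-all I/O window and layer the existing FEC model on
 * top of it at a higher priority. */
static void mcf5282_map_io(MemoryRegion *sysmem, Object *owner,
                           const MemoryRegionOps *window_ops, void *opaque,
                           MemoryRegion *fec_mmio)
{
    MemoryRegion *window = g_new(MemoryRegion, 1);
    const hwaddr base = 0x40000000;              /* placeholder window base */

    /* Catch-all region for the whole peripheral window, added at the
     * default priority 0; its callbacks see offsets relative to 'base'. */
    memory_region_init_io(window, owner, window_ops, opaque,
                          "mcf5282.io-window", 0x1A000 /* placeholder size */);
    memory_region_add_subregion(sysmem, base, window);

    /* FEC registers overlap the window; priority 1 beats priority 0, so
     * this range is routed to the FEC instead of the catch-all callbacks,
     * and everything around it still reaches the window with the correct
     * offsets. */
    memory_region_add_subregion_overlap(sysmem, base + 0x1000, fec_mmio, 1);
}

(Alternatively the FEC can be added as a subregion of the window region
itself: a region with I/O callbacks can also act as a container, and any
addresses not claimed by a subregion fall through to its own read/write ops.)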

thanks
-- PMM