Hi, all
I'm using a qcow2 image stored on an SSD RAID1 (2 x Intel S3500), and I'm
benchmarking the system using fio. Although the throughput in the VM (with
KVM and virtio enabled) is acceptable (67% of the host's throughput), the
IOPS performance is extremely low, only 2% of the host's IOPS.
Creating the image in the host:
qemu-img create -f qcow2 -o preallocation=metadata test.qcow2 100G
Allocating the blocks in VM:
dd if=/dev/zero of=/dev/vdb bs=1M
where vdb is the virtio device backed by the target image.
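For reference, a random-read IOPS test of the kind described above might look
like this (a minimal sketch; the device path, block size, and queue depth are
illustrative, not the exact job used here):

fio --name=randread --filename=/dev/vdb --direct=1 --rw=randread \
    --bs=4k --ioengine=libaio --iodepth=32 --runtime=60

Running the same job against the raw device in the host gives the baseline
that the 2% figure is measured against.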
At 2014-06-23 11:01:20, "Fam Zheng" wrote:
>On Mon, 06/23 10:06, lihuiba wrote:
>> Hi, all
>>
>>
>> I'm using a qcow2
of fio exceeds 8GB, the translation is degraded to reading the table from
disk, and performance goes extremely low.
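If the slowdown is indeed misses in qcow2's metadata cache, enlarging the
cache may help; a hedged sketch (the 16M value is illustrative, and the
l2-cache-size option requires a reasonably recent QEMU):

-drive file=test.qcow2,if=virtio,format=qcow2,cache=none,aio=native,l2-cache-size=16M

With 64k clusters, each megabyte of L2 cache covers roughly 8GB of virtual
disk, which matches the working-set threshold observed above.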
At 2014-06-23 11:22:37, "Fam Zheng" wrote:
>Cc'ing more qcow2 experts.
>
>On Mon, 06/23 11:14, lihuiba wrote:
>> >Did you prefill
Hi, all
I'm a user of qemu/kvm, and I'm wondering about some internals of qemu/kvm,
so I'd better post this to the developers' mailing list.
To be specific, I'm wondering how data is flushed to disk. Intuitively, when
the guest issues a SYNCHRONIZE CACHE command in the SCSI layer, qemu/kvm
should flush the corresponding image file to persistent storage.
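One way to check this is to look at where QEMU handles the SCSI flush opcode;
a sketch, run from a QEMU source tree (file locations vary across versions):

grep -rn "SYNCHRONIZE_CACHE" hw/scsi/

The handler in hw/scsi/scsi-disk.c submits a flush to the block layer on
behalf of the guest.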
Hi, all
bdrv_co_flush() will flush all cached data to persistent storage, and I'm
wondering whether a guest sync() will eventually cause bdrv_co_flush() to be
called.
Intuitively, a guest sync() should trigger bdrv_co_flush() in qemu. But a
simple grep gave me a negative answer. So I'm wondering why
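For what it's worth, the reason a grep for "sync" comes up empty is that a
guest sync() does not arrive in QEMU under that name; with virtio-blk it
shows up as a VIRTIO_BLK_T_FLUSH request. A sketch of tracing the path in a
QEMU source tree (exact files vary across versions):

grep -rn "VIRTIO_BLK_T_FLUSH" hw/block/
grep -rn "bdrv_co_flush" block.c block/

The flush handler submits the request to the block layer, which ends up in
bdrv_co_flush().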
>I guess the only thing that would need to implement something new is
>qcow2_snapshot_goto(), which currently refuses to load a snapshot that
>has a different disk size.
>Once this is done, just removing the check in qcow2_truncate() should be
>okay.
Thanks! I'll see what I can do later.
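For context, the limitation under discussion is easy to reproduce from the
command line; a sketch (the exact error text depends on the QEMU version):

qemu-img create -f qcow2 disk.qcow2 10G
qemu-img snapshot -c snap1 disk.qcow2
qemu-img resize disk.qcow2 20G
qemu-img: Can't resize an image which has snapshots

Removing the check in qcow2_truncate(), as suggested above, is what would
make the last command succeed.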
Hi folks,
Could you enlighten me on how to achieve proportional IO sharing using
cgroups, instead of qemu's io-throttling?
My qemu config is like: -drive
file=$DISKFILE,if=none,format=qcow2,cache=none,aio=native -device
virtio-blk-pci...
Test command inside vm is like: dd if=/dev/vdc of=/de
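A minimal sketch with the cgroup v1 blkio controller (group names, weights,
and PIDs are illustrative; proportional blkio.weight requires the CFQ I/O
scheduler on the host device):

mkdir /sys/fs/cgroup/blkio/vm1 /sys/fs/cgroup/blkio/vm2
echo 800 > /sys/fs/cgroup/blkio/vm1/blkio.weight
echo 200 > /sys/fs/cgroup/blkio/vm2/blkio.weight
echo $QEMU_PID_1 > /sys/fs/cgroup/blkio/vm1/tasks
echo $QEMU_PID_2 > /sys/fs/cgroup/blkio/vm2/tasks

With cache=none and aio=native the IO is submitted directly by the qemu
process, so placing each qemu PID in its own group is enough for the weights
to apply.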
Hi,
In our production environment, we need to extend a qcow2 image with snapshots
in it. This feature, however, is not implemented yet.
So I want to ask: is this feature under active development? How can I help
with it?
It seems that this feature is not too difficult as long
Max,
I'll see what I can do, and give you my plan.
Thanks!
>On 29.12.2015 10:38, lihuiba wrote:
>> Hi,
>>
>> In our production environment, we need to extend a qcow2 image with
>> snapshots in it. This feature, however, is not implemented yet.
>>
At 2016-01-05 21:55:56, "Eric Blake" wrote:
>On 01/05/2016 05:10 AM, lihuiba wrote:
>
>>>> In our production environment, we need to extend a qcow2 image with
>>>> snapshots in it.
>
>>> The thing is that one would need to update all the in