Re: [Qemu-devel] [Qemu-discuss] Qemu snapshot mode

2012-09-23 Thread xuanmao_001
Hi, all
I want to change the path of the temporary snapshot file. Can you give me some 
ideas, or tell me which file in the QEMU source code writes the temporary 
snapshot file? Thanks.




xuanmao_001

From: Dunrong Huang
Date: 2012-09-06 18:00
To: xuanmao_001
CC: qemu-discuss; Jakob Bohm
Subject: Re: [Qemu-discuss] Qemu snapshot mode
2012/9/6 xuanmao_001 :
> Hi, all,
> When I start a VM in snapshot mode (-snapshot) and do some operations, like
> copying or deleting files, I must shut down the VM for the disk state to revert.
> So I want to know whether snapshot mode can revert the disk when I reboot the VM.
>
No, it can't.
When you start QEMU with -snapshot, a temporary snapshot file that is
not visible to the user is created in /tmp; the file exists until QEMU
exits.
So even if you reboot the VM, QEMU is still running and the temporary
snapshot still exists.
That is why the disk state does not revert after a reboot.
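For the original question about changing the path, a small sketch of the lookup QEMU does when picking the temporary file's directory (modeled on `get_tmp_filename()` in block.c; the function name and exact behavior are assumptions, so check your QEMU version's source):

```python
import os

# Sketch (assumption, modeled on get_tmp_filename() in block.c): QEMU picks
# the directory for the -snapshot temporary file from the TMPDIR environment
# variable, falling back to /tmp when it is unset.
def snapshot_tmp_dir():
    return os.environ.get("TMPDIR") or "/tmp"
```

If that holds for your version, exporting TMPDIR before starting QEMU (e.g. `TMPDIR=/var/tmp qemu-system-x86_64 -snapshot ...`) should relocate the file without source changes.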
> Thanks.
> xuanmao_001



-- 
Best Regards,

Dunrong Huang

[Qemu-devel] question about qemu disk cache mode

2013-09-03 Thread xuanmao_001
Dear qemuers:

my qemu-kvm version is 1.0.1
I would like to understand the QEMU disk cache modes. I have read
qemu-options.hx, but there are two caches I don't understand: the host
page cache and the qemu disk write cache.

Is the "host page cache" used only for reads, and the "qemu disk write
cache" used only for writes?

Which cache does the data reach first, the host page cache or the qemu
disk write cache?




xuanmao_001

Re: [Qemu-devel] question about qemu disk cache mode

2013-09-04 Thread xuanmao_001
I understand the physical disk cache and the host page cache.

I want to know the difference between the guest disk write cache and the
host page cache, as described in "Caching modes in QEMU" at
https://events.linuxfoundation.org/slides/2011/linuxcon-japan/lcj2011_hajnoczi.pdf

Please give me some more information, thanks.




xuanmao_001

From: Kevin Wolf
Date: 2013-09-04 16:00
To: xuanmao_001
CC: qemu-devel; qemu-discuss
Subject: Re: question about qemu disk cache mode
On 04.09.2013 at 05:47, xuanmao_001 wrote:
> Dear qemuers:
>  
> my qemu-kvm version is 1.0.1
> I would like to understand the QEMU disk cache modes. I have read
> qemu-options.hx, but there are two caches I don't understand: the host
> page cache and the qemu disk write cache.
>  
> Is the "host page cache" used only for reads, and the "qemu disk write
> cache" used only for writes?
>  
> Which cache does the data reach first, the host page cache or the qemu
> disk write cache?

You're probably misunderstanding the latter, I assume that what you've
read about is the "disk cache", not a "qemu disk cache". This is the
cache on your hardware, the physical hard disk. The host page cache is
the caching that the Linux kernel does for every file (unless it's
bypassed with O_DIRECT, which is exposed as cache=none/directsync in
qemu). None of this is implemented in or specific to qemu.

When you write data, it reaches the page cache in the kernel first (if
it is used at all), and then the cache on the hard disk.
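The two caches Kevin describes, and which cache= modes involve them, can be sketched as a small table (an illustrative sketch based on the qemu-options documentation, not code from the thread; exact semantics vary between QEMU versions):

```python
# Illustrative sketch: how the -drive cache= option maps onto the host page
# cache and the guest-visible volatile write cache. Mode names follow
# qemu-options.hx; treat the exact semantics as version-dependent.
CACHE_MODES = {
    # mode:        (uses host page cache, guest sees volatile write cache)
    "writeback":    (True,  True),
    "none":         (False, True),   # O_DIRECT bypasses the page cache
    "writethrough": (True,  False),  # writes flushed through to the disk
    "directsync":   (False, False),  # O_DIRECT plus writethrough
    "unsafe":       (True,  True),   # like writeback, but flushes ignored
}

def uses_host_page_cache(mode):
    return CACHE_MODES[mode][0]
```

So neither cache is split by read vs. write: a given mode either routes I/O through the host page cache or bypasses it with O_DIRECT, independently of whether the guest sees a write cache.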

Hope this helps.

Kevin

Re: [Qemu-devel] question about qemu disk cache mode

2013-09-04 Thread xuanmao_001
So is the guest disk write cache present only in the none and writeback cache
modes? And is that cache allocated on the host by QEMU for the disk image file?




xuanmao_001

From: Kevin Wolf
Date: 2013-09-04 17:45
To: xuanmao_001
CC: qemu-devel; qemu-discuss
Subject: Re: Re: question about qemu disk cache mode
On 04.09.2013 at 11:07, xuanmao_001 wrote:
> I understand the physical disk cache and the host page cache.
>  
> I want to know the difference between the guest disk write cache and the
> host page cache, as described in "Caching modes in QEMU" at
> https://events.linuxfoundation.org/slides/2011/linuxcon-japan/lcj2011_hajnoczi.pdf
>  
> Please give me some more information, thanks.

It simply describes whether the guest will see a volatile write cache.
This is the case if any writeback cache is involved in the stack, be it
the host kernel page cache or the host disk write cache. Only if the
whole stack uses writethrough caching will the guest see no volatile
write cache.
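The rule above ("any writeback cache in the stack") can be sketched as a one-line predicate (an illustrative sketch, not QEMU code):

```python
# Sketch of the rule: the guest sees a volatile write cache if ANY layer in
# the storage stack (host page cache, physical disk cache, ...) caches
# writes back instead of writing them through.
def guest_sees_volatile_write_cache(layers_are_writeback):
    """layers_are_writeback: iterable of booleans, one per layer,
    True if that layer is a writeback cache."""
    return any(layers_are_writeback)

# Only a fully writethrough stack hides the volatile write cache:
guest_sees_volatile_write_cache([False, False])  # -> False
guest_sees_volatile_write_cache([True, False])   # -> True
```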

Kevin


[Qemu-devel] qemu-kvm-1.0.1 cdrom device with iso hotplug issue

2014-04-14 Thread xuanmao_001
Hi, there:

I found an issue when hot-plugging an ISO image into a CD-ROM device.
1. If I start qemu with an ISO image attached, the ISO can easily be replaced
and it works.
2. But if I start qemu with an empty drive and then load an ISO image with the
qemu monitor command "change", the guest shows the following message:
   'This disc contains a "UDF" file system and requires an operating system
   that supports the ISO-13346 "UDF" file system specification.'
But I can open the same ISO with tools like "UltraISO".

qemu version: qemu-kvm-1.0.1
vm os: winxp sp3
iso image: any

Following is my qemu command line.
With an ISO image:
qemu-system-x86_64 -cpu core2duo -m 1024 -enable-kvm -localtime -nodefaults \
  -drive file=/var/lib/libvirt/images/fwq_test_xp_acpi.img,cache=writeback,if=virtio \
  -drive file=/var/lib/libvirt/images/linux_image/VS2008SP1CHSX1512981.iso,if=none,media=cdrom,id=ide0-1-0 \
  -device ide-cd,bus=ide.1,unit=0,drive=ide0-1-0,id=ide0-1-0 \
  -net none -usbdevice tablet -vnc 0.0.0.0:3 -vga cirrus -monitor stdio

With no ISO image:
qemu-system-x86_64 -cpu core2duo -m 1024 -enable-kvm -localtime -nodefaults \
  -drive file=/var/lib/libvirt/images/fwq_test_xp_acpi.img,cache=writeback,if=virtio \
  -drive if=none,media=cdrom,id=ide0-1-0 \
  -device ide-cd,bus=ide.1,unit=0,drive=ide0-1-0,id=ide0-1-0 \
  -net none -usbdevice tablet -vnc 0.0.0.0:3 -vga cirrus -monitor stdio




xuanmao_001

[Qemu-devel] qemu-1.7.0 vm migration with nbd usage

2014-02-12 Thread xuanmao_001
Hi, is there a document describing VM migration with an NBD server, or any
example of it?

Please give me some ideas, thanks!

[Qemu-devel] qemu-1.5 vm migration usage

2013-10-31 Thread xuanmao_001
Hi, is there a document describing VM migration with NBD?

I can't find one on Google; please help me, thanks!

[Qemu-devel] Qemu with Microsoft VSS

2014-12-16 Thread xuanmao_001
I use the qemu guest agent and freeze the filesystem with VSS, but I don't
know what VSS actually does, or in which scenarios it is used.
Can anyone explain, thanks!




xuanmao_001

[Qemu-devel] savevm too slow

2013-09-05 Thread xuanmao_001
Hi, qemuers:

I found that the guest disk file cache mode affects how long savevm takes.

With cache 'writeback' it is too slow, but with cache 'unsafe' it is as fast
as it can be, less than 10 seconds.

Here is an example using virsh:
@cache with writeback:
#the first snapshot
real    0m21.904s
user    0m0.006s
sys     0m0.008s

#the second snapshot
real    2m11.624s
user    0m0.013s
sys     0m0.008s

@cache with unsafe:
#the first snapshot
real    0m0.730s
user    0m0.006s
sys     0m0.005s

#the second snapshot
real    0m1.296s
user    0m0.002s
sys     0m0.008s

So, what is the difference between them with the different cache modes?

The other question: when I change the buffer size from #define IO_BUF_SIZE
32768 to #define IO_BUF_SIZE (1 * 1024 * 1024), savevm is much quicker.


thanks.



xuanmao_001

Re: [Qemu-devel] savevm too slow

2013-09-08 Thread xuanmao_001
>> the other question: when I change the buffer size #define IO_BUF_SIZE 32768
>> to #define IO_BUF_SIZE (1 * 1024 * 1024), the savevm is more quickly.

> Is this for cache=unsafe as well?

> Juan, any specific reason for using 32k? I think it would be better to
> have a multiple of the qcow2 cluster size, otherwise we get COW for the
> empty part of newly allocated clusters. If we can't make it dynamic,
> using at least fixed 64k to match the qcow2 default would probably
> improve things a bit.

This was with cache=writeback. Is there any risk in using cache=writeback
with IO_BUF_SIZE set to 1 MiB?




xuanmao_001

From: Kevin Wolf
Date: 2013-09-06 18:38
To: xuanmao_001
CC: qemu-discuss; qemu-devel; quintela; stefanha; mreitz
Subject: Re: savevm too slow
On 06.09.2013 at 03:31, xuanmao_001 wrote:
> Hi, qemuers:
>  
> I found that the guest disk file cache mode affects how long savevm takes.
>  
> With cache 'writeback' it is too slow, but with cache 'unsafe' it is as fast
> as it can be, less than 10 seconds.
>  
> Here is an example using virsh:
> @cache with writeback:
> #the first snapshot
> real    0m21.904s
> user    0m0.006s
> sys     0m0.008s
>  
> #the second snapshot
> real    2m11.624s
> user    0m0.013s
> sys     0m0.008s
>  
> @cache with unsafe:
> #the first snapshot
> real    0m0.730s
> user    0m0.006s
> sys     0m0.005s
>  
> #the second snapshot
> real    0m1.296s
> user    0m0.002s
> sys     0m0.008s

I sent patches that should eliminate the difference between the first
and second snapshot at least.

> so, what the difference between them when using different cache.

cache=unsafe ignores any flush requests. It's possible that there is
potential for optimisation with cache=writeback, i.e. it sends flush
requests that aren't actually necessary. This is something that I haven't
checked yet.

> the other question: when I change the buffer size #define IO_BUF_SIZE 32768
> to #define IO_BUF_SIZE (1 * 1024 * 1024), the savevm is more quickly.

Is this for cache=unsafe as well?

Juan, any specific reason for using 32k? I think it would be better to
have a multiple of the qcow2 cluster size, otherwise we get COW for the
empty part of newly allocated clusters. If we can't make it dynamic,
using at least fixed 64k to match the qcow2 default would probably
improve things a bit.
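The cluster-size interaction Kevin describes can be sketched numerically (an illustrative simplification; real qcow2 allocation and COW are more involved):

```python
# Illustrative sketch of the COW effect: when the first write into a newly
# allocated qcow2 cluster is smaller than the cluster, the untouched
# remainder of that cluster must be initialized (copied/zeroed) as well.
QCOW2_DEFAULT_CLUSTER = 64 * 1024  # qcow2 default cluster size

def cow_bytes_per_new_cluster(io_buf_size, cluster=QCOW2_DEFAULT_CLUSTER):
    """Extra bytes touched by COW for one fresh cluster (simplified model)."""
    return max(0, cluster - io_buf_size)

cow_bytes_per_new_cluster(32768)            # 32 KiB buffer: half of each new cluster is COWed
cow_bytes_per_new_cluster(1 * 1024 * 1024)  # 1 MiB buffer: no COW remainder
```

This is why a buffer of at least the 64 KiB cluster size (or a multiple of it, like 1 MiB) avoids the extra work.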

Kevin

Re: [Qemu-devel] [Qemu-discuss] virtio with Windows 8.

2013-09-08 Thread xuanmao_001
This will help you:
http://www.linux-kvm.org/page/Boot_from_virtio_block_device




xuanmao_001

From: Yaodong Yang
Date: 2013-09-09 13:02
To: qemu-devel@nongnu.org; qemu-disc...@nongnu.org
Subject: [Qemu-discuss] virtio with Windows 8.
Hi all,

1. I created a raw image named win8.img using the following command:
/usr/local/kvm/bin/qemu-img create -f raw win8.img 20G

2. I tried to install win8 with the following command, but it failed several times:

sudo /usr/local/kvm/bin/qemu-system-x86_64 -enable-kvm -drive 
file=./win8.img,if=virtio,cache=none -cdrom ./win8.iso -boot d -m 2048

3. I tried the same command with if=ide, and it works.

Could someone tell me the reason for this failure?

I appreciate it very much!

yaodong

Re: [Qemu-devel] savevm too slow

2013-09-09 Thread xuanmao_001
> I sent patches that should eliminate the difference between the first
> and second snapshot at least.

Where can I find the patches that eliminate the difference between the first
and second snapshot? Do they apply to qemu-kvm-1.0.1?




xuanmao_001

From: Kevin Wolf
Date: 2013-09-09 16:35
To: xuanmao_001
CC: qemu-discuss; qemu-devel; quintela; stefanha; mreitz
Subject: Re: Re: savevm too slow
On 09.09.2013 at 03:57, xuanmao_001 wrote:
> >> the other question: when I change the buffer size #define IO_BUF_SIZE 32768
> >> to #define IO_BUF_SIZE (1 * 1024 * 1024), the savevm is more quickly.
>  
> > Is this for cache=unsafe as well?
>  
> > Juan, any specific reason for using 32k? I think it would be better to
> > have a multiple of the qcow2 cluster size, otherwise we get COW for the
> > empty part of newly allocated clusters. If we can't make it dynamic,
> > using at least fixed 64k to match the qcow2 default would probably
> > improve things a bit.
>  
> with cache=writeback.  Is there any risk for setting cache=writeback with
> IO_BUF_SIZE 1M ?

No. Using a larger buffer size should be safe.

Kevin


[Qemu-devel] qemu-img convert will increase the VM image

2013-09-11 Thread xuanmao_001
Hi, all:

I have a question about qemu-img convert.

I have an original image with the following information:
# qemu-img info ori.qcow2   
image: ori.qcow2
file format: qcow2
virtual size: 2.0G (2097152000 bytes)
disk size: 308M
cluster_size: 65536

When I run "qemu-img convert -f qcow2 -O qcow2 ori.qcow2 new.qcow2",
the new.qcow2 looks like this:
# qemu-img info new.qcow2 
image: new.qcow2
file format: qcow2
virtual size: 2.0G (2097152000 bytes)
disk size: 748M
cluster_size: 65536

So, my question is: why did the image size increase after this operation?

thanks.




xuanmao_001

Re: [Qemu-devel] qemu-img convert will increase the VM image

2013-09-11 Thread xuanmao_001
] 1280/ 4016256 sectors allocated at offset 39 MiB (1)
 24 [41484288]   384/ 4014976 sectors not allocated at offset 40 MiB (0)
 25 [41680896]   128/ 4014592 sectors allocated at offset 40 MiB (1)
 26 [41746432]  7040/ 4014464 sectors not allocated at offset 40 MiB (0)
 27 [45350912]   640/ 4007424 sectors allocated at offset 43 MiB (1)
 28 [45678592]  7040/ 4006784 sectors not allocated at offset 44 MiB (0)
 29 [49283072]   128/ 3999744 sectors allocated at offset 47 MiB (1)
 30 [49348608]   384/ 3999616 sectors not allocated at offset 47 MiB (0)

They are different.
What causes this difference?



xuanmao_001

From: Kevin Wolf
Date: 2013-09-11 16:28
To: xuanmao_001
CC: qemu-devel; qemu-discuss
Subject: Re: qemu-img convert will increase the VM image
On 11.09.2013 at 09:14, xuanmao_001 wrote:
> Hi, all:
>  
> I have question about qemu-img convert ...
>  
> I have an original image with the following information:
> # qemu-img info ori.qcow2   
> image: ori.qcow2
> file format: qcow2
> virtual size: 2.0G (2097152000 bytes)
> disk size: 308M
> cluster_size: 65536
>  
> when I executed with "qemu-img convert -f qcow2 -O qcow2 ori.qcow2 new.qcow2"
> the new.qcow2 like following:
> # qemu-img info new.qcow2 
> image: new.qcow2
> file format: qcow2
> virtual size: 2.0G (2097152000 bytes)
> disk size: 748M
> cluster_size: 65536
>  
> so, my question is why the image size increased after my operation?

You can try comparing the output of the qemu-io 'map' command for both
images.
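To turn the two map listings into a single comparable number, the allocated extents can be summed with a small script (a sketch; the regex assumes the "sectors allocated" line format shown in this era's qemu-io output, which may differ in other versions):

```python
import re

# Sketch: total allocated sectors from `qemu-io -c map image.qcow2` output,
# with lines like " 25 [41680896]  128/ 4014592 sectors allocated at ...".
def allocated_sectors(map_output):
    total = 0
    for line in map_output.splitlines():
        m = re.search(r"(\d+)/\s*\d+ sectors (not )?allocated", line)
        if m and m.group(2) is None:  # skip "not allocated" extents
            total += int(m.group(1))
    return total
```

Running this over the map output of ori.qcow2 and new.qcow2 shows directly how much more of the converted image is allocated.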

Kevin