Do I need to use external snapshots with qmp blockdev-snapshot-sync?
(It seems more complex to delete old snapshots.)
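(For reference, a minimal sketch of what I mean by an external snapshot; the device name
and the snapshot file path are only placeholders:)
{ "execute": "blockdev-snapshot-sync",
  "arguments": { "device": "drive-virtio0",
                 "snapshot-file": "/path/to/snap1.qcow2",
                 "format": "qcow2" } }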
Regards,
Alexandre Derumier.
p1 image1)
ls /file1
the behaviour is completely different. Did I miss something?
- Mail original -
De: "Stefan Hajnoczi"
À: "Alexandre DERUMIER"
Cc: "Jeff Cody" , "qemu-devel"
Envoyé: Samedi 25 Août 2012 16:01:45
Objet: Re: [Qemu-devel] qcow2: onli
will be internal! (currently unsupported)."
are live internal snapshots on the roadmap?
Thanks Again,
Alexandre Derumier
- Mail original -
De: "Stefan Hajnoczi"
À: "Alexandre DERUMIER"
Cc: "Jeff Cody" , "qemu-devel"
Envoyé: Dimanche 26 A
rimitives.
Thanks, I'll look at libvirt to see how they do things.
- Mail original -
De: "Stefan Hajnoczi"
À: "Alexandre DERUMIER"
Cc: "Jeff Cody" , "qemu-devel" ,
"Paolo Bonzini" , "Eric Blake"
Envoyé: Lundi 27 Août 2012 1
Ok, got it,
Thanks Paolo !
- Mail original -
De: "Paolo Bonzini"
À: qemu-devel@nongnu.org
Envoyé: Lundi 27 Août 2012 12:10:34
Objet: Re: [Qemu-devel] qcow2: online snasphots : internal vs external ?
Il 27/08/2012 11:26, Alexandre DERUMIER ha scritto:
> how can I rollback
works fine
any idea ?
Regards,
Alexandre Derumier
ace to add
>>disk-only snapshots since we already have
>>qmp-transaction/snapshot-blkdev-sync for that.
But indeed, qmp-transaction/snapshot-blkdev-sync seems to be the better place.
- Mail original -
De: "Stefan Hajnoczi"
À: "Kevin Wolf"
Cc: "Alexand
Hi, I have observed the same behaviour with VMs with a lot of memory transfer, or
when playing video in the guest.
https://lists.gnu.org/archive/html/qemu-devel/2012-09/msg00138.html
You can try to tune the xbzrle cache size; maybe it'll improve speed.
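(A rough sketch of the tuning I mean, from the HMP monitor; the 512M value is only an
illustration, not a recommendation:)
migrate_set_capability xbzrle on
migrate_set_cache_size 512M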
- Mail original -
De: "Vasilis Liaskovitis
return ret;
}
2) Or add a fallback in qemu-img: if bdrv_create doesn't exist, use bdrv_open to
see if the backend device is pre-existing?
Regards,
Alexandre Derumier
Thanks guys !
I'll try to send a patch next week.
Regards,
Alexandre
- Mail original -
De: "ronnie sahlberg"
À: "Paolo Bonzini"
Cc: "Kevin Wolf" , "Alexandre DERUMIER"
, "qemu-devel"
Envoyé: Jeudi 25 Octobre 2012 16:00:4
Hi,
I'm trying to use qmp "query-balloon", to get stats,
From the docs, I expect to have:
-> { "execute": "query-balloon" }
<- {
      "return": {
         "actual": 1073741824,
         "mem_swapped_in": 0,
         "mem_swapped_out": 0,
         "major_page_faults": 142,
         "minor_page_faults": 239,
1.4 :)
I'll send a patch fixing the old doc shortly.
Thanks for your response !
- Mail original -----
De: "Luiz Capitulino"
À: "Alexandre DERUMIER"
Cc: "qemu-devel"
Envoyé: Jeudi 6 Décembre 2012 13:34:23
Objet: Re: [Qemu-devel] qmp query-balloon
ds/12157-Win2003R2-in-KVM-VM-is-slow-in-PVE-2-2-when-multiply-CPU-cores-allowed
I'll try to redo the test myself this week.
Regards,
Alexandre
- Mail original -
De: "Peter Lieven"
À: "Alexandre DERUMIER"
Cc: "Dietmar Maurer" , "Stefan Hajnoczi&q
Using qemu-ga is easy for a Linux guest (we can read /proc/meminfo), but with a Windows
guest it is currently impossible.
(We need to query a WMI counter, and executing a process is not possible with qga.)
Adding the new WMI counter ("System Cache Resident Bytes") is quite easy in the
Windows balloon driver.
Hi list,
I get a BSOD when booting windows 2003 SP2 x64 with hpet enabled, with qemu
1.3 (screenshot attached).
/usr/bin/kvm -id 9 -chardev
socket,id=qmp,path=/var/run/qemu-server/9.qmp,server,nowait -mon
chardev=qmp,mode=control -vnc unix:/var/run/qemu-server/9.vnc,x509,password
My BSOD screenshot was not complete; here is a new one:
it seems to hang in acpi.sys
ACPI.SYS address F735EE64 base at F7352000)
I have tested with host kernel 3.2,26,27 and 2.6.32 from rhel6, same problem.
- Mail original -
De: "Alexandre DERUMIER"
À: "qemu-devel&quo
It seems to be related to the seabios update:
seabios: q35 update
http://lists.gnu.org/archive/html/qemu-devel/2012-12/msg00113.html
- Mail original -
De: "Alexandre DERUMIER"
À: "qemu-devel"
Envoyé: Mardi 11 Décembre 2012 07:30:58
Objet: Re: [Qemu-devel] qemu 1.3: windo
Thanks Gerd,
This also fixes my win2003 R2 SP2 x64 acpi BSOD with hpet enabled.
Regards,
Alexandre
- Mail original -
De: "Gerd Hoffmann"
À: qemu-devel@nongnu.org
Cc: "Gerd Hoffmann" , qemu-sta...@nongnu.org
Envoyé: Mardi 11 Décembre 2012 08:34:12
Objet: [Qemu-devel] [PATCH 1/1] seabios:
Hi,
this wiki page talks about cpu hotplug for qemu 1.3:
http://wiki.qemu.org/Features/CPUHotplug
Is it planned for qemu 1.4, or a later release?
Regards,
Alexandre
I would like to have a look at it. (I would like to prepare work for the next
proxmox release.)
----- Mail original -
De: "Igor Mammedov"
À: "Alexandre DERUMIER"
Cc: "qemu-devel"
Envoyé: Mercredi 12 Décembre 2012 21:13:06
Objet: Re: [Qemu-devel] cpu hotplug roadmap
Hi list,
I'm trying to pass a hyper-v feature with -cpu qemu64,+hv_relaxed (qemu 1.3),
but I get:
"CPU feature hv_relaxed not found"
Does this require a specific host kernel or host cpu feature support?
Regards,
Alexandre Derumier
Hi, I have had the same problem with stable qemu 1.3.
It was an acpi problem with seabios.
This commit fixes it:
http://git.qemu.org/?p=qemu.git;a=commit;h=ff1562908d1da12362aa9e3f3bfc7ba0da8114a4
- Mail original -
De: "楼正伟"
À: lazy...@126.com
Cc: qemu-devel@nongnu.org
Envoyé: Lundi 2
VM status: paused (internal-error)
(downtime is around 700ms)
I can reproduce it 100%
Regards,
Alexandre Derumier
- Mail original -
De: "Juan Quintela"
À: qemu-devel@nongnu.org
Cc: anth...@codemonkey.ws
Envoyé: Vendredi 21 Décembre 2012 20:41:03
Objet: [Qemu-devel] [PULL 00/
e console (stdout)? See
>>kvm_handle_internal_error in kvm-all.c for what to expect.
I'll have a look at this.
Currently I start the target vm with --daemonize; do I need to remove this
option to see stdout?
- Mail original -
De: "Paolo Bonzini"
À: "Alexandre DERUMIER&quo
OK, I'll try to bisect it tomorrow and will do more tests.
I'll keep you in touch!
Alexandre
- Mail original -
De: "Paolo Bonzini"
À: "Alexandre DERUMIER"
Cc: qemu-devel@nongnu.org, anth...@codemonkey.ws, "Juan Quintela"
Envoyé: Jeudi 27
Hi list,
After discussing with Stefan yesterday, here is some more info
(this is for stable qemu 1.3; it was working fine with qemu 1.2):
The problem seems to be that when setting migrate_set_downtime to 1 sec,
the transfer of the vm seems to send all the memory of the vm in one step, and
not incrementally.
So
ultifunction = "off"
2) usb-tablet doesn't work on ehci with ubuntu
--
- it doesn't work (mouse not moving) with or without the companion controller
- the device is correctly displayed with lsusb, so maybe it's an xorg driver
problem.
- It works fine on a windows guest.
Both bugs result in not having a working mouse in the ubuntu installer
(vmmouse is not available, and usb-tablet is selected but it's not working).
Any idea ?
Regards,
Alexandre Derumier
I have tested with only usb1, ubuntu 12.10
-usb -device usb-tablet
The vmmouse is not loaded, but the usb-tablet works
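(For comparison, the ehci setup I'm testing looks roughly like this; the ids are only
examples:)
-device usb-ehci,id=ehci
-device usb-tablet,bus=ehci.0,port=1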
- Mail original -
De: "Laszlo Ersek"
À: "tiziano mueller"
Cc: "qemu-devel" , "Alexandre DERUMIER"
Envoyé: Mercredi 2
blet
Maybe somebody knows how/when the vmmouse is loaded?
- Mail original -
De: "Tiziano Müller"
À: "Alexandre DERUMIER"
Cc: "qemu-devel"
Envoyé: Mercredi 20 Février 2013 14:12:59
Objet: Re: [Qemu-devel] qemu 1.4 : ubuntu 12.10 : ehci + companion + usb-table
, centos 6.3, ... But it
works fine on windows. So is it a linux guest bug?
- does the companion controller change the default selected mouse???
----- Mail original -
De: "Alexandre DERUMIER"
À: "tiziano mueller"
Cc: "qemu-devel"
Envoyé: Mercredi 20 Février 2013 14:
h the new usb-tablet for handling
ehci ?
http://git.qemu.org/?p=qemu.git;a=commit;h=427e3aa151c749225364d0c30640e2e3c1756d9d
- Mail original -
De: "Alexandre DERUMIER"
À: "tiziano mueller"
Cc: "qemu-devel" , pve-de...@pve.proxmox.com
Envoyé: Jeudi 21 Fé
problem. Maybe it is related to the Xorg version?
- Mail original -----
De: "Alexandre DERUMIER"
À: "qemu-devel"
Cc: hdego...@redhat.com, pve-de...@pve.proxmox.com
Envoyé: Jeudi 21 Février 2013 16:50:54
Objet: Re: [pve-devel] [Qemu-devel] qemu 1.4 : ubuntu 12.10 : ehci + compa
works fine here with debian squeeze + debian wheezy guests.
- Mail original -
De: "Jason Baron"
À: kw...@redhat.com, afaer...@suse.de, ag...@suse.de
Cc: qemu-devel@nongnu.org, yamah...@valinux.co.jp, "alex williamson"
, aligu...@us.ibm.com, "jan kiszka"
Envoyé: Jeudi 30 Août 2012 20:
Hi,
I'm trying to implement xbzrle live migration,
But I'm getting a migration that never finishes with high memory change rates in the guest
(simply playing a youtube video in the guest).
At the end of the migration, the remaining memory to transfer goes up and down.
I have tried to dynamically add memory to ca
Hi,
I'm trying to boot a scsi-block device with the lsi controller, and it doesn't boot
(it doesn't find the devices).
lsi + scsi-block : doesn't boot
lsi + scsi-hd : boots
virtio-scsi + scsi-block : boots
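(Roughly the configurations I'm comparing; the ids and the host device path are only
placeholders:)
# doesn't boot:
-device lsi53c895a,id=scsihw0 -drive file=/dev/sdb,if=none,id=drive-scsi0 -device scsi-block,bus=scsihw0.0,drive=drive-scsi0
# boots:
-device lsi53c895a,id=scsihw0 -drive file=/dev/sdb,if=none,id=drive-scsi0 -device scsi-hd,bus=scsihw0.0,drive=drive-scsi0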
Regards,
Alexandre Derumier
Thanks,
But why does it work with lsi + scsi-hd and not scsi-block?
For now I'll use scsi-hd for these (very old) guests, it's not a problem.
- Mail original -
De: "Paolo Bonzini"
À: "Alexandre DERUMIER"
Cc: qemu-devel@nongnu.org
Envoyé: Vendredi 7 Sep
I can also reproduce it. (Host cpu intel or amd, Guest cpu qemu64/kvm64/host).
There is also a bug report on the freebsd mailing list here:
http://lists.freebsd.org/pipermail/freebsd-amd64/2013-January/015092.html
- Mail original -
De: "Dietmar Maurer"
À: qemu-devel@nongnu.org
Envoyé: Merc
I don't think it's fixed in 1.3 or 1.4; some proxmox users have reported this bug
again with guest kernel 2.6.32. (The proxmox host is a rhel 6.3 kernel + qemu 1.4.)
- Mail original -
De: "Davide Guerri"
À: "Alexandre DERUMIER"
Cc: "Peter Lieven" ,
-
De: "Michael S. Tsirkin"
À: "Peter Lieven"
Cc: "Davide Guerri" , "Alexandre DERUMIER"
, "Stefan Hajnoczi" ,
qemu-devel@nongnu.org, "Jan Kiszka" , "Peter Lieven"
, "Dietmar Maurer"
Envoyé: Dimanche 17 Mars 2013 10:0
Hi,
the x-data-plane syntax is deprecated (it should be removed in qemu 2.2);
it now uses iothreads:
http://comments.gmane.org/gmane.comp.emulators.qemu/279118
qemu -object iothread,id=iothread0 \
-drive if=none,id=drive0,file=test.qcow2,format=qcow2 \
-device virtio-blk-pci,iothread=iothread0,drive=drive0
Hi Paolo,
do you think it'll be possible to use block jobs with dataplane?
Or is it technically impossible ?
- Mail original -
De: "Paolo Bonzini"
À: "Alexandre DERUMIER" , "Scott Sullivan"
Cc: qemu-devel@nongnu.org
Envoyé: Vendredi 3 Octobre
Ok, Great :)
Thanks !
- Mail original -
De: "Paolo Bonzini"
À: "Alexandre DERUMIER"
Cc: qemu-devel@nongnu.org, "Scott Sullivan"
Envoyé: Vendredi 3 Octobre 2014 17:33:00
Objet: Re: is x-data-plane considered "stable" ?
Il 03/10/2014 16:26, A
Hi, I can't use virtio-serial with a q35 machine on a pci bridge (other devices
work fine).
Is it a known bug ?
error message:
---
kvm: -device virtio-serial,id=spice,bus=pci.0,addr=0x9: Bus 'pci.0' not found
architecture is:
pcie.0
--->pcidmi (i82801b11-bridge)
---
idge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on
-device
virtio-net-pci,romfile=,mac=82:EC:EA:1E:E8:90,netdev=net0,bus=pci.0,addr=0x12,id=net0
-rtc driftfix=slew,base=localtime
-global kvm-pit.lost_tick_policy=discard
- Mail original -----
De: "Alexandre DERUMIER"
À:
device.
Sorry to disturb the mailing list about this.
Thanks !
Alexandre
----- Mail original -
De: "Gonglei"
À: "Alexandre DERUMIER"
Cc: "qemu-devel"
Envoyé: Mardi 12 Août 2014 14:09:26
Objet: Re: [Qemu-devel] q35 : virtio-serial on pci bridge : bus no
, the rbd volume is
sparse after conversion.
Could it be related to the missing "bdrv_co_write_zeroes" feature in
block/rbd.c?
(It's available in other block drivers (scsi, gluster, raw-aio), and I don't
have this problem with those block drivers.)
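(The conversion command I'm using is roughly like this; the pool and image names are
only placeholders:)
qemu-img convert -f qcow2 -O raw source.qcow2 rbd:rbd/vm-100-disk-1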
Regards,
Alexandre Derumier
Hi,
It seems that the drive-mirror block job removes the detect-zeroes drive property on
the target drive.
qemu
-device virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5
-drive
file=/source.raw,if=none,id=drive-scsi2,cache=writeback,discard=on,aio=native,detect-zeroes=unmap
# info block
d
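(The mirror job I'm starting looks roughly like this; the target path is only a
placeholder:)
{ "execute": "drive-mirror",
  "arguments": { "device": "drive-scsi2", "target": "/target.raw",
                 "sync": "full", "format": "raw" } }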
and top ?
""sync": what parts of the disk image should be copied to the destination;
possibilities include "full" for all the disk, "top" for only the sectors
allocated in the topmost image".
(What is the topmost image?)
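(If I understand correctly, "topmost" would be the active image at the end of the
backing chain, e.g. base.qcow2 <- snap1.qcow2 <- active.qcow2, where active.qcow2 is
the topmost image; the file names are only an illustration.)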
- Mail original -
De: &quo
Currently, after drive-mirror, I do an fstrim inside the guest
(with virtio-scsi + discard),
and this way I can free space on the rbd storage.
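(Inside the guest, roughly; the mount point is only an example:)
fstrim -v /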
- Mail original -
De: "Andrey Korolyov"
À: "Fam Zheng"
Cc: "Alexandre DERUMIER" , "qemu-devel"
, "Ce
2s to convert the empty file
(because it's skipping zero blocks), and drive-mirror takes around 5 min.
- Mail original -
De: "Fam Zheng"
À: "Alexandre DERUMIER"
Cc: "qemu-devel" , "Ceph Devel"
Envoyé: Samedi 11 Octobre 2014 10:25:35
Objet: Re:
1: /source.qcow2 (qcow2)
Detect zeroes:on
#du -sh source.qcow2 : 2M
drive-mirror source.qcow2 -> target.qcow2
# info block
drive-virtio1: /target.qcow2 (qcow2)
#du -sh target.qcow2 : 11G
- Mail original -
De: "Paolo Bonzini"
À: "Alexandre DERUMIER&quo
>>Ah, you're right. We need to add an options field, or use a new
>>blockdev-mirror command.
OK, thanks. I can't help to implement this, but I'll be glad to help with testing.
- Mail original -
De: "Paolo Bonzini"
À: "Alexandre DERUMIER"
Cc:
Hi,
I was reading this interesting presentation,
http://vmsplice.net/~stefan/stefanha-kvm-forum-2014.pdf
and I have a specific question.
I'm currently evaluating ceph/rbd storage performance through qemu,
and the current bottleneck seems to be the cpu usage of the iothread.
(rbd protocol cpu
qemu rbd block driver.
- Mail original -
De: "Stefan Hajnoczi"
À: "Alexandre DERUMIER"
Cc: "qemu-devel" , "josh durgin"
Envoyé: Vendredi 24 Octobre 2014 11:04:06
Objet: Re: is it possible to use a disk with multiple iothreads ?
On Thu, Oct 23
>>You are missing debug information unfortunately,
OK thanks, I'll try to add the qemu debug symbols.
(I already have the libc6, librbd and librados debug symbols installed.)
- Mail original -
De: "Paolo Bonzini"
À: "Alexandre DERUMIER" , "Stefan Hajnocz
e same behavior if qemu is started with pc-dimm devices)
qemu 2.1
Guest kernel : 3.12.
Does it need a guest balloon module update ?
Regards,
Alexandre Derumier
d = "Westmere E56xx/L56xx/X56xx (Nehalem-C)"
That also doesn't work without changing level to < 10.
A user also reports that "-cpu host" was working fine with qemu 0.15.
I see that other intel cpudefs in target-x86_64.conf have level=2; maybe it
needs to be the same for westmere?
Best Regards,
Alexandre Derumier
d = "Westmere E56xx/L56xx/X56xx (Nehalem-C)"
That also doesn't without change level < 10.
User also report that "-cpu host" was working fine with qemu 0.15.
I see that other intel cpudefs in target-x86_64.conf have level=2, maybe does
it need to be the same for westmere ?
Best Regards,
Alexandre Derumier
ot;i64 syscall xd"
extfeature_ecx = "lahf_lm"
xlevel = "0x800A"
model_id = "Westmere E56xx/L56xx/X56xx (Nehalem-C)"
That also doesn't work without changing level to < 10.
A user also reports that "-cpu host" was working fine with qemu 0.15.
I see
I forgot the command line:
/usr/bin/kvm -chardev socket,id=monitor,path=/var/run/qemu-
server/10345.mon,server,nowait -mon chardev=monitor,mode=readline -vnc
unix:/var/run/qemu-server/10345.vnc,x509,password -pidfile /var/run
/qemu-server/10345.pid -daemonize -usb -device usb-
tablet,bus=usb.0,port=1
Public bug reported:
Hi,
I'm working on the proxmox 2 distribution;
we are using qemu-kvm-git.
Users report kvm segfaults with some guest systems,
after live migration with a usb tablet.
original thread:
http://forum.proxmox.com/threads/8455-2-0RC1-Live-Migration-VM-Crashes-after-a-few-minutes
Guests
Fixed by
http://git.qemu.org/?p=qemu.git;a=commit;h=5ca2358ac895139e624881c5b3bf3095d3cc4515
usb-desc: fix user trigerrable segfaults (!config)
** Changed in: qemu
Status: New => Fix Released
Hello,
I can confirm the problem too, (opteron 63XX -> opteron 61XX)
qemu 1.7.1 (qemu64 or kvm64 vcpu) , host kernel 2.6.32 (rhel6.5)
I can reproduce it 100%
- Mail original -
De: "Markus Kovero"
À: qemu-devel@nongnu.org
Envoyé: Lundi 27 Janvier 2014 15:20:19
Objet: Re: [Qemu-devel] l
Hello,
I know that qemu live migration with a disk with cache=writeback is not safe
with storage like nfs, iscsi...
Is it also true with rbd?
If yes, is it possible to manually disable writeback online with qmp?
Best Regards,
Alexandre
I have never seen this, sorry...
- Mail original -
De: "Stefan Priebe - Profihost AG"
À: pve-de...@pve.proxmox.com, "qemu-devel"
Envoyé: Vendredi 10 Mai 2013 08:12:39
Objet: [pve-devel] kvm process disappears
Hello list,
i've now seen this several times. A VM is suddently down no segfault
Just an idea: maybe you are out of memory and the process gets killed?
Nothing in the logs?
- Mail original -
De: "Stefan Priebe - Profihost AG"
À: pve-de...@pve.proxmox.com, "qemu-devel"
Envoyé: Vendredi 10 Mai 2013 08:12:39
Objet: [pve-devel] kvm process disappears
Hello list,
i've now se
>>140GB free mem also nothing in dmesg... which logs did you mean?
I was thinking of /var/log/messages, logs from the OOM killer. But that seems not to
be your case ;)
Do you use HA?
- Mail original -
De: "Stefan Priebe - Profihost AG"
À: "Alexandre DERUMIER"
Cc:
Do you force rbd_cache=true in ceph.conf?
If yes, do you use cache=writeback?
According to the ceph doc:
http://ceph.com/docs/next/rbd/qemu-rbd/
"Important If you set rbd_cache=true, you must set cache=writeback or risk data
loss. Without cache=writeback, QEMU will not send flush requests to librb
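(Concretely, the combination the doc describes would look something like this; the pool
and image names are only placeholders:)
# ceph.conf
[client]
rbd cache = true
# qemu drive option
-drive file=rbd:rbd/vm-100-disk-1,if=none,id=drive-virtio0,cache=writeback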
quot;Stefan Priebe - Profihost AG"
À: "Alexandre DERUMIER"
Cc: pve-de...@pve.proxmox.com, "qemu-devel"
Envoyé: Jeudi 6 Février 2014 12:19:36
Objet: Re: [pve-devel] QEMU LIve Migration - swap_free: Bad swap file entry
Am 06.02.2014 12:14, schrieb Alexandre DERUMIER:
> D
do you use xbzrle for live migration ?
- Mail original -
De: "Stefan Priebe"
À: "Dr. David Alan Gilbert"
Cc: "Alexandre DERUMIER" , "qemu-devel"
Envoyé: Jeudi 6 Février 2014 21:00:27
Objet: Re: [Qemu-devel] [pve-devel] QEMU LIve Migration -
known if it's a qemu bug or freebsd bug ?
Regards,
Alexandre Derumier
I have done tests on an intel host; it boots fine (kvm64 cpu).
I also tested freebsd 9.2; it also hangs on the amd host.
- Mail original -----
De: "Alexandre DERUMIER"
À: "qemu-devel"
Envoyé: Vendredi 15 Novembre 2013 10:59:43
Objet: [Qemu-devel] freebs
xist, though it would be possible to
>>implement (for toggling cache.direct, that is; cache.writeback is guest
>>visible and can therefore only be toggled by the guest)
Yes, that's what I have in mind: toggling cache.direct=on before migration,
then disabling it after the migration.
ail original -
De: "Josh Durgin"
À: "Alexandre DERUMIER" , "Kevin Wolf"
Cc: ceph-us...@lists.ceph.com, "qemu-devel"
Envoyé: Samedi 19 Avril 2014 00:33:12
Objet: Re: [ceph-users] [Qemu-devel] qemu + rbd block driver with
cache=writeback, is live migration safe
al -
De: "Kevin Wolf"
À: "Josh Durgin"
Cc: "Alexandre DERUMIER" , ceph-us...@lists.ceph.com,
"qemu-devel"
Envoyé: Mardi 22 Avril 2014 11:08:08
Objet: Re: [Qemu-devel] [ceph-users] qemu + rbd block driver with
cache=writeback, is live migration safe ?
Hi Stefan,
only for writes? Or also reads?
I'll try to reproduce it on my test cluster.
- Mail original -
De: "Stefan Priebe"
À: "qemu-devel"
Envoyé: Dimanche 15 Février 2015 19:46:12
Objet: [Qemu-devel] slow speed for virtio-scsi since qemu 2.2
Hi,
while i get a constant random 4k i/o
The only way unplug works for me is to start with -smp 2 minimum:
-smp 2,sockets=2,cores=1,maxcpus=4
Then I can hotplug/unplug cpu ids >= 2.
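(For reference, the hotplug step itself looks roughly like this in this qemu version;
the id value is only an example:)
HMP: cpu-add 2
QMP: { "execute": "cpu-add", "arguments": { "id": 2 } }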
Regards,
Alexandre Derumier
- Mail original -
De: "Zhu Guihua"
À: "qemu-devel"
Cc: "Zhu Guihua" , tangc...@cn
di 26 Janvier 2015 03:01:48
Objet: Re: [Qemu-devel] [PATCH v2 00/11] cpu: add i386 cpu hot remove support
On Fri, 2015-01-23 at 11:24 +0100, Alexandre DERUMIER wrote:
> Hello,
>
> I'm currently testing the new cpu unplug features,
> Works fine here with debian guests and ke
fan fnst"
, "Igor Mammedov" ,
"afaerber"
Envoyé: Lundi 26 Janvier 2015 03:01:48
Objet: Re: [Qemu-devel] [PATCH v2 00/11] cpu: add i386 cpu hot remove support
On Fri, 2015-01-23 at 11:24 +0100, Alexandre DERUMIER wrote:
> Hello,
>
> I'm currently testi
uot; ,
"Anshul Makkar" , "chen fan fnst"
, "Igor Mammedov" ,
"afaerber"
Envoyé: Lundi 26 Janvier 2015 04:47:13
Objet: Re: [Qemu-devel] [PATCH v2 00/11] cpu: add i386 cpu hot remove support
On Mon, 2015-01-26 at 04:19 +0100, Alexandre DERUMIER wrote:
> T
Hi,
I'm currently testing virtio-scsi and iothread,
and I'm seeing a qemu segfault when I try to remove a scsi drive
on top of a virtio-scsi controller with iothread enabled.
virtio-blk + iothread drive_del has been supported since this patch:
http://comments.gmane.org/gmane.comp.emulators.qemu/291
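(Roughly how I'm testing; the ids and the image path are only placeholders:)
-object iothread,id=iothread0
-device virtio-scsi-pci,id=scsihw0,iothread=iothread0
-drive file=/test.raw,if=none,id=drive-scsi0
-device scsi-hd,bus=scsihw0.0,drive=drive-scsi0
then, in the monitor: drive_del drive-scsi0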
Ok,
thanks paolo !
- Mail original -
De: "pbonzini"
À: "aderumier" , "qemu-devel"
Envoyé: Mercredi 1 Avril 2015 12:27:27
Objet: Re: virtio-scsi + iothread : segfault on drive_del
On 01/04/2015 05:34, Alexandre DERUMIER wrote:
>
> I'm c
Hi,
I'm currently testing cpu hotplug with a windows 2012R2 standard guest,
and I can't get it to work (it works fine with a linux guest).
host kernel : rhel7 3.10 kernel
qemu 2.2
qemu command line : -smp cpus=1,sockets=2,cores=1,maxcpus=2
Started with 1 cpu; the topology is 2 sockets with 1 core each.
T
y hotplug, maybe for cpu too)
- Mail original -
De: "Andrey Korolyov"
À: "aderumier"
Cc: "qemu-devel"
Envoyé: Mercredi 14 Janvier 2015 17:07:41
Objet: Re: [Qemu-devel] cpu hotplug and windows guest (win2012r2)
On Fri, Jan 9, 2015 at 4:35 PM, Andrey Kor
center edition was needed)
- Mail original -
De: "Igor Mammedov"
À: "aderumier"
Cc: "qemu-devel"
Envoyé: Lundi 19 Janvier 2015 17:06:37
Objet: Re: [Qemu-devel] cpu hotplug and windows guest (win2012r2)
On Fri, 9 Jan 2015 11:26:08 +0100 (CET)
Alexandre D
9, 2015 at 4:35 PM, Andrey Korolyov wrote:
> On Fri, Jan 9, 2015 at 1:26 PM, Alexandre DERUMIER
> wrote:
>> Hi,
>>
>> I'm currently testing cpu hotplug with a windows 2012R2 standard guest,
>>
>> and I can't get it too work. (works fine with
use ? standard or datacenter
?)
Thanks for your help!
Regards,
Alexandre
- Mail original -
De: "Andrey Korolyov"
À: "aderumier"
Cc: "qemu-devel"
Envoyé: Mercredi 21 Janvier 2015 00:16:45
Objet: Re: [Qemu-devel] cpu hotplug and windows guest (win2012r2)
Hi,
I would like to know if it's possible to hot-add/hot-plug an iothread object
on a running guest.
(I would like to be able to hotplug new virtio devices on a new iothread at the
same time.)
Regards,
Alexandre
>>Yes, there is an object_add/object-add command (respectively HMP and
>>QMP), but I don't think libvirt has bindings already.
Thanks Paolo ! I'm currently implementing iothread on proxmox, so no libvirt.
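(What I'm planning to run on the running guest, roughly; the id is only an example:)
HMP: object_add iothread,id=iothread1
QMP: { "execute": "object-add", "arguments": { "qom-type": "iothread", "id": "iothread1" } }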
Alexandre.
- Mail original -
De: "Paolo Bonzi
Hi,
Isn't it related to the drive options?
"
werror=action,rerror=action
Specify which action to take on write and read errors. Valid actions are:
“ignore” (ignore the error and try to continue), “stop” (pause QEMU), “report”
(report the error to the guest), “enospc” (pause QEMU only if the host
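(For example, something like this on the drive; the path and the chosen actions are only
illustrative:)
-drive file=/dev/vg/vm-disk,if=none,id=drive-virtio0,werror=stop,rerror=report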
Hi,
I think this was already reported some months ago,
and a patch was submitted to the mailing list (but it was waiting for memory unplug
to be merged before being applied):
http://lists.gnu.org/archive/html/qemu-devel/2014-11/msg02362.html
- Mail original -
De: "Luiz Capitulino"
À: "qemu-devel
Hi,
I have noticed that balloon stats are not working if a qemu guest is started
with the -machine option
(-machine pc, or any version). Tested with qemu 1.7, 2.1 and 2.2.
When the guest is starting (balloon driver not yet loaded)
$VAR1 = {
'last-update' => 0,
'stats' => {
I forgot to say that we don't set up the polling interval manually (which
seems to work fine without -machine).
Now, if I set guest-stats-polling-interval with qom-set,
it seems to work fine with the -machine option.
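(What I run, roughly; the QOM path depends on how the balloon device was created, here I
assume it was given the id balloon0:)
qom-set /machine/peripheral/balloon0 guest-stats-polling-interval 2
qom-get /machine/peripheral/balloon0 guest-stats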
- Mail original -
De: "aderumier"
À: "qemu-devel" , "Luiz Capitulino"
t;
Cc: "qemu-devel" , "dietmar"
Envoyé: Mardi 10 Mars 2015 14:30:20
Objet: Re: [Qemu-devel] balloon stats not working if qemu is started with
-machine option
On Mon, 9 Mar 2015 08:04:54 +0100 (CET)
Alexandre DERUMIER wrote:
> I have forgot to said that we don't se
CAP_X86_SMM);
}
I'm not sure if it's a qemu bug or kernel/kvm bug.
Help is welcome.
Regards,
Alexandre Derumier
Hi,
with qemu (2.4.1), if I do an internal snapshot of an rbd device,
then I pause the vm with vm_stop,
the qemu process hangs forever.
Monitor commands to reproduce:
# snapshot_blkdev_internal drive-virtio0 yoursnapname
# stop
I don't see this with qcow2 or sheepdog block driver for
Some other info:
I can reproduce it too with a manual snapshot using the rbd command:
#rbd --image myrbdvolume snap create --snap snap1
qemu monitor:
#stop
This is with ceph hammer 0.94.5.
In qemu vm_stop, the only things related to the block driver are:
bdrv_drain_all();
ret = bdrv_flush_all(
Also,
this occurs only with rbd_cache=false or qemu drive cache=none.
If I use rbd_cache=true or qemu drive cache=writeback, I don't have this bug.
- Mail original -
De: "aderumier"
À: "ceph-devel" , "qemu-devel"
Envoyé: Lundi 9 Novembre 2015 04:23:10
Objet: Re: qemu : rbd block dri
Something is really wrong,
because the guest is also freezing with a simple snapshot, with cache=none /
rbd_cache=false.
qemu monitor : snapshot_blkdev_internal drive-virtio0 snap1
or
rbd command : rbd --image myrbdvolume snap create --snap snap1
Then the guest can't read/write to disk anymore
Novembre 2015 08:22:34
Objet: Re: [Qemu-devel] qemu : rbd block driver internal snapshot and vm_stop
is hanging forever
On 11/09/2015 10:19 AM, Denis V. Lunev wrote:
> On 11/09/2015 06:10 AM, Alexandre DERUMIER wrote:
>> Hi,
>>
>> with qemu (2.4.1), if I do an internal sn