[Bug 1877688] [NEW] 9p virtfs device reports error when opening certain files

2020-05-08 Thread A A
Public bug reported: Reading certain files on a 9p-mounted FS produces this error message: "qemu-system-x86_64: VirtFS reply type 117 needs 12 bytes, buffer has 12, less than minimum". After this error message is generated, further accesses to the 9p FS hang whatever tries to access it. The Arch

[Bug 1877688] Re: 9p virtfs device reports error when opening certain files

2020-05-08 Thread A A
** Description changed: Reading certain files on a 9p-mounted FS produces this error message: "qemu-system-x86_64: VirtFS reply type 117 needs 12 bytes, buffer has 12, less than minimum". After this error message is generated, further accesses to the 9p FS hang whatever tries to

[Bug 1877688] Re: 9p virtfs device reports error when opening certain files

2020-05-08 Thread A A
Here's a C program to trigger this behavior. I don't think the contents of "file" or its size matter. ** Description changed: Reading certain files on a 9p-mounted FS produces this error message: qemu-system-x86_64: VirtFS reply type 117 needs 12 bytes, b

[Bug 1877688] Re: 9p virtfs device reports error when opening certain files

2020-05-09 Thread A A
Thanks, it works. -- You received this bug notification because you are a member of qemu- devel-ml, which is subscribed to QEMU. https://bugs.launchpad.net/bugs/1877688 Title: 9p virtfs device reports error when opening certain files Status in QEMU: In Progress Bug description: Reading

[Bug 1872790] [NEW] empty qcow2

2020-04-14 Thread a
Public bug reported: I plugged multiple qcow2 disks into a Windows guest. In the Windows disk manager all disks are listed perfectly, with their data and their real space; I can even explore all files in Explorer, all cool. On third-party disk managers (all of them), I only have the C:\ HDD which act

[Bug 1872790] Re: empty qcow2

2020-05-29 Thread a
WDM claims it to be an MBR. Linux 5.6.14, QEMU 5.0.0-6 `nobody 19023 109 21.1 7151512 3462300 ? Sl 13:18 0:32 /usr/bin/qemu-system-x86_64 -name guest=win10machine,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-win10machine /master

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-08-20 Thread Zhanghaoyu (A)
>> >>> -watchdog-action poweroff -device >>> >>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa >>> >>> >>> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio? >>> >> >>> >This QEMU version is

[Qemu-devel] [kvm] segmentation fault when guest reboots or resets after hot-unplugging virtio NIC

2013-08-29 Thread Zhanghaoyu (A)
qemu-1.5.1 libvirt-1.1.0 guest os: win2k8 R2 x64bit or sles11sp2 x64 or win2k3 32bit Steps are shown below: 1. use virsh to start a vm with a virtio NIC 2. after booting, use virsh detach-device to hot-unplug the virtio NIC 3. use virsh reboot/reset to restart the vm 4. when the vm is rebooting

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-08-31 Thread Zhanghaoyu (A)
>> >> >> >This QEMU version is 1.0.0, but I also test QEMU 1.5.2, the same problem >> >exists, including the performance degradation and readonly GFNs' flooding. >> >I tried with e1000 NICs instead of virtio, including the performance >> >

[Qemu-devel] [KVM] segmentation fault happened when rebooting VM after hot-unplugging virtio NIC

2013-09-03 Thread Zhanghaoyu (A)
545825d4cda03ea292b7788b3401b99860efe8bc) libvirt: 1.1.0 guest os: win2k8 R2 x64bit or sles11sp2 x64 or win2k3 32bit You can reproduce this problem with the following steps: 1. start a VM with virtio NIC(s) 2. hot-unplug a virtio NIC from the VM 3. reboot the VM; a segmentation fault then happens during the boot period

Re: [Qemu-devel] [KVM] segmentation fault happened when rebooting VM after hot-unplugging virtio NIC

2013-09-03 Thread Zhanghaoyu (A)
host: SLES11SP2 (kernel version: 3.0.58) >> qemu: 1.5.1, upstream-qemu (commit >> 545825d4cda03ea292b7788b3401b99860efe8bc) >> libvirt: 1.1.0 >> guest os: win2k8 R2 x64bit or sles11sp2 x64 or win2k3 32bit >> >> You can reproduce this problem by following steps: >

Re: [Qemu-devel] latest version qemu compile error

2013-04-09 Thread Zhanghaoyu (A)
> > I compiled the QEMU source downloaded from qemu.git > > (http://git.qemu.org/git/qemu.git) on 4-9-2013; errors were reported as > > below, > > > > > > > > hw/virtio/dataplane/vring.c: In function 'vring_enable_notification': > > > > hw/virtio/dataplane/vring.c:72: warning: implicit declaration of

Re: [Qemu-devel] latest version qemu compile error

2013-04-10 Thread Zhanghaoyu (A)
> > The log of "make V=1" is identical with that of "make", shown as below, > > > > hw/virtio/dataplane/vring.c: In function 'vring_enable_notification': > > hw/virtio/dataplane/vring.c:72: warning: implicit declaration of function > > 'vring_avail_event' > > hw/virtio/dataplane/vring.c:72: warni

[Qemu-devel] reply: reply: reply: qemu crashed when starting vm(kvm) with vnc connect

2013-04-18 Thread Zhanghaoyu (A)
> > On Mon, Apr 08, 2013 at 12:27:06PM +0000, Zhanghaoyu (A) wrote: > >> On Sun, Apr 07, 2013 at 04:58:07AM +0000, Zhanghaoyu (A) wrote: > >>>>>> I start a kvm VM with vnc(using the zrle protocol) connect, sometimes > >>>>>> qemu progra

[Qemu-devel] KVM VM(windows xp) reseted when running geekbench for about 2 days

2013-04-18 Thread Zhanghaoyu (A)
I started 10 VMs (windows xp), then ran the geekbench tool on them; after about 2 days, one of them was reset. I found the reset operation is done by int kvm_cpu_exec(CPUArchState *env) { ... switch (run->exit_reason) ... case KVM_EXIT_SHUTDOWN: DPRINTF("shutdown\n");

Re: [Qemu-devel] KVM VM(windows xp) reseted when running geekbench for about 2 days

2013-04-18 Thread Zhanghaoyu (A)
>> On Thu, Apr 18, 2013 at 12:00:49PM +0000, Zhanghaoyu (A) wrote: >>> I start 10 VMs(windows xp), then running geekbench tool on them, >>> about 2 days, one of them was reset, I found the reset operation is >>> done by int kvm_cpu_exec(CPUArchState *env) {

Re: [Qemu-devel] KVM VM(windows xp) reseted when running geekbench for about 2 days

2013-04-23 Thread Zhanghaoyu (A)
>> >> On Thu, Apr 18, 2013 at 12:00:49PM +0000, Zhanghaoyu (A) wrote: >> >>> I start 10 VMs(windows xp), then running geekbench tool on them, >> >>> about 2 days, one of them was reset, I found the reset operation

Re: [Qemu-devel] KVM VM(windows xp) reseted when running geekbench for about 2 days

2013-04-25 Thread Zhanghaoyu (A)
>> >> >> On Thu, Apr 18, 2013 at 12:00:49PM +0000, Zhanghaoyu (A) wrote: >> >> >>> I start 10 VMs(windows xp), then running geekbench tool on >> >> >>> them, about 2 days, one of them was reset, I found the reset

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-07-27 Thread Zhanghaoyu (A)
n above suspect, I want to find the two adjacent versions of >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1), >> and analyze the differences between this two versions, or apply the >> patches between this two versions by bisection method, finally find the key

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-07-30 Thread Zhanghaoyu (A)
is problem or not (e.g. 2.6.39, 3.0-rc1), >> >> and analyze the differences between this two versions, or apply the >> >> patches between this two versions by bisection method, finally find the >> >> key patches. >> >> >> >> Any better ideas? >> >

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-08-05 Thread Zhanghaoyu (A)
with this problem. >> >> >> Based on above suspect, I want to find the two adjacent versions of >> >> >> kvm-kmod which triggers this problem or not (e.g. 2.6.39, 3.0-rc1), >> >> >> and analyze the differences between this two versions, or apply t

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-08-05 Thread Zhanghaoyu (A)
> If EPT disabled, this problem gone. >> >> >> >> >> >> >> >> I suspect that kvm hypervisor has business with this problem. >> >> >> >> Based on above suspect, I want to find the two adjacent versions of >> >> >

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-08-05 Thread Zhanghaoyu (A)
>Hi, > >On 05.08.2013 11:09, Zhanghaoyu (A) wrote: >> When I build the upstream, encounter a problem that I compile and >> install the upstream(commit: e769ece3b129698d2b09811a6f6d304e4eaa8c29) >> on sles11sp2 environment via below command cp >> /boot/config-

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-08-06 Thread Zhanghaoyu (A)
>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log), >> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none >> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32 >> -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid >> 0505ec91-38

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-08-06 Thread Zhanghaoyu (A)
starting. > >Thanks, >Zhang Haoyu > >>-- >> Gleb. Should we focus on the first bad commit(612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs' flooding? I applied below patch to __direct_map(), @@ -2223,6 +2223,8 @@ static int __d

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-08-14 Thread Zhanghaoyu (A)
ists, including the performance degradation and readonly GFNs' flooding. >> >I tried with e1000 NICs instead of virtio, including the performance >> >degradation and readonly GFNs' flooding, the QEMU version is 1.5.2. >> >No matter e1000 NICs or virtio NICs, t

[Qemu-devel] KVM VM(rhel-5.5) %si is too high when TX/RX packets

2013-05-02 Thread Zhanghaoyu (A)
I am running a VM (RHEL-5.5) on a KVM hypervisor (linux-3.8 + QEMU-1.4.1), with an intel 82576 VF direct-assigned to the VM. When TX/RX-ing packets from the VM to the other host via the iperf tool, top on the VM shows that %si is too high, approximately 95% ~ 100%, but from the view of host, the VM's

Re: [Qemu-devel] KVM VM(rhel-5.5) %si is too high when TX/RX packets

2013-05-03 Thread Zhanghaoyu (A)
> I running a VM(RHEL-5.5) on KVM hypervisor(linux-3.8 + QEMU-1.4.1), and > direct-assign intel 82576 VF to the VM. When TX/RX packets on VM to the other > host via iperf tool, top tool result on VM shown that the %si is too high, > approximately 95% ~ 100%, but from the view of ho

[Qemu-devel] [PATCH] [KVM] No need to update msi route when only msi-x entry "control" section changed

2013-05-04 Thread Zhanghaoyu (A)
table. Signed-off-by: Zhang Haoyu Signed-off-by: Huang Weidong Signed-off-by: Qin Chuanyu --- hw/i386/kvm/pci-assign.c | 3 +++ 1 files changed, 3 insertions(+) --- a/hw/i386/kvm/pci-assign.c 2013-05-04 15:53:18.0 +0800 +++ b/hw/i386/kvm/pci-assign.c 2013-05-04 15:50:46.0 +

Re: [Qemu-devel] KVM VM(rhel-5.5) %si is too high when TX/RX packets

2013-05-04 Thread Zhanghaoyu (A)
>> I running a VM(RHEL-5.5) on KVM hypervisor(linux-3.8 + QEMU-1.4.1), >> and direct-assign intel 82576 VF to the VM. When TX/RX packets on VM to the >> other host via iperf tool, top tool result on VM shown that the %si is too >> high, approximately 95% ~ 100%, but fr

Re: [Qemu-devel] [PATCH] [KVM] No need to update msi route when only msi-x entry "control" section changed

2013-05-05 Thread Zhanghaoyu (A)
-x entry "control" section, >> needless to update VM irq routing table. >> >> Signed-off-by: Zhang Haoyu >> Signed-off-by: Huang Weidong >> Signed-off-by: Qin Chuanyu >> --- >> hw/i386/kvm/pci-assign.c | 3 +++ >> 1 files changed, 3 in

Re: [Qemu-devel] [PATCH] [KVM] No need to update msi route when only msi-x entry "control" section changed

2013-05-06 Thread Zhanghaoyu (A)
e is so low. >> >> Masking/unmasking msi-x vector only set msi-x entry "control" section, >> >> needless to update VM irq routing table. >> >> >> >> Signed-off-by: Zhang Haoyu >> >> Signed-off-by: Huang Weidong >> >>

[Qemu-devel] hotplug: VM got stuck when attaching a pass-through device to the non-pass-through VM for the first time

2014-02-17 Thread Zhanghaoyu (A)
Hi, all The VM will get stuck for a while (about 6s for a VM with 20GB memory) when attaching a pass-through PCI card to the non-pass-through VM for the first time. The reason is that the host will build the whole VT-d GPA->HPA DMAR page-table, which needs a lot of time, and during this t

Re: [Qemu-devel] hotplug: VM got stuck when attaching a pass-through device to the non-pass-through VM for the first time

2014-02-18 Thread Zhanghaoyu (A)
>> Hi, all >> >> The VM will get stuck for a while(about 6s for a VM with 20GB memory) when >> attaching a pass-through PCI card to the non-pass-through VM for the first >> time. >> The reason is that the host will build the whole VT-d GPA->HPA DMAR &

Re: [Qemu-devel] hotplug: VM got stuck when attaching a pass-through device to the non-pass-through VM for the first time

2014-02-18 Thread Zhanghaoyu (A)
>> What if you detach and re-attach? >> Is it fast then? >> If yes this means the issue is COW breaking that occurs with >> get_user_pages, not translation as such. >> Try hugepages with prealloc - does it help? > >I agree it's either COW breaking or (similarly) locking pages that the guest >hasn

Re: [Qemu-devel] hotplug: VM got stuck when attaching a pass-through device to the non-pass-through VM for the first time

2014-02-24 Thread Zhanghaoyu (A)
> Or the new shared flag - IIRC shared VMAs don't do COW either. > >Only if the problem isn't locking and zeroing of untouched pages (also, it is >not upstream is it?). > >Can you make a profile with perf? > "-rt mlock=on" option is not set, perf top -p

Re: [Qemu-devel] [RFC] create a single workqueue for each vm to update vm irq routing table

2013-11-29 Thread Zhanghaoyu (A)
ote: >> > > > On 26/11/2013 13:40, Zhanghaoyu (A) wrote: >> > > > > When guest set irq smp_affinity, VMEXIT occurs, then the vcpu >> > > > > thread will IOCTL return to QEMU from hypervisor, then vcpu >> > > > > thread ask the

Re: [Qemu-devel] [PATCH] migration: avoid starting a new migration task while the previous one still exists

2013-11-04 Thread Zhanghaoyu (A)
> > Avoid starting a new migration task while the previous one still > exists. > > Can you explain how to reproduce the problem? > When a network disconnection between source and destination happened, the migration thread was stuck at the stack below, #0 0x7f07e96c8288 in wri

Re: [Qemu-devel] [PATCH] migration: avoid starting a new migration task while the previous one still exists

2013-11-05 Thread Zhanghaoyu (A)
>>>> Avoid starting a new migration task while the previous one still >>> exist. >>> >>> Can you explain how to reproduce the problem? >>> >> When network disconnection between source and destination happened, >> the migration thread st

Re: [Qemu-devel] About the IO-mirroring functionality inside the qemu

2013-11-05 Thread Zhanghaoyu (A)
>Hi all, > >Does QEMU have a storage migration tool, like the IO-mirroring inside >VMware? IO-mirroring means that all the IOs are sent to both source and >destination at the same time. drive_mirror may be your choice. Thanks, Zhang Haoyu > >Thanks!

[Qemu-devel] [patch] avoid a bogus COMPLETED->CANCELLED transition

2013-11-07 Thread Zhanghaoyu (A)
Avoid a bogus COMPLETED->CANCELLED transition. There is a window of time between setting the COMPLETED state and the migration thread exiting, during which a COMPLETED->CANCELLED transition is problematic. Signed-off-by: Zeng Junliang Signed-off-by: Zhang Haoyu ---

[Qemu-devel] [patch] introduce MIG_STATE_CANCELLING state

2013-11-07 Thread Zhanghaoyu (A)
Introduce the MIG_STATE_CANCELLING state to avoid starting a new migration task while the previous one still exists. Signed-off-by: Zeng Junliang Signed-off-by: Zhang Haoyu --- migration.c | 26 -- 1 files changed, 16 insertions(+), 10 deletions(-) diff --git a

Re: [Qemu-devel] [migration] questions about removing the old block-migration code

2013-11-07 Thread Zhanghaoyu (A)
> >Buggy and tightly coupled with the live migration code, making it hard to >modify either area independently. Thanks a lot for explaining. Till now, we still use the old block-migration code in our virtualization solution. Could you detail the bugs that the old block-migration code has? Thank

[Qemu-devel] question about VM kernel parameter idle=

2013-11-20 Thread Zhanghaoyu (A)
Hi, all What's the difference between the linux guest kernel parameter idle= options, especially in performance? Taking performance into account, which one is best? In my opinion, if the number of all VMs' vcpus is far more than that of pcpus, e.g. in a SPECVirt test, idle=halt is better for the server's total

[Qemu-devel] [RFC] create a single workqueue for each vm to update vm irq routing table

2013-11-26 Thread Zhanghaoyu (A)
wait for the RCU grace period, and during this period this vcpu cannot provide service to the VM, so those interrupts delivered to this vcpu cannot be handled in time, and the apps running on this vcpu cannot be serviced either. It's unacceptable in some real-time scenarios, e.g. telecom. So, I want to cre

Re: [Qemu-devel] [RFC] create a single workqueue for each vm to update vm irq routing table

2013-11-27 Thread Zhanghaoyu (A)
>> > I don't think a workqueue is even needed. You just need to use >> > call_rcu to free "old" after releasing kvm->irq_lock. >> > >> > What do you think? >> >> It should be rate limited somehow. Since it is guest-triggerable gu

Re: [Qemu-devel] [RFC] create a single workqueue for each vm to update vm irq routing table

2013-11-27 Thread Zhanghaoyu (A)
synchronize_rcu(), you have the additional guarantee that any >>> > parallel accesses to the old routing table have completed. Since >>> > we also trigger the irq from rcu context, you know that after >>> > synchronize_rcu() you won't get any interrupts to

Re: [Qemu-devel] [RFC] create a single workqueue for each vm to update vm irq routing table

2013-11-28 Thread Zhanghaoyu (A)
he case with RCU? (See my answer above: "the >>>> vcpus already see the new routing table after the rcu_assign_pointer >>>> that is in kvm_irq_routing_update"). >>> With synchronize_rcu(), you have the additional guarantee that any >>> paral

[Qemu-devel] [PATCH] rdma: fix multiple VMs parallel migration

2013-10-10 Thread Zhanghaoyu (A)
respectively. Signed-off-by: Frank Yang --- migration-rdma.c | 58 +--- 1 file changed, 39 insertions(+), 19 deletions(-) diff --git a/migration-rdma.c b/migration-rdma.c index f94f3b4..33e8a92 100644 --- a/migration-rdma.c +++ b/migration-rdma.c

Re: [Qemu-devel] why no progress shown after introduce NBD migration cookie

2013-10-22 Thread Zhanghaoyu (A)
Hi, all Could someone give a detailed statement of the buggy implementation of the traditional storage-migration method that migrates the storage in an iterative way? Thanks, Zhang Haoyu >>>> hi Michal, >>>> >>>> I used libvirt-1.0.3, ran below comma

[Qemu-devel] migration: question about buggy implementation of traditional live migration with storage that migrates the storage in an iterative way

2013-10-25 Thread Zhanghaoyu (A)
Hi, all Could someone give a detailed statement of the buggy implementation of traditional live migration with storage that migrates the storage in an iterative way? Thanks, Zhang Haoyu >>>> hi Michal, >>>> >>>> I used libvirt-1.0.3, ran below comma

Re: [Qemu-devel] [RESEND][PATCH] migration: drop MADVISE_DONT_NEED for incoming zero pages

2013-10-29 Thread Zhanghaoyu (A)
y transferred > zero page was memset to zero and thus allocated. Since commit > 211ea740 we check for zeroness of a target page before we memset > it to zero. Additionally we memmap target memory so it is essentially > zero initialized (except for e.g. option roms and bios which are lo

[Qemu-devel] [migration] questions about removing the old block-migration code

2013-11-02 Thread Zhanghaoyu (A)
Hi, Juan, I read the words below in the report of : "We were going to remove the old block-migration code / Then people fixed it / Good: it works now / Bad: We have to maintain both / It uses the same port as migration / You need to migrate all/none of block devices". The old block-migration code said above is t

[Qemu-devel] [PATCH] migration: avoid starting a new migration task while the previous one still exists

2013-11-04 Thread Zhanghaoyu (A)
Avoid starting a new migration task while the previous one still exists. Signed-off-by: Zeng Junliang --- migration.c | 34 ++ 1 files changed, 22 insertions(+), 12 deletions(-) diff --git a/migration.c b/migration.c index 2b1ab20..ab4c439 100644 --- a

Re: [Qemu-devel] [PATCH] raw-posix.c: remove raw device access for cdrom

2015-07-01 Thread M A
On Jul 1, 2015, at 6:13 PM, Programmingkid wrote: > Fix real cdrom access in Mac OS X so it can be used in QEMU. > It simply removes the r from a device file's name. This > allows for a real cdrom to be accessible to the guest. > It has been successfully tested with a Windows X

[Qemu-devel] qemu crashed when starting vm(kvm) with vnc connect

2013-04-02 Thread Zhanghaoyu (A)
I start a kvm VM with a vnc (using the zrle protocol) connect; sometimes the qemu program crashed during the starting period, receiving signal SIGABRT. After about 20 tries, this crash may be reproduced. I guess the cause is memory corruption or a double free. The backtrace is shown below: 0x7f32eda3dd95

[Qemu-devel] Reply: qemu crashed when starting vm(kvm) with vnc connect

2013-04-06 Thread Zhanghaoyu (A)
>> I start a kvm VM with vnc(using the zrle protocol) connect, sometimes qemu >> program crashed during starting period, received signal SIGABRT. >> Trying about 20 times, this crash may be reproduced. >> I guess the cause is memory corruption or a double free. >

[Qemu-devel] reply: reply: qemu crashed when starting vm(kvm) with vnc connect

2013-04-08 Thread Zhanghaoyu (A)
On Sun, Apr 07, 2013 at 04:58:07AM +0000, Zhanghaoyu (A) wrote: > >>> I start a kvm VM with vnc(using the zrle protocol) connect, sometimes > >>> qemu program crashed during starting period, received signal SIGABRT. > >>> Trying about 20 times, this crash

[Qemu-devel] latest version qemu compile error

2013-04-09 Thread Zhanghaoyu (A)
I compiled the QEMU source downloaded from qemu.git (http://git.qemu.org/git/qemu.git) on 4-9-2013; errors were reported as below: hw/virtio/dataplane/vring.c: In function 'vring_enable_notification': hw/virtio/dataplane/vring.c:72: warning: implicit declaration of function 'vring_avail_event' hw/virtio

Re: [Qemu-devel] meaningless to compare irqfd's msi message with new msi message in virtio_pci_vq_vector_unmask

2013-07-03 Thread Zhanghaoyu (A)
gned-off-by: Zhang Haoyu > Signed-off-by: Zhang Huanzhong > --- > hw/virtio/virtio-pci.c |8 +++- > kvm-all.c |5 + > 2 files changed, 8 insertions(+), 5 deletions(-) > > diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c index > b07

[Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-07-11 Thread Zhanghaoyu (A)
hi all, I met a similar problem to these while performing live migration or save-restore tests on the kvm platform (qemu:1.4.0, host:suse11sp2, guest:suse11sp2), running a tele-communication software suite in the guest: https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html http://comments.

Re: [Qemu-devel] vm performance degradation after kvm live migration or save-restore with EPT enabled

2013-07-11 Thread Zhanghaoyu (A)
> Hi, > > On 11.07.2013 11:36, Zhanghaoyu (A) wrote: > > I met similar problem to these, while performing live migration or > save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2, > guest:suse11sp2), running tele-communication software suite in guest, >

[Qemu-devel] meaningless to compare irqfd's msi message with new msi message in virtio_pci_vq_vector_unmask

2013-06-25 Thread Zhanghaoyu (A)
ng --- hw/virtio/virtio-pci.c |8 +++- kvm-all.c |5 + 2 files changed, 8 insertions(+), 5 deletions(-) diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c index b070b64..e4829a3 100644 --- a/hw/virtio/virtio-pci.c +++ b/hw/virtio/virtio-pci.c

[Qemu-devel] [PATCH] migration: add timeout option for tcp migration send/receive socket

2013-06-28 Thread Zhanghaoyu (A)
++- 2 files changed, 26 insertions(+), 1 deletions(-) diff --git a/include/migration/migration.h b/include/migration/migration.h index f0640e0..1a56248 100644 --- a/include/migration/migration.h +++ b/include/migration/migration.h @@ -23,6 +23,8 @@ #include "qapi-types.h"

[Qemu-devel] [PATCH] migration: add timeout option for tcp migration send/receive socket

2013-06-29 Thread Zhanghaoyu (A)
++- 2 files changed, 26 insertions(+), 1 deletions(-) diff --git a/include/migration/migration.h b/include/migration/migration.h index f0640e0..1a56248 100644 --- a/include/migration/migration.h +++ b/include/migration/migration.h @@ -23,6 +23,8 @@ #include "qapi-types.h"

[Qemu-devel] [RFC] sync NIC's MAC maintained in NICConf as soon as emulated NIC's MAC changed in guest

2013-09-22 Thread Zhanghaoyu (A)
Hi, all If a live migration is done after an emulated NIC's MAC has been changed, a RARP with the wrong MAC address will be broadcast via qemu_announce_self at the destination, so a long network disconnection will probably happen. I want to do the following to resolve this problem: 1. change NICConf's MAC as soon as emulated N

Re: [Qemu-devel] [RFC] sync NIC's MAC maintained in NICConf as soon as emulated NIC's MAC changed in guest

2013-09-25 Thread Zhanghaoyu (A)
aoyu > >I think announce needs to poke at the current MAC instead of the default one >in NICConf. >We can make it respect link down state while we are at it. > NICConf structures are incorporated in different emulated NIC's structure, e.g., VirtIONet, E1000State_st, RTL8139Stat

Re: [Qemu-devel] [RFC] sync NIC's MAC maintained in NICConf as soon as emulated NIC's MAC changed in guest

2013-09-25 Thread Zhanghaoyu (A)
; >> >I think announce needs to poke at the current MAC instead of the default >> >one in NICConf. >> >We can make it respect link down state while we are at it. >> > >> NICConf structures are incorporated in different emulated NIC's >> structur

Re: [Qemu-devel] [RFC] sync NIC's MAC maintained in NICConf as soon as emulated NIC's MAC changed in guest

2013-09-25 Thread Zhanghaoyu (A)
o corresponding >> >> >> NICConf in NIC's migration load handler >> >> >> >> >> >> Any better ideas? >> >> >> >> >> >> Thanks, >> >> >> Zhang Haoyu >> >> > >>

Re: [Qemu-devel] [RFC] sync NIC's MAC maintained in NICConf as soon as emulated NIC's MAC changed in guest

2013-09-25 Thread Zhanghaoyu (A)
c. >> >> >> >> >> >> >> >> BTW, in native scenario, reboot will revert the changed MAC >> >> >> >> to original one, too. >> >> >> >> >> >> >> >> >> 2. sync NIC's (more

Re: [Qemu-devel] [RFC] sync NIC's MAC maintained in NICConf as soon as emulated NIC's MAC changed in guest

2013-09-25 Thread Zhanghaoyu (A)
ks to resolve this problem, 1. change NICConf's >> MAC as soon as emulated NIC's MAC changed in guest 2. sync NIC's (more >> precisely, queue) MAC to corresponding NICConf in NIC's migration load >> handler >> >> Any better ideas? > >As Michael

[Qemu-devel] [Bug 599958] Re: Timedrift problems with Win7: hpet missing time drift fixups

2013-10-01 Thread Ben A
Forgot to add: Reproduced the above behavior in both 1.5.1 and 1.6.0. Adding -no-hpet to the command line removed both problems (full disclosure: this fix wasn't tested in 1.5.1 but I have no reason to believe behavior would be different.)

[Qemu-devel] [Bug 599958] Re: Timedrift problems with Win7: hpet missing time drift fixups

2013-10-01 Thread Ben A
Apparently this bug's still alive and kicking. There's an obvious clock skew problem on Windows 7; in the Date & Time dialog, the clock jumps through seconds visibly too fast. I also found a case where HPET bugs are causing a real problem: Terraria (dedicated server) seems to

Re: [Qemu-devel] [PATCH v4 4/4] hw/input/adb.c: implement QKeyCode support

2016-03-14 Thread M A
> >>>>> +} >>>>> +keycode = s->data[s->rptr]; >>>>> +if (++s->rptr == sizeof(s->data)) { >>>>> + s->rptr = 0; >>>>> } >>>>> +s->count--; >>>>> + >>>

[Bug 1787] Qemu asan test make vm crash when using qxl and spice

2023-07-25 Thread zhangjianguo (A)
Bug links: https://gitlab.com/qemu-project/qemu/-/issues/1787 When we tested QEMU with asan, the VM crashed. How to reproduce the bug: 1. Start the VM with qxl and spice. 2. Attach to the VM with vnc and spice. 3. Leave it for more than three days. 4. Operate on the spice client and possibly reproduce

Re: [PATCH] migrate/multifd: fix coredump when the multifd thread cleanup

2023-07-25 Thread chenyuhui (A)
@Peter Xu @Fabiano Rosas Kindly ping on this. On 2023/6/27 9:11, chenyuhui (A) wrote: > > On 2023/6/26 21:16, chenyuhui (A) wrote: >> >> On 2023/6/21 22:22, Fabiano Rosas wrote: >>> Jianguo Zhang via writes: >>> >>>> From: Yuhui Chen >>>

Re: [PATCH] migrate/multifd: fix coredump when the multifd thread cleanup

2023-07-26 Thread chenyuhui (A)
On 2023/7/26 0:53, Peter Xu wrote: > On Tue, Jul 25, 2023 at 04:43:28PM +0800, chenyuhui (A) wrote: >> @Peter Xu @Fabiano Rosas >> Kindly ping on this. > > Ah I see what's missing - please copy maintainer (Juan) for any migration > patches, especially multifd on

[RFC PATCH] iothread: add set_iothread_poll_* commands

2019-10-22 Thread yezhenyu (A)
.json | 23 +++ 6 files changed, 184 insertions(+) diff --git a/hmp-commands.hx b/hmp-commands.hx index a2c3ffc218..6fa0c5227a 100644 --- a/hmp-commands.hx +++ b/hmp-commands.hx @@ -74,6 +74,48 @@ VM initialization using configuration data provided on the command line and via the QMP monitor

QEMU cpu socket allocation

2022-05-17 Thread Rajesh A
Hi QEMU dev Virt Manager is able to configure a QEMU VM with more CPU sockets than the physical host has. For example, in the below VM, when I request 16 vCPU cores, by default it is taken as 16 sockets with 1 core each. The host itself has only 2 sockets. 1. How does QEMU allow this and how

RE: QEMU cpu socket allocation

2022-05-17 Thread Rajesh A
Hi Peter Thanks. Yes, I believe (Sockets,Cores,Threads) = (1,16,1) should give the best performance, as the VM does not need to access the memory of another NUMA node. So, is it a bug that Virt Manager uses more sockets by default when I choose "Copy host CPU Configuration"? regard

Reply: [PATCH] usb/dev-wacom: fix OOB write in usb_mouse_poll()

2023-02-15 Thread ningqiang (A)
u.org; kra...@redhat.com; ningqiang (A) ; soul chen Subject: Re: [PATCH] usb/dev-wacom: fix OOB write in usb_mouse_poll() Hi Philippe, On Mon, Feb 13, 2023 at 7:26 PM Philippe Mathieu-Daudé wrote: > > Hi Mauro, > > On 13/2/23 18:41, Mauro Matteo Cascella wrote: > > The guest can con

A few QEMU questions

2022-10-03 Thread a b
Hello, there, I have a few newbie QEMU questions. I found that mmu_idx in aarch64-softmmu takes the values 8, 10 and 12. I need some help to understand what they are for. I cannot find which macros are for mmu-idx 8, 10 and 12 at target/arm/cpu.h<https://git.qemu.org/?p=qemu.git;a=blob;f=target/

Re: A few QEMU questions

2022-10-06 Thread a b
Thanks a lot, Peter, for the clarification. It is very helpful. My naive understanding is that each MMU has only 1 TLB; why do we need an array of CPUTLBDescFast structures? How do these different CPUTLBDescFast data structures correlate with a hardware TLB? 220 typedef struct CPUTLB { 221

Qemu - how to run in Win?

2023-02-05 Thread Jacob A
Hello, After installing Qemu on Win, I don't see any shortcut to run it? There is only a link to 'uninstall'. launching exe files doesn't do anything. Can you please explain how to launch this application? Thanks, J. Please see the attached image.

Re: Qemu - how to run in Win?

2023-02-06 Thread Jacob A
Understood, thanks. Will stick to GUI app. On Mon, 6 Feb 2023 at 11:19, Bin Meng wrote: > On Mon, Feb 6, 2023 at 5:55 PM Philippe Mathieu-Daudé > wrote: > > > > Cc'ing Yonggang & Stefan. > > > > On 5/2/23 13:01, Jacob A wrote: > > > Hello, >

RE: [PATCH] hw/intc: sifive_plic: Avoid overflowing the addr_config buffer

2022-05-31 Thread limingwang (A)
past the end of addr_config. > > Fixes: ad40be27084536 ("target/riscv: Support start kernel directly by KVM") > Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1050 > Signed-off-by: Alistair Francis Reviewed-by: Mingwang Li Mingwang > --- > hw/intc/sifive_plic.c

RE: [PATCH] hw/riscv: virt: bugfix the memory-backend-file command is invalid

2021-10-07 Thread limingwang (A)
the following error information: qemu-system-riscv64: Failed initializing vhost-user memory map, consider using -object memory-backend-file share=on qemu-system-riscv64: vhost_set_mem_table failed: Interrupted system call (4) qemu-system-riscv64: unable to start vhost net: 4: falling back on userspac

RE: [PATCH v2] hw/riscv: virt: bugfix the memory-backend-file command is invalid

2021-10-15 Thread limingwang (A)
egion_init_ram_from_file" function and assigns the value of fd to mr->ram_block-fd. If the QEMU uses the default memory to initialize the system, the QEMU cannot obtain the fd in the "vhost_user_mem_section_filter" function when initializing the vhost-user. As a result, an error is reported in
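As a hedged sketch of the setup these two patch threads target (paths and sizes are placeholders, not taken from the patches): vhost-user needs guest RAM backed by an fd-mappable shared memory object, which the default anonymous RAM does not provide:

```shell
# Back guest RAM with a shared file mapping so the vhost-user backend
# can obtain an fd for it; without share=on the vhost_set_mem_table
# failure quoted above can occur.
qemu-system-riscv64 -M virt,memory-backend=mem0 -m 4G \
  -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/shm/guest.ram,share=on \
  ...
```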

RE: [PATCH] block: fix core for unlock not permitted

2022-04-12 Thread suruifeng (A)
Hi, The recurrence probability is extremely low; I have not reproduced this on the latest version. However, after reviewing the latest code, we find that the problem still exists. This is my understanding of the latest code; if there is a mistake in my understanding, please tell me. bdrv_flush_all

Re: A few QEMU questions

2022-10-06 Thread a b
Thanks Peter. How does QEMU deal with different page sizes? Does a 2GB hugepage have a single corresponding TLB entry, or is it partitioned into 512 4K pages with 512 TLB entries? Does a CPUTLBDescFast always hold TLB entries for the same single process? Is it always flushed/restored on context

Recall: Qemu asan test reported heap-use-after-free error when using qxl and spice

2023-07-06 Thread zhangjianguo (A)
zhangjianguo (A) would like to recall the message "Qemu asan test reported heap-use-after-free error when using qxl and spice".

Re: [PATCH] migrate/multifd: fix coredump when the multifd thread cleanup

2023-06-26 Thread chenyuhui (A)
On 2023/6/26 21:16, chenyuhui (A) wrote: > > On 2023/6/21 22:22, Fabiano Rosas wrote: >> Jianguo Zhang via writes: >> >>> From: Yuhui Chen >>> >>> There is a coredump while trying to destroy mutex when >>> p->running is false but p

[Qemu-devel] Booting my laptop with images from Qemu advent calendar

2016-12-07 Thread Oscar A
Hello dear all, I just started to get interested in operating systems, booting, and virtual machines, and found this project, which is very nice. I have a question. For instance, take the Towers of Hanoi image from yesterday. I want to boot my laptop with it. I guess that if I do it correctly

[Qemu-devel] why guest memory size not equal to my setting?

2017-06-10 Thread ??6????A
Hello Qemu-devel, Recently I'm trying to study VM memory allocation in a qemu-kvm environment. I found something interesting: I created an 8GB (8388608 k) memory guest using CentOS 7, but when I use dmesg to show the initial memory, it was 9437184 k, around 9216MB. I would like to know th

[Qemu-devel] Why we need to redirect access to BAR0 through the PCI config space access function

2017-05-05 Thread wuzongyong (A)
Hi, I’m testing GPU passthrough on KVM with an NVIDIA GPU card (M60, 10de:13f2) based on vfio. And I noticed the function vfio_nvidia_bar0_mirror_quirk in qemu/hw/vfio/pci-quirks.c; could someone please explain the purpose of this code in detail to me? I don’t think it is necessary if we don’t need r

[Qemu-devel] Recall: Why we need to redirect access to BAR0 through the PCI config space access function

2017-05-07 Thread wuzongyong (A)
wuzongyong (A) would like to recall the message "Why we need to redirect access to BAR0 through the PCI config space access function".

[Qemu-devel] A question about this commit 9894dc0cdcc397ee5b26370bc53da6d360a363c2

2016-08-23 Thread Gaohaifeng (A)
Hi Daniel & Paolo, Commit 9894dc0c "char: convert from GIOChannel to QIOChannel", about the below code segment: -static gboolean tcp_chr_read(GIOChannel *chan, GIOCondition cond, void *opaque) +static gboolean tcp_chr_read(QIOChannel *chan, GIOCondition cond, void *opaque) { CharDriverState

Re: [Qemu-devel] A question about this commit 9894dc0cdcc397ee5b26370bc53da6d360a363c2

2016-08-25 Thread Gaohaifeng (A)
On Tue, Aug 23, 2016 at 08:57:44AM +, Gaohaifeng (A) wrote: > Hi Daniel & Paolo, > > > > Commit 9894dc0c "char: convert from GIOChannel to QIOChannel", about > > > > the below code segment: > > > > -static gboolean tcp_chr_read(G

[Qemu-devel] [Bug 1703506] [NEW] SMT not supported by QEMU on AMD Ryzen CPU

2017-07-10 Thread A S
Public bug reported: HyperThreading/SMT is supported by AMD Ryzen CPUs but results in this message when setting the topology to threads=2: qemu-system-x86_64: AMD CPU doesn't support hyperthreading. Please configure -smp options properly. Checking in a Windows 10 guest reveals that SMT i
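A commonly suggested workaround for this class of report (hypothetical flags, not taken from the bug thread itself): AMD guests advertise SMT through the topoext CPUID bit rather than Intel-style hyperthreading, so it must be enabled explicitly on the guest CPU model:

```shell
# Expose SMT to the guest on AMD hosts via topoext; a threads=2
# topology then no longer triggers the hyperthreading warning.
qemu-system-x86_64 -enable-kvm -cpu host,topoext=on \
  -smp 8,sockets=1,cores=4,threads=2 \
  ...
```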
