Public bug reported:
Reading certain files on a 9p mounted FS produces this error message:
qemu-system-x86_64: VirtFS reply type 117 needs 12 bytes, buffer has 12,
less than minimum
After this error message is generated, further accesses to the 9p FS
hang whatever tries to access them. The Arch
** Description changed:
Here's a C program to trigger this behavior. I don't think it matters
what the contents of "file" or its size is.
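The program itself isn't preserved in this snippet; below is a minimal sketch
of the kind of reproducer described (open a file on the 9p mount and read
from it) - the filename and buffer size are illustrative, not the original's:
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    char buf[4096];
    int fd = open("file", O_RDONLY);    /* any file on the 9p mount */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* On an affected host this read makes qemu print the "VirtFS reply
     * type 117" error, after which the mount hangs. */
    if (read(fd, buf, sizeof(buf)) < 0) {
        perror("read");
    }
    close(fd);
    return 0;
}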
Thanks, it works.
https://bugs.launchpad.net/bugs/1877688
Title:
9p virtfs device reports error when opening certain files
Status in QEMU:
In Progress
Bug description:
Reading
Public bug reported:
I attached multiple qcow2 disks to a Windows guest. In the Windows disk
manager all disks are listed perfectly, with their data and their real space;
I can even explore all the files in Explorer, all cool.
In third-party disk managers (all of them), I only have the C:\ HDD, which
act
WDM claims it to be an MBR
Linux 5.6.14
QEMU 5.0.0-6
`nobody 19023 109 21.1 7151512 3462300 ? Sl 13:18 0:32 /usr/bin/qemu-
system-x86_64 -name guest=win10machine,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-win10machine
/master-key.aes -machine
>>> >>> -watchdog-action poweroff -device
>>> >>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
>>> >>>
>>> >>Which QEMU version is this? Can you try with e1000 NICs instead of virtio?
>>> >>
>>> >This QEMU version is
qemu-1.5.1
libvirt-1.1.0
guest os: win2k8 R2 x64bit or sles11sp2 x64 or win2k3 32bit
Steps shown as below:
1. use virsh to start a vm with a virtio NIC
2. after booting, use virsh detach-device to hot-unplug the virtio NIC
3. use virsh reboot/reset to restart the vm
4. when vm is rebooting
>> >>
>> >This QEMU version is 1.0.0, but I also tested QEMU 1.5.2; the same problem
>> >exists, including the performance degradation and readonly GFNs' flooding.
>> >I tried with e1000 NICs instead of virtio, including the performance
>> >
545825d4cda03ea292b7788b3401b99860efe8bc)
libvirt: 1.1.0
guest os: win2k8 R2 x64bit or sles11sp2 x64 or win2k3 32bit
You can reproduce this problem by the following steps (virsh sketch below):
1. start a VM with virtio NIC(s)
2. hot-unplug a virtio NIC from the VM
3. reboot the VM, then a segmentation fault happens during the starting period
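A hedged sketch of the virsh commands behind those steps (the domain name and
device-XML filename are illustrative):
virsh start vm1                          # start the VM with a virtio NIC defined
virsh detach-device vm1 virtio-nic.xml   # hot-unplug the virtio NIC
virsh reboot vm1                         # segfault hits while the VM restarts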
host: SLES11SP2 (kernel version: 3.0.58)
>> qemu: 1.5.1, upstream-qemu (commit
>> 545825d4cda03ea292b7788b3401b99860efe8bc)
>> libvirt: 1.1.0
>> guest os: win2k8 R2 x64bit or sles11sp2 x64 or win2k3 32bit
>>
>> You can reproduce this problem by following steps:
>
> > I compiled the QEMU source downloaded from qemu.git
> > (http://git.qemu.org/git/qemu.git) on 4-9-2013; errors are reported as
> > below,
> >
> >
> >
> > hw/virtio/dataplane/vring.c: In function 'vring_enable_notification':
> >
> > hw/virtio/dataplane/vring.c:72: warning: implicit declaration of
> > The log of "make V=1" is identical to that of "make", shown below,
> >
> > hw/virtio/dataplane/vring.c: In function 'vring_enable_notification':
> > hw/virtio/dataplane/vring.c:72: warning: implicit declaration of function
> > 'vring_avail_event'
> > hw/virtio/dataplane/vring.c:72: warni
> > On Mon, Apr 08, 2013 at 12:27:06PM +0000, Zhanghaoyu (A) wrote:
> >> On Sun, Apr 07, 2013 at 04:58:07AM +0000, Zhanghaoyu (A) wrote:
> >>>>>> I start a kvm VM with vnc(using the zrle protocol) connect, sometimes
> >>>>>> qemu progra
I started 10 VMs (Windows XP), then ran the geekbench tool on them; after
about 2 days, one of them was reset.
I found the reset operation is done by
int kvm_cpu_exec(CPUArchState *env)
{
    ...
    switch (run->exit_reason) {
    ...
    case KVM_EXIT_SHUTDOWN:
        DPRINTF("shutdown\n");
>> On Thu, Apr 18, 2013 at 12:00:49PM +0000, Zhanghaoyu (A) wrote:
>>> I start 10 VMs(windows xp), then running geekbench tool on them,
>>> about 2 days, one of them was reset, I found the reset operation is
>>> done by int kvm_cpu_exec(CPUArchState *env) {
Based on the above suspicion, I want to find the two adjacent versions of
>> kvm-kmod which trigger this problem or not (e.g. 2.6.39, 3.0-rc1),
>> and analyze the differences between these two versions, or apply the
>> patches between these two versions by bisection, finally finding the key
this problem or not (e.g. 2.6.39, 3.0-rc1),
>> >> and analyze the differences between these two versions, or apply the
>> >> patches between these two versions by bisection, finally finding the
>> >> key patches.
>> >>
>> >> Any better ideas?
>> >
with this problem.
>> >> >> Based on the above suspicion, I want to find the two adjacent
>> >> >> versions of kvm-kmod which trigger this problem or not (e.g. 2.6.39,
>> >> >> 3.0-rc1), and analyze the differences between these two versions, or
>> >> >> apply t
>> >> >> >> If EPT is disabled, this problem is gone.
>> >> >> >>
>> >> >> >> I suspect that kvm hypervisor has business with this problem.
>> >> >> >> Based on above suspect, I want to find the two adjacent versions of
>> >> >
>Hi,
>
>On 05.08.2013 11:09, Zhanghaoyu (A) wrote:
>> When I build the upstream, I encounter a problem: I compile and
>> install the upstream (commit: e769ece3b129698d2b09811a6f6d304e4eaa8c29)
>> on a sles11sp2 environment via the below command: cp
>> /boot/config-
>> The QEMU command line (/var/log/libvirt/qemu/[domain name].log),
>> LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none
>> /usr/local/bin/qemu-system-x86_64 -name ATS1 -S -M pc-0.12 -cpu qemu32
>> -enable-kvm -m 12288 -smp 4,sockets=4,cores=1,threads=1 -uuid
>> 0505ec91-38
starting.
>
>Thanks,
>Zhang Haoyu
>
>>--
>> Gleb.
Should we focus on the first bad
commit(612819c3c6e67bac8fceaa7cc402f13b1b63f7e4) and the surprising GFNs'
flooding?
I applied below patch to __direct_map(),
@@ -2223,6 +2223,8 @@ static int __d
exists, including the performance degradation and readonly GFNs' flooding.
>> >I tried with e1000 NICs instead of virtio, including the performance
>> >degradation and readonly GFNs' flooding, the QEMU version is 1.5.2.
>> >No matter e1000 NICs or virtio NICs, t
I'm running a VM (RHEL-5.5) on a KVM hypervisor (linux-3.8 + QEMU-1.4.1), and
direct-assigned an intel 82576 VF to the VM. When TX/RX-ing packets from the VM
to the other host via the iperf tool, the top tool result on the VM shows that
the %si is too high, approximately 95% ~ 100%, but from the view of the host,
the VM's
table.
Signed-off-by: Zhang Haoyu
Signed-off-by: Huang Weidong
Signed-off-by: Qin Chuanyu
---
hw/i386/kvm/pci-assign.c | 3 +++
1 files changed, 3 insertions(+)
--- a/hw/i386/kvm/pci-assign.c 2013-05-04 15:53:18.0 +0800
+++ b/hw/i386/kvm/pci-assign.c 2013-05-04 15:50:46.0 +
>> I'm running a VM (RHEL-5.5) on KVM hypervisor (linux-3.8 + QEMU-1.4.1),
>> and direct-assigned an intel 82576 VF to the VM. When TX/RX packets from the
>> VM to the other host via iperf tool, the top tool result on the VM shows the
>> %si is too high, approximately 95% ~ 100%, but fr
msi-x entry "control" section,
>> needless to update VM irq routing table.
>>
>> Signed-off-by: Zhang Haoyu
>> Signed-off-by: Huang Weidong
>> Signed-off-by: Qin Chuanyu
>> ---
>> hw/i386/kvm/pci-assign.c | 3 +++
>> 1 files changed, 3 in
e is so low.
>> >> Masking/unmasking msi-x vector only set msi-x entry "control" section,
>> >> needless to update VM irq routing table.
>> >>
>> >> Signed-off-by: Zhang Haoyu
>> >> Signed-off-by: Huang Weidong
>> >>
Hi, all
The VM will get stuck for a while (about 6s for a VM with 20GB memory) when
attaching a pass-through PCI card to a non-pass-through VM for the first
time.
The reason is that the host has to build the whole VT-d GPA->HPA DMAR
page-table, which needs a lot of time, and during this t
>> What if you detach and re-attach?
>> Is it fast then?
>> If yes this means the issue is COW breaking that occurs with
>> get_user_pages, not translation as such.
>> Try hugepages with prealloc - does it help?
>
>I agree it's either COW breaking or (similarly) locking pages that the guest
>hasn
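For reference, the "hugepages with prealloc" suggestion above means backing
guest RAM with preallocated hugepages, so every page is touched (and any COW
broken) once at QEMU startup rather than at first device attach; a hedged
example using QEMU's legacy options (the path and the rest of the command
line are illustrative):
qemu-system-x86_64 -m 20480 -mem-path /dev/hugepages -mem-prealloc ...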
>> Or the new shared flag - IIRC shared VMAs don't do COW either.
>
>Only if the problem isn't locking and zeroing of untouched pages (also, it is
>not upstream is it?).
>
>Can you make a profile with perf?
>
"-rt mlock=on" option is not set, perf top -p
>> > > > On 26/11/2013 13:40, Zhanghaoyu (A) wrote:
>> > > > > When guest set irq smp_affinity, VMEXIT occurs, then the vcpu
>> > > > > thread will IOCTL return to QEMU from hypervisor, then vcpu
>> > > > > thread ask the
> > Avoid starting a new migration task while the previous one still
> exists.
>
> Can you explain how to reproduce the problem?
>
When a network disconnection between source and destination happened, the
migration thread got stuck at the below stack,
#0 0x7f07e96c8288 in wri
>>>> Avoid starting a new migration task while the previous one still
>>> exists.
>>>
>>> Can you explain how to reproduce the problem?
>>>
>> When a network disconnection between source and destination happened,
>> the migration thread st
>Hi all,
>
>Does QEMU have a storage migration tool, like the io-mirroring inside
>VMware? io-mirroring means that, for all the IOs, they are sent to both source
>and destination at the same time.
drive_mirror may be your choice.
Thanks,
Zhang Haoyu
>
>Thanks!
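For reference, drive_mirror is an HMP monitor command; a hedged example of
invoking it (device name, destination path and flags are illustrative - check
"help drive_mirror" in your QEMU version for the exact syntax):
(qemu) drive_mirror -f virtio0 /mnt/dest.qcow2 qcow2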
Avoid a bogus COMPLETED->CANCELLED transition.
There is a window between setting the COMPLETED state and the migration
thread exiting, during which a COMPLETED->CANCELLED transition is
problematic.
Signed-off-by: Zeng Junliang
Signed-off-by: Zhang Haoyu
---
Introduce MIG_STATE_CANCELLING state to avoid starting a new migration task
while the previous one still exists.
Signed-off-by: Zeng Junliang
Signed-off-by: Zhang Haoyu
---
migration.c | 26 --
1 files changed, 16 insertions(+), 10 deletions(-)
diff --git a
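The pattern QEMU eventually settled on for races like this is to make each
state transition an atomic compare-and-swap that only succeeds if the state
still holds the expected old value; a minimal self-contained sketch of the
idea (the MIG_STATE_* names follow this patch; the helper is illustrative,
not the actual migration.c code):
#include <stdatomic.h>
#include <stdio.h>
enum { MIG_STATE_ACTIVE, MIG_STATE_COMPLETED, MIG_STATE_CANCELLING };
/* Transition only if the state still holds old_state; if another thread
 * already moved it (e.g. to COMPLETED), the exchange fails and the bogus
 * COMPLETED->CANCELLING transition never happens. */
static int migrate_set_state(_Atomic int *state, int old_state, int new_state)
{
    return atomic_compare_exchange_strong(state, &old_state, new_state);
}
int main(void)
{
    _Atomic int state = MIG_STATE_COMPLETED;    /* migration already finished */
    if (!migrate_set_state(&state, MIG_STATE_ACTIVE, MIG_STATE_CANCELLING)) {
        printf("cancel ignored: migration is not ACTIVE\n");
    }
    return 0;
}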
>
>Buggy and tightly coupled with the live migration code, making it hard to
>modify either area independently.
Thanks a lot for explaining.
Till now, we still use the old block-migration code in our virtualization
solution.
Could you detail the bugs that the old block-migration code has?
Thank
Hi, all
What's the difference between the Linux guest kernel parameter idle=
settings, especially in performance?
Taking performance into account, which one is best?
In my opinion, if the number of all VMs' vcpus is far more than that of pcpus,
e.g. a SPECVirt test, idle=halt is better for the server's total
wait for the RCU grace period, and during this period this
vcpu cannot provide service to the VM,
so those interrupts delivered to this vcpu cannot be handled in time, and the
apps running on this vcpu cannot be serviced either.
It's unacceptable in some real-time scenarios, e.g. telecom.
So, I want to cre
>> > I don't think a workqueue is even needed. You just need to use
>> > call_rcu to free "old" after releasing kvm->irq_lock.
>> >
>> > What do you think?
>>
>> It should be rate limited somehow, since it is guest-triggerable.
With synchronize_rcu(), you have the additional guarantee that any
>>> > parallel accesses to the old routing table have completed. Since
>>> > we also trigger the irq from rcu context, you know that after
>>> > synchronize_rcu() you won't get any interrupts to
the case with RCU? (See my answer above: "the
>>>> vcpus already see the new routing table after the rcu_assign_pointer
>>>> that is in kvm_irq_routing_update").
>>> With synchronize_rcu(), you have the additional guarantee that any
>>> paral
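The call_rcu() approach suggested above frees the old table from an RCU
callback once all pre-existing readers are done, instead of blocking in
synchronize_rcu(); a hedged kernel-style sketch (struct and function names
are illustrative, not the actual KVM code):
#include <linux/rcupdate.h>
#include <linux/slab.h>
struct irq_routing_table {
    /* ...routing entries... */
    struct rcu_head rcu;    /* linkage for the deferred free */
};
/* Runs only after a grace period, so no reader still sees the old table. */
static void free_routing_table_rcu(struct rcu_head *head)
{
    struct irq_routing_table *old =
        container_of(head, struct irq_routing_table, rcu);
    kfree(old);
}
/* Publish the new table, then defer freeing the old one; the updater never
 * blocks waiting for readers. */
static void update_routing(struct irq_routing_table __rcu **slot,
                           struct irq_routing_table *new)
{
    struct irq_routing_table *old = rcu_dereference_protected(*slot, true);
    rcu_assign_pointer(*slot, new);
    call_rcu(&old->rcu, free_routing_table_rcu);
}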
respectively.
Signed-off-by: Frank Yang
---
migration-rdma.c | 58 +---
1 file changed, 39 insertions(+), 19 deletions(-)
diff --git a/migration-rdma.c b/migration-rdma.c
index f94f3b4..33e8a92 100644
--- a/migration-rdma.c
+++ b/migration-rdma.c
Hi, all
Could someone give a detailed statement of the buggy implementation of the
traditional storage-migration method that migrates the storage in an
iterative way?
Thanks,
Zhang Haoyu
>>>> hi Michal,
>>>>
>>>> I used libvirt-1.0.3, ran below comma
y transferred
> zero page was memset to zero and thus allocated. Since commit
> 211ea740 we check for zeroness of a target page before we memset
> it to zero. Additionally we memmap target memory so it is essentially
> zero initialized (except for e.g. option roms and bios which are lo
Hi, Juan
I read the below words in the report of ,
We were going to remove the old block-migration code
Then people fixed it
Good: it works now
Bad: We have to maintain both
It uses the same port as migration
You need to migrate all/none of block devices
The old block-migration code mentioned above is t
Avoid starting a new migration task while the previous one still exists.
Signed-off-by: Zeng Junliang
---
migration.c | 34 ++
1 files changed, 22 insertions(+), 12 deletions(-)
diff --git a/migration.c b/migration.c
index 2b1ab20..ab4c439 100644
--- a
On Jul 1, 2015, at 6:13 PM, Programmingkid wrote:
> Fix real cdrom access in Mac OS X so it can be used in QEMU.
> It simply removes the r from a device file's name. This
> allows for a real cdrom to be accessible to the guest.
> It has been successfully tested with a Windows X
I start a kvm VM with a vnc (using the zrle protocol) connection; sometimes the
qemu program crashes during the starting period, receiving signal SIGABRT.
Trying about 20 times, this crash may be reproduced.
I guess the cause is memory corruption or a double free.
The backtrace is shown below:
0x7f32eda3dd95
On Sun, Apr 07, 2013 at 04:58:07AM +0000, Zhanghaoyu (A) wrote:
> >>> I start a kvm VM with vnc(using the zrle protocol) connect, sometimes
> >>> qemu program crashed during starting period, received signal SIGABRT.
> >>> Trying about 20 times, this crash
I compiled the QEMU source downloaded from qemu.git
(http://git.qemu.org/git/qemu.git) on 4-9-2013; errors are reported as below,
hw/virtio/dataplane/vring.c: In function 'vring_enable_notification':
hw/virtio/dataplane/vring.c:72: warning: implicit declaration of function
'vring_avail_event'
hw/virtio
Signed-off-by: Zhang Haoyu
> Signed-off-by: Zhang Huanzhong
> ---
> hw/virtio/virtio-pci.c |8 +++-
> kvm-all.c |5 +
> 2 files changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c index
> b07
hi all,
I met a similar problem to these, while performing live migration or
save-restore tests on the kvm platform (qemu: 1.4.0, host: suse11sp2,
guest: suse11sp2), running a tele-communication software suite in the guest,
https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
http://comments.
> Hi,
>
> On 11.07.2013 11:36, Zhanghaoyu (A) wrote:
> > I met similar problem to these, while performing live migration or
> save-restore test on the kvm platform (qemu:1.4.0, host:suse11sp2,
> guest:suse11sp2), running tele-communication software suite in guest,
>
---
hw/virtio/virtio-pci.c |8 +++-
kvm-all.c |5 +
2 files changed, 8 insertions(+), 5 deletions(-)
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index b070b64..e4829a3 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
++-
2 files changed, 26 insertions(+), 1 deletions(-)
diff --git a/include/migration/migration.h b/include/migration/migration.h
index f0640e0..1a56248 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -23,6 +23,8 @@
#include "qapi-types.h"
Hi, all
If we do live migration after an emulated NIC's MAC has been changed, a RARP
with the wrong MAC address will be broadcast via qemu_announce_self on the
destination, so a long network disconnection will probably happen.
I want to do the below work to resolve this problem:
1. change NICConf's MAC as soon as the emulated N
Zhang Haoyu
>
>I think announce needs to poke at the current MAC instead of the default one
>in NICConf.
>We can make it respect link down state while we are at it.
>
NICConf structures are incorporated in different emulated NICs' structures,
e.g., VirtIONet, E1000State_st, RTL8139Stat
>
>> >I think announce needs to poke at the current MAC instead of the default
>> >one in NICConf.
>> >We can make it respect link down state while we are at it.
>> >
>> NICConf structures are incorporated in different emulated NIC's
>> structur
to corresponding
>> >> >> NICConf in NIC's migration load handler
>> >> >>
>> >> >> Any better ideas?
>> >> >>
>> >> >> Thanks,
>> >> >> Zhang Haoyu
>> >> >
>>
c.
>> >> >> >>
>> >> >> >> BTW, in native scenario, reboot will revert the changed MAC
>> >> >> >> to original one, too.
>> >> >> >>
>> >> >> >> >> 2. sync NIC's (more
works to resolve this problem, 1. change NICConf's
>> MAC as soon as emulated NIC's MAC changed in guest 2. sync NIC's (more
>> precisely, queue) MAC to corresponding NICConf in NIC's migration load
>> handler
>>
>> Any better ideas?
>
>As Michael
Forgot to add: Reproduced the above behavior in both 1.5.1 and 1.6.0.
Adding -no-hpet to the command line removed both problems (full disclosure:
this fix wasn't tested in 1.5.1, but I have no reason to believe the behavior
would be different.)
Apparently this bug's still alive and kicking.
There's an obvious clock skew problem on Windows 7; in the Date & Time
dialog, the clock jumps through seconds visibly too fast.
I also found a case where HPET bugs are causing a real problem: Terraria
(dedicated server) seems to
>
>>>>> +}
>>>>> +keycode = s->data[s->rptr];
>>>>> +if (++s->rptr == sizeof(s->data)) {
>>>>> + s->rptr = 0;
>>>>> }
>>>>> +s->count--;
>>>>> +
>>>
Bug links: https://gitlab.com/qemu-project/qemu/-/issues/1787
When we tested QEMU with asan, the VM crashed.
How to reproduce the bug:
1. Start the VM with qxl and spice.
2. Attach to the VM with vnc and spice.
3. Leave it idle for more than three days.
4. Operate on the spice client and it may possibly reproduce
@Peter Xu @Fabiano Rosas
Kindly ping on this.
On 2023/6/27 9:11, chenyuhui (A) wrote:
>
> On 2023/6/26 21:16, chenyuhui (A) wrote:
>>
>> On 2023/6/21 22:22, Fabiano Rosas wrote:
>>> Jianguo Zhang via writes:
>>>
>>>> From: Yuhui Chen
>>>
On 2023/7/26 0:53, Peter Xu wrote:
> On Tue, Jul 25, 2023 at 04:43:28PM +0800, chenyuhui (A) wrote:
>> @Peter Xu @Fabiano Rosas
>> Kindly ping on this.
>
> Ah I see what's missing - please copy maintainer (Juan) for any migration
> patches, especially multifd on
.json | 23 +++
6 files changed, 184 insertions(+)
diff --git a/hmp-commands.hx b/hmp-commands.hx
index a2c3ffc218..6fa0c5227a 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -74,6 +74,48 @@ VM initialization using configuration data provided on the
command line
and via the QMP monitor
Hi QEMU dev
Virt Manager is able to configure a QEMU VM with more CPU sockets than the
physical host has.
For example, in the below VM, when I request 16 vCPU cores, by default it
takes them as 16 sockets with 1 core each. The host itself has only 2 sockets.
1. How does QEMU allow this and how
Hi Peter
Thanks. Yes, I believe (Sockets,Cores,Threads) = (1,16,1) should give the best
performance, as the VM does not need to access the memory of another NUMA node.
So, is it a bug that Virt Manager uses more sockets by default, when I choose
"Copy host CPU Configuration"?
regards
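For reference, the topology is set with QEMU's -smp option; a hedged example
of requesting 16 vCPUs as a single socket, as discussed above (the rest of the
command line is omitted):
qemu-system-x86_64 -smp 16,sockets=1,cores=16,threads=1 ...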
u.org; kra...@redhat.com; ningqiang (A); soul chen
Subject: Re: [PATCH] usb/dev-wacom: fix OOB write in usb_mouse_poll()
Hi Philippe,
On Mon, Feb 13, 2023 at 7:26 PM Philippe Mathieu-Daudé
wrote:
>
> Hi Mauro,
>
> On 13/2/23 18:41, Mauro Matteo Cascella wrote:
> > The guest can con
Hello, there,
I have a few newbie QEMU questions. I found that mmu_idx in aarch64-softmmu
takes the values 8, 10 and 12.
I need some help understanding what they are for.
I cannot find which macros are for mmu_idx 8, 10 and 12 at
target/arm/cpu.h<https://git.qemu.org/?p=qemu.git;a=blob;f=target/
Thanks a lot Peter for the clarification. It is very helpful.
My naive understanding is that each MMU has only 1 TLB; why do we need an array
of CPUTLBDescFast structures? How do these different CPUTLBDescFast data
structures correlate with a hardware TLB?
typedef struct CPUTLB {
Hello,
After installing QEMU on Windows, I don't see any shortcut to run it. There is
only a link to 'uninstall'. Launching the exe files doesn't do anything. Can
you please explain how to launch this application?
Thanks,
J.
Please see the attached image.
Understood, thanks. Will stick to GUI app.
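For context, QEMU on Windows is a command-line tool rather than a GUI app; a
hedged example of launching it from a Command Prompt (the install path, memory
size and ISO name are illustrative):
"C:\Program Files\qemu\qemu-system-x86_64.exe" -m 2048 -cdrom mydisk.iso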
On Mon, 6 Feb 2023 at 11:19, Bin Meng wrote:
> On Mon, Feb 6, 2023 at 5:55 PM Philippe Mathieu-Daudé
> wrote:
> >
> > Cc'ing Yonggang & Stefan.
> >
> > On 5/2/23 13:01, Jacob A wrote:
> > > Hello,
>
past the end of addr_config.
>
> Fixes: ad40be27084536 ("target/riscv: Support start kernel directly by KVM")
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1050
> Signed-off-by: Alistair Francis
Reviewed-by: Mingwang Li
Mingwang
> ---
> hw/intc/sifive_plic.c
the following error information:
qemu-system-riscv64: Failed initializing vhost-user memory map, consider using
-object memory-backend-file share=on
qemu-system-riscv64: vhost_set_mem_table failed: Interrupted system call (4)
qemu-system-riscv64: unable to start vhost net: 4: falling back on userspac
"memory_region_init_ram_from_file" function
and assigns the value of fd to mr->ram_block->fd. If QEMU uses the default
memory to initialize the system, QEMU cannot obtain the fd in the
"vhost_user_mem_section_filter"
function when initializing vhost-user. As a result, an error is reported in
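As the error message above suggests, vhost-user needs guest RAM backed by a
file descriptor that QEMU can share with the backend process; a hedged example
of the usual invocation (object id, size and path are illustrative; depending
on the machine type the backend is attached via -numa or -machine
memory-backend=):
qemu-system-riscv64 \
    -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/shm/qemu-mem,share=on \
    -numa node,memdev=mem0 ...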
Hi,
The recurrence probability is extremely low. I have not reproduced this in the
latest version.
However, after reviewing the latest code, we find that the issue also exists
there.
This is my understanding of the latest code; if there is a mistake in my
understanding, please tell me.
bdrv_flush_all
Thanks Peter.
How does QEMU deal with different page sizes? Does a 2GB hugepage have a single
corresponding TLB entry? Or is it partitioned into 512 4K pages with 512 TLB
entries?
Does a CPUTLBDescFast always hold TLB entries for the same single process? Is
it always flushed/restored on context
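(A quick check of the arithmetic in the question above: 2 MB / 4 KB = 512,
while 2 GB / 4 KB = 524288, so "512 4K pages" corresponds to a 2MB hugepage
rather than a 2GB one.)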
zhangjianguo (A) would like to recall the message "Qemu asan test reported
heap-use-after-free error when using qxl and spice".
On 2023/6/26 21:16, chenyuhui (A) wrote:
>
> On 2023/6/21 22:22, Fabiano Rosas wrote:
>> Jianguo Zhang via writes:
>>
>>> From: Yuhui Chen
>>>
>>> There is a coredump while trying to destroy a mutex when
>>> p->running is false but p
Hello dear all,
I just started to get interested in operating systems, booting, and
virtual machines, and found this project, which is very nice. I have
a question. For instance, take the Hanoi's towers image of yesterday.
I want to boot my laptop with it. I guess that if I do it correctly
Hello Qemu-devel,
Recently I'm trying to study VM memory allocation in a qemu-kvm environment.
I found something interesting here:
I have created an 8GB (8388608 k) memory guest using CentOS 7, but when I use
dmesg to show the init memory,
it was 9437184 k, around 9216 MB. I would like to know th
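(For what it's worth, the gap between those two numbers is exactly 1 GiB:
9437184 k - 8388608 k = 1048576 k = 1024 MB; whether that corresponds to
firmware, ACPI or hotplug reservations for this guest isn't established in
this snippet.)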
Hi,
I'm testing GPU passthrough on KVM with an NVIDIA GPU card (M60, 10de:13f2)
based on vfio. And I noticed the function vfio_nvidia_bar0_mirror_quirk in
qemu/hw/vfio/pci-quirks.c; could someone please explain the aim of this code
in detail to me? I don't think it is necessary if we don't need r
wuzongyong (A) would like to recall the message "Why we need redirect the
access to bar0 through the PCI config space access function".
Hi Daniel & Paolo,
Commit 9894dc0c "char: convert from GIOChannel to QIOChannel", about
the below code segment:
-static gboolean tcp_chr_read(GIOChannel *chan, GIOCondition cond, void *opaque)
+static gboolean tcp_chr_read(QIOChannel *chan, GIOCondition cond, void *opaque)
{
CharDriverState
On Tue, Aug 23, 2016 at 08:57:44AM +0000, Gaohaifeng (A) wrote:
> Hi Daniel & Paolo,
> >
> > Commit 9894dc0c "char: convert from GIOChannel to QIOChannel", about
> >
> > the below code segment:
> >
> > -static gboolean tcp_chr_read(G
Public bug reported:
HyperThreading/SMT is supported by AMD Ryzen CPUs but results in this
message when setting the topology to threads=2:
qemu-system-x86_64: AMD CPU doesn't support hyperthreading. Please
configure -smp options properly.
Checking in a Windows 10 guest reveals that SMT i
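For context, newer QEMU versions expose SMT topology to AMD guests via the
topoext CPUID feature; a hedged example topology (not taken from the truncated
report above; the rest of the command line is omitted):
qemu-system-x86_64 -cpu host,topoext=on -smp 8,sockets=1,cores=4,threads=2 ...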