Hi David,
On 17.03.24 09:37, Keqian Zhu via wrote:
>> For a vCPU being hotplugged, qemu_init_vcpu() is called. In this
>> function, we set the vCPU state to stopped, and then wait for the vCPU
>> thread to be created.
>>
>> As the vCPU state is stopped, it will inform us that it has been created
>> and then wait
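Not from the original thread: a minimal, self-contained pthread model of the handshake described above. The stopped/created flags, lock, and condition variable mirror the discussion but are placeholders, not QEMU's actual symbols.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

/* Placeholder model of the hotplug handshake discussed above. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;  /* stands in for the BQL */
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;   /* stands in for qemu_cpu_cond */
static bool created;   /* set by the vCPU thread once it is up */
static bool stopped;   /* set before creation, so the new vCPU waits instead of running */

static void *vcpu_thread(void *arg)
{
    pthread_mutex_lock(&lock);
    created = true;                    /* tell the creator we exist ... */
    pthread_cond_broadcast(&cond);
    while (stopped) {                  /* ... and then wait while stopped */
        pthread_cond_wait(&cond, &lock);
    }
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    pthread_mutex_lock(&lock);
    stopped = true;                    /* hotplugged vCPU starts in the stopped state */
    pthread_create(&tid, NULL, vcpu_thread, NULL);
    while (!created) {                 /* wait for the thread to report creation */
        pthread_cond_wait(&cond, &lock);
    }
    stopped = false;                   /* later, "resume": let the vCPU run */
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);

    pthread_join(tid, NULL);
    printf("vCPU thread created and resumed\n");
    return 0;
}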
Hi David,
Thanks for reviewing.
On 17.03.24 09:37, Keqian Zhu via wrote:
>> Both the main loop thread and the vCPU thread are allowed to call
>> pause_all_vcpus(), and in general resume_all_vcpus() is called after
>> it. Two issues live in pause_all_vcpus():
>
>In general, calling pause_all_vcpus() fro
While we waited on qemu_pause_cond, the BQL was unlocked, so the
vCPU state may have been changed by another thread; we must therefore
request the pause state on all vCPUs again (sketched below).
For example:
Both the main loop thread and the vCPU thread are allowed to call
pause_all_vcpus(), and in general resume_all_vcpus() is
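Not part of the original mail, only an illustrative sketch of the re-request loop being described; the helpers request_pause_on_all_vcpus() and all_vcpus_paused() are invented placeholders, and qemu_pause_cond/bql are used as they appear in the discussion, assuming QEMU's qemu/thread.h primitives.

/* Invented placeholder helpers standing in for "iterate all CPUs and ask
 * them to stop" and "check that every CPU has actually stopped". */
static void request_pause_on_all_vcpus(void);
static bool all_vcpus_paused(void);

static void pause_all_vcpus_sketch(void)
{
    while (!all_vcpus_paused()) {
        /* Ask every vCPU to stop; some may have been resumed since the
         * last iteration. */
        request_pause_on_all_vcpus();

        /* Waiting here drops the BQL, so another thread may change vCPU
         * state (e.g. call resume_all_vcpus()) before we wake up ... */
        qemu_cond_wait(&qemu_pause_cond, &bql);

        /* ... which is why the pause is re-requested on every loop
         * iteration rather than assumed to still be in effect. */
    }
}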
Hi Salil,
[...]
+void cpu_address_space_destroy(CPUState *cpu, int asidx)
+{
+    CPUAddressSpace *cpuas;
+
+    assert(cpu->cpu_ases);
+    assert(asidx >= 0 && asidx < cpu->num_ases);
+    /* KVM cannot currently support multiple address spaces. */
+    assert(asidx == 0 || !kvm_enabled());
+
Hi Stefan, this indeed helps, thank you.
Keqian
On Mon, 16 Jan 2023 at 03:20, zhukeqian via <qemu-devel@nongnu.org> wrote:
> And if the IO operation is blocked, will the vCPU thread be blocked when doing
> the deactivate?
Yes, blk_drain() is a synchronous function. It blocks until in-fl
I found that blk_drain() is invoked by virtio_blk_reset(), so only the second
question remains :).
From: zhukeqian <>
Sent: January 16, 2023 16:18
To: 'Michael S. Tsirkin' ; 'Stefan Hajnoczi'
; 'Peter Maydell'
Cc: qemu-devel@nongnu.org; Wubin (H) ; Chentao (Boby)
Hi all maintainers and community friends,
Recently I have been reviewing and learning the virtio and event loop implementation of
the latest QEMU, and now I have a question for help:
In general, the IO requests of virtio are popped in the iothread/main loop and may be
submitted to an "async IO
Engine" (io_uring/linu
>> > I notice this doesn't seem to have gone in yet -- whose tree is it
>> > going to go via?
>>
>> I'd guess ARM tree (due to almost sole user virt-arm).
>> (there are toy users like microvm and new loongarch)
>
>OK; applied to target-arm.next, thanks.
Thanks, Peter.
Keqian.
OK, I'll send v2 soon.
-Original Message-
From: Peter Maydell [mailto:peter.mayd...@linaro.org]
Sent: August 16, 2022 17:42
To: zhukeqian
Cc: qemu-devel@nongnu.org; qemu-...@nongnu.org; qemu-triv...@nongnu.org;
Philippe Mathieu-Daudé ; Eric Auger ;
Peter Xu ; Igor Mammedov ; Wanghaibin
(D)
Subject
Hi Peter,
Setting up an ARM virtual machine with the virt machine type and executing the QMP
command "query-acpi-ospm-status" can trigger this bug.
Thanks.
-Original Message-
From: Qemu-devel [mailto:qemu-devel-bounces+zhukeqian1=huawei@nongnu.org] on behalf of
Peter Maydell
Sent: August 16, 2022 17:30
To: zhukeqian
Cc:
Hi,
Yep. It is a known issue. Paolo will revert it.
Thanks.
Hello,
I synced the QEMU code today, and found that QEMU can't boot up the Windows guest.
This issue was caused by commit id 39205528; after reverting this patch, the Windows
guest can boot up.
qemu-system-x86_64: ../accel/kvm/kvm-all.c:690:
Thanks, drew. I'll be more careful in the future.
Keqian.
On Fri, Mar 12, 2021 at 11:39:49PM +0100, Igor Mammedov wrote:
> happens on current master,
>
> to reproduce start
> ./x86_64-softmmu/qemu-system-x86_64 -enable-kvm -m 1g -M pc -vnc localhost:0 \
> -snapshot -cdrom Fedora-Workstation
Thanks for your bug report. I have just gotten off work and will dig into it tomorrow.
thanks :)
Keqian
On 09/03/2021 15.05, Keqian Zhu wrote:
>
>
> On 2021/3/9 21:48, Thomas Huth wrote:
>> On 17/12/2020 02.49, Keqian Zhu wrote:
>>> The parameters start and size are transferred from QEMU memory
>>> emula
Hi Kirti,
On 2020/11/2 5:00, Alex Williamson wrote:
> From: Kirti Wankhede
>
> Added helper functions to get IOMMU info capability chain.
> Added function to get migration capability information from that
> capability chain for IOMMU container.
>
> Similar change was proposed earlier:
> https:/
Hi Kirti,
On 2020/11/2 5:01, Alex Williamson wrote:
> From: Kirti Wankhede
>
> With vIOMMU, IO virtual address range can get unmapped while in pre-copy
> phase of migration. In that case, unmap ioctl should return pages pinned
> in that range and QEMU should find its corresponding guest physical
On 2020/12/14 23:36, Peter Xu wrote:
> On Mon, Dec 14, 2020 at 10:14:11AM +0800, zhukeqian wrote:
>
> [...]
>
>>>>> Though indeed I must confess I don't know how it worked in general when
>>>>> host
>>>>> page size != target pag
On 2020/12/11 23:25, Peter Xu wrote:
> On Fri, Dec 11, 2020 at 09:13:10AM +0800, zhukeqian wrote:
>>
>> On 2020/12/10 22:50, Peter Xu wrote:
>>> On Thu, Dec 10, 2020 at 10:53:23AM +0800, zhukeqian wrote:
>>>>
>>>>
>>>> On 2020/12/10
On 2020/12/10 22:50, Peter Xu wrote:
> On Thu, Dec 10, 2020 at 10:53:23AM +0800, zhukeqian wrote:
>>
>>
>> On 2020/12/10 10:08, Peter Xu wrote:
>>> Keqian,
>>>
>>> On Thu, Dec 10, 2020 at 09:46:06AM +0800, zhukeqian wrote:
>>>> Hi,
&
On 2020/12/10 10:08, Peter Xu wrote:
> Keqian,
>
> On Thu, Dec 10, 2020 at 09:46:06AM +0800, zhukeqian wrote:
>> Hi,
>>
>> I see that if start or size is not PAGE aligned, it also clears areas
>> which are beyond the caller's expectation, so do we also need to c
Hi,
I see that if start or size is not PAGE aligned, it also clears areas
which are beyond the caller's expectation (sketched below), so do we also need to consider this?
Thanks,
Keqian
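Not part of the original mail: a small, self-contained arithmetic sketch of the concern above, assuming the usual round-down/round-up to page granularity; page_size, start, and size are placeholder values chosen only for illustration.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t page_size = 4096;                 /* placeholder page size (power of two) */
    uint64_t start = 4096 + 100, size = 8000;  /* deliberately unaligned range */

    /* Rounding the range to page granularity widens it on both ends ... */
    uint64_t first = start & ~(page_size - 1);                           /* rounds down */
    uint64_t last  = (start + size + page_size - 1) & ~(page_size - 1);  /* rounds up   */

    /* ... so clearing [first, last) also clears bits outside the caller's
     * [start, start + size) range, which is exactly the concern raised above. */
    printf("caller asked for [%" PRIu64 ", %" PRIu64 "), cleared [%" PRIu64 ", %" PRIu64 ")\n",
           start, start + size, first, last);
    return 0;
}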
On 2020/12/9 10:33, Zenghui Yu wrote:
> Hi Peter,
>
> Thanks for having a look at it.
>
> On 2020/12/8 23:16, Peter Xu wrote:
>> Hi,
Hi folks, kindly ping ...
This bugfix can save several MBs of memory; it is waiting for review, thanks.
Keqian.
On 2020/11/30 21:11, Keqian Zhu wrote:
> Keqian Zhu (2):
> ramlist: Make dirty bitmap blocks of ramlist resizable
> ramlist: Resize dirty bitmap blocks after remove ramblock
>
> softmmu/p
Hi Thomas,
On 2020/7/28 16:48, Thomas Huth wrote:
> On 27/07/2020 16.41, Peter Maydell wrote:
>> On Mon, 27 Jul 2020 at 14:03, Keqian Zhu wrote:
>>>
>>> Avoid covering the object refcount of qemu_irq, otherwise it may cause
>>> a memory leak.
>>>
>>> Signed-off-by: Keqian Zhu
>>> ---
>>> hw/core/irq
Hi Peter,
On 2020/7/27 22:41, Peter Maydell wrote:
> On Mon, 27 Jul 2020 at 14:03, Keqian Zhu wrote:
>>
>> Avoid covering the object refcount of qemu_irq, otherwise it may cause
>> a memory leak.
>>
>> Signed-off-by: Keqian Zhu
>> ---
>> hw/core/irq.c | 4 +++-
>> 1 file changed, 3 insertions(+), 1
Hi Qiang,
On 2020/7/27 22:37, Li Qiang wrote:
> Keqian Zhu 于2020年7月27日周一 下午9:03写道:
>>
>> Avoid covering the object refcount of qemu_irq, otherwise it may cause
>> a memory leak.
>
> Any reproducer?
>
In mainline QEMU, this function is only used in qtest. One of our internal
self-developed modules als
Hi Dave,
On 2020/7/3 22:20, Dr. David Alan Gilbert wrote:
> * Keqian Zhu (zhukeqi...@huawei.com) wrote:
>> real_dirty_pages becomes equal to the total RAM size after the dirty log sync
>> in ram_init_bitmaps; the reason is that the bitmap of the ramblock is
>> initialized to be all set, so the old path counts the
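Not the actual fix, only a hypothetical sketch of the reasoning quoted above; the names initial_sync_done and count_set_bits are invented, and the structure is illustrative rather than QEMU's ramblock code.

#include <stdbool.h>
#include <stdint.h>

/* Invented illustration, not QEMU code: the ramblock's dirty bitmap starts
 * out all-set, so naively counting every set bit during the very first sync
 * would make real_dirty_pages equal the whole RAM size. */
struct block {
    uint64_t *bitmap;            /* all bits set at ram_init_bitmaps time */
    uint64_t  nbits;
    bool      initial_sync_done; /* placeholder flag */
};

extern uint64_t count_set_bits(const uint64_t *bitmap, uint64_t nbits); /* placeholder */

static uint64_t sync_dirty(struct block *rb, uint64_t *real_dirty_pages)
{
    uint64_t dirty = count_set_bits(rb->bitmap, rb->nbits);

    if (!rb->initial_sync_done) {
        /* The first sync only reflects the artificial all-set initialization,
         * so skip the accumulation instead of counting all of RAM as dirty. */
        rb->initial_sync_done = true;
    } else {
        *real_dirty_pages += dirty;  /* only count genuinely dirtied pages */
    }
    return dirty;
}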
Please ignore this patch :-)
If we shut down the VM during migration, the migration thread may still
reference current_migration at this point.
On 2020/6/28 14:49, Keqian Zhu wrote:
> In migration_shutdown, global var current_migration is freed but not
> assigned to NULL, which may cause heap-use-after-free
Hi Dave,
On 2020/6/16 17:58, Dr. David Alan Gilbert wrote:
> * zhukeqian (zhukeqi...@huawei.com) wrote:
>> Hi Dave,
>>
>> On 2020/6/16 17:35, Dr. David Alan Gilbert wrote:
>>> * Keqian Zhu (zhukeqi...@huawei.com) wrote:
>>>> real_dirty_pages becomes eq
Hi Dave,
On 2020/6/16 17:35, Dr. David Alan Gilbert wrote:
> * Keqian Zhu (zhukeqi...@huawei.com) wrote:
>> real_dirty_pages becomes equal to the total RAM size after the dirty log sync
>> in ram_init_bitmaps; the reason is that the bitmap of the ramblock is
>> initialized to be all set, so the old path counts th
Hi Jay Zhou,
On 2020/6/15 19:50, Zhoujian (jay) wrote:
> Hi Keqian,
>
>> -----Original Message-----
>> From: zhukeqian
>> Sent: Monday, June 15, 2020 11:19 AM
>> To: qemu-devel@nongnu.org; qemu-...@nongnu.org; Paolo Bonzini
>> ; Zhoujian (jay)
>> Cc: J
Hi Paolo and Jian Zhou,
Do you have any suggestions on this patch?
Thanks,
Keqian
On 2020/6/1 12:02, Keqian Zhu wrote:
> The DIRTY_LOG_INITIALLY_ALL_SET feature is on the queue. This fixes the
> dirty rate calculation for this feature. After introducing this
> feature, real_dirty_pages is equal to tot
Hi Dr. David,
Sorry for the delayed reply; I just came back from holiday.
On 2020/4/30 22:12, Dr. David Alan Gilbert wrote:
> * Keqian Zhu (zhukeqi...@huawei.com) wrote:
>> At the tail stage of throttling, the Guest is very sensitive to
>> CPU percentage while the @cpu-throttle-increment is excessive
Hi Peter,
On 2020/4/17 19:09, Peter Maydell wrote:
> On Mon, 13 Apr 2020 at 10:18, Keqian Zhu wrote:
>>
>> Replace kvm_device_access with kvm_gicc_access to simplify
>> code.
>>
>> Signed-off-by: Keqian Zhu
>> ---
>> hw/intc/arm_gicv3_kvm.c | 5 ++---
>> 1 file changed, 2 insertions(+), 3 delet
Hi Eric,
On 2020/3/31 23:03, Eric Blake wrote:
> On 3/15/20 11:29 PM, Keqian Zhu wrote:
>> At the tail stage of throttling, the Guest is very sensitive to
>> CPU percentage while the @cpu-throttle-increment is excessive
>> usually at tail stage.
>>
>> If this parameter is true, we will compute the
Friendly ping...
Hi all,
Could you please review this patch? Thanks very much.
Thanks,
Keqian
On 2020/3/16 12:29, Keqian Zhu wrote:
> At the tail stage of throttling, the Guest is very sensitive to
> CPU percentage while the @cpu-throttle-increment is excessive
> usually at tail stage.
>
> If
Hi Nengyuan,
On 2020/3/18 15:22, Pan Nengyuan wrote:
> Correcting zhang hailiang's email.
>
> On 3/18/2020 3:16 PM, Pan Nengyuan wrote:
>> This fixes Coverity issue 94417686:
>> 1260        break;
>> CID 94417686: (MISSING_BREAK)
>> 1261. unterminated_case: The case for value
>> "MIGR
Hi Dr. David,
On 2020/3/13 2:07, Dr. David Alan Gilbert wrote:
> * Keqian Zhu (zhukeqi...@huawei.com) wrote:
>> Currently, if the bytes_dirty_period is more than the 50% of
>> bytes_xfer_period, we start or increase throttling.
>>
>> If we make this percentage higher, then we can tolerate higher
>
Hi, Eric
On 2020/2/21 22:14, Eric Blake wrote:
> On 2/20/20 8:57 PM, Keqian Zhu wrote:
>> Currently, if the bytes_dirty_period is more than the 50% of
>> bytes_xfer_period, we start or increase throttling.
>>
>> If we make this percentage higher, then we can tolerate higher
>> dirty rate during mi
On 2020/2/14 20:28, Dr. David Alan Gilbert wrote:
> * Keqian Zhu (zhukeqi...@huawei.com) wrote:
>> At the tail stage of throttle, VM is very sensitive to
>> CPU percentage. We just throttle 30% of remaining CPU
>> when throttle is more than 80 percentage.
>
> This is a bit unusual; all of the
On 2020/2/14 19:46, Eric Blake wrote:
> On 2/13/20 9:27 PM, Keqian Zhu wrote:
>> At the tail stage of throttle, VM is very sensitive to
>> CPU percentage. We just throttle 30% of remaining CPU
>> when throttle is more than 80 percentage.
>>
>> This doesn't conflict with cpu_throttle_increment.
>
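Not from the original thread: a tiny numeric sketch of the scheme quoted above ("throttle 30% of the remaining CPU once the throttle exceeds 80%"). The fixed 10% step standing in for @cpu-throttle-increment is an assumption for illustration only.

#include <stdio.h>

/* Placeholder model of the quoted idea: below 80% use a fixed increment,
 * above 80% take 30% of the *remaining* CPU share so steps shrink near 100%. */
static double next_throttle(double pct)
{
    if (pct > 80.0) {
        return pct + (100.0 - pct) * 0.30;
    }
    return pct + 10.0;   /* assumed fixed cpu-throttle-increment */
}

int main(void)
{
    double pct = 20.0;

    while (pct < 99.0) {
        double next = next_throttle(pct);
        printf("%.1f%% -> %.1f%%\n", pct, next);   /* e.g. 90.0% -> 93.0%, 95.1% -> 96.6% */
        pct = next;
    }
    return 0;
}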
Hi, Juan
On 2020/2/14 20:37, Juan Quintela wrote:
> Keqian Zhu wrote:
>> At the tail stage of throttle, VM is very sensitive to
>> CPU percentage. We just throttle 30% of remaining CPU
>> when throttle is more than 80 percentage.
>
> Why?
>
My original idea is that if we throttle a fixed percen
On 2020/2/4 17:14, Juan Quintela wrote:
> Keqian Zhu wrote:
>> qemu_savevm_nr_failover_devices() is originally designed to
>> get the number of failover devices, but it actually returns
>> the number of "unplug-pending" failover devices now. Moreover,
>> what drives migration state to wait-unpl
On 2020/2/5 22:40, Jens Freimann wrote:
> On Tue, Feb 04, 2020 at 01:08:41PM +0800, Keqian Zhu wrote:
>> qemu_savevm_nr_failover_devices() is originally designed to
>> get the number of failover devices, but it actually returns
>> the number of "unplug-pending" failover devices now. Moreover,
>>
On 2020/1/17 19:07, Peter Maydell wrote:
> On Fri, 17 Jan 2020 at 06:41, Keqian Zhu wrote:
>>
>> From: zhukeqian
>>
>> There is an extra indent in the ACPI GED plug cb. And we can use the
>> existing helper function to trigger the hotplug handler plug.
>>
>>