Re: [RFC PATCH 0/5] hw/arm/virt: Introduce cpu topology support

2021-03-10 Thread fangying
-----Original Message----- From: Andrew Jones [mailto:drjo...@redhat.com] Sent: March 10, 2021 17:21 To: fangying Subject: Re: [RFC PATCH 0/5] hw/arm/virt: Introduce cpu topology support > Hi Ying Fang, > > Do you plan to repost this soon? It'd be great if it got into 6.0. > > Thanks, >

Re: [PATCH v2 0/4] arm64: Add the cpufreq device to show cpufreq info to guest

2020-02-13 Thread fangying
On 2020/2/13 16:18, Andrew Jones wrote: On Thu, Feb 13, 2020 at 03:36:26PM +0800, Ying Fang wrote: On the ARM64 platform, cpu frequency is retrieved via ACPI CPPC. A virtual cpufreq device based on ACPI CPPC is created to present cpu frequency info to the guest. The default frequency is set to h
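If the virtual cpufreq device works as described, the guest kernel should expose the usual ACPI CPPC attributes under sysfs. Below is a minimal sketch, not part of the patch, that reads two of those attributes; it assumes the guest kernel populates the standard acpi_cppc sysfs files (lowest_freq, nominal_freq), which depends on what the virtual device advertises.

```c
/* Minimal sketch: read CPPC-derived frequency info inside the guest.
 * Assumes the guest kernel exposes the standard acpi_cppc sysfs
 * attributes (lowest_freq / nominal_freq, reported in MHz). */
#include <stdio.h>

static long read_sysfs_long(const char *path)
{
    long val = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &val) != 1) {
            val = -1;
        }
        fclose(f);
    }
    return val;
}

int main(void)
{
    long lowest  = read_sysfs_long("/sys/devices/system/cpu/cpu0/acpi_cppc/lowest_freq");
    long nominal = read_sysfs_long("/sys/devices/system/cpu/cpu0/acpi_cppc/nominal_freq");

    printf("cpu0 lowest_freq:  %ld MHz\n", lowest);
    printf("cpu0 nominal_freq: %ld MHz\n", nominal);
    return 0;
}
```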

Re: [Qemu-devel] [PATCH] qmp: Fix memory leak in migrate_params_test_apply

2019-09-04 Thread fangying
On 2019/9/4 0:46, Dr. David Alan Gilbert wrote: * Ying Fang (fangyi...@huawei.com) wrote: Address Sanitizer shows a memory leak in migrate_params_test_apply migration/migration.c:1253 and the stack is as below: Direct leak of 45 byte(s) in 9 object(s) allocated from: #0 0xbd7fc1db in
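The leak pattern reported here is the classic GLib one: an allocated string field is overwritten with a fresh g_strdup() without freeing the old value. A minimal sketch of that pattern and its remedy follows; it is an illustration only, not the actual migration code, and the Params/tls_creds names are hypothetical stand-ins for the real MigrationParameters fields.

```c
/* Minimal sketch of the leak pattern, not the actual QEMU code:
 * overwriting an allocated gchar* field without freeing it first
 * leaks the old string; freeing before the new g_strdup() fixes it. */
#include <glib.h>

typedef struct {
    gchar *tls_creds;   /* hypothetical field mirroring MigrationParameters */
} Params;

static void params_set_tls_creds(Params *p, const char *value)
{
    g_free(p->tls_creds);           /* without this, the old copy leaks */
    p->tls_creds = g_strdup(value);
}

int main(void)
{
    Params p = { .tls_creds = g_strdup("initial") };
    params_set_tls_creds(&p, "updated");
    g_free(p.tls_creds);
    return 0;
}
```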

Re: [Qemu-devel] Discussion: vnc: memory leak in zrle_compress_data

2019-09-01 Thread fangying
:zhouyi...@huawei.com>> Subject: Re: [Qemu-devel] Discussion: vnc: memory leak in zrle_compress_data Date: 2019-08-31 23:48:10 fangying mailto:fangyi...@huawei.com>> wrote on Sat, Aug 31, 2019 at 8:45 AM: Hi Gerd, A memory leak is observed in zrle_compress_data when we are doing some AddressSa

[Qemu-devel] Discussion: vnc: memory leak in zrle_compress_data

2019-08-30 Thread fangying
Hi Gerd, A memory leak is observed in zrle_compress_data when we are doing some AddressSanitizer tests. The leak stack is as below: = ==47887==ERROR: LeakSanitizer: detected memory leaks Direct leak of 29760 byte(s) in 5 object(s)
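zrle_compress_data compresses through a zlib deflate stream that is initialized lazily and kept in per-connection state; if that stream is never finalized when the connection goes away, everything deflateInit2() allocated is leaked. The standalone illustration below only shows the required deflateInit2()/deflateEnd() pairing; it is not the VNC code, and the real fix would live in the VNC connection teardown path.

```c
/* Standalone illustration (not the VNC code itself): every successful
 * deflateInit2() must be paired with deflateEnd(), otherwise the buffers
 * zlib allocates internally are leaked, which is exactly the kind of
 * leak LeakSanitizer reports for a long-lived, never-finalized stream. */
#include <string.h>
#include <zlib.h>

int main(void)
{
    z_stream zs;
    memset(&zs, 0, sizeof(zs));

    if (deflateInit2(&zs, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                     MAX_WBITS, MAX_MEM_LEVEL, Z_DEFAULT_STRATEGY) != Z_OK) {
        return 1;
    }

    /* ... the stream would be used for compression here ... */

    deflateEnd(&zs);   /* omit this and the allocation above is leaked */
    return 0;
}
```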

Re: [Qemu-devel] [PATCH] qmp: Fix memory leak in migrate_params_test_apply

2019-08-27 Thread fangying
On 2019/8/27 16:38, Li Qiang wrote: Ying Fang mailto:fangyi...@huawei.com>> wrote on Tue, Aug 27, 2019 at 4:06 PM: Address Sanitizer shows a memory leak in migrate_params_test_apply migration/migration.c:1253 and the stack is as below: Direct leak of 45 byte(s) in 9 object(s) allocated from

[Qemu-devel] [Bug 1840865] Re: qemu crashes when doing iotest on virtio-9p filesystem

2019-08-21 Thread fangying
** Description changed: Qemu crashes when doing avocado-vt test on virtio-9p filesystem. - This bug can be reproduced running https://github.com/autotest/tp-qemu/blob/master/qemu/tests/9p.py. + This bug can be reproduced running https://github.com/autotest/tp-qemu/blob/master/qemu/tests/9p.py

[Qemu-devel] [Bug 1840865] [NEW] qemu crashes when doing iotest on virtio-9p filesystem

2019-08-20 Thread fangying
Public bug reported: Qemu crashes when doing avocado-vt test on virtio-9p filesystem. This bug can be reproduced running https://github.com/autotest/tp-qemu/blob/master/qemu/tests/9p.py. The crash stack goes like: Program terminated with signal SIGSEGV, Segmentation fault. #0 v9fs_mark_fids_unr

Re: [Qemu-devel] Discussion: redundant process during hotplug and missed process during unplug

2019-07-20 Thread fangying
Hi Michael, On Fri, Jul 19, 2019 at 02:35:14AM +, Zhangbo (Oscar) wrote: Hi All: I have 2 questions about (un)hotplug on pcie-root-port. First Question (hotplug failure because of redundant PCI_EXP_LNKSTA_DLLLA bit set): during VM boot, qemu sets PCI_EXP_LNKSTA_DLLLA according to thi

Re: [Qemu-devel] [RFC] Questions on the I/O performance of emulated host cdrom device

2019-01-21 Thread fangying
come up with a proper solution right now. But I think there may be two approaches. One is a way to check the drive status that is equivalent to the ioctl CDROM_DRIVE_STATUS but much faster. The other is to let qemu catch the drive status via an event-triggered mechanism. Regards. From: fangying Sent: January 22, 2019 11:27 To

Re: [Qemu-devel] [RFC] Questions on the I/O performance of emulated host cdrom device

2019-01-21 Thread fangying
TUS, CDSL_CURRENT); >>>>> return ret == CDS_DISC_OK; >>>>> } >>>>> A flamegraph svg file (cdrom.svg) is attached in this email to show >>>>> the code timing profile we've tested. >>>>> >>>>
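For reference, a self-contained reconstruction of the drive-status check quoted above (whose opening is truncated in the archive) might look like the following. CDROM_DRIVE_STATUS, CDSL_CURRENT and CDS_DISC_OK come from linux/cdrom.h; the /dev/sr0 device path is just an example.

```c
/* Hedged reconstruction of the status check quoted above: it asks the
 * drive whether a disc is present and readable via the cdrom ioctl. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/cdrom.h>

static int cdrom_disc_ok(int fd)
{
    int ret = ioctl(fd, CDROM_DRIVE_STATUS, CDSL_CURRENT);
    return ret == CDS_DISC_OK;
}

int main(void)
{
    int fd = open("/dev/sr0", O_RDONLY | O_NONBLOCK);  /* example device path */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    printf("disc ok: %d\n", cdrom_disc_ok(fd));
    close(fd);
    return 0;
}
```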

[Qemu-devel] [PATCH v4] vhost: Don't abort when vhost-user connection is lost during migration

2017-12-01 Thread fangying
QEMU will abort when the vhost-user process is restarted during migration and vhost_log_global_start/stop is called. The reason is clear: vhost_dev_set_log returns -1 because the network connection is lost. To handle this situation, let's cancel migration by setting the migrate state to failure and repor
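The shape of the change described here, checking the return value of the dirty-log start path and failing the migration instead of calling abort(), could be sketched roughly as below. This is not the submitted patch: it is a rough sketch against QEMU's vhost internals, and migration_fail_on_vhost_error() is a hypothetical helper standing in for whatever mechanism actually moves the migration state machine to a failed state.

```c
/* Rough sketch against QEMU's vhost internals, not the submitted patch.
 * Instead of abort()ing when the vhost-user backend is gone, report the
 * error and fail the migration.  migration_fail_on_vhost_error() is a
 * hypothetical helper, not a real QEMU function. */
static void vhost_log_global_start(MemoryListener *listener)
{
    struct vhost_dev *dev = container_of(listener, struct vhost_dev,
                                         memory_listener);
    int r = vhost_dev_set_log(dev, true);   /* < 0 when the connection is lost */

    if (r < 0) {
        error_report("vhost: failed to start dirty log, cancelling migration");
        migration_fail_on_vhost_error();    /* hypothetical: fail, don't abort() */
    }
}
```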

[Qemu-devel] [PATCH v3] vhost: Cancel migration when vhost-user process restarted during migration

2017-11-27 Thread fangying
QEMU will abort when the vhost-user process is restarted during migration and vhost_log_global_start/stop is called. The reason is clear: vhost_dev_set_log returns -1 because the network connection is temporarily lost. To handle this situation, let's cancel migration here. Signed-off-by: Ying Fang --

[Qemu-devel] [PATCH v2] vhost: Cancel migration when vhost-user process restarted during migration

2017-11-15 Thread fangying
From: Ying Fang QEMU will abort when the vhost-user process is restarted during migration and vhost_log_global_start/stop is called. The reason is clear: vhost_dev_set_log returns -1 because the network connection is temporarily lost. To handle this situation, let's cancel migration and report it to

[Qemu-devel] [PATCH] vhost: Cancel migration when vhost-user process restarted during migration

2017-11-14 Thread fangying
From: Ying Fang QEMU will abort when the vhost-user process is restarted during migration and vhost_log_global_start/stop is called. The reason is clear: vhost_dev_set_log returns -1 because the network connection is temporarily lost. To handle this situation, let's cancel migration and report it to

[Qemu-devel] QEMU abort when network service is restarted during live migration with vhost-user as the network backend

2017-11-13 Thread fangying
Hi all, We have a vm running migration with vhost-user as the network backend, and we notice that qemu will abort if openvswitch is restarted while MEMORY_LISTENER_CALL_GLOBAL(log_global_start, Forward) is called. The reason is clear: vhost_dev_set_log returns -1 because the network connection is

Re: [Qemu-devel] kvm bug in __rmap_clear_dirty during live migration

2017-03-13 Thread fangying
Hi Huang Kai, After weeks of intensive testing, we think the problem is solved and this issue can be closed. On 2017/2/27 15:38, Huang, Kai wrote: On 2/25/2017 2:44 PM, Herongguang (Stephen) wrote: On 2017/2/24 23:14, Paolo Bonzini wrote: On 24/02/2017 16:10, Chris Friesen wrote: On