-----Original Message-----
From: Andrew Jones [mailto:drjo...@redhat.com]
Sent: 10 March 2021 17:21
To: fangying
Subject: Re: [RFC PATCH 0/5] hw/arm/virt: Introduce cpu topology support
> Hi Ying Fang,
>
> Do you plan to repost this soon? It'd be great if it got into 6.0.
>
> Thanks,
>
On 2020/2/13 16:18, Andrew Jones wrote:
On Thu, Feb 13, 2020 at 03:36:26PM +0800, Ying Fang wrote:
On the ARM64 platform, the cpu frequency is retrieved via ACPI CPPC.
A virtual cpufreq device based on ACPI CPPC is created to
present cpu frequency info to the guest.
The default frequency is set to h
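(As a rough illustration of how a guest would consume the frequency exposed
this way: Linux's cppc_cpufreq driver publishes it through the usual cpufreq
sysfs nodes. The sketch below just reads one of those nodes; the node name
and path are assumptions, not something stated in the mail above.)

    /* Hedged sketch: read the frequency a guest sees through the cpufreq
     * sysfs interface (populated by cppc_cpufreq on ACPI CPPC systems).
     * The exact node name is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long khz = 0;
        FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq", "r");

        if (!f || fscanf(f, "%lu", &khz) != 1) {
            perror("scaling_cur_freq");
            if (f) {
                fclose(f);
            }
            return 1;
        }
        fclose(f);
        printf("cpu0 frequency: %lu MHz\n", khz / 1000);
        return 0;
    }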
On 2019/9/4 0:46, Dr. David Alan Gilbert wrote:
* Ying Fang (fangyi...@huawei.com) wrote:
AddressSanitizer shows a memory leak in migrate_params_test_apply at
migration/migration.c:1253 and the stack is as below:
Direct leak of 45 byte(s) in 9 object(s) allocated from:
#0 0xbd7fc1db in
:zhouyi...@huawei.com>>
Subject: Re: [Qemu-devel] Discussion: vnc: memory leak in zrle_compress_data
Date: 2019-08-31 23:48:10
fangying mailto:fangyi...@huawei.com>> wrote on Saturday, 31 August 2019
at 8:45 AM:
Hi Gerd,
A memory leak is observed in zrle_compress_data when we are doing some
AddressSa
Hi Gerd,
A memory leak is observed in zrle_compress_data when we are doing some
AddressSanitizer tests. The leak stack is as below:
=================================================================
==47887==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 29760 byte(s) in 5 object(s)
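(For context on how this class of leak is usually closed: the zrle encoder
keeps a zlib stream, and the fix pattern is to pair every deflateInit() with
a deflateEnd() when the client state is torn down. The sketch below only
illustrates that pattern; the struct and helper names are assumptions, not
the actual QEMU change.)

    /* Hedged sketch of the usual zlib-stream cleanup for this kind of
     * leak: release the stream's internal buffers on teardown.  The
     * struct layout and helper name are illustrative assumptions. */
    #include <string.h>
    #include <zlib.h>

    struct zrle_state {
        z_stream stream;       /* stream used by the zrle encoder */
        int stream_inited;     /* set once deflateInit() has succeeded */
    };

    static void zrle_state_clear(struct zrle_state *zs)
    {
        if (zs->stream_inited) {
            deflateEnd(&zs->stream);   /* frees zlib's internal buffers */
            memset(&zs->stream, 0, sizeof(zs->stream));
            zs->stream_inited = 0;
        }
    }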
On 2019/8/27 16:38, Li Qiang wrote:
Ying Fang mailto:fangyi...@huawei.com>> wrote on Tuesday, 27 August 2019
at 4:06 PM:
AddressSanitizer shows a memory leak in migrate_params_test_apply at
migration/migration.c:1253 and the stack is as below:
Direct leak of 45 byte(s) in 9 object(s) allocated from
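(For readers unfamiliar with this kind of report: the leaked bytes are
typically strings duplicated while applying migration parameters and never
freed when the field is overwritten. The sketch below shows the generic fix
pattern, freeing the old copy before storing the new one; the field name is
an assumption, not the actual patch.)

    /* Hedged sketch of the usual fix for a leaked string parameter: free
     * the previously stored copy before overwriting it with a new
     * duplicate.  "tls_creds" stands in for whichever field actually
     * leaks. */
    #include <glib.h>

    struct params {
        char *tls_creds;
    };

    static void params_set_tls_creds(struct params *dest, const char *value)
    {
        g_free(dest->tls_creds);            /* release the old copy, if any */
        dest->tls_creds = g_strdup(value);  /* store an owned duplicate */
    }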
** Description changed:
Qemu crashes when doing avocado-vt test on virtio-9p filesystem.
- This bug can be reproduced running
https://github.com/autotest/tp-qemu/blob/master/qemu/tests/9p.py.
+ This bug can be reproduced running
https://github.com/autotest/tp-qemu/blob/master/qemu/tests/9p.py
Public bug reported:
Qemu crashes when running an avocado-vt test on a virtio-9p filesystem.
This bug can be reproduced running
https://github.com/autotest/tp-qemu/blob/master/qemu/tests/9p.py.
The crash stack is as follows:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 v9fs_mark_fids_unr
Hi Michael,
On Fri, Jul 19, 2019 at 02:35:14AM +, Zhangbo (Oscar) wrote:
Hi All:
I have 2 questions about (un)hotplug on pcie-root-port.
First Question (hotplug failure because of redundant PCI_EXP_LNKSTA_DLLLA bit
set):
during VM boot, qemu sets PCI_EXP_LNKSTA_DLLLA according to thi
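(To make the question concrete: the fragment below sketches how the Data
Link Layer Link Active bit in the port's Link Status register could be
cleared with QEMU's standard config-space helpers when nothing sits behind
the port. The condition and where this would run are assumptions, not the
code under discussion.)

    /* Hedged sketch: clear a stale Data Link Layer Link Active bit on an
     * empty pcie-root-port so a later hotplug is not confused by it.
     * The "port_is_empty" condition and call site are assumptions. */
    #include "qemu/osdep.h"
    #include "hw/pci/pci.h"
    #include "hw/pci/pcie.h"

    static void clear_stale_dllla(PCIDevice *dev, bool port_is_empty)
    {
        uint8_t *exp_cap = dev->config + dev->exp.exp_cap;

        if (port_is_empty) {
            pci_word_test_and_clear_mask(exp_cap + PCI_EXP_LNKSTA,
                                         PCI_EXP_LNKSTA_DLLLA);
        }
    }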
come up with a proper solution right now. But I think there
may be two approaches.
One is to check the drive status in a way equivalent to the CDROM_DRIVE_STATUS
ioctl but much faster. The other is to let qemu catch drive status changes
via an event-triggered mechanism.
Regards.
From: fangying
Sent: 22 January 2019 11:27
To
TUS, CDSL_CURRENT);
>>>>> return ret == CDS_DISC_OK;
>>>>> }
>>>>> A flamegraph svg file (cdrom.svg) is attached to this email to show
>>>>> the code timing profile from our tests.
>>>>>
>>>>
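(For reference, the quoted check boils down to the standard Linux
CDROM_DRIVE_STATUS ioctl, the slow call the thread is trying to avoid. A
minimal standalone version is sketched below; the device path and error
handling are illustrative, not from the original mail.)

    /* Hedged sketch: check whether a disc is present using the Linux
     * CDROM_DRIVE_STATUS ioctl.  The device path is illustrative only. */
    #include <fcntl.h>
    #include <linux/cdrom.h>
    #include <stdbool.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    static bool cdrom_disc_ok(const char *path)
    {
        int fd = open(path, O_RDONLY | O_NONBLOCK);
        if (fd < 0) {
            return false;
        }
        int ret = ioctl(fd, CDROM_DRIVE_STATUS, CDSL_CURRENT);
        close(fd);
        return ret == CDS_DISC_OK;
    }

    /* e.g. cdrom_disc_ok("/dev/sr0") */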
QEMU will abort when the vhost-user process is restarted during migration
and vhost_log_global_start/stop is called. The reason is clear:
vhost_dev_set_log returns -1 because the network connection is lost.
To handle this situation, let's cancel migration by setting the migrate
state to failure and repor
QEMU will abort when the vhost-user process is restarted during migration
and vhost_log_global_start/stop is called. The reason is clear:
vhost_dev_set_log returns -1 because the network connection is temporarily
lost. To handle this situation, let's cancel migration here.
Signed-off-by: Ying Fang
--
From: Ying Fang
QEMU will abort when the vhost-user process is restarted during migration and
vhost_log_global_start/stop is called. The reason is clear:
vhost_dev_set_log returns -1 because the network connection is temporarily lost.
To handle this situation, let's cancel migration and report it to
Hi all,
We have a VM running migration with vhost-user as the network backend, and we
notice that qemu will abort when openvswitch is restarted
and MEMORY_LISTENER_CALL_GLOBAL(log_global_start, Forward) is called. The
reason is clear: vhost_dev_set_log returns -1 because
the network connection is
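(To illustrate the proposed handling: the sketch below shows the shape of a
log-start listener that fails the migration instead of aborting when
vhost_dev_set_log() reports an error. The cancellation helper and the
surrounding structure are assumptions, not the real hw/virtio/vhost.c code.)

    /* Hedged sketch: instead of abort()ing when dirty logging cannot be
     * enabled on the vhost backend, cancel the migration so the guest
     * keeps running on the source.  Everything except the general
     * MemoryListener shape is an assumption for illustration. */
    #include "qemu/osdep.h"
    #include "hw/virtio/vhost.h"
    #include "qapi/qapi-commands-migration.h"

    static void vhost_log_global_start(MemoryListener *listener)
    {
        struct vhost_dev *dev = container_of(listener, struct vhost_dev,
                                             memory_listener);

        if (vhost_dev_set_log(dev, true) < 0) {
            /* Backend (e.g. a restarted openvswitch) is gone: fail the
             * migration rather than killing the whole VM.  The helper
             * name is an assumption. */
            qmp_migrate_cancel(NULL);
        }
    }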
Hi, Huang Kai
After weeks of intensive testing, we think the problem is solved and
this issue can be closed.
On 2017/2/27 15:38, Huang, Kai wrote:
On 2/25/2017 2:44 PM, Herongguang (Stephen) wrote:
On 2017/2/24 23:14, Paolo Bonzini wrote:
On 24/02/2017 16:10, Chris Friesen wrote:
On