SMAP specifically as a
feature that can control speculation [3].
I don't see an equivalent read-access control on ARM. It has PXN for
execute. Read access can probably also be controlled? But I think for
the non-CoCo case we should favor solutions that are less dependent on
hardware-specific
el.
What are your thoughts on a flag for KVM_CREATE_GUEST_MEMFD that only
removes from the host kernel's direct map, but leaves everything mapped
in userspace?
Derek
for Userspace-ASI? Based on
Sean's earlier reply to James it sounds like the vision of guest_memfd
aligns with ASI's goals.
Derek
ucing the TCB. I'd be interested to hear others' thoughts on pKVM vs
memfd_secret or general ASI.
Derek
Hi, Chen and Lei
Using Lei's patch is OK with me.
Please help add "Signed-off-by: Derek Su " when merging
it.
Thank you. :)
Regards
Derek
Zhang, Chen wrote on Tue, Sep 22, 2020 at 1:37 PM:
> So, Derek, will you send a new version of the patch?
>
>
>
> Thanks
>
> Zhang Ch
Hi, Lei
Got it. Thanks.
Regards,
Derek
Rao, Lei wrote on Tue, Sep 22, 2020 at 1:04 PM:
> Hi, Derek and Chen
>
> ram_bulk_stage is false by default before Hailiang's patch.
> For COLO, it does not seem to be used, so I think there is no need to
> reset it to true.
>
> Thanks,
Hi, Chen
Sure.
BTW, I just went through Lei's patch.
ram_bulk_stage might need to be reset to true after stopping the COLO
service, as in my patch.
What's your opinion?
Thanks.
Best regards,
Derek
Zhang, Chen wrote on Tue, Sep 22, 2020 at 11:41 AM:
> Hi Derek and Lei,
>
>
>
> It looks
Hello, all
Ping...
Regards,
Derek Su
Derek Su wrote on Thu, Sep 10, 2020 at 6:47 PM:
> On the secondary side, colo_flush_ram_cache() calls
> migration_bitmap_find_dirty() to find the dirty pages and
> flush them to the host. But ram_state's ram_bulk_stage flag is always
> enabled in s
Hello, Chen
Zhang, Chen wrote on Tue, Sep 15, 2020 at 8:09 AM:
> Hi Derek,
>
>
>
> It looks like qemu vl.c, migration/migration.c, etc. use “QEMU_CLOCK_HOST” too.
>
> It will also be affected by host NTP. Do you mean we should change all the
> QEMU_CLOCK_HOST to QEMU_CLOCK_REALTIME?
>
No, I
these
vm changes.
For the functions in net/colo.c and net/colo-compare.c that use timer_mod(),
using QEMU_CLOCK_HOST is dangerous if users change the host clock: the
timer might not fire on time as expected. The original timer_mod() using
QEMU_CLOCK_VIRTUAL seems fine currently.
Thanks.
Regards,
Derek
Zhang
Zhang, Chen wrote on Mon, Sep 14, 2020 at 4:06 AM:
>
>
>
>
> > -----Original Message-----
>
> > From: Zhang, Chen
>
> > Sent: Monday, September 14, 2020 4:02 AM
>
> > To: 'Derek Su' ; qemu-devel@nongnu.org
>
> > Cc: lizhij...@cn.fujitsu.com; ja
Hi, Chen
Got it, thank you :)
Regards,
Derek
Zhang, Chen wrote on Mon, Sep 14, 2020 at 4:02 AM:
>
>
>
>
> > -----Original Message-----
>
> > From: Derek Su
>
> > Sent: Saturday, September 12, 2020 3:05 AM
>
> > To: qemu-devel@nongnu.org
>
> > C
Fix data type conversion of compare_timeout. The incorrect
conversion results in a random compare_timeout value and
unexpected stalls in packet comparison.
Signed-off-by: Derek Su
---
net/colo-compare.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/colo-compare.c
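The conversion bug described in the commit message above can be sketched in a few lines. This is a hypothetical stand-in, not the actual colo-compare code: the field and function names are invented for illustration, but the mechanism (an implicit narrowing conversion silently discarding the high bits of a 64-bit timeout) is the general class of bug being fixed:

```c
#include <stdint.h>

/* Buggy pattern: a 64-bit timeout value squeezed through a 32-bit type.
 * Only the low 32 bits survive, so the effective timeout looks "random". */
static uint32_t narrow_timeout(int64_t timeout)
{
    return (uint32_t)timeout; /* implicit truncation of the high bits */
}

/* One possible fix: keep the full 64-bit width end to end, and only
 * sanity-check the range instead of narrowing. */
static int64_t safe_timeout(int64_t timeout)
{
    return timeout < 0 ? 0 : timeout;
}
```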
Record packet creation time with QEMU_CLOCK_REALTIME instead of
QEMU_CLOCK_HOST. Otherwise, the difference between `now` and a packet's
`creation_ms` can unexpectedly become negative after the host clock
changes, resulting in a wrong comparison.
Signed-off-by: Derek Su
---
net/colo
Hello,
The fixes are for bugs we found in colo-compare during our testing and
applications.
Please help review them, thanks a lot.
Regards,
Derek Su
Derek Su (2):
colo-compare: Fix incorrect data type conversion
colo-compare: Record packet creation time by QEMU_CLOCK_REALTIME
net/colo
ed to 10 ms on average.
Please help to review and give comments, thanks a lot!
Derek Su (1):
COLO: only flush dirty ram pages from colo cache
migration/colo.c | 6 +-
migration/ram.c | 10 ++
migration/ram.h | 3 +++
3 files changed, 18 insertions(+), 1 deletion(-)
--
2.25.1
stage on the secondary side is disabled during preparation of the COLO
incoming process to avoid flushing all of the dirty
ram pages.
Signed-off-by: Derek Su
---
migration/colo.c | 6 +-
migration/ram.c | 10 ++
migration/ram.h | 3 +++
3 files changed, 18 insertions(+), 1 deletion(-)
diff --
Hi,
I also tested some emulated NIC devices and virtio network devices (in
the attachment).
The VNC client's screen cannot be recovered with any of the virtio
network devices or the emulated e1000e NIC.
Thanks.
Regards,
Derek
** Attachment added: "截圖 2020-09-09 上午10.39.09.png"
t by myself.
BTW, it works well after killing SVM.
Here is my QEMU networking device
```
-device virtio-net-pci,id=e0,netdev=hn0 \
-netdev
tap,id=hn0,br=br0,vhost=off,helper=/usr/local/libexec/qemu-bridge-helper \
```
Thanks.
Regards,
Derek
} }
5. kill PVM
6. On SVM, issue:
```
{'execute': 'nbd-server-stop'}
{'execute': 'x-colo-lost-heartbeat'}
{'execute': 'object-del', 'arguments':{ 'id': 'f2' } }
{'execute': 'object-del', '
Hi, Lukas
It is caused by the advanced watchdog (AWD) feature instead of COLO itself.
I will check whether it is a misuse on my side, thanks.
Best regards,
Derek
Lukas Straub <1894...@bugs.launchpad.net> 於 2020年9月8日 週二 下午8:30寫道:
> On Tue, 08 Sep 2020 10:25:52 -
> Launchpad Bug T
ger. (I need to restart the VNC client myself.)
BTW, it works well after killing SVM.
+ Here is my QEMU networking device
+ ```
+ -device virtio-net-pci,id=e0,netdev=hn0 \
+ -netdev
tap,id=hn0,br=br0,vhost=off,helper=/usr/local/libexec/qemu-bridge-helper \
+ ```
+
Thanks.
Regards,
Derek
ver. (I've confirmed the VNC/RDP client can
+ reconnect automatically.)
+
+ But in my test, the VNC client's screen hangs and can no longer be
+ recovered. (I need to restart the VNC client myself.)
BTW, it works well after killing SVM.
Thanks.
Regards,
Derek
y myself.)
BTW, it works well after killing SVM.
Thanks.
Regards,
Derek
** Affects: qemu
Importance: Undecided
Status: New
--
You received this bug notification because you are a member of qemu-
devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/1894818
Title:
On Sat, Aug 15, 2020 at 9:42 AM Zhanghailiang
wrote:
>
> > -Original Message-
> > From: Derek Su [mailto:jwsu1...@gmail.com]
> > Sent: Thursday, August 13, 2020 6:28 PM
> > To: Lukas Straub
> > Cc: Derek Su ; qemu-devel@nongnu.org; Zhanghailiang
> >
On Fri, Jul 31, 2020 at 3:52 PM Lukas Straub wrote:
>
> On Sun, 21 Jun 2020 10:10:03 +0800
> Derek Su wrote:
>
> > This series is to reduce the guest's downtime during colo checkpoint
> > by migrating as many dirty ram pages as possible before the colo checkpoint.
of things.
>
> - Can you try cache=none option in virtiofsd. That will bypass page
> cache in guest. It also gets rid of latencies related to
> file_remove_privs() as of now.
>
> - Also with direct=0, are we really driving iodepth of 64? With direct=0
> it is cached I
-pool-size=64 largely improves the random 4 KB read performance,
but doesn't increase the kvm-exit count too much.
In addition, the fio avg. clat of random 4K writes is 960 us for
thread-pool-size=64 and 7700 us for thread-pool-size=1.
Regards,
Derek
Stefan Hajnoczi wrote on Tue, Jul 28, 2020 at 9:49 PM:
>
Hello,
Ping...
Does anyone have comments about this patch?
To reduce the downtime during checkpoints, the patch tries to migrate
as many memory pages as possible just before entering the COLO state.
Thanks.
Regards,
Derek
On 2020/6/21 10:10 AM, Derek Su wrote:
To reduce the guest's downtime d
Oops! Sorry, I didn't notice this patch before.
Thanks.
Derek
Philippe Mathieu-Daudé wrote on Wed, Jun 24, 2020 at 6:12 PM:
> On 6/24/20 12:00 PM, Derek Su wrote:
> > The err is freed in check_report_connect_error() conditionally,
> > calling error_free() directly may lead to a double-fre
The err is freed conditionally in check_report_connect_error(), so
calling error_free() directly may lead to a double-free bug.
Signed-off-by: Derek Su
---
chardev/char-socket.c | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/chardev/char-socket.c b/chardev/char-socket.c
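The double-free hazard described in the commit message above can be sketched with a simplified stand-in (this is not QEMU's real Error API from qapi/error.h, and the names are illustrative): when a callee may have freed an object, the caller must not free it again, and nulling the caller's pointer at free time turns any later free into a harmless no-op:

```c
#include <stdlib.h>

/* Simplified stand-in for an error object (not QEMU's Error type). */
typedef struct Error {
    const char *msg;
} Error;

/* Free the error and clear the caller's pointer, so a second call (or a
 * later unconditional free elsewhere) cannot double-free the same memory. */
static void error_free_safe(Error **errp)
{
    if (errp && *errp) {
        free(*errp);
        *errp = NULL;
    }
}
```

The alternative fix, taken in the patch itself, is to free only on the path where ownership has not already been transferred to the conditional-freeing callee.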
he result shows the total primary VM downtime is decreased by ~40%.
Please help review it; suggestions are welcome.
Thanks.
Derek Su (1):
migration/colo.c: migrate dirty ram pages before colo checkpoint
migration/colo.c | 79 ++
migration/mi
kpoint.
Signed-off-by: Derek Su
---
migration/colo.c | 79 ++
migration/migration.c | 20 +++
migration/trace-events | 2 ++
monitor/hmp-cmds.c | 8 +
qapi/migration.json| 18 --
5 files changed, 125 insertions(+), 2 del
On 2020/5/22 4:53 AM, Vladimir Sementsov-Ogievskiy wrote:
21.05.2020 21:19, John Snow wrote:
On 5/21/20 5:56 AM, Derek Su wrote:
Hi,
The cluster_size obtained from backup_calculate_cluster_size(),
MAX(BACKUP_CLUSTER_SIZE_DEFAULT, bdi.cluster_size), is 64K regardless
of the target image's cl
```
arget,
                                       Error **errp)
{
    ...
    ret = bdrv_get_info(target, &bdi);
    ...
    return (bdi.cluster_size == 0 ?
            BACKUP_CLUSTER_SIZE_DEFAULT : cluster_size);
}
```
Thanks.
Regards,
Derek
Hi, Berto
Excuse me, I'd like to test v5, but I failed to apply the series to the
master branch. Which commit can I use?
Thanks.
Regards,
Derek
On 2020/5/6 1:38 AM, Alberto Garcia wrote:
Hi,
here's the new version of the patches to add subcluster allocation
support to qcow2.
Pleas
issue, and we have an internal patch now. Is it OK to send the
internal patch for review?
Thanks.
Regards,
Derek
Thanks
Zhang Chen
Regards,
Lukas Straub
Thanks
Zhang Chen
Signed-off-by: Lukas Straub
---
net/colo-compare.c | 35 +--
net/colo-compare.h | 1
0x55cb478a699d in qemu_thread_start (args=0x55cb498035d0) at
util/qemu-thread-posix.c:519
#15 0x7f6e912376db in start_thread (arg=0x7f6da1ade700) at
pthread_create.c:463
#16 0x7f6e90f6088f in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:95
(gdb)
```
COLO works well
Hello,
This work is promising and interesting.
I'd like to try this new feature.
Could you please publish a branch, since the patches cannot be applied to
the current master?
Thanks.
Regards,
Derek
On 2020/3/18 2:15 AM, Alberto Garcia wrote:
Hi,
here's the new version of the patc
The patch is to fix the "pkt" memory leak in packet_enqueue().
The allocated "pkt" needs to be freed if the colo compare
primary or secondary queue is too big.
Replace the error_report of full queue with a trace event.
Signed-off-by: Derek Su
---
net
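The leak pattern fixed by the patch above can be sketched standalone. This is a hypothetical miniature, not the actual colo-compare code (the real packet_enqueue() and its queues live in net/colo-compare.c and net/colo.c); the point it illustrates is that when the enqueue path refuses a packet because the queue is full, that path must also release the already-allocated packet, or every dropped packet leaks:

```c
#include <stdlib.h>

#define MAX_QUEUE_SIZE 4 /* illustrative bound, not QEMU's value */

typedef struct Packet {
    char data[64];
} Packet;

typedef struct Queue {
    Packet *items[MAX_QUEUE_SIZE];
    int len;
} Queue;

/* Takes ownership of pkt. On a full queue the packet is freed here
 * (the fix) and -1 is returned; the caller must not touch pkt after
 * a failed enqueue. */
static int packet_enqueue(Queue *q, Packet *pkt)
{
    if (q->len >= MAX_QUEUE_SIZE) {
        free(pkt); /* without this free, each dropped packet leaked */
        return -1;
    }
    q->items[q->len++] = pkt;
    return 0;
}
```

Making the enqueue function take unconditional ownership keeps the cleanup responsibility in one place, which is the design choice the patch adopts.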
ove handling of the full primary or secondary queue, which hurts
network throughput too much
V4:
- Remove redundant flush of packets
V3:
- handling of the full primary or secondary queue according to the
suggestion from Zhang Chen
V2:
- Fix incorrect patch format
Derek Su (1):
colo-compa
flood in the log.
May I also make "MAX_QUEUE_SIZE" user-configurable in this series?
Thanks,
Derek Su
Zhang, Chen wrote on Thu, Apr 9, 2020 at 2:59 PM:
>
>
>
> > -----Original Message-----
> > From: Lukas Straub
> > Sent: Thursday, April 9, 2020 3:19 A
kpoint and flush packets.
Signed-off-by: Derek Su
---
net/colo-compare.c | 39 ---
1 file changed, 28 insertions(+), 11 deletions(-)
diff --git a/net/colo-compare.c b/net/colo-compare.c
index cdd87b2aa8..fe8779cf2d 100644
--- a/net/colo-compare.c
+++ b/net/colo
ets, remove all queued secondary packets (flush packets) and
do checkpoint.
Please help to review, thanks.
V4:
- Remove redundant flush of packets
V3:
- handling of the full primary or secondary queue according to the
suggestion from Zhang Chen
V2:
- Fix incorrect patch format
The patch is to fix the "pkt" memory leak in packet_enqueue().
The allocated "pkt" needs to be freed if the colo compare
primary or secondary queue is too big.
Signed-off-by: Derek Su
---
net/colo-compare.c | 23 +++
1 file changed, 15 insertions(+),
Lukas Straub wrote on Sat, Mar 28, 2020 at 2:28 AM:
>
> On Sat, 28 Mar 2020 02:20:21 +0800
> Derek Su wrote:
>
> > Lukas Straub wrote on Sat, Mar 28, 2020 at 1:46 AM:
> > >
> > > On Wed, 25 Mar 2020 17:43:54 +0800
> > > Derek Su wrote:
> > >
> > > > T
Lukas Straub wrote on Sat, Mar 28, 2020 at 1:46 AM:
>
> On Wed, 25 Mar 2020 17:43:54 +0800
> Derek Su wrote:
>
> > The previous handling of the full primary or secondary queue is only dropping
> > the packet. If there are lots of clients to the guest VM,
> > the "drop"
all queued primary packets, remove all queued secondary
packets and do checkpoint.
Signed-off-by: Derek Su
---
net/colo-compare.c | 41 ++---
1 file changed, 30 insertions(+), 11 deletions(-)
diff --git a/net/colo-compare.c b/net/colo-compare.c
index cdd87b2aa8..
The patch is to fix the "pkt" memory leak in packet_enqueue().
The allocated "pkt" needs to be freed if the colo compare
primary or secondary queue is too big.
Signed-off-by: Derek Su
---
net/colo-compare.c | 23 +++
1 file changed, 15 insertions(+),
all queued secondary packets and do checkpoint.
Please review, thanks.
V3:
- handling of the full primary or secondary queue according to the
suggestion from Zhang Chen
V2:
- Fix incorrect patch format
Derek Su (2):
net/colo-compare.c: Fix memory leak in packet_enqueue()
net/colo-compare.c: han
> Cc: qemu-devel@nongnu.org; lizhij...@cn.fujitsu.com;
> > > jasow...@redhat.com; dere...@qnap.com
> > > Subject: Re: [PATCH v2 1/1] net/colo-compare.c: Fix memory leak in
> > > packet_enqueue()
> > >
> > > Zhang, Chen wrote on Tue, Mar 24, 2020 at 3:24 AM
> > >
"iperf3 -s" in PVM
(3) Run "iperf3 -c -t 7200"
The memory usage of qemu-system-x86_64 increases as the PVM's QMP
shows "qemu-system-x86_64: colo compare secondary queue size too big,
drop packet".
Please review, thanks.
V2:
- Fix incorrect patch format
Derek Su
The patch is to fix the "pkt" memory leak in packet_enqueue().
The allocated "pkt" needs to be freed if the colo compare
primary or secondary queue is too big.
Signed-off-by: Derek Su
---
net/colo-compare.c | 23 +++
1 file changed, 15 insertions(+),
The patch is to fix the "pkt" memory leak in packet_enqueue().
The allocated "pkt" needs to be freed if the colo compare
primary or secondary queue is too big.
Signed-off-by: Derek Su
---
net/colo-compare.c | 23 +++
1 file changed, 15 insertions(+),
"iperf3 -s" in PVM
(3) Run "iperf3 -c -t 7200"
The memory usage of qemu-system-x86_64 increases as
the PVM's QMP shows "qemu-system-x86_64: colo compare
secondary queue size too big, drop packet".
Derek Su (1):
net/colo-compare.c: Fix memory leak
The patch is to fix the "pkt" memory leak in packet_enqueue().
The allocated "pkt" needs to be freed if the colo compare
primary or secondary queue is too big.
Signed-off-by: Derek Su
---
net/colo-compare.c | 23 +++
1 file changed, 15 insertions(+),
I am running Ubuntu Wily (the 20150717 daily build) and can reproduce this
problem whether the guest is Linux or Windows: after the host resumes
from suspend, the kvm (qemu-system-x86_64) process goes to 100% CPU
usage.
user@ubuntu-mate:~$ kvm --version
QEMU emulator version 2.3.0 (Debian 1:2.3+df
On Mon, Sep 24, 2007 at 11:22:30PM +0200, Fabrice Bellard wrote:
> I realize that the other pixel formats are buggy too, so at least your
> patch is consistent with what is already coded !
>
> I guess the problem is in the VGA memory handlers. Otherwise it means
> that there is a (Cirrus)VGA con
On Sun, Apr 15, 2007 at 08:55:17PM +0100, Natalia Portillo wrote:
> Yes but...
>
> Currently no protected mode 286 guest OS runs under qemu.
Windows 3.1 Standard mode? (Delete / Rename KRNL386.EXE)
DF
>
> El dom, 15-04-2007 a las 14:46 +0100, Nigel Horne escribió:
> > Let me approach this in
On Mon, Mar 19, 2007 at 10:16:13PM +, Philip Boulain wrote:
> On 19 Mar 2007, at 20:23, Derek Fawcus wrote:
> > There was just a discussion relating to this on the darwin-kernel
> > list,
> > you may wish to review the archive.
> >
> > (The thread starts a
On Mon, Mar 19, 2007 at 06:54:35PM +, Philip Boulain wrote:
>
> Mmm, that's rather unhelpful. From my own reading, it looks like the
> Apple-approved way of doing this would be to use an
> IOMemoryDescriptor: initWithAddress() would initialise one which
There was just a discussion relat