Hi all,
The guest hangs when a system function is called in the migration thread, and
the CPU usage of the vcpu thread stays at 100%.
The code looks like this:
static void *migration_thread(void *opaque)
{
    MigrationState *s = opaque;
    int64_t initial_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
    int64_t setup_start = qemu_clock_get_ms(QEMU_CLOCK_HOST);
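For reference, a minimal sketch of the reported pattern, assuming the "system
function" in question is POSIX system(); the helper name and hook path below
are hypothetical, not QEMU code:

#include <stdlib.h>

/* Hypothetical helper illustrating the report: system() fork()s and
 * then blocks in waitpid() until the child exits, so the migration
 * thread makes no progress while the external command runs. */
static void run_migration_hook(void)
{
    int ret = system("/usr/local/bin/migration-hook.sh"); /* hypothetical path */
    if (ret != 0) {
        /* the blocking wait above is where the thread stalls */
    }
}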
Hi
> +    if (bootindex >= 0) {
> +        node = g_malloc0(sizeof(FWBootEntry));
> +        node->bootindex = bootindex;
> +        if (suffix) {
> +            node->suffix = g_strdup(suffix);
> +        } else if (old_entry) {
> +            node->suffix = g_strdup(old_entry->suffix);
> +
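For context, a hedged sketch of how such a node is typically (re)inserted into
QEMU's boot-order list sorted by bootindex; FWBootEntry and fw_boot_order are
real names, but this loop is an illustration, not the patch itself:

/* Illustrative: keep fw_boot_order sorted by ascending bootindex. */
FWBootEntry *i;

QTAILQ_FOREACH(i, &fw_boot_order, link) {
    if (i->bootindex >= node->bootindex) {
        QTAILQ_INSERT_BEFORE(i, node, link);
        return;
    }
}
QTAILQ_INSERT_TAIL(&fw_boot_order, node, link);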
> On 02/07/2014 13:57, ChenLiang wrote:
Hmm, dbs->in_cancel will always be true. Although this avoids having dbs
freed by dma_complete,
it may be a mistake.
>>>
>>> This was on purpose; I'm doing the free myself in dma_aio_cancel, so I
>>> wanted to avoid the qemu_aio_relea
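A hedged sketch of the flag pattern being discussed, with a minimal stand-in
type; only the in_cancel/dma_aio_cancel/qemu_aio_release names come from the
thread, the rest is illustrative:

typedef struct DMAAIOCB {
    bool in_cancel;            /* set by dma_aio_cancel */
    /* ... */
} DMAAIOCB;

/* Illustrative completion path: skip the release while a cancel is
 * in flight, so dma_aio_cancel can safely free dbs itself. */
static void dma_complete(DMAAIOCB *dbs, int ret)
{
    /* ... finish the request ... */
    if (!dbs->in_cancel) {
        qemu_aio_release(dbs); /* normal path frees here */
    }
    /* else: dma_aio_cancel owns the final free */
}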
> On 06/25/2014 04:28 AM, ChenLiang wrote:
>> Hi all,
>> QEMU cannot modify the boot index while the VM is running.
>
> So? Boot index is used exactly once - when the domain is started. After
> that, you aren't booting any more, so what benefit is there to changing
> boot index on the fly?
>
>> It is in
> On 25/06/2014 15:27, Eric Blake wrote:
QEMU cannot modify the boot index while the VM is running.
>> So? Boot index is used exactly once - when the domain is started. After
>> that, you aren't booting any more, so what benefit is there to changing
>> boot index on the fly?
>>
>
> It's also u
> If a saved VM has unknown flags in the memory data, QEMU
> currently simply ignores them and continues, which
> yields an unpredictable result.
>
> This patch catches all unknown flags and
> aborts the loading of the VM.
>
> CC: qemu-sta...@nongnu.org
> Signed-off-by: Peter Lieven
> --
Hi,
The patch is correct. There is one small point that could be improved.
>
> /* In doubt sent page as normal */
> bytes_sent = -1;
> @@ -990,16 +996,17 @@ static inline void *host_from_stream_offset(QEMUFile *f,
>                                                              int flags)
>  {
>      static RA
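A hedged sketch of the kind of check the patch introduces on the load side --
reject any flag outside the known set instead of silently continuing; the mask
below lists the RAM_SAVE_FLAG_* values of this era, but the real patch's
placement and error handling may differ:

/* Illustrative: validate flags read from the migration stream. */
#define KNOWN_FLAGS (RAM_SAVE_FLAG_FULL | RAM_SAVE_FLAG_COMPRESS |  \
                     RAM_SAVE_FLAG_MEM_SIZE | RAM_SAVE_FLAG_PAGE |  \
                     RAM_SAVE_FLAG_EOS | RAM_SAVE_FLAG_CONTINUE |   \
                     RAM_SAVE_FLAG_XBZRLE | RAM_SAVE_FLAG_HOOK)

if (flags & ~KNOWN_FLAGS) {
    error_report("Unknown migration flag combination: %#x", flags);
    ret = -EINVAL;
    goto done;   /* abort the load instead of guessing */
}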
Hi,
Do we have any plan to support migration over multiple network cards?
Thanks,
ChenLiang
>
> Hi
>
> Please, send any topic that you are interested in covering.
>
> Thanks, Juan.
>
> Call details:
>
> 15:00 CEST
> 13:00 UTC
> 09:00 EDT
>
> Every two weeks
>
> If you need phone number details, conta
> On 04/15/14 01:55, Michael R. Hines wrote:
>> On 04/14/2014 05:19 PM, Laszlo Ersek wrote:
>>> On 04/14/14 04:27, Amos Kong wrote:
We already have a function buffer_is_zero() in util/cutils.c
Signed-off-by: Amos Kong
---
arch_init.c | 9 ++---
1 file changed,
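For reference, a hedged usage sketch of the existing helper; the signature
matches util/cutils.c of this era, while the surrounding zero-page check is
illustrative:

/* Illustrative: skip sending pages that are entirely zero. */
if (buffer_is_zero(p, TARGET_PAGE_SIZE)) {
    /* emit a zero-page marker instead of the full page data */
}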
Hi Jason,
Have you ever tested adding a bridge on top of virtio-net in a VM and then
migrating the VM?
The bridge may not send a GARP packet (in my testing). BTW, what about the
other net devices, such as e1000 and rtl8139? Would it be better for QEMU to
notify the qemu guest agent
to force the net devices in the VM to send
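For background, a hedged sketch of what a gratuitous ARP announcement
contains -- this is generic ARP construction for illustration, not QEMU's
qemu_announce_self() implementation:

#include <stdint.h>
#include <string.h>

/* Illustrative gratuitous ARP: an ARP request whose sender and target
 * IP are both the announcing host, broadcast so switches/bridges can
 * relearn the MAC's port after migration. */
static size_t build_garp(uint8_t *buf, const uint8_t mac[6],
                         const uint8_t ip[4])
{
    uint8_t *p = buf;
    memset(p, 0xff, 6); p += 6;            /* dst: broadcast      */
    memcpy(p, mac, 6);  p += 6;            /* src: our MAC        */
    *p++ = 0x08; *p++ = 0x06;              /* ethertype: ARP      */
    *p++ = 0x00; *p++ = 0x01;              /* htype: ethernet     */
    *p++ = 0x08; *p++ = 0x00;              /* ptype: IPv4         */
    *p++ = 6;    *p++ = 4;                 /* hlen, plen          */
    *p++ = 0x00; *p++ = 0x01;              /* op: request         */
    memcpy(p, mac, 6); p += 6;             /* sender MAC          */
    memcpy(p, ip, 4);  p += 4;             /* sender IP           */
    memset(p, 0x00, 6); p += 6;            /* target MAC (unused) */
    memcpy(p, ip, 4);  p += 4;             /* target IP = sender  */
    return p - buf;                        /* 42-byte frame       */
}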
> * (chenliang0...@icloud.com) wrote:
>>
>>> * (chenliang0...@icloud.com) wrote:
On 2014-04-08 at 10:29, Dr. David Alan Gilbert (git)
wrote:
> From: "Dr. David Alan Gilbert"
>
> Make qemu_peek_buffer repeatedly call fill_buffer until it gets
>>>
> * (chenliang0...@icloud.com) wrote:
>>
>> On 2014-04-08 at 10:29, Dr. David Alan Gilbert (git)
>> wrote:
>>
>>> From: "Dr. David Alan Gilbert"
>>>
>>> Make qemu_peek_buffer repeatedly call fill_buffer until it gets
>>> all the data it requires, or until there is an error.
>>>
>>>
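A hedged sketch of the looping behaviour the patch describes, assuming
qemu_fill_buffer() was changed to report how many bytes it added; the
surrounding bookkeeping is illustrative:

/* Illustrative: qemu_fill_buffer() may add only a few bytes per
 * call, so loop until enough is buffered or the stream gives up. */
int pending = f->buf_size - (f->buf_index + offset);

while (pending < size) {
    int received = qemu_fill_buffer(f);  /* <= 0 on error or EOF */
    if (received <= 0) {
        break;                           /* caller sees a short peek */
    }
    pending += received;
}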
> * arei.gong...@huawei.com (arei.gong...@huawei.com) wrote:
>> From: ChenLiang
>>
>> The logic of the old code is correct, but checking byte by byte
>> consumes time when the data is being modified concurrently.
>>
>> Signed-off-by: ChenLiang
>> Signed-off-by: Gonglei
>> ---
>> xbzrle.c | 28 +
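A hedged sketch of the word-at-a-time technique such a change typically
uses -- this illustrates the idea, not the exact xbzrle.c diff:

#include <stddef.h>

/* Illustrative: find the byte offset of the first difference,
 * comparing one machine word at a time; guest pages are
 * word-aligned, so the word accesses are safe in that context. */
static size_t first_diff_word(const unsigned long *a,
                              const unsigned long *b, size_t words)
{
    size_t i;
    for (i = 0; i < words; i++) {
        if (a[i] != b[i]) {
            break;   /* narrow to the exact byte within this word */
        }
    }
    return i * sizeof(unsigned long);
}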
> * (chenliang0...@icloud.com) wrote:
>
>
>
>> Hi Dave, is it ok?
>>
>> -    if (it->it_data &&
>> +    if (it->it_data && it->it_addr != addr &&
>>          it->it_age + CACHED_PAGE_LIFETIME > current_age) {
>>
>
> I've not had a chance to retry it yet; did you try Google stressapptest?
> I've got a world with just patches 1..5 on that's seeing corruptions, but
> I've not seen where the problem is. So far the world with 1..4 on hasn't
> hit those corruptions, but maybe I need to test more.
>
> Have you tested this set with google stressapptest?
>
> Let it migrate for a few cycl
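A hedged reading of the one-line change quoted above, in context -- the
surrounding cache-insert logic is reconstructed for illustration; only the
added it->it_addr != addr test comes from the diff:

/* Illustrative: a fresh cached page may be kept only if it belongs
 * to a *different* address; an entry for the same address must
 * always be refreshed, or XBZRLE would encode against stale data. */
if (it->it_data && it->it_addr != addr &&
    it->it_age + CACHED_PAGE_LIFETIME > current_age) {
    return -1;    /* slot busy with another, still-fresh page */
}
/* otherwise overwrite the slot with the new page */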
> On 03/29/2014 01:52 AM, arei.gong...@huawei.com wrote:
>> From: ChenLiang
>>
>> xbzrle_encode_buffer reads the value in RAM repeatedly.
>> That is risky if xbzrle_encode_buffer runs on changing data,
>> and it is not necessary.
>>
>> Reported-by: Dr. David Alan Gilbert
>> Signed-off-by: C
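A hedged sketch of the usual remedy implied here -- snapshot the guest page
once and encode from the stable copy, so concurrent guest writes cannot be
read twice with different values; the buffer names are illustrative, though
the xbzrle_encode_buffer signature matches xbzrle.c:

/* Illustrative: read the (possibly still-changing) guest page once,
 * then let the encoder see only the stable snapshot. */
memcpy(stable_buf, guest_page, TARGET_PAGE_SIZE);
encoded_len = xbzrle_encode_buffer(cached_page, stable_buf,
                                   TARGET_PAGE_SIZE,
                                   encode_buf, encode_buf_len);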
Hi, the migration_thread() function may be what you want.
Best regards.
> Hi,
>
> I want to know which part of the QEMU source code is responsible for the
> migration of virtual machines, more precisely where the code that describes
> the stages of memory transfer is. Can you help
> From: "Dr. David Alan Gilbert"
>
> This is a fix for a bug* triggered by a migration after hot unplugging
> a few virtio-net NICs, that caused migration never to converge, because
> 'migration_dirty_pages' is incorrectly initialised.
>
> 'migration_dirty_pages' is used as a tally of the numbe
Nice catch.
> From: "Dr. David Alan Gilbert"
>
> Markus Armbruster spotted that the XBZRLE.lock might get initialised
> multiple times in the case of a second attempted migration, and
> that's undefined behaviour for pthread_mutex_init.
>
> This patch adds a flag to stop re-initialisation - findi
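A hedged sketch of such a guard flag; qemu_mutex_init wraps
pthread_mutex_init here, and the flag name is illustrative rather than the
patch's:

static QemuMutex xbzrle_lock;          /* stands in for XBZRLE.lock */
static bool xbzrle_lock_initialized;   /* illustrative flag name */

/* pthread_mutex_init on an already-initialized mutex is undefined
 * behaviour, so only the first migration attempt may initialize it. */
static void xbzrle_lock_init_once(void)
{
    if (!xbzrle_lock_initialized) {
        qemu_mutex_init(&xbzrle_lock);
        xbzrle_lock_initialized = true;
    }
}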
Do you have any QEMU logs about the failure?
> Public bug reported:
>
> Environment:
>
> Host OS (ia32/ia32e/IA64):ia32e
> Guest OS (ia32/ia32e/IA64):ia32e
> Guest OS Type (Linux/Windows):Linux
> kvm.git Commit:8fbb1daf3e8254afc17fc4490b69db00920197ae
> qemu.git Commit: 6fffa26244737
I am glad to hear that you are also interested in migration. We will be pleased
if you find bugs or have suggestions and discuss them together with us.
As far as I know, QEMU will print some log messages about a migration failure,
much as your patch does.
> This adds more traces in the migration code.
>
> On Thursday 30 January 2014 13:23:04 Neil Skrypuch wrote:
>> First, let me briefly outline the way we use live migration, as it is
>> probably not typical. We use live migration (with block migration) to make
>> backups of VMs with zero downtime. The basic process goes like this:
>>
>> 1) migra
> * Gonglei (arei.gong...@huawei.com) wrote:
>> On 2014/2/28 17:19, Dr. David Alan Gilbert wrote:
>>
>>> * Gonglei (Arei) (arei.gong...@huawei.com) wrote:
>>>
>>> Hi,
>>>
a. Optimizing XBZRLE remarkably decreases the cache misses.
The compression efficiency increases more than
> On 02/27/2014 09:05 PM, Gonglei (Arei) wrote:
>> a. Optimizing XBZRLE remarkably decreases the cache misses.
>>    The compression efficiency increases more than fifty times.
>>    Before the patch set, the cache almost totally missed when the
>>    number of cache items was less than the dirty p
> On 02/27/2014 09:08 PM, Gonglei (Arei) wrote:
>> Add counters to log how many times the dirty bitmap is updated.
>>
>> Signed-off-by: ChenLiang
>> Signed-off-by: Gonglei
>> ---
>> arch_init.c | 20
>> 1 file changed, 20 insertions(+)
>
> Is it also worth updating MigrationSta
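A hedged sketch of such a counter -- incremented on each dirty-bitmap
synchronisation; the variable name is illustrative and may not match the
patch:

static uint64_t bitmap_sync_count;   /* illustrative counter name */

static void migration_bitmap_sync(void)
{
    bitmap_sync_count++;   /* one more dirty-bitmap update logged */
    /* ... existing bitmap synchronisation work ... */
}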