Daniel P. Berrangé wrote:
> On Thu, Jun 01, 2023 at 11:06:42PM +0200, Juan Quintela wrote:
>>
>> Hi
>>
>> Before I continue investigating this further, do you have any clue what
>> is going on here. I am running qemu-system-aarch64 on x86_64.
>
> FYI, the trigger for this behaviour appears to be your recent change
> to stats account...

On 02/06/2023 11.34, Juan Quintela wrote:
...
The compression on precopy is a completely different beast:
- It is *VERY* buggy (no races fixed there)
- It is *VERY* inefficient:
  - copy the page to the compression thread
  - the thread compresses the page into a different buffer
  - go back to the main thread
  - copy the page to the migration stream
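
For illustration, here is a minimal C sketch of the copy chain described
above. The helper names and buffers are hypothetical, not actual QEMU
symbols, and zlib's compress2() merely stands in for whichever compressor
is configured:

#include <string.h>
#include <zlib.h>

enum { PAGE_SIZE = 4096 };

/* Copy #1: the main thread hands the guest page to a compression
 * thread by copying it into the thread's private input buffer. */
static void hand_page_to_thread(unsigned char *thread_in,
                                const unsigned char *guest_page)
{
    memcpy(thread_in, guest_page, PAGE_SIZE);
}

/* The thread compresses into yet another buffer; this is the per-page
 * CPU cost that has to race the guest's dirtying rate. */
static uLongf compress_in_thread(unsigned char *thread_out, uLongf out_cap,
                                 const unsigned char *thread_in)
{
    uLongf out_len = out_cap;
    if (compress2(thread_out, &out_len, thread_in, PAGE_SIZE,
                  Z_BEST_SPEED) != Z_OK) {
        return 0;  /* error handling elided in this sketch */
    }
    return out_len;
}

/* Copy #2: control returns to the main thread, which copies the
 * compressed result into the migration stream buffer. */
static void copy_to_stream(unsigned char *stream, size_t *offset,
                           const unsigned char *thread_out, uLongf len)
{
    memcpy(stream + *offset, thread_out, len);
    *offset += len;
}

Two memcpy()s plus a thread round trip for every 4 KiB page is the
per-page overhead being criticised here.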

On Fri, Jun 02, 2023 at 10:22:28AM +0100, Peter Maydell wrote:
> On Fri, 2 Jun 2023 at 10:10, Daniel P. Berrangé wrote:
>> I suspect that the zstd logic takes a little bit longer in setup,
>> which often allows the guest dirty workload to get ahead of
>> it, resulting in a huge amount of data to transfer. Every now and
>> then the compression code gets ahead...
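
As a back-of-envelope illustration of that race; the figures below are
invented for the example, not measurements from this thread:

#include <stdio.h>

/* Toy convergence model: a setup stall lets the guest's dirty backlog
 * grow before the migration drains anything; afterwards the iteration
 * only converges if the drain rate exceeds the dirty rate. */
int main(void)
{
    double dirty_rate = 800e6;  /* guest dirtying, bytes/s (assumed) */
    double drain_rate = 600e6;  /* compressed-stream throughput, bytes/s (assumed) */
    double stall      = 0.5;    /* compressor setup delay, seconds (assumed) */

    double backlog = dirty_rate * stall;  /* data dirtied during the stall */
    printf("backlog after setup stall: %.0f MB\n", backlog / 1e6);
    if (drain_rate <= dirty_rate) {
        printf("drain rate <= dirty rate: the iteration never converges\n");
    }
    return 0;
}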

On 02/06/2023 11.10, Daniel P. Berrangé wrote:
...
IMHO this feels like just another example of compression being largely
useless. The CPU overhead of compression can't keep up with the guest
dirty workload, making the supposed network bandwidth saving irrelevant.
Has anybody ever shown that...
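
One way to sanity-check that claim is to measure single-thread compression
throughput and compare it against the guest's dirty rate. A minimal
micro-benchmark sketch, assuming zlib's compress2() on synthetic 4 KiB
pages; the data pattern, iteration count, and choice of zlib are all
assumptions for illustration:

#include <stdio.h>
#include <time.h>
#include <zlib.h>

enum { PAGE = 4096, ITERS = 10000 };

int main(void)
{
    static unsigned char in[PAGE], out[PAGE * 2];
    for (int i = 0; i < PAGE; i++) {
        in[i] = (unsigned char)(i * 31);  /* semi-compressible pattern */
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ITERS; i++) {
        uLongf n = sizeof(out);
        compress2(out, &n, in, PAGE, Z_BEST_SPEED);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    /* If the guest dirties memory faster than nthreads * this figure,
     * compression falls behind regardless of the bandwidth it saves. */
    printf("one thread: ~%.0f MB/s compressed\n",
           (double)PAGE * ITERS / secs / 1e6);
    return 0;
}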

Juan Quintela wrote:

Hi

Before I continue investigating this further, do you have any clue what
is going on here. I am running qemu-system-aarch64 on x86_64.

$ time ./tests/qtest/migration-test -p /aarch64/migration/multifd/tcp/plain/none
TAP version 13
# random seed: R02S3d50a0e874b28727af4b862a3cc4214e
# Start o...