Peter Maydell <peter.mayd...@linaro.org> writes:
> On Fri, 2 Jun 2023 at 10:10, Daniel P. Berrangé <berra...@redhat.com> wrote:
>> I suspect that the zstd logic takes a little bit longer in setup,
>> which often allows the guest dirty workload to get ahead of it,
>> resulting in a huge amount of data to transfer. Every now and then
>> the compression code gets ahead of the workload and thus most data
>> is zeros and skipped.
>>
>> IMHO this feels like just another example of compression being largely
>> useless. The CPU overhead of compression can't keep up with the guest
>> dirty workload, making the supposed network bandwidth saving irrelevant.
>
> It seems a bit surprising if compression can't keep up with
> a TCG guest workload, though...

Actual running code doesn't get much of a look-in in the perf data:

   4.17%  CPU 0/TCG        qemu-system-aarch64  [.] tlb_set_dirty
   3.55%  CPU 0/TCG        qemu-system-aarch64  [.] helper_ldub_mmu
   1.58%  live_migration   qemu-system-aarch64  [.] buffer_zero_avx2
   1.35%  CPU 0/TCG        qemu-system-aarch64  [.] tlb_set_page_full
   1.11%  multifdsend_2    libc.so.6            [.] __memmove_avx_unaligned_erms
   1.07%  multifdsend_13   libc.so.6            [.] __memmove_avx_unaligned_erms
   1.07%  multifdsend_6    libc.so.6            [.] __memmove_avx_unaligned_erms
   1.07%  multifdsend_8    libc.so.6            [.] __memmove_avx_unaligned_erms
   1.06%  multifdsend_10   libc.so.6            [.] __memmove_avx_unaligned_erms
   1.06%  multifdsend_3    libc.so.6            [.] __memmove_avx_unaligned_erms
   1.05%  multifdsend_7    libc.so.6            [.] __memmove_avx_unaligned_erms
   1.04%  multifdsend_11   libc.so.6            [.] __memmove_avx_unaligned_erms
   1.04%  multifdsend_15   libc.so.6            [.] __memmove_avx_unaligned_erms
   1.04%  multifdsend_9    libc.so.6            [.] __memmove_avx_unaligned_erms
   1.03%  multifdsend_1    libc.so.6            [.] __memmove_avx_unaligned_erms
   1.03%  multifdsend_0    libc.so.6            [.] __memmove_avx_unaligned_erms
   1.02%  multifdsend_4    libc.so.6            [.] __memmove_avx_unaligned_erms
   1.02%  multifdsend_14   libc.so.6            [.] __memmove_avx_unaligned_erms
   1.02%  multifdsend_12   libc.so.6            [.] __memmove_avx_unaligned_erms
   1.01%  multifdsend_5    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.96%  multifdrecv_3    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.94%  multifdrecv_13   libc.so.6            [.] __memmove_avx_unaligned_erms
   0.94%  multifdrecv_2    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.93%  multifdrecv_15   libc.so.6            [.] __memmove_avx_unaligned_erms
   0.93%  multifdrecv_10   libc.so.6            [.] __memmove_avx_unaligned_erms
   0.93%  multifdrecv_12   libc.so.6            [.] __memmove_avx_unaligned_erms
   0.92%  multifdrecv_0    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.92%  multifdrecv_1    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.92%  multifdrecv_8    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.91%  multifdrecv_6    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.91%  multifdrecv_7    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.91%  multifdrecv_4    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.91%  multifdrecv_11   libc.so.6            [.] __memmove_avx_unaligned_erms
   0.90%  multifdrecv_14   libc.so.6            [.] __memmove_avx_unaligned_erms
   0.90%  multifdrecv_5    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.89%  multifdrecv_9    libc.so.6            [.] __memmove_avx_unaligned_erms
   0.77%  CPU 0/TCG        qemu-system-aarch64  [.] cpu_physical_memory_get_dirty.constprop.0
   0.59%  migration-test   [kernel.vmlinux]     [k] syscall_exit_to_user_mode
   0.55%  multifdrecv_12   libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.54%  multifdrecv_4    libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.51%  multifdrecv_5    libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.51%  multifdrecv_14   libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.49%  multifdrecv_2    libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.45%  multifdrecv_1    libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.45%  multifdrecv_9    libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.42%  multifdrecv_10   libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.40%  multifdrecv_6    libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.40%  multifdrecv_3    libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.40%  multifdrecv_8    libzstd.so.1.5.4     [.] 0x000000000008ec20
   0.39%  multifdrecv_7    libzstd.so.1.5.4     [.] 0x000000000008ec20

>
> -- PMM

--
Alex Bennée
Virtualisation Tech Lead @ Linaro
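[Editor's note: the "compression can't keep up with the dirty workload" hypothesis above can be sanity-checked outside QEMU with a tiny per-page throughput measurement. This is a sketch only, not QEMU code: it uses zlib from the Python stdlib as a stand-in for zstd, and the absolute rates depend entirely on the host machine. It does illustrate why runs where most pages are zeros are so much cheaper for the compressor.]

```python
# Sketch: compare compressor throughput on incompressible vs. all-zero
# 4 KiB pages. zlib stands in for zstd purely because it is in the
# Python stdlib; numbers are illustrative, not QEMU's actual behaviour.
import os
import time
import zlib

PAGE_SIZE = 4096
N_PAGES = 256

# Worst case: random (incompressible) pages; best case: all-zero pages.
random_pages = [os.urandom(PAGE_SIZE) for _ in range(N_PAGES)]
zero_pages = [bytes(PAGE_SIZE)] * N_PAGES

def compress_throughput(pages, level=1):
    """Return MB/s achieved compressing the given pages one at a time."""
    start = time.perf_counter()
    for page in pages:
        zlib.compress(page, level)
    elapsed = time.perf_counter() - start
    return (len(pages) * PAGE_SIZE) / elapsed / 1e6

rand_mbps = compress_throughput(random_pages)
zero_mbps = compress_throughput(zero_pages)
print(f"incompressible pages: {rand_mbps:.0f} MB/s")
print(f"all-zero pages:       {zero_mbps:.0f} MB/s")
```

Comparing the incompressible-page rate against the guest's dirty rate (pages dirtied per second times page size) gives a rough feel for whether a compressing migration thread can keep up at all when the data is not mostly zeros.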