On Wed, Jul 17, 2024 at 11:59:50AM -0300, Fabiano Rosas wrote:
> Yichen Wang <yichen.w...@bytedance.com> writes:
>
> > From: Hao Xiang <hao.xi...@linux.dev>
> >
> > During live migration, if the latency between sender and receiver is
> > high and bandwidth is also high (a long and fat pipe), using a bigger
> > packet size can help reduce migration total time. The current multifd
> > packet size is 128 * 4kb. In addition, Intel DSA offloading performs
> > better with a large batch task.
>
> Last time we measured, mapped-ram also performed slightly better with a
> larger packet size:
>
>           2 MiB  1 MiB  512 KiB  256 KiB  128 KiB
> AVG(10)   50814  50396  48732    46423    34574
> DEV       736    552    619      473      1430
I wonder whether we could make the new parameter pages-per-packet rather
than packet-size, just to make our lives easier for a possibly static
offset[] buffer in the future for MultiFDPages_t. With that, if we
throttle it with MAX_N_PAGES, we can have MultiFDPages_t statically
allocated, always with the max buffer. After all, it won't consume much
memory anyway; for MAX_N_PAGES=1K pages it's 8KB per channel.

-- 
Peter Xu