Peter Xu <pet...@redhat.com> wrote:
> On Thu, Oct 19, 2023 at 03:52:14PM +0100, Daniel P. Berrangé wrote:
>> On Thu, Oct 19, 2023 at 01:40:23PM +0200, Juan Quintela wrote:
>> > Yuan Liu <yuan1....@intel.com> wrote:
>> > > Hi,
>> > >
>> > > I am writing to submit a code change aimed at enhancing live migration
>> > > acceleration by leveraging the compression capability of the Intel
>> > > In-Memory Analytics Accelerator (IAA).
>> > >
>> > > Enabling compression functionality during the live migration process can
>> > > enhance performance, thereby reducing downtime and network bandwidth
>> > > requirements. However, this improvement comes at the cost of additional
>> > > CPU resources, posing a challenge for cloud service providers in terms of
>> > > resource allocation. To address this challenge, I have focused on
>> > > offloading the compression overhead to the IAA hardware, resulting in
>> > > performance gains.
>> > >
>> > > The implementation of the IAA (de)compression code is based on the Intel
>> > > Query Processing Library (QPL), an open-source software project designed
>> > > for IAA high-level software programming.
>> > >
>> > > Best regards,
>> > > Yuan Liu
>> >
>> > After reviewing the patches:
>> >
>> > - why are you doing this on top of the old compression code, which is
>> >   obsolete, deprecated and buggy?
>> >
>> > - why are you not doing it on top of multifd?
>> >
>> > You just need to add another compression method on top of multifd.
>> > See how it was done for zstd:
>>
>> I'm not sure that is the ideal approach. IIUC, the IAA/QPL library
>> is not defining a new compression format. Rather, it is providing
>> a hardware accelerator for the 'deflate' format, which can be made
>> compatible with zlib:
>>
>> https://intel.github.io/qpl/documentation/dev_guide_docs/c_use_cases/deflate/c_deflate_zlib_gzip.html#zlib-and-gzip-compatibility-reference-link
>>
>> With multifd we already have a 'zlib' compression format, and so
>> this IAA/QPL logic would effectively just be providing a second
>> implementation of zlib.
>>
>> Given the use of a standard format, I would expect to be able
>> to use software zlib on the src, mixed with IAA/QPL zlib on
>> the target, or vice versa.
>>
>> IOW, rather than defining a new compression format for this,
>> I think we could look at a new migration parameter for
>>
>>   "compression-accelerator": ["auto", "none", "qpl"]
>>
>> with 'auto' the default, such that we can automatically enable
>> IAA/QPL when the 'zlib' format is requested, if running on a suitable
>> host.
>
> I was also curious about the format of the compression compared to
> software ones when reading.
>
> Would there be a use case where one would prefer software compression
> even if a hardware accelerator existed, no matter on src/dst?
>
> I'm wondering whether we can avoid that one more parameter and always
> use hardware acceleration as long as possible.
I asked for some benchmarks.  But they need to be against not using
compression (i.e. plain precopy) or against using multifd-zlib.

For a single page, I don't know if the added latency will be a winner
in general.

Later, Juan.
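
For reference, below is a minimal, illustrative sketch of what a deflate
compression call through QPL's public C job API looks like, i.e. the layer
that an IAA-backed, zlib-compatible multifd method (or the
"compression-accelerator" parameter Daniel suggests above) would sit on top
of. The qpl_deflate() helper, the per-call job allocation and the error
handling here are illustrative only and are not taken from the posted
patches; qpl_path_auto is assumed to fall back to the software path when no
IAA device is present, and the zlib header/Adler-32 framing described in the
QPL documentation linked above would still have to be added around the raw
deflate stream to interoperate with the existing software zlib receiver.

/*
 * Illustrative sketch only: one-shot deflate compression via the QPL job
 * API. A real multifd hook would keep the job allocated per channel
 * rather than per call.
 */
#include <qpl/qpl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int qpl_deflate(const uint8_t *src, uint32_t src_len,
                       uint8_t *dst, uint32_t dst_len, uint32_t *out_len)
{
    uint32_t job_size = 0;
    qpl_job *job;
    qpl_status status;

    /* Query the size of the opaque job structure and allocate it. */
    status = qpl_get_job_size(qpl_path_auto, &job_size);
    if (status != QPL_STS_OK) {
        return -1;
    }
    job = malloc(job_size);
    if (!job) {
        return -1;
    }
    status = qpl_init_job(qpl_path_auto, job);
    if (status != QPL_STS_OK) {
        free(job);
        return -1;
    }

    /* One-shot compression of the whole buffer into a raw deflate stream.
     * Interoperating with plain software zlib additionally needs the zlib
     * header and Adler-32 trailer per the linked QPL documentation. */
    job->op = qpl_op_compress;
    job->next_in_ptr = (uint8_t *)src;
    job->available_in = src_len;
    job->next_out_ptr = dst;
    job->available_out = dst_len;
    job->level = qpl_default_level;
    job->flags = QPL_FLAG_FIRST | QPL_FLAG_LAST | QPL_FLAG_DYNAMIC_HUFFMAN;

    status = qpl_execute_job(job);
    if (status == QPL_STS_OK) {
        *out_len = job->total_out;
    }

    qpl_fini_job(job);
    free(job);
    return status == QPL_STS_OK ? 0 : -1;
}

int main(void)
{
    const char *msg = "hello hello hello hello";
    uint8_t out[256];
    uint32_t out_len = 0;

    if (qpl_deflate((const uint8_t *)msg, (uint32_t)strlen(msg),
                    out, sizeof(out), &out_len) == 0) {
        printf("compressed %zu bytes into %u bytes\n", strlen(msg), out_len);
    }
    return 0;
}

Roughly speaking, the 'qpl'/'none'/'auto' accelerator choice discussed above
would map to selecting qpl_path_hardware, the existing software zlib code, or
qpl_path_auto in a setup like this, without changing the on-the-wire format.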