On 10/31/2014 06:40 AM, Liliang wrote:
> From: Li Liang <liang.z...@intel.com>
>
> Give some details about how to use the multiple compress threads feature
> in live migration.
>
> Signed-off-by: Li Liang <liang.z...@intel.com>
> ---
>  docs/multiple-compression-threads.txt | 57 +++++++++++++++++++++++++++++++++++
>  1 file changed, 57 insertions(+)
>  create mode 100644 docs/multiple-compression-threads.txt
>
> diff --git a/docs/multiple-compression-threads.txt b/docs/multiple-compression-threads.txt
> new file mode 100644
> index 0000000..83e9e5b
> --- /dev/null
> +++ b/docs/multiple-compression-threads.txt
> @@ -0,0 +1,57 @@
> +Use multiple compression(decompression) threads in live migration
Missing a copyright notice (yeah, not all files in our docs directory are
good examples, but we're starting to get better at it).

> +=================================================================
> +Instead of sending the guest memory directly, this solution will
> +compress the ram page before sending, after receiving, the data how to
> +us will be decompressed. Using compression in live migration can help

s/how to us will be/has to be/ ?

> +to reduce the data transferred about 60%, this is very useful when the
> +bandwidth is limited, and the migration time can also be reduced about
> +80% in a typical case.

What network bandwidth did you have in your tests?  I suspect that the
speedups you saw may be more pronounced on low-bandwidth links, and that
you would be wise to give more details about reproducing your test setup,
since not everyone will see the same speedups.

> +
> +The process of compression will consume additional CPU cycles, and the
> +extra CPU cycles will increase the migration time. In another hand,

s/In another/On the other/

> +the amount of data transferred will reduced, this factor can reduce
> +the migration time. If the process of the compression is quickly

s/quickly/quick/

> +enough, then the total migration time can be reduced, multiple

s/multiple/and multiple/

> +compression threads can be used to accelerate the compression process.
> +
> +Compression level can be used to control the compression speed and the
> +compression ratio. High compression ratio will take more time, level 0
> +stands for no compression, level 1 stands for the best compression
> +speed,and level 9 stands for the best compression ratio. Users can
> +select a level number between 0 and 9.
> +
> +
> +When to use the multiple compression threads in live migration
> +==============================================================
> +Compression of data will consume lot of extra CPU cycles, in a system
> +with high overhead of CPU, avoid using this feature. When the network
> +bandwidth is very limited and the CPU resource is adequate, use the
> +multiple compression threads will be very helpful. If both the CPU and
> +the network bandwidth are adequate, use multiple compression threads
> +can still help to reduce the migration time.
> +
> +
> +Usage
> +======
> +1. Verify the destination QEMU version is able to support the multiple
> +compression threads migration:
> +    {qemu} info_migrate_capablilites
> +    {qemu} ... compress: off ...
> +
> +2. Activate compression on the souce:
> +    {qemu} migrate_set_capability compress on
> +
> +3. Set the compression thread count on source:
> +    {qemu} migrate_set_compress_threads 10
> +
> +4. Set the compression level on the source:
> +    {qemu} migrate_set_compress_level 1
> +
> +5. Set the decompression thread count on destination:
> +    {qemu} migrate_set_decompress_threads 5
> +
> +6. Start outgoing migration:
> +    {qemu} migrate -d tcp:destination.host:4444
> +    {qemu} info migrate
> +    Capablilties: ... compress: on

s/Capablilties/Capabilities/

-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org
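
To make the level trade-off quoted above concrete: the 0-9 semantics the
document describes (0 = no compression, 1 = best speed, 9 = best ratio)
match zlib's, which is presumably the compressor behind this series; the
document itself never names it, so treat that as an assumption. Below is
a minimal standalone sketch, assuming zlib and a made-up fill pattern on
one 4 KiB page-sized buffer, just to show how the level changes the
output size (build with: gcc -O2 level-sketch.c -lz):

    /*
     * Illustrative sketch only, not part of the patch: compress one
     * 4 KiB buffer with zlib at levels 0, 1 and 9 to show the
     * size/speed trade-off the document describes.
     */
    #include <stdio.h>
    #include <zlib.h>

    #define PAGE_SIZE 4096

    int main(void)
    {
        static Bytef page[PAGE_SIZE];      /* stand-in for one guest RAM page */
        Bytef out[PAGE_SIZE * 2];          /* comfortably larger than compressBound(PAGE_SIZE) */
        const int levels[] = { 0, 1, 9 };  /* no compression, best speed, best ratio */

        /* Fill the page with something mildly compressible. */
        for (int i = 0; i < PAGE_SIZE; i++) {
            page[i] = (Bytef)(i % 64);
        }

        for (size_t l = 0; l < sizeof(levels) / sizeof(levels[0]); l++) {
            uLongf out_len = sizeof(out);
            int ret = compress2(out, &out_len, page, PAGE_SIZE, levels[l]);
            if (ret != Z_OK) {
                fprintf(stderr, "compress2 failed: %d\n", ret);
                return 1;
            }
            printf("level %d: %d -> %lu bytes\n", levels[l], PAGE_SIZE,
                   (unsigned long)out_len);
        }
        return 0;
    }

Typically level 1 already captures most of the size reduction, while
level 9 spends noticeably more CPU for a modest further gain; that is
the balance the document asks users to strike against the available
network bandwidth and CPU headroom.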