On 12/03/2013 12:20, Peter Lieven wrote:
>> * zero pages remain zero, and thus are only processed once
>
> you are right this will be the case.
>
>>
>> * non-zero pages are modified often, and thus are processed multiple times.
>>
>> Your patch adds overhead in the case where a page is non-zero, which
>> will be the common case in any non-artificial benchmark. It _is_
>> possible that the net result is positive because you warm the cache with
>> the first 128 bytes of the page. But without more benchmarking, it is
>> reasonable to optimize is_dup_page for the case where the for loop rolls
>> very few times.
>
> Ok, good point. However, it will only enter the zero check if the first
> byte (or maybe could change this to first 32 or 64 bit) is zero.
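For reference, the early-exit idea described above might be sketched roughly like this (a hypothetical illustration, not the actual patch; the function name and word-wise scan are assumptions):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: test the first machine word before scanning the
 * whole buffer, so a non-zero page (the common case during migration)
 * is rejected immediately without touching the rest of the page. */
static bool is_zero_page(const void *page, size_t size)
{
    const uint64_t *p = page;
    size_t i;

    /* Fast path: a non-zero first word bails out at once. */
    if (p[0] != 0) {
        return false;
    }
    /* Slow path: scan the remaining words. */
    for (i = 1; i < size / sizeof(uint64_t); i++) {
        if (p[i] != 0) {
            return false;
        }
    }
    return true;
}
```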
On big-endian architectures, I expect that the first byte will be zero
very often (32- or 64-bit, much less indeed).

> What about using this patch for buffer_is_zero optimization?

buffer_is_zero is used in somewhat special cases (block
streaming/copy-on-read) where throughput doesn't really matter, unlike
is_dup_page/find_zero_bit, which are used in migration. But you can use
similar code for is_dup_page and buffer_is_zero.

BTW, I would like to change is_dup_page to is_zero_page. Non-zero pages
with a repeated value are virtually non-existent, and perhaps we can
improve the migration format by packing multiple pages (up to 64) in a
single "chunk" (i.e. a small header followed by up to 256K bytes of
data).

I would like to see Orit's patches to optimize RAM migration first,
since this only makes sense after you remove all userspace copies.
Otherwise, the cost of copying the 4k of data to a buffer will dominate
almost every optimization you can make.

Paolo
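To make the "chunk" idea concrete, one possible shape for such a header is sketched below. This is purely illustrative; the struct and field names are invented here, not taken from any patch. With 4 KiB pages, 64 pages per chunk gives the 256K maximum payload mentioned above.

```c
#include <stdint.h>

/* Hypothetical chunk header for packing up to 64 pages: a base guest
 * address plus a bitmap saying which of the 64 pages actually carry
 * data in the payload that follows (zero pages are simply omitted).
 * Field names are illustrative only. */
struct chunk_header {
    uint64_t base_addr;      /* guest address of the first page in the chunk */
    uint64_t nonzero_bitmap; /* bit i set => page i's 4 KiB follow in payload */
};

/* 64 pages * 4 KiB = 256 KiB of data at most per chunk. */
enum {
    CHUNK_PAGES       = 64,
    CHUNK_PAGE_SIZE   = 4096,
    CHUNK_MAX_PAYLOAD = CHUNK_PAGES * CHUNK_PAGE_SIZE,
};
```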