On (05/13/16 15:58), Sergey Senozhatsky wrote:
> On (05/13/16 15:23), Minchan Kim wrote:
> [..]
> > @@ -737,12 +737,12 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> >             zcomp_strm_release(zram->comp, zstrm);
> >             zstrm = NULL;
> >  
> > -           atomic64_inc(&zram->stats.num_recompress);
> > -
> >             handle = zs_malloc(meta->mem_pool, clen,
> >                             GFP_NOIO | __GFP_HIGHMEM);
> > -           if (handle)
> > +           if (handle) {
> > +                   atomic64_inc(&zram->stats.num_recompress);
> >                     goto compress_again;
> > +           }
> 
> not like a real concern...
> 
> the main (and only) purpose of num_recompress is to correlate performance
> slowdowns with failed fast write paths (when the first zs_malloc() fails).
> with this change the accounting depends on the second zs_malloc() succeeding:
> if that one also fails, we only increase failed_writes, without increasing
> the fast-write-failure counter, even though we actually did fail the fast
> write and did an extra zs_malloc() [unaccounted in this case]. it's probably
> a bit unlikely to happen, but still. well, just saying.

here I assume that the biggest contributors to re-compress latency are the
preemption enabled after zcomp_strm_release() and this second zs_malloc();
compressing a PAGE_SIZE buffer itself should be fast enough. so, IOW, we
would go down the slow path, but would not account for it.
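
for reference, a condensed sketch of the post-patch slow path (paraphrased
from the diff above, not the exact zram_bvec_write() code; the fast-path gfp
flags and the error path here are simplified assumptions):

	/* fast, opportunistic allocation; exact gfp flags simplified here */
	handle = zs_malloc(meta->mem_pool, clen, __GFP_NOWARN | __GFP_HIGHMEM);
	if (!handle) {
		/* release the stream so the sleeping retry below is allowed */
		zcomp_strm_release(zram->comp, zstrm);
		zstrm = NULL;

		handle = zs_malloc(meta->mem_pool, clen,
				GFP_NOIO | __GFP_HIGHMEM);
		if (handle) {
			/* counted only when the sleeping retry succeeds */
			atomic64_inc(&zram->stats.num_recompress);
			goto compress_again;
		}
		/*
		 * retry failed too: only failed_writes gets bumped on the
		 * error return, while the failed fast write and the extra
		 * zs_malloc() above stay unaccounted.
		 */
		ret = -ENOMEM;
		goto out;
	}

so the unaccounted case discussed above is the final "goto out" branch.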

        -ss
