On Mon, Nov 02, 2020 at 01:21:39PM +0800, Rong Chen wrote:
> On 10/30/20 10:58 PM, Matthew Wilcox wrote:
> > Can you reproduce this?  Here's my results:
[snipped]
> 
> Hi Matthew,
> 
> IIUC, yes, we can reproduce it, here is the result from the server:
> 
> $ stress-ng --timeout 100 --times --verify --metrics-brief --sequential 96 --class memory --minimize --exclude spawn,exec,swap,stack,atomic,bad-altstack,bsearch,context,full,heapsort,hsearch,judy,lockbus,lsearch,malloc,matrix-3d,matrix,mcontend,membarrier,memcpy,memfd,memrate,memthrash,mergesort,mincore,null,numa,pipe,pipeherd,qsort,radixsort,remap,resources,rmap,shellsort,skiplist,stackmmap,str,stream,tlb-shootdown,tree,tsearch,vm-addr,vm-rw,vm-segv,vm,wcs,zero,zlib
> 
> stress-ng: info:  [2765] disabled 'oom-pipe' as it may hang or reboot the machine (enable it with the --pathological option)
> stress-ng: info:  [2765] dispatching hogs: 96 tmpfs
> stress-ng: info:  [2765] successful run completed in 104.67s (1 min, 44.67 secs)
> stress-ng: info:  [2765] stressor       bogo ops real time  usr time  sys time   bogo ops/s   bogo ops/s
> stress-ng: info:  [2765]                           (secs)    (secs)    (secs)   (real time) (usr+sys time)
> stress-ng: info:  [2765] tmpfs               363    103.02    622.07   7289.85         3.52         0.05
> stress-ng: info:  [2765] for a 104.67s run time:
> stress-ng: info:  [2765]   10047.98s available CPU time
> stress-ng: info:  [2765]     622.46s user time   (  6.19%)
> stress-ng: info:  [2765]    7290.66s system time ( 72.56%)
> stress-ng: info:  [2765]    7913.12s total time  ( 78.75%)
> stress-ng: info:  [2765] load average: 79.62 28.89 10.45
> 
> We compared tmpfs.ops_per_sec (363 / 103.02) between this commit and its
> parent commit.

Ah, so this was the 60-70% regression reported in the subject, not the 100%
reported in the body.  I'd assumed the latter was the intended message.

I'll have another look at this later today.  At first glance, I don't
see why it _should_ regress.  It would seem to be doing fewer atomic
operations (avoiding getting the page reference) and walks the XArray more
efficiently.  I wonder if it's walking the XArray _too_ efficiently --
holding the rcu read lock for too long.  I'll try putting a rescheduling
point in the middle and see what happens.
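
For reference, a rescheduling point in a long XArray walk usually looks
something like the sketch below.  This is purely illustrative, not the
actual change I'd make here -- the loop shape, the `xas`/`page`/`end`
names, and the placement of the check are assumptions:

```c
	/* Sketch: periodically drop out of the RCU read-side critical
	 * section during a long XArray walk so other tasks can run.
	 * xas_pause() records the current position so the iteration
	 * can safely restart after the lock is reacquired.
	 */
	rcu_read_lock();
	xas_for_each(&xas, page, end) {
		/* ... examine the entry ... */
		if (need_resched()) {
			xas_pause(&xas);
			rcu_read_unlock();
			cond_resched();
			rcu_read_lock();
		}
	}
	rcu_read_unlock();
```

Whether that helps depends on whether the regression really comes from
holding the rcu read lock across the whole walk, which is what I want
to measure.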
