Here is what I see before prefetch_disable is set. I'm currently moving
(mv /tank/games /tank/fs1 /tank/fs2) files of 0.5 GB and larger from a
deduped pool to another... the file copy seems fine, but deletes kill
performance. This is on b130 OSOL /dev.
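
(The per-second stats below are in fsstat's column format; a minimal
sketch of the sampling command, assuming a 1-second interval:

fsstat zfs 1    # VFS op counts for ZFS filesystems, one line per second
)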


 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
    0     0     0     6     0      7     0     2    88     1   116 zfs
    0     0     0     2     0      4     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     6     0     14     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      1     0     0     0     4 24.0M zfs
    0     0     0     0     0      0     0     0     0     3 16.0M zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0     18     0     0     0     1   116 zfs
 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     1   260 zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     4 24.0M zfs
    0     0     0     2     0      4     0     0     0     4 24.0M zfs
    0     0     0     0     0      2     0     0     0     1   116 zfs


With prefetch_disable enabled (i.e., prefetch turned off) I see a more
consistent result, but it's probably not any faster.
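
(A minimal sketch of how to read the tunable back and, if wanted, keep it
across reboots; the /etc/system line is assumed from the usual Solaris
module-tunable syntax rather than taken from this thread:

echo zfs_prefetch_disable/D | mdb -k   # show current value (0 = prefetch on, 1 = off)

and in /etc/system:

set zfs:zfs_prefetch_disable = 1
)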

 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
    0     0     0     0     0      0     0     0     0     2 8.00M zfs
    0     0     0     0     0      0     0     0     0     1   260 zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     2 8.00M zfs
    0     0     0     6     0      7     0     2    88     2 8.00M zfs
    0     0     0     2     0      4     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     2 8.00M zfs
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     2 8.00M zfs
    0     0     0     0     0      1     0     0     0     1   116 zfs
    0     0     0     0     0      3     0     0     0     2 8.00M zfs
    0     0     0     0     0      0     0     0     0     2 8.00M zfs
 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
    0     0     0     0     0      0     0     0     0     1   116 zfs
    0     0     0     0     0      0     0     0     0     2 8.00M zfs
    0     0     0     0     0      0     0     0     0     2 8.00M zfs
    1     0     0     5     0      5     0     2  9.9K     1   116 zfs
    0     0     0     0     0      3     0     0     0     1   116 zfs
    0     0     0     4     0      7     2     0     0     2 8.00M zfs



James Dickens


On Thu, Dec 24, 2009 at 11:22 PM, Michael Herf <mbh...@gmail.com> wrote:

> FWIW, I just disabled prefetch, and my dedup + zfs recv seems to be
> running visibly faster (somewhere around 3-5x faster).
>
> echo zfs_prefetch_disable/W0t1 | mdb -kw
>
> Anyone else see a result like this?
>
> I'm using the "read" bandwidth from the sending pool from "zpool
> iostat -x 5" to estimate transfer rate, since I assume the write rate
> would be lower when dedup is working.
>
> mike
>
> p.s. Note to set it back to the default behavior:
> echo zfs_prefetch_disable/W0t0 | mdb -kw