On Sun, Oct 3, 2010 at 6:11 PM, Dan Langille <d...@langille.org> wrote:
> I'm rerunning my test after I had a drive go offline[1].  But I'm not
> getting anything like the previous test:
>
> time zfs send storage/bac...@transfer | mbuffer | zfs receive
> storage/compressed/bacula-buffer
>
> $ zpool iostat 10 10
>               capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> storage     6.83T  5.86T      8     31  1.00M  2.11M
> storage     6.83T  5.86T    207    481  25.7M  17.8M

It may be worth checking individual disk activity using gstat -f 'da.$'.
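
For example, something like this refreshes per-disk stats every second
(assuming the pool members are da0 through da5; adjust the regex to
match your device names):

$ gstat -I 1s -f 'da[0-5]$'

A healthy vdev should show roughly similar r/s, w/s and %busy across
all members; one drive pegged near 100% busy while completing fewer
operations per second than its peers is the usual sign of a laggard.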

Some time back I had one drive that was noticeably slower than the
rest of the drives in a RAID-Z2 vdev and was holding everything back.
SMART looked OK and there were no obvious errors, yet performance was
much worse than I'd expect. gstat clearly showed that the one drive
was almost constantly busy, with a much lower number of reads and
writes per second than its peers.

Perhaps the previously fast transfer rates were due to caching effects.
I.e., if all the metadata had already made it into the ARC, a
subsequent "zfs send" would avoid a lot of random seeks and show much
better throughput.
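
One rough way to check that theory is to watch the ARC counters while
the send runs (sysctl names as on recent FreeBSD ZFS; the exact OIDs
may differ on your release):

$ sysctl kstat.zfs.misc.arcstats.size \
    kstat.zfs.misc.arcstats.hits \
    kstat.zfs.misc.arcstats.misses

If the hit counter climbs quickly while gstat shows the disks mostly
idle, the send is largely being served from cache rather than from the
platters.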

--Artem