Carsten Aulbert wrote:
> Hi all,
> 
> although I'm running all this on Sol10u5 on an X4500, I hope I may ask this
> question here. If not, please let me know where to head instead.
> 
> We are running several X4500s with only 3 raidz2 zpools each, since we want
> quite a bit of storage space[*], but the performance we get when using
> zfs send is sometimes really lousy. Of course this depends on what is in
> the file system, but while doing a few backups today I saw the following:
> 
> receiving full stream of atlashome/[EMAIL PROTECTED] into
> atlashome/BACKUP/[EMAIL PROTECTED]
> in @ 11.1 MB/s, out @ 11.1 MB/s, 14.9 GB total, buffer   0% full
> summary: 14.9 GByte in 45 min 42.8 sec - average of 5708 kB/s
> 
> So, a mere 15 GB were transferred in 45 minutes, and another user's home
> directory, which is quite large (7 TB), took more than 42 hours to
> transfer. Since all of this goes over a 10 Gb/s network and the CPUs are
> all idle, I would really like to know:
> 
> * zfs send is so slow and
> * how can I improve the speed?
> 
> Thanks a lot for any hint
> 
> Cheers
> 
> Carsten
> 
> [*] we have run quite a few tests with more zpools but were not able to
> improve the speed substantially. For this particular bad file system I
> still need to histogram the file sizes.
> 


Carsten,

the summary looks like you are using mbuffer. Can you elaborate on which
options you are passing to mbuffer? Changing mbuffer's block size to match
the recordsize of the zpool might improve performance. Is the buffer
running full, or is it empty most of the time? And are you sure that the
network connection is 10 Gb/s all the way through from machine to machine?
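For reference, a sender/receiver pipeline along these lines is what I have in mind (hostnames, pool, and snapshot names below are made up; -s 128k assumes the zpool still uses the default 128K recordsize):

```shell
# Sending side: pipe the snapshot stream through mbuffer over TCP.
#   -s 128k : block size matching the (assumed default) 128K recordsize
#   -m 1G   : 1 GB in-memory buffer to smooth out bursts in the stream
#   -O      : send the buffered stream to host "backuphost", port 9090
zfs send atlashome/someuser@snap1 | mbuffer -s 128k -m 1G -O backuphost:9090

# Receiving side: listen on TCP port 9090 and feed zfs receive.
mbuffer -s 128k -m 1G -I 9090 | zfs receive atlashome/BACKUP/someuser-snap1
```

Watching the "buffer n% full" gauge on both ends tells you which side is the bottleneck: a buffer that is always full means the receiver (or disk) cannot keep up, while one that is always empty points at the sender.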

- Thomas
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
