Carsten Aulbert wrote:
> Hi again,
> 
> Thomas Maier-Komor wrote:
>> Carsten Aulbert wrote:
>>> Hi Thomas,
>> I don't know socat or what benefit it gives you, but have you tried
>> using mbuffer to send and receive directly (options -I and -O)?
> 
> I thought we had tried that in the past and that socat seemed faster, but
> I just ran a brief test (/dev/zero -> remote /dev/null) and got 330 MB/s
> with mbuffer+socat and 430 MB/s with mbuffer alone.
> 
>> Additionally, try to set the block size of mbuffer to the recordsize of
>> zfs (usually 128k):
>> receiver$ mbuffer -I sender:10000 -s 128k -m 2048M | zfs receive
>> sender$ zfs send blabla | mbuffer -s 128k -m 2048M -O receiver:10000
> 
> We are using 32k since many of our users work with tiny files (which then
> forces me to reduce the buffer size because of this 'funny' error):
> 
> mbuffer: fatal: Cannot address so much memory
> (32768*65536=2147483648>1544040742911).
> 
> Does this qualify for a bug report?
> 
> Thanks for the hint of looking into this again!
> 
> Cheers
> 
> Carsten

Yes, this qualifies for a bug report: the requested buffer size
(32768 * 65536 = 2 GiB) apparently overflows a 32-bit size calculation in a
32-bit build. As a workaround for now, you can compile in 64-bit mode.
I.e.:
$ ./configure CFLAGS="-g -O -m64"
$ make && make install

This works for Sun Studio 12 and gcc. For older versions of Sun Studio,
you need to pass -xarch=v9 instead of -m64.
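
For example, with an older Sun Studio compiler the configure invocation
would presumably look like this (only the architecture flag changes):

$ ./configure CFLAGS="-g -O -xarch=v9"
$ make && make install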

I am planning to release an updated version of mbuffer this week. I'll
include a patch for this issue.
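
For reference, a direct mbuffer-to-mbuffer throughput test like the one you
describe might look roughly like this (hostnames, port number and buffer
size are just placeholders):

receiver$ mbuffer -I sender:10000 -s 32k -m 1024M > /dev/null
sender$ dd if=/dev/zero bs=32k | mbuffer -s 32k -m 1024M -O receiver:10000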

Cheers,
Thomas
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
