I forgot to mention: check your switches.  I have a multitude of them
here, and my managed one had the maximum frame size limited to 1518 bytes
(the standard Ethernet maximum).  Other switches may call this feature
Jumbo Frames or something similar.  A lot of cheap desktop switches do not
support it at all.

I would also check the transmit queue length (txqueuelen) of the interface.
I have been experimenting with latency again, so that setting gets changed
via ip pretty regularly here.  All of this affects total throughput.
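
For reference, checking and changing it looks something like this (eth0 and the
value are only examples):

ip link show eth0 | grep qlen          # current txqueuelen
ip link set dev eth0 txqueuelen 1000   # the usual default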

Hth,

On Tue, Apr 3, 2012 at 4:01 AM, Gustin Johnson <gus...@meganerd.ca> wrote:

> Before you blame Samba, do an actual test of disk-to-disk throughput.
> Something like:
> Host A (sender):
> buffer  -s 500k -p 75 -i bigfile.tar.bz2 | ncat -vv <receiver ip> 1111
>
> Host B (receiver):
> ncat -vv -l 1111 | buffer  -s 500k -S 5000k -o bigfile.tar.bz2
>
> By using the buffer program you should be able to get a best-case figure
> for the entire chain.  Bonus fun: test different memory settings of the
> buffer program (-m flag).
>
> If you are feeling particularly hardcore, these links might be interesting:
> http://svn.pan-starrs.ifa.hawaii.edu/trac/ipp/wiki/RecommendedLinuxSysctls
>
> http://www.aarnet.edu.au/blog/archive/2008/04/17/Linux_tuning_for_TCP_performance.aspx
>
> A default Ubuntu install should be pretty close to what these sites
> suggest.
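>
> The kind of thing those pages cover is raising the TCP buffer limits via
> sysctl.  Purely as an illustration (take the actual values from the links
> above, not from here):
>
> sysctl -w net.core.rmem_max=16777216
> sysctl -w net.core.wmem_max=16777216
> sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
> sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"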
>
> My favourite starting point for buffer is "buffer  -p 75 -m 10240000 -s
> 500k -S 100000k"
>
> In my house I get results that are pretty close to yours even with ncat,
> so the bottleneck is probably not Samba.
>
> Hth,
>
> On Thu, Mar 29, 2012 at 12:45 PM, Jeff Clement <j...@nddn.net> wrote:
>
>> I would have thought my netcat test would only be limited by the GigE card
>> and PCI-X bus (which should have enough bandwidth to saturate GigE).
>>
>> Using ncat instead of nc, I get 118 MB/s dumping from /dev/zero and 95
>> MB/s from my array.  I never would have guessed that nc had so much
>> overhead!
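>>
>> Something along these lines (the port and block size are arbitrary):
>>
>> Receiver: ncat -vv -l 1111 > /dev/null
>> Sender:   dd if=/dev/zero bs=1M count=1000 | ncat -vv <receiver ip> 1111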
>>
>> Perhaps I was mistaken and I really don't have the problem I think I have
>> with network performance.  It's Samba...
>>
>> 12:40:24-root@screamer:/mnt/tmp/ISOs $ dd if=linuxmint-12-gnome-dvd-64bit.iso bs=8k of=/dev/null
>> 130190+1 records in
>> 130190+1 records out
>> 1066518528 bytes (1.1 GB) copied, 29.3757 s, 36.3 MB/s
>>
>> Thanks for all the help.  Now I have some direction as to what I need to
>> look at more closely.
>>
>> Jeff
>>
>> * Gustin Johnson <gus...@meganerd.ca> [2012-03-29 11:18:14 -0600]:
>>
>>> netcat (or ncat) would still be subject to PCI/PCI-X bus limitations.
>>>
>>> So basically, when troubleshooting I would change the cables, then the
>>> switch, then the NICs.  The regular PCI bus tops out at about a gigabit,
>>> so you should still be able to test with a standard PCI NIC (though PCI-E
>>> would be better).  Intel cards are pretty nice but pricey for PCI (~$50).
>>> I have used the SMC2-1211TX, which is a cheap and pretty good Gig-E NIC.
>>>
>>> Install atop to help figure out why a CPU/core gets pinned.
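>>>
>>> On Ubuntu that should just be something like:
>>>
>>> sudo apt-get install atop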
>>>
>>> Use ncat (part of nmap) as it is a cleaner, more modern implementation.
>>> I would build it from source.
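>>>
>>> Roughly (grab the current tarball from nmap.org; the version below is just
>>> a placeholder):
>>>
>>> tar xjf nmap-<version>.tar.bz2
>>> cd nmap-<version>
>>> ./configure && make && sudo make install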
>>>
>>> If you have the memory, try creating a RAM disk and putting a real 1 or
>>> 2 GiB file in it.  Use that for the transfer, as /dev/zero can sometimes
>>> give weird results and /dev/urandom puts load on the CPU and bus.
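>>>
>>> One way to set that up, with an arbitrary mount point and size:
>>>
>>> mkdir -p /mnt/ramdisk
>>> mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk
>>> dd if=/dev/urandom of=/mnt/ramdisk/testfile bs=1M count=1024
>>>
>>> Generating the file from /dev/urandom once, up front, keeps that CPU load
>>> out of the actual transfer test.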
>>>
>>> Hth,
>>>
>>> On Thu, Mar 29, 2012 at 9:12 AM, Stolen <sto...@thecave.net> wrote:
>>>
>>>> Try using iperf to test *just* the network.
>>>> http://sourceforge.net/projects/iperf/
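>>>>
>>>> The basic invocation is something like this (run the server on one host,
>>>> point the client at it from the other):
>>>>
>>>> iperf -s
>>>> iperf -c <server ip> -t 30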
>>>>
>>>>
>>>> On 12-03-29 08:50 AM, Jeff Clement wrote:
>>>>
>>>> I don't think that's the problem though.  I can get > GigE read speeds
>>>> from my array.
>>>>
>>>> 08:46:27-root@goliath:/etc/service/dropbox-jsc $ hdparm -t /dev/lvm-raid1/photos
>>>>
>>>> /dev/lvm-raid1/photos:
>>>>  Timing buffered disk reads: 512 MB in  3.00 seconds = 170.49 MB/sec
>>>>
>>>> Write speeds are obviously slower but decent.
>>>>
>>>> 08:47:48-root@goliath:/mnt/**photos $ dd if=/dev/zero of=test bs=8k
>>>> count=100000
>>>> 100000+0 records in
>>>> 100000+0 records out
>>>> 819200000 bytes (819 MB) copied, 10.3039 s, 79.5 MB/s
>>>>
>>>> So I would expect to be able to saturate GigE on the reads and do
>>>> ~80 MB/s on the writes.  However, whether I'm doing IO to disk or just
>>>> piping from /dev/zero to /dev/null, what I'm actually seeing is around
>>>> 40 MB/s.  It looks like my bottleneck is the network.  The netcat test
>>>> should eliminate disk IO and also eliminate the PCI-X bus as the
>>>> bottleneck.  I think...
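>>>>
>>>> That kind of test looks roughly like this (the listen syntax varies
>>>> between nc variants, and the port is arbitrary):
>>>>
>>>> Receiver: nc -l -p 1111 > /dev/null
>>>> Sender:   dd if=/dev/zero bs=1M count=2000 | nc <receiver ip> 1111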
>>>>
>>>> Jeff
>>>>
>>>> * Andrew J. Kopciuch <akopci...@bddf.ca> [2012-03-29 08:18:14 -0600]:
>>>>
>>>>
>>>> Anyone have any ideas what I should be looking at in more detail?
>>>>
>>>> Thanks,
>>>> Jeff
>>>>
>>>>
>>>>
>>>> You are probably limited by the I/O speeds of the hard drives.  Your
>>>> LAN can sustain around 125 MB/s, but your hard drives will not be able
>>>> to read or write that fast; you will be bound to their maximums.
>>>>
>>>> HTH
>>>>
>>>>
>>>> Andy
>>>>
>>>>
>>>>
>>>>
