http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/speed.html

As with any software tuning operation, make small changes, then test.  Then
test again.
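
A classic first knob from that page is "socket options" in smb.conf; modern
kernels usually autotune buffers well, so treat this as a sketch of one small
change to benchmark, not a blanket recommendation:

  # /etc/samba/smb.conf, in the [global] section
  socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536

Reload Samba and re-run the same transfer before and after to see whether it
actually helped.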

On Thu, Mar 29, 2012 at 1:32 PM, Mark Carlson <carlsonm...@gmail.com> wrote:

> My guess is that it is related to the TCP window size.  Try different
> window sizes and you will see different results.
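>
> A quick way to see that effect is iperf's -w flag for the socket buffer /
> window size (assuming iperf is installed on both boxes; the hostname and
> sizes are just examples):
>
>   iperf -s -w 64K                  # on the receiving box
>   iperf -c goliath -w 64K -t 30    # on the sending box, then repeat with 256K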
>
> Actually, didn't someone else bring this up a month or so ago with
> regard to Windows file shares?
>
> -Mark C.
>
> On Thu, Mar 29, 2012 at 12:45 PM, Jeff Clement <j...@nddn.net> wrote:
> > I would have thought my netcat test would only be limited by the GigE card
> > and PCI-X bus (which should have enough bandwidth to saturate GigE).
> >
> > Using ncat instead of nc, I get 118 MB/s dumping from /dev/zero and 95
> > MB/s from my array.  I never would have guessed that nc had so much
> > overhead!
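> >
> > (The test is basically this shape -- the port is arbitrary and <host> is
> > the receiver's name or IP:
> >
> >   ncat -l 5000 > /dev/null                              # on the receiving box
> >   dd if=/dev/zero bs=1M count=2000 | ncat <host> 5000   # on the sending box
> >
> > with dd's if= pointed at a file on the array for the 95 MB/s number.)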
> >
> > Perhaps I was mistaken and I don't actually have the network performance
> > problem I thought I had.  It's Samba...
> >
> > 12:40:24-root@screamer:/mnt/tmp/ISOs $ dd if=linuxmint-12-gnome-dvd-64bit.iso bs=8k of=/dev/null
> > 130190+1 records in
> > 130190+1 records out
> > 1066518528 bytes (1.1 GB) copied, 29.3757 s, 36.3 MB/s
> >
> > Thanks for all the help.  Now I have some direction as to what I need to be
> > looking at more closely.
> >
> > Jeff
> >
> > * Gustin Johnson <gus...@meganerd.ca> [2012-03-29 11:18:14 -0600]:
> >
> >> netcat (or ncat) would still be subject to PCI/PCI-X bus limitations.
> >>
> >> So basically, when troubleshooting I would change the cables, then the
> >> switch, then the NICs.  The regular PCI bus tops out at roughly a gigabit,
> >> so you should still be able to test with a standard PCI NIC (though PCI-E
> >> would be better).  Intel cards are pretty nice but pricey for PCI (~$50).
> >> I have used the SMC2-1211TX, which is a cheap and pretty good Gig-E NIC.
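> >>
> >> (Back-of-envelope: conventional PCI is 32 bits x 33 MHz ~= 133 MB/s of
> >> shared bus bandwidth, while GigE needs 1 Gb/s / 8 = 125 MB/s, so a plain
> >> PCI NIC can only just keep up in theory and often falls short once other
> >> devices share the bus.)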
> >>
> >> Install atop to help figure out why a CPU/core gets pinned.
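> >>
> >> e.g. run it alongside a transfer (the 2-second interval is just a
> >> suggestion) and watch which process and which core are busy:
> >>
> >>   atop 2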
> >>
> >> Use ncat (part of nmap) as it is a cleaner, more modern implementation.  I
> >> would build it from source.
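> >>
> >> Roughly like this (the version is only an example -- check nmap.org for
> >> the current release):
> >>
> >>   wget https://nmap.org/dist/nmap-7.95.tar.bz2
> >>   tar xjf nmap-7.95.tar.bz2
> >>   cd nmap-7.95 && ./configure && make && sudo make install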
> >>
> >> If you have the memory, try creating a RAM disk and putting a real 1 or 2 GiB
> >> file in it.  Use that for the transfer, as /dev/zero can give weird results
> >> sometimes and /dev/urandom puts load on the CPU and bus.
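> >>
> >> A tmpfs mount does the job (size and paths are just examples):
> >>
> >>   mkdir -p /mnt/ramdisk
> >>   mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk
> >>   cp /path/to/some-large.iso /mnt/ramdisk/
> >>
> >> and then push that file through ncat instead of /dev/zero.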
> >>
> >> Hth,
> >>
> >> On Thu, Mar 29, 2012 at 9:12 AM, Stolen <sto...@thecave.net> wrote:
> >>
> >>>  Try using iperf to test *just* the network.
> >>> http://sourceforge.net/projects/iperf/?_test=b
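> >>>
> >>> e.g. (iperf's default port 5001 must be reachable between the two boxes):
> >>>
> >>>   iperf -s                  # on goliath
> >>>   iperf -c goliath -t 30    # on the other machine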
> >>>
> >>>
> >>> On 12-03-29 08:50 AM, Jeff Clement wrote:
> >>>
> >>> I don't think that's the problem, though.  I can get better-than-GigE read
> >>> speeds from my array.
> >>>
> >>> 08:46:27-root@goliath:/etc/service/dropbox-jsc $ hdparm -t
> >>> /dev/lvm-raid1/photos
> >>>
> >>> /dev/lvm-raid1/photos:
> >>>  Timing buffered disk reads: 512 MB in  3.00 seconds = 170.49 MB/sec
> >>>
> >>> Write speeds are obviously slower but decent.
> >>>
> >>> 08:47:48-root@goliath:/mnt/photos $ dd if=/dev/zero of=test bs=8k
> >>> count=100000
> >>> 100000+0 records in
> >>> 100000+0 records out
> >>> 819200000 bytes (819 MB) copied, 10.3039 s, 79.5 MB/s
> >>>
> >>> So I would expect that I should be able to saturate GigE on the reads and
> >>> do ~80 MB/s on the writes.  However, what I'm seeing, whether I'm doing IO
> >>> to disk or just piping from /dev/zero to /dev/null, is around 40 MB/s.  It
> >>> looks like my bottleneck is actually the network.  The netcat test should
> >>> eliminate disk IO and also eliminate the PCI-X bus as the bottleneck.  I
> >>> think...
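> >>>
> >>> (That pipe test was along these lines -- note that flag syntax differs
> >>> between nc variants; traditional nc wants -l -p, OpenBSD nc just -l:
> >>>
> >>>   nc -l -p 5000 > /dev/null                # listener on one box
> >>>   dd if=/dev/zero bs=1M | nc <host> 5000   # sender on the other
> >>>
> >>> so no disks are involved on either end.)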
> >>>
> >>> Jeff
> >>>
> >>> * Andrew J. Kopciuch <akopci...@bddf.ca> [2012-03-29 08:18:14 -0600]:
> >>>
> >>>
> >>> Anyone have any ideas what I should be looking at in more detail?
> >>>
> >>> Thanks,
> >>> Jeff
> >>>
> >>>
> >>>
> >>> You are probably limited by the I/O speeds of the hard drives.  Your LAN
> >>> can sustain around 125 MB/s, but your hard drives will not be able to read
> >>> or write that fast; you will be bound by their maximums.
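> >>>
> >>> (125 MB/s is simply the line rate: 1 Gb/s / 8 bits per byte = 125 MB/s;
> >>> Ethernet/IP/TCP overhead puts practical throughput closer to 110-118 MB/s.)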
> >>>
> >>> HTH
> >>>
> >>>
> >>> Andy