netcat (or ncat) would still be subject to PCI/PCI-X bus limitations, so it does not take the bus entirely out of the picture. When troubleshooting I would change the cables first, then the switch, then the NICs. The regular PCI bus tops out at about a gigabit, so you should still be able to test with a standard PCI NIC (though PCI-E would be better). Intel cards are pretty nice but pricey for PCI (~$50); I have used SMC2-1211TX cards, which are cheap and pretty good Gig-E NICs.
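For the raw network test itself, something along these lines should work (a rough sketch only; the port number, the transfer size, and the hostname "receiver" are placeholders, and it assumes ncat from the nmap package is installed on both machines):

    # on the receiving box: listen on TCP port 5000 and discard everything
    ncat -l 5000 > /dev/null

    # on the sending box: push 1 GiB of zeros through the link;
    # GNU dd prints elapsed time and MB/s when it finishes
    dd if=/dev/zero bs=1M count=1024 | ncat receiver 5000

Anything close to ~110 MB/s means the cables, switch, and NICs are fine and the bottleneck is elsewhere.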
Install atop to help figure out why a CPU/core gets pinned. Use ncat (part of nmap), as it is a cleaner, more modern implementation; I would build it from source. If you have the memory, try creating a RAM disk and putting a real 1 or 2 GiB file in it, then transfer that: /dev/zero can give weird results sometimes, and /dev/urandom puts load on the CPU and bus (a rough sketch of the setup is in the P.S. at the bottom of this message).

Hth,

On Thu, Mar 29, 2012 at 9:12 AM, Stolen <sto...@thecave.net> wrote:
> Try using iperf to test *just* the network.
> http://sourceforge.net/projects/iperf/?_test=b
>
> On 12-03-29 08:50 AM, Jeff Clement wrote:
>> I don't think that's the problem though. I can get GigE read speeds
>> from my array.
>>
>> 08:46:27-root@goliath:/etc/service/dropbox-jsc $ hdparm -t /dev/lvm-raid1/photos
>>
>> /dev/lvm-raid1/photos:
>>  Timing buffered disk reads: 512 MB in 3.00 seconds = 170.49 MB/sec
>>
>> Write speeds are obviously slower but decent.
>>
>> 08:47:48-root@goliath:/mnt/photos $ dd if=/dev/zero of=test bs=8k count=100000
>> 100000+0 records in
>> 100000+0 records out
>> 819200000 bytes (819 MB) copied, 10.3039 s, 79.5 MB/s
>>
>> So I would expect that I should be able to saturate GigE on the reads
>> and do ~80 MB/s on the writes. However, what I'm seeing, whether I'm
>> doing IO to disk or just piping from /dev/zero to /dev/null, is around
>> 40 MB/s. It looks like my bottleneck is actually the network. The
>> netcat test should eliminate disk IO and also eliminate the PCI-X bus
>> as the bottleneck. I think...
>>
>> Jeff
>>
>> * Andrew J. Kopciuch <akopci...@bddf.ca> [2012-03-29 08:18:14 -0600]:
>>>> Anyone have any ideas what I should be looking at in more detail.
>>>>
>>>> Thanks,
>>>> Jeff
>>>
>>> You are probably limited by the I/O speeds of the hard drives. Your
>>> LAN can sustain around 125 MB/s, but your hard drives will not be
>>> able to read / write that fast; you will be bound to their maximums.
>>>
>>> HTH
>>>
>>> Andy
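P.S. A rough sketch of the RAM-disk test mentioned above (the mount point, file size, and port are arbitrary, "receiver" is a placeholder hostname, and the mount needs root):

    # carve out a 1.5 GiB tmpfs and put a real 1 GiB file in it;
    # generating from /dev/urandom loads the CPU once, up front,
    # but the transfer itself then reads straight from RAM
    mkdir -p /mnt/ramdisk
    mount -t tmpfs -o size=1536m tmpfs /mnt/ramdisk
    dd if=/dev/urandom of=/mnt/ramdisk/testfile bs=1M count=1024

    # receiving box: discard (or write to its own RAM disk)
    ncat -l 5000 > /dev/null

    # sending box: the disks are now out of the picture entirely;
    # divide 1024 MiB by the elapsed time to get MB/s
    time ncat receiver 5000 < /mnt/ramdisk/testfile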
_______________________________________________
clug-talk mailing list
clug-talk@clug.ca
http://clug.ca/mailman/listinfo/clug-talk_clug.ca
Mailing List Guidelines (http://clug.ca/ml_guidelines.php)
**Please remove these lines when replying