Try using iperf to test *just* the network.
http://sourceforge.net/projects/iperf/
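A minimal run looks something like this, assuming iperf 2.x from that
project is installed on both boxes (hostname is just a placeholder for
whichever machine you point it at):

  # on the receiving machine
  server$ iperf -s

  # on the sending machine: run 30 seconds, report every 5
  client$ iperf -c <server-hostname> -t 30 -i 5

If that also tops out around 40 MB/s (~320 Mbit/s), the network path or
NICs are the bottleneck; if it gets close to wire speed, the disks and
filesystem go back under suspicion.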
On 12-03-29 08:50 AM, Jeff Clement wrote:
I don't think that's the problem though. I can get > GigE read speeds
from my array.
08:46:27-root@goliath:/etc/service/dropbox-jsc $ hdparm -t
/dev/lvm-raid1/photos
/dev/lvm-raid1/photos:
Timing buffered disk reads: 512 MB in 3.00 seconds = 170.49 MB/sec
Write speeds are obviously slower but decent.
08:47:48-root@goliath:/mnt/photos $ dd if=/dev/zero of=test bs=8k
count=100000
100000+0 records in
100000+0 records out
819200000 bytes (819 MB) copied, 10.3039 s, 79.5 MB/s
So I'd expect to be able to saturate GigE on the reads and do ~80 MB/s
on the writes.
However, what I'm actually seeing, whether I'm doing IO to disk or just
piping from /dev/zero to /dev/null across the link, is around 40 MB/s. It
looks like my bottleneck is actually the network. The netcat test should
eliminate disk IO and also rule out the PCI-X bus as the bottleneck. I
think...
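(The netcat test I mean is something along these lines; the port number is
arbitrary, and the listen syntax differs a bit between traditional and
OpenBSD nc:

  # receiving machine: listen and throw everything away
  receiver$ nc -l -p 5001 > /dev/null

  # sending machine: push 1000 MB of zeros over the wire
  sender$   dd if=/dev/zero bs=1M count=1000 | nc <receiver> 5001

dd on the sending side prints an overall MB/s figure when it finishes, and
nothing in the path ever touches a disk.)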
Jeff
* Andrew J. Kopciuch <akopci...@bddf.ca> [2012-03-29 08:18:14 -0600]:
Anyone have any ideas what I should be looking at in more detail?
Thanks,
Jeff
You are probably limited by the I/O speeds of the hard drives. Your LAN can
sustain around 125 MB/s, but your hard drives will not be able to read/write
that fast; you will be bound to their maximums.
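(The arithmetic behind that number: gigabit Ethernet is 1000 Mbit/s, and
1000 / 8 = 125 MB/s as the theoretical ceiling; after TCP/IP and Ethernet
framing overhead the usable figure is typically more like 110-118 MB/s.)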
HTH
Andy
_______________________________________________
clug-talk mailing list
clug-talk@clug.ca
http://clug.ca/mailman/listinfo/clug-talk_clug.ca
Mailing List Guidelines (http://clug.ca/ml_guidelines.php)
**Please remove these lines when replying