I second the iperf suggestion. It works great for network benchmarking!

Try it with the default window and a larger window, just in case the window size is what's limiting you.

Default window - TCP:

Machine 1 (Server)
# iperf -s

Machine 2 (Client)
# iperf -c server_ip_address


Large window - TCP:

Machine 1 (Server)
# iperf -s -w 208k

Machine 2 (Client)
# iperf -c server_ip_address -w 208k
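
One thing to watch for: iperf prints the window size it actually got, and
the kernel may cap it below what you asked for. If the 208k gets trimmed,
you may need to raise the socket buffer limits first. On Linux that's
roughly the following (the values here are just an example, pick whatever
you need):

# sysctl net.core.rmem_max net.core.wmem_max
# sysctl -w net.core.rmem_max=425984
# sysctl -w net.core.wmem_max=425984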



The best part about iperf, IMO, is that you can do UDP tests with
specific packet sizes and rates. That way, you can see if large
packets make things better or worse, or if your machine can even
generate or consume packets at the rate you want it to!

I would run a few tests with the target bandwidth set to 1000 Mbps and try
different packet sizes. (Note that iperf's -b option takes bits per second
unless you add a suffix, so that's "-b 1000M", not "-b 1000".)

Try the following with packet_size set to different values, e.g. 500,
1470, 2000, and 14700. (Note: the server does not need "-l packet_size"
for sizes < 1500.)

Server:
# iperf -s -u -l packet_size
Client:
# iperf -c server_ip_address -u -b 1000M -l packet_size

For each packet_size, make note of the stream bandwidth (what the
client was able to generate), the actual bandwidth (factoring in lost
packets), and the percentage of lost packets.
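
If you want to script the sweep, something like this should do it (just a
sketch: server_ip_address is a placeholder, and I'd start the server once
with the largest size, e.g. "iperf -s -u -l 14700", so it can receive
every datagram size in the list):

# for sz in 500 1470 2000 14700; do iperf -c server_ip_address -u -b 1000M -l $sz; done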

-Mark C.

On Thu, Mar 29, 2012 at 9:12 AM, Stolen <sto...@thecave.net> wrote:
> Try using iperf to test *just* the network.
> http://sourceforge.net/projects/iperf/
>
>
> On 12-03-29 08:50 AM, Jeff Clement wrote:
>
> I don't think that's the problem though.  I can get > GigE read speeds
> from my array.
>
> 08:46:27-root@goliath:/etc/service/dropbox-jsc $ hdparm -t
> /dev/lvm-raid1/photos
>
> /dev/lvm-raid1/photos:
>  Timing buffered disk reads: 512 MB in  3.00 seconds = 170.49 MB/sec
>
> Write speeds are obviously slower but decent.
>
> 08:47:48-root@goliath:/mnt/photos $ dd if=/dev/zero of=test bs=8k
> count=100000
> 100000+0 records in
> 100000+0 records out
> 819200000 bytes (819 MB) copied, 10.3039 s, 79.5 MB/s
>
> So I would expect that I should be able to saturate GigE on the reads and do
> ~80 MB/s on the writes.
> However, what I'm seeing, whether I'm doing IO to disk or just piping from
> /dev/zero to /dev/null, is around 40MB/s.  It looks like my bottleneck is
> actually the network.  The netcat test should eliminate disk IO and also
> eliminate the PCI-X bus as the bottleneck.  I think...
>
> Jeff
>
> * Andrew J. Kopciuch <akopci...@bddf.ca> [2012-03-29 08:18:14 -0600]:
>
>
> Anyone have any ideas what I should be looking at in more detail?
>
> Thanks,
> Jeff
>
>
>
> You are probably limited by the I/O speeds of the hard drives.  Your LAN
> can sustain around 125MB/s, but your hard drives will not be able to
> read/write that fast; you will be bound to their maximums.
>
> HTH
>
>
> Andy
