Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-08-26 Thread Steve Anthony
Ok, after some delays and the move to new network hardware I have an update. I'm still seeing the same low bandwidth and high retransmissions from iperf after moving to the Cisco 6001 (10Gb) and 2960 (1Gb). I've narrowed it down to transmissions from a 10Gb-connected host to a 1Gb-connected host. …
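A 10Gb-to-1Gb hop like the one narrowed down above is a classic spot for switch-buffer overruns: the fast sender bursts at line rate and the switch must absorb the 10:1 rate mismatch. A rough sanity check is the bandwidth-delay product of the slow link — the RTT below is an illustrative assumption, not a number from the thread:

```shell
# Bandwidth-delay product: bytes in flight needed to keep a
# 1 Gbit/s link busy at an assumed 1 ms RTT (hypothetical value).
# bdp_bytes = rate_bits_per_s * rtt_s / 8
awk 'BEGIN { printf "%d\n", 1e9 * 0.001 / 8 }'   # -> 125000 (~122 KiB)
```

If the sender's congestion window grows well past this before loss feedback arrives, the excess sits in (and overflows) the switch buffers at the speed step-down, which shows up as exactly the kind of retransmissions seen in iperf here.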

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-28 Thread Mark Nelson
On 07/28/2014 11:28 AM, Steve Anthony wrote: While searching for more information I happened across the following post (http://dachary.org/?p=2961) which vaguely resembled the symptoms I've been experiencing. I ran tcpdump and noticed what appeared to be a high number of retransmissions on the host…

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-28 Thread Steve Anthony
While searching for more information I happened across the following post (http://dachary.org/?p=2961) which vaguely resembled the symptoms I've been experiencing. I ran tcpdump and noticed what appeared to be a high number of retransmissions on the host where the images are mounted during a read…

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-26 Thread Udo Lembke
Hi, I don't see an improvement with tcp_window_scaling=0 in my configuration. Rather the opposite: the iperf performance is much lower: root@ceph-03:~# iperf -c 172.20.2.14 Client connecting to 172.20.2.14, TCP port 5001 TCP window size: …

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-24 Thread Steve Anthony
Thanks for the information! Based on my reading of http://ceph.com/docs/next/rbd/rbd-config-ref I was under the impression that the rbd cache options wouldn't apply, since presumably the kernel is handling the caching. I'll have to toggle some of those values and see if they make a difference in my se…

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-24 Thread Jean-Tiare LE BIGOT
What is your kernel version? On kernel >= 3.11, sysctl -w "net.ipv4.tcp_window_scaling=0" seems to improve the situation a lot. It also helped a lot to mitigate processes going (and sticking) into 'D' state. On 24/07/2014 22:08, Udo Lembke wrote: Hi again, forgot to say - I'm still on 0.72.2!
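The suggested tuning can be applied and reverted at runtime; a minimal sketch (the persistence path is the usual Debian/RHEL default, an assumption on my part). Note that Udo reports later in this thread that disabling window scaling made his iperf numbers *worse*, so measure before and after on your own hardware:

```shell
# Disable TCP window scaling at runtime (kernel >= 3.11 per the thread):
sysctl -w net.ipv4.tcp_window_scaling=0

# Persist across reboots (typical location; distro-specific):
echo 'net.ipv4.tcp_window_scaling = 0' >> /etc/sysctl.conf

# Revert if throughput regresses:
sysctl -w net.ipv4.tcp_window_scaling=1
```

With scaling disabled the receive window is capped at 64 KiB, which deliberately limits in-flight data — that can calm bursty loss on mismatched links, but it also caps single-stream throughput, consistent with both reports in this thread.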

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-24 Thread Udo Lembke
Hi again, forgot to say - I'm still on 0.72.2! Udo

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-24 Thread Udo Lembke
Hi Steve, I'm also looking for improvements in single-thread reads. Somewhat higher values (twice as high?) should be possible with your config. I have 5 nodes with 60 4-TB hdds and got the following: rados -p test bench -b 4194304 60 seq -t 1 --no-cleanup Total time run: 60.066934 Total reads made…

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-23 Thread Steve Anthony
Ah, ok. That makes sense. With one concurrent operation I see numbers more in line with the read speeds I'm seeing from the filesystems on the rbd images. # rados -p bench bench 300 seq --no-cleanup -t 1 Total time run: 300.114589 Total reads made: 2795 Read size: 4194304 Bandwidth…
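The bandwidth figure is cut off in the preview, but it follows directly from the totals shown: reads × 4 MiB per read ÷ elapsed seconds (rados bench reports MB as MiB):

```shell
# Average single-thread bandwidth from the bench totals above:
# 2795 reads x 4 MiB each over 300.114589 s
awk 'BEGIN { printf "%.2f MB/s\n", 2795 * 4 / 300.114589 }'   # -> 37.25 MB/s
```

That ~37 MB/s with `-t 1` versus 400-600 MB/s at the default 16 concurrent operations shows how much of the earlier rados bench number came from parallelism the kernel client's sequential reads don't get.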

Re: [ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-23 Thread Sage Weil
On Wed, 23 Jul 2014, Steve Anthony wrote: > Hello, > > Recently I've started seeing very slow read speeds from the rbd images I > have mounted. After some analysis, I suspect the root cause is related > to krbd; if I run the rados benchmark, I see read bandwidth in the > 400-600MB/s range, however…

[ceph-users] slow read speeds from kernel rbd (Firefly 0.80.4)

2014-07-23 Thread Steve Anthony
Hello, Recently I've started seeing very slow read speeds from the rbd images I have mounted. After some analysis, I suspect the root cause is related to krbd; if I run the rados benchmark, I see read bandwidth in the 400-600MB/s range, however if I attempt to read directly from the block device wi…
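A common way to reproduce this comparison is to read sequentially from the mapped block device with dd and a large block size. A sketch of the pattern — it reads /dev/zero into /dev/null so it runs anywhere; on the real system the input would be the mapped image (e.g. /dev/rbd0, a hypothetical device name):

```shell
# Drop the page cache first so reads hit the cluster, not RAM
# (run as root on the test host):
#   echo 3 > /proc/sys/vm/drop_caches

# Sequential 4 MiB reads, matching the rados bench object size.
# Stand-in devices here; substitute if=/dev/rbd0 on the real system.
dd if=/dev/zero of=/dev/null bs=4M count=256 && echo done
```

dd prints its throughput summary to stderr when it finishes; comparing that number against the single-threaded `rados bench ... -t 1` result is a fairer apples-to-apples test than the default 16-way-concurrent bench.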