Ok, after some delays and the move to new network hardware I have an
update. I'm still seeing the same low bandwidth and high retransmissions
from iperf after moving to the Cisco 6001 (10Gb) and 2960 (1Gb). I've
narrowed it down to transmissions from a 10Gb connected host to a 1Gb
connected host.
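If anyone wants to reproduce this, plain iperf between the two hosts plus
the kernel's retransmit counters is enough; the hostnames below are
placeholders:

# on the 1Gb-connected host (placeholder name)
iperf -s

# on the 10Gb-connected host: push toward the 1Gb host for 30s, report every 5s
iperf -c <1gb-host> -t 30 -i 5

# on the sending host, compare TCP retransmit counters before and after the run
netstat -s | grep -i retrans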
On 07/28/2014 11:28 AM, Steve Anthony wrote:
While searching for more information I happened across the following
post (http://dachary.org/?p=2961) which vaguely resembled the symptoms
I've been experiencing. I ran tcpdump and noticed what appeared to be a
high number of retransmissions on the host where the images are mounted
during a read
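Roughly speaking, the capture and the retransmission count can be done
like this; the interface name and the 6800-7300 OSD port range are
assumptions, so adjust them for your setup:

# capture packet headers on the client while reading from the mounted image
tcpdump -i eth0 -s 128 -w /tmp/rbd-read.pcap portrange 6800-7300

# count the segments tshark flags as retransmissions
tshark -r /tmp/rbd-read.pcap -Y tcp.analysis.retransmission | wc -l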
Hi,
I don't see an improvement with tcp_window_scaling=0 in my configuration.
If anything, it's the other way around: the iperf performance is much lower:
root@ceph-03:~# iperf -c 172.20.2.14
Client connecting to 172.20.2.14, TCP port 5001
TCP window size:
Thanks for the information!
Based on my reading of http://ceph.com/docs/next/rbd/rbd-config-ref I
was under the impression that rbd cache options wouldn't apply, since
presumably the kernel is handling the caching. I'll have to toggle some
of those values and see if they make a difference in my setup.
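For reference, the options on that page go under [client] in ceph.conf and
look something like this; the sizes are the documented defaults, and as far
as I understand they only affect librbd clients (e.g. QEMU), not the kernel
rbd driver:

[client]
# librbd cache settings (sizes are the documented defaults; ignored by krbd)
rbd cache = true
rbd cache size = 33554432
rbd cache max dirty = 25165824
rbd cache target dirty = 16777216
rbd cache max dirty age = 1.0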
What is your kernel version? On kernel >= 3.11, sysctl -w
"net.ipv4.tcp_window_scaling=0" seems to improve the situation a lot. It
also helped a lot to mitigate processes going (and sticking) in 'D' state.
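If you want to try it, it's a single sysctl and easy to revert; the
sysctl.d file name below is only an example:

uname -r                                  # confirm the kernel is >= 3.11
sysctl -w net.ipv4.tcp_window_scaling=0   # disable window scaling at runtime
sysctl -w net.ipv4.tcp_window_scaling=1   # revert if it makes things worse

# to persist the setting across reboots (file name is just an example)
echo 'net.ipv4.tcp_window_scaling = 0' > /etc/sysctl.d/99-tcp-tuning.conf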
On 24/07/2014 22:08, Udo Lembke wrote:
Hi again,
forgot to say - I'm still on 0.72.2!
Udo
Hi Steve,
I'm also looking for improvements of single-thread-reads.
Somewhat higher values (maybe twice as high?) should be possible with your config.
I have 5 nodes with 60 4-TB HDDs and got the following:
rados -p test bench -b 4194304 60 seq -t 1 --no-cleanup
Total time run: 60.066934
Total reads made
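Note that the seq bench reads back objects left behind by an earlier write
bench, so the full sequence looks roughly like this (same 'test' pool as
above):

# write phase; --no-cleanup leaves the objects in place for the read test
rados -p test bench 60 write -b 4194304 -t 1 --no-cleanup

# sequential read phase with a single concurrent op
rados -p test bench 60 seq -t 1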
Ah, ok. That makes sense. With one concurrent operation I see numbers
more in line with the read speeds I'm seeing from the filesystems on the
rbd images.
# rados -p bench bench 300 seq --no-cleanup -t 1
Total time run: 300.114589
Total reads made: 2795
Read size: 4194304
Bandwidth (MB/sec):
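(2795 reads x 4 MiB over ~300 s works out to roughly 37 MB/s with a single
concurrent op.) For comparison, the multi-threaded equivalent would be
something like this, where 16 is just an example value:

# same sequential read bench, but with 16 concurrent ops
rados -p bench bench 300 seq --no-cleanup -t 16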
On Wed, 23 Jul 2014, Steve Anthony wrote:
Hello,
Recently I've started seeing very slow read speeds from the rbd images I
have mounted. After some analysis, I suspect the root cause is related
to krbd; if I run the rados benchmark, I see read bandwidth in the
400-600MB/s range, however if I attempt to read directly from the block
device
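For what it's worth, a direct read off the mapped device can be done with
something like this; the device name and sizes are only examples:

# read 4 GiB straight from the rbd block device, bypassing the page cache
dd if=/dev/rbd0 of=/dev/null bs=4M count=1024 iflag=direct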