Re: [ceph-users] Typical 10GbE latency

2014-11-14 Thread Wido den Hollander
On 13-11-14 19:39, Stephan Seitz wrote:
> Indeed, there must be something! But I can't figure it out yet. Same
> controllers, tried the same OS, direct cables, but the latency is 40%
> higher.
>
> Wido,
>
> just an educated guess:
>
> Did you check the offload settings of your

Re: [ceph-users] Typical 10GbE latency

2014-11-13 Thread German Anders
[fixed]
l2-fwd-offload: off
busy-poll: on [fixed]
ceph@cephosd01:~$

German Anders

--- Original message ---
Subject: Re: [ceph-users] Typical 10GbE latency
From: Stephan Seitz
To: Wido den Hollander
Cc:
Date: Thursday, 13/11/2014 15:39

Indeed, there must be something

Re: [ceph-users] Typical 10GbE latency

2014-11-13 Thread Stephan Seitz
> >> Indeed, there must be something! But I can't figure it out yet. Same
> >> controllers, tried the same OS, direct cables, but the latency is 40%
> >> higher.

Wido,

just an educated guess:

Did you check the offload settings of your NIC? ethtool -k should show you that.

- Stepha
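For anyone following along, a minimal way to compare the offload settings between the two hosts would be something like the sketch below (host names and the eth0 interface are placeholders):

  # Dump offload settings on both hosts and diff them
  ssh host-a 'ethtool -k eth0' > a.txt
  ssh host-b 'ethtool -k eth0' > b.txt
  diff -u a.txt b.txt
  # Offloads such as GRO/LRO can add latency; to toggle them for a test:
  # ethtool -K eth0 gro off lro off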

Re: [ceph-users] Typical 10GbE latency

2014-11-13 Thread Wido den Hollander
On 12-11-14 21:12, Udo Lembke wrote:
> Hi Wido,
> On 12.11.2014 12:55, Wido den Hollander wrote:
>> (back to list)
>>
>> Indeed, there must be something! But I can't figure it out yet. Same
>> controllers, tried the same OS, direct cables, but the latency is 40%
>> higher.
>>
> perhaps someth

Re: [ceph-users] Typical 10GbE latency

2014-11-12 Thread Udo Lembke
Hi Wido,

On 12.11.2014 12:55, Wido den Hollander wrote:
> (back to list)
>
> Indeed, there must be something! But I can't figure it out yet. Same
> controllers, tried the same OS, direct cables, but the latency is 40%
> higher.
>
perhaps something with pci-e order / interrupts? have you checked
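If it helps, a quick way to compare the PCIe link and interrupt placement between the fast and slow hosts (the PCI address and eth0 are just example placeholders):

  # Negotiated PCIe link speed/width of the NIC
  lspci -vv -s 04:00.0 | grep -i 'LnkCap\|LnkSta'
  # Which CPUs service the NIC queue interrupts
  grep eth0 /proc/interrupts
  cat /proc/irq/<irq-number>/smp_affinity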

Re: [ceph-users] Typical 10GbE latency

2014-11-12 Thread Wido den Hollander
(back to list)

On 11/10/2014 06:57 PM, Gary M wrote:
> Hi Wido,
>
> That is a bit weird.. I'd also check the Ethernet controller firmware
> version and settings between the other configurations. There must be
> something different.
>
Indeed, there must be something! But I can't figure it out ye

Re: [ceph-users] Typical 10GbE latency

2014-11-12 Thread Alexandre DERUMIER
Sent: Tuesday, 11 November 2014 23:13:17
Subject: Re: [ceph-users] Typical 10GbE latency

Is this with an 8192 byte payload? Theoretical transfer time at 1 Gbps (you are only sending one packet so LACP won't help) one direction is 0.061 ms, double that and you are at 0.122 ms of bits in flight, then there is
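For reference, the one-way serialization delay can be approximated as payload bits divided by line rate; the quick calculation below ignores IP/ICMP/Ethernet header overhead, so it lands in the same ballpark as (not exactly on) the 0.061 ms figure quoted above:

  # Rough one-way serialization delay of an 8192-byte payload
  awk 'BEGIN { bits = 8192 * 8;
               printf "1 Gbit/s: %.4f ms   10 Gbit/s: %.4f ms\n",
                      bits/1e9*1000, bits/1e10*1000 }'
  # prints roughly 0.0655 ms at 1 Gbit/s and 0.0066 ms at 10 Gbit/s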

Re: [ceph-users] Typical 10GbE latency

2014-11-11 Thread Robert LeBlanc
h a cisco 6500
>
> rtt min/avg/max/mdev = 0.179/0.202/0.221/0.019 ms
>
> (Seem to be lower than your 10gbe nexus)
>
> - Original message -
> From: "Wido den Hollander"
> To: ceph-users@lists.ceph.com
> Sent: Monday, 10 November 2014 17:22:04
>

Re: [ceph-users] Typical 10GbE latency

2014-11-11 Thread Alexandre DERUMIER
Sent: Monday, 10 November 2014 17:22:04
Subject: Re: [ceph-users] Typical 10GbE latency

On 08-11-14 02:42, Gary M wrote:
> Wido,
>
> Take the switch out of the path between nodes and remeasure.. ICMP-echo
> requests are very low priority traffic for switches and network stacks.
>
I

Re: [ceph-users] Typical 10GbE latency

2014-11-10 Thread Wido den Hollander
On 08-11-14 02:42, Gary M wrote:
> Wido,
>
> Take the switch out of the path between nodes and remeasure.. ICMP-echo
> requests are very low priority traffic for switches and network stacks.
>
I tried with a direct TwinAx and fiber cable. No difference.

> If you really want to know, place a ne

Re: [ceph-users] Typical 10GbE latency

2014-11-07 Thread Gary M
Wido,

Take the switch out of the path between nodes and remeasure.. ICMP-echo requests are very low priority traffic for switches and network stacks.

If you really want to know, place a network analyzer between the nodes to measure the request packet to response packet latency.. The ICMP traffic
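Short of a hardware analyzer, one way to take ICMP prioritisation out of the equation is to measure TCP/UDP round-trip latency directly; a sketch, assuming qperf is installed on both nodes and 10.0.0.2 is a placeholder address:

  # On the receiving node:
  qperf
  # On the sending node:
  qperf 10.0.0.2 tcp_lat udp_lat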

Re: [ceph-users] Typical 10GbE latency

2014-11-07 Thread Łukasz Jagiełło
Hi,

rtt min/avg/max/mdev = 0.070/0.177/0.272/0.049 ms

04:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)

at both hosts, with an Arista 7050S-64 in between. Both hosts were part of an active ceph cluster.

On Thu, Nov 6, 2014 at 5:18 AM, Wido den Holland

Re: [ceph-users] Typical 10GbE latency

2014-11-07 Thread Alexandre DERUMIER
From: "Robert LeBlanc"
To: "Stefan Priebe"
Cc: ceph-users@lists.ceph.com
Sent: Friday, 7 November 2014 16:00:40
Subject: Re: [ceph-users] Typical 10GbE latency

Infiniband has much lower latencies when performing RDMA and native IB traffic. Doing IPoIB adds all the

Re: [ceph-users] Typical 10GbE latency

2014-11-07 Thread Robert LeBlanc
Infiniband has much lower latencies when performing RDMA and native IB traffic. Doing IPoIB adds all the Ethernet stuff that has to be done in software. Still it is comparable to Ethernet even with this disadvantage. Once Ceph has the ability to do native RDMA, Infiniband should have an edge. Robe

Re: [ceph-users] Typical 10GbE latency

2014-11-07 Thread Stefan Priebe - Profihost AG
Hi,

this is with an Intel 10GbE bonded (2x10Gbit/s) network.

rtt min/avg/max/mdev = 0.053/0.107/0.184/0.034 ms

I thought that the Mellanox stuff had lower latencies.

Stefan

On 06.11.2014 at 18:09, Robert LeBlanc wrote:
> rtt min/avg/max/mdev = 0.130/0.157/0.190/0.016 ms
>
> IPoIB Mellanox Connec

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Robert LeBlanc
rtt min/avg/max/mdev = 0.130/0.157/0.190/0.016 ms

IPoIB, Mellanox ConnectX-3 MT27500 FDR adapter and Mellanox IS5022 QDR switch, MTU set to 65520. CentOS 7.0.1406 running 3.17.2-1.el7.elrepo.x86_64 on an Intel(R) Atom(TM) CPU C2750 with 32 GB of RAM.

On Thu, Nov 6, 2014 at 9:46 AM, Udo Lembke wrote:
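For comparison purposes, IPoIB latency also depends on whether the interface runs in connected or datagram mode; a quick check (ib0 is a placeholder interface name):

  # Show IPoIB mode ("connected" or "datagram") and the effective MTU
  cat /sys/class/net/ib0/mode
  ip link show ib0 | grep mtu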

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Udo Lembke
Hi,

no special optimizations on the host. In this case the pings are from a Proxmox VE host to the ceph OSDs (Ubuntu + Debian). The pings from one OSD to the others are comparable.

Udo

On 06.11.2014 15:00, Irek Fasikhov wrote:
> Hi Udo.
> Good value :)
>
> Whether an additional optimization on the

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Wido den Hollander
On 11/06/2014 02:58 PM, Luis Periquito wrote:
> What is the COPP?
>
Nothing special, default settings. 200 ICMP packets/second. But we also tested with a direct TwinAx cable between two hosts, so no switch involved. That did not improve the latency. So this seems to be a kernel/driver issue som
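One kernel/driver knob worth comparing between the fast and slow setups is interrupt coalescing, which can add tens of microseconds per packet. A sketch (eth0 is a placeholder, and which parameters are supported depends on the driver):

  # Show the current coalescing settings
  ethtool -c eth0
  # For a latency test only: fire an interrupt per packet
  ethtool -C eth0 adaptive-rx off rx-usecs 0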

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Robert Sander
Hi,

2 LACP bonded Intel Corporation Ethernet 10G 2P X520 adapters, no jumbo frames, here:

rtt min/avg/max/mdev = 0.141/0.207/0.313/0.040 ms
rtt min/avg/max/mdev = 0.124/0.223/0.289/0.044 ms
rtt min/avg/max/mdev = 0.302/0.378/0.460/0.038 ms
rtt min/avg/max/mdev = 0.282/0.389/0.473/0.035 ms

All ho
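For the bonded setups in this thread it may also be worth confirming which slave a single ping flow actually traverses; a quick look (bond0 is a placeholder):

  # Bond mode, LACP state and per-slave counters
  cat /proc/net/bonding/bond0
  # The hash policy decides which slave a given flow uses
  cat /sys/class/net/bond0/bonding/xmit_hash_policy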

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Irek Fasikhov
Hi Udo.

Good value :) Did you do any additional optimization on the host?

Thanks.

On Thu Nov 06 2014 at 16:57:36, Udo Lembke wrote:
> Hi,
> from one host to five OSD hosts.
>
> NIC Intel 82599EB; jumbo frames; single switch IBM G8124 (blade network).
>
> rtt min/avg/max/mdev = 0.075/0.114/0.231/0.037 ms
> r

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Luis Periquito
What is the COPP?

On Thu, Nov 6, 2014 at 1:53 PM, Wido den Hollander wrote:
> On 11/06/2014 02:38 PM, Luis Periquito wrote:
>> Hi Wido,
>>
>> What is the full topology? Are you using a north-south or east-west? So far
>> I've seen the east-west are slightly slower. What are the fabric mode

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Udo Lembke
Hi,

from one host to five OSD hosts.

NIC Intel 82599EB; jumbo frames; single switch IBM G8124 (blade network).

rtt min/avg/max/mdev = 0.075/0.114/0.231/0.037 ms
rtt min/avg/max/mdev = 0.088/0.164/0.739/0.072 ms
rtt min/avg/max/mdev = 0.081/0.141/0.229/0.030 ms
rtt min/avg/max/mdev = 0.083/0.115/0
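A quick way to confirm jumbo frames are really active end-to-end is a don't-fragment ping with a jumbo-sized payload; a sketch, assuming a 9000-byte MTU (8972 = 9000 minus 28 bytes of IP/ICMP headers) and a placeholder host name:

  # Fails with "message too long" if any hop has a smaller MTU
  ping -M do -s 8972 -c 5 osd-host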

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Wido den Hollander
On 11/06/2014 02:38 PM, Luis Periquito wrote:
> Hi Wido,
>
> What is the full topology? Are you using a north-south or east-west? So far
> I've seen the east-west are slightly slower. What are the fabric modes you
> have configured? How is everything connected? Also you have no information
> on th

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread German Anders
Also, between two hosts on a NetGear switch at 10GbE:

rtt min/avg/max/mdev = 0.104/0.196/0.288/0.055 ms

German Anders

--- Original message ---
Subject: [ceph-users] Typical 10GbE latency
From: Wido den Hollander
To:
Date: Thursday, 06/11/2014 10:18

Hello,

While workin

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Luis Periquito
Hi Wido,

What is the full topology? Are you using a north-south or east-west? So far I've seen the east-west are slightly slower. What are the fabric modes you have configured? How is everything connected? Also you have no information on the OS - if I remember correctly there was a lot of improvem

Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Dan van der Ster
Between two hosts on an HP ProCurve 6600, no jumbo frames:

rtt min/avg/max/mdev = 0.096/0.128/0.151/0.019 ms

Cheers, Dan

On Thu Nov 06 2014 at 2:19:07 PM, Wido den Hollander wrote:
> Hello,
>
> While working at a customer I've run into a 10GbE latency which seems
> high to me.
>
> I have access
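For anyone who wants to reproduce the numbers quoted throughout this thread, the rtt lines come from plain ping summaries; something along these lines (the host name is a placeholder):

  # 1000 probes, quiet output, just the summary line
  ping -c 1000 -q ceph-osd01
  # Optionally with the 8192-byte payload discussed above
  ping -c 1000 -q -s 8192 ceph-osd01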