On 13-11-14 19:39, Stephan Seitz wrote:
>
Indeed, there must be something! But I can't figure it out yet. Same
controllers, tried the same OS, direct cables, but the latency is 40%
higher.
>
> Wido,
>
> just an educated guess:
>
> Did you check the offload settings of your NIC?
l2-fwd-offload: off [fixed]
busy-poll: on [fixed]
ceph@cephosd01:~$
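A minimal sketch of the follow-up experiment this exchange points at:
temporarily disable the usual offloads and re-run the latency test. The
interface name eth2 and the peer address 10.0.0.2 are placeholders, not
taken from the thread.

  # turn the common offloads off (note: some NICs refuse to change lro)
  $ sudo ethtool -K eth2 gro off gso off tso off lro off
  # re-measure, then restore whatever the original settings were
  $ ping -c 100 -i 0.2 -q 10.0.0.2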
German Anders
--- Original message ---
Subject: Re: [ceph-users] Typical 10GbE latency
From: Stephan Seitz
To: Wido den Hollander
Cc:
Date: Thursday, 13/11/2014 15:39
Indeed, there must be something
> >> Indeed, there must be something! But I can't figure it out yet. Same
> >> controllers, tried the same OS, direct cables, but the latency is 40%
> >> higher.
Wido,
just an educated guess:
Did you check the offload settings of your NIC?
ethtool -k should show you that.
- Stephan
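The check Stephan suggests is read-only; a sketch, with eth2 as a
stand-in for the real interface name:

  # lists every offload feature; entries marked [fixed] cannot be changed
  $ ethtool -k eth2 | egrep -i 'offload|busy-poll'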
On 12-11-14 21:12, Udo Lembke wrote:
> Hi Wido,
> On 12.11.2014 12:55, Wido den Hollander wrote:
>> (back to list)
>>
>>
>> Indeed, there must be something! But I can't figure it out yet. Same
>> controllers, tried the same OS, direct cables, but the latency is 40%
>> higher.
>>
>>
> perhaps something with pci-e order / interrupts?
Hi Wido,
On 12.11.2014 12:55, Wido den Hollander wrote:
> (back to list)
>
>
> Indeed, there must be something! But I can't figure it out yet. Same
> controllers, tried the same OS, direct cables, but the latency is 40%
> higher.
>
>
perhaps something with pci-e order / interrupts?
have you checked
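A sketch of the interrupt / PCIe checks Udo is hinting at. The interface
name eth2, the IRQ number 42 and the PCI address 04:00.0 are assumptions
(the address happens to match the 82599EB mentioned elsewhere in the
thread, but check your own with lspci).

  # which CPUs are servicing the NIC interrupts?
  $ grep eth2 /proc/interrupts
  # IRQ affinity of one of those vectors
  $ cat /proc/irq/42/smp_affinity_list
  # negotiated PCIe link speed/width of the adapter
  $ sudo lspci -vv -s 04:00.0 | grep -i lnksta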
(back to list)
On 11/10/2014 06:57 PM, Gary M wrote:
> Hi Wido,
>
> That is a bit weird.. I'd also check the Ethernet controller firmware
> version and settings between the other configurations. There must be
> something different.
>
Indeed, there must be something! But I can't figure it out yet.
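The firmware/driver comparison Gary suggests can be done with ethtool's
driver query; a sketch, eth2 again being a placeholder interface name:

  # prints driver, driver version and firmware-version; compare across hosts
  $ ethtool -i eth2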
Sent: Tuesday, 11 November 2014 23:13:17
Subject: Re: [ceph-users] Typical 10GbE latency
Is this with an 8192 byte payload? The theoretical transfer time at 1 Gbps
(you are only sending one packet, so LACP won't help) is 0.061 ms in one
direction; double that and you are at 0.122 ms of bits in flight, then there is
> with a cisco 6500
>
> rtt min/avg/max/mdev = 0.179/0.202/0.221/0.019 ms
>
>
> (Seem to be lower than your 10gbe nexus)
>
>
> - Original Message -
>
> From: "Wido den Hollander"
> To: ceph-users@lists.ceph.com
> Sent: Monday, 10 November 2014 17:22:04
>
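As a rough cross-check of the arithmetic above, the one-way wire time is
just payload size divided by line rate; a sketch with bc (this covers
serialization only, not switch, NIC or kernel latency):

  # 8192-byte payload, one-way time in ms at 1 Gbit/s and 10 Gbit/s
  $ echo 'scale=4; 8192*8*1000/1000000000'  | bc    # .0655
  $ echo 'scale=4; 8192*8*1000/10000000000' | bc    # .0065

which is in the same ballpark as the 0.061 ms quoted above for 1 Gbps.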
Sent: Monday, 10 November 2014 17:22:04
Subject: Re: [ceph-users] Typical 10GbE latency
On 08-11-14 02:42, Gary M wrote:
> Wido,
>
> Take the switch out of the path between nodes and remeasure.. ICMP-echo
> requests are very low priority traffic for switches and network stacks.
>
I tried with a direct TwinAx and fiber cable. No difference.
On 08-11-14 02:42, Gary M wrote:
> Wido,
>
> Take the switch out of the path between nodes and remeasure.. ICMP-echo
> requests are very low priority traffic for switches and network stacks.
>
I tried with a direct TwinAx and fiber cable. No difference.
> If you really want to know, place a network analyzer between the nodes to
> measure the request packet to response packet latency.
Wido,
Take the switch out of the path between nodes and remeasure.. ICMP-echo
requests are very low priority traffic for switches and network stacks.
If you really want to know, place a network analyzer between the nodes to
measure the request packet to response packet latency.. The ICMP traffic
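One way to act on Gary's point about ICMP priority without a hardware
analyzer is to measure round-trip latency over TCP/UDP instead; a sketch
using qperf (the peer address 10.0.0.2 is an assumption):

  # on the receiving node: start the qperf server
  $ qperf
  # on the sending node: one-byte TCP and UDP round-trip latency
  $ qperf 10.0.0.2 tcp_lat udp_lat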
Hi,
rtt min/avg/max/mdev = 0.070/0.177/0.272/0.049 ms
04:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+
Network Connection (rev 01)
at both hosts and Arista 7050S-64 between.
Both hosts were part of an active ceph cluster.
On Thu, Nov 6, 2014 at 5:18 AM, Wido den Hollander wrote:
De: "Robert LeBlanc"
À: "Stefan Priebe"
Cc: ceph-users@lists.ceph.com
Envoyé: Vendredi 7 Novembre 2014 16:00:40
Objet: Re: [ceph-users] Typical 10GbE latency
Infiniband has much lower latencies when performing RDMA and native IB traffic.
Doing IPoIB adds all the Ethernet stuff that has to be done in software.
Infiniband has much lower latencies when performing RDMA and native IB
traffic. Doing IPoIB adds all the Ethernet stuff that has to be done in
software. Still it is comparable to Ethernet even with this disadvantage.
Once Ceph has the ability to do native RDMA, Infiniband should have an
edge.
Robert
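To see the RDMA edge Robert describes, the usual comparison is an IPoIB
ping against a verbs-level test; a sketch using ib_send_lat from the
perftest package (addresses are placeholders):

  # IPoIB path, through the kernel IP stack
  $ ping -c 100 -i 0.2 -q 192.168.10.2
  # native IB path: start the server side first, then point the client at it
  server$ ib_send_lat
  client$ ib_send_lat 192.168.10.2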
Hi,
this is with an Intel 10GbE bonded (2x10Gbit/s) network.
rtt min/avg/max/mdev = 0.053/0.107/0.184/0.034 ms
I thought that the mellanox stuff had lower latencies.
Stefan
On 06.11.2014 at 18:09, Robert LeBlanc wrote:
> rtt min/avg/max/mdev = 0.130/0.157/0.190/0.016 ms
>
> IPoIB Mellanox ConnectX-3 MT27500 FDR adapter and Mellanox IS5022 QDR
> switch MTU set to 65520.
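For the bonded setups in this thread it is worth remembering that a
single ping flow only ever crosses one slave of an LACP bond; a sketch
of how the bond configuration is usually confirmed (bond0 is an assumed
name):

  # shows the mode (802.3ad for LACP), xmit hash policy and per-slave
  # MII status
  $ cat /proc/net/bonding/bond0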
rtt min/avg/max/mdev = 0.130/0.157/0.190/0.016 ms
IPoIB Mellanox ConnectX-3 MT27500 FDR adapter and Mellanox IS5022 QDR
switch MTU set to 65520. CentOS 7.0.1406 running 3.17.2-1.el7.elrepo.x86_64
on Intel(R) Atom(TM) CPU C2750 with 32 GB of RAM.
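A 65520 MTU normally implies IPoIB connected mode; a sketch of how that
is usually verified (ib0 is an assumed interface name):

  $ cat /sys/class/net/ib0/mode     # "connected" or "datagram"
  $ ip link show ib0 | grep mtu     # datagram mode is limited to ~4k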
On Thu, Nov 6, 2014 at 9:46 AM, Udo Lembke wrote:
Hi,
no special optimizations on the host.
In this case the pings are from a proxmox-ve host to ceph-osds (ubuntu
+ debian).
The pings from one osd to the others are comparable.
Udo
On 06.11.2014 15:00, Irek Fasikhov wrote:
> Hi, Udo.
> Good value :)
>
> Whether an additional optimization on the host?
On 11/06/2014 02:58 PM, Luis Periquito wrote:
> What is the COPP?
>
Nothing special, default settings. 200 ICMP packets/second.
But we also tested with a direct TwinAx cable between two hosts, so no
switch involved. That did not improve the latency.
So this seems to be a kernel/driver issue somewhere.
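Two kernel/driver knobs that commonly add tens of microseconds per
packet and are easy to rule out are interrupt coalescing and CPU power
management; a sketch, with eth2 again a placeholder:

  # non-zero rx-usecs delays completion interrupts
  $ ethtool -c eth2
  # deep C-states and the powersave governor add wake-up latency
  $ cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name
  $ cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor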
Hi,
2 LACP bonded Intel Corporation Ethernet 10G 2P X520 Adapters, no jumbo
frames, here:
rtt min/avg/max/mdev = 0.141/0.207/0.313/0.040 ms
rtt min/avg/max/mdev = 0.124/0.223/0.289/0.044 ms
rtt min/avg/max/mdev = 0.302/0.378/0.460/0.038 ms
rtt min/avg/max/mdev = 0.282/0.389/0.473/0.035 ms
All ho
Hi, Udo.
Good value :)
Whether an additional optimization on the host?
Thanks.
On Thu Nov 06 2014 at 16:57:36, Udo Lembke wrote:
> Hi,
> from one host to five OSD-hosts.
>
> NIC Intel 82599EB; jumbo-frames; single Switch IBM G8124 (blade network).
>
> rtt min/avg/max/mdev = 0.075/0.114/0.231/0.037 ms
> rtt min/avg/max/mdev = 0.088/0.164/0.739/0.072 ms
What is the COPP?
On Thu, Nov 6, 2014 at 1:53 PM, Wido den Hollander wrote:
> On 11/06/2014 02:38 PM, Luis Periquito wrote:
> > Hi Wido,
> >
> > What is the full topology? Are you using a north-south or east-west? So far
> > I've seen the east-west are slightly slower. What are the fabric modes you
> > have configured?
Hi,
from one host to five OSD-hosts.
NIC Intel 82599EB; jumbo-frames; single Switch IBM G8124 (blade network).
rtt min/avg/max/mdev = 0.075/0.114/0.231/0.037 ms
rtt min/avg/max/mdev = 0.088/0.164/0.739/0.072 ms
rtt min/avg/max/mdev = 0.081/0.141/0.229/0.030 ms
rtt min/avg/max/mdev = 0.083/0.115/0
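Where jumbo frames are enabled it is worth confirming that the whole
path actually passes 9000-byte frames before comparing rtt figures; a
sketch (eth2 and the peer address are placeholders):

  $ ip link show dev eth2 | grep mtu
  # 8972 = 9000 minus the 20-byte IP and 8-byte ICMP headers;
  # -M do forbids fragmentation, so oversized frames fail loudly
  $ ping -M do -s 8972 -c 10 10.0.0.2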
On 11/06/2014 02:38 PM, Luis Periquito wrote:
> Hi Wido,
>
> What is the full topology? Are you using a north-south or east-west? So far
> I've seen the east-west are slightly slower. What are the fabric modes you
> have configured? How is everything connected? Also you have no information
> on the OS
also, between two hosts on a NetGear SW model at 10GbE:
rtt min/avg/max/mdev = 0.104/0.196/0.288/0.055 ms
German Anders
--- Original message ---
Subject: [ceph-users] Typical 10GbE latency
From: Wido den Hollander
To:
Date: Thursday, 06/11/2014 10:18
Hello,
While working at a customer I've run into a 10GbE latency which seems
high to me.
Hi Wido,
What is the full topology? Are you using a north-south or east-west? So far
I've seen the east-west are slightly slower. What are the fabric modes you
have configured? How is everything connected? Also you have no information
on the OS - if I remember correctly there was a lot of improvem
Between two hosts on an HP Procurve 6600, no jumbo frames:
rtt min/avg/max/mdev = 0.096/0.128/0.151/0.019 ms
Cheers, Dan
On Thu Nov 06 2014 at 2:19:07 PM Wido den Hollander wrote:
> Hello,
>
> While working at a customer I've run into a 10GbE latency which seems
> high to me.
>
> I have access
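For anyone reproducing the numbers in this thread: the rtt
min/avg/max/mdev lines quoted throughout are ping's end-of-run summary;
a typical invocation would look like the sketch below (count, interval,
payload size and peer address are arbitrary choices, not taken from the
thread), run in both directions between each pair of hosts:

  $ ping -c 100 -i 0.2 -q 10.0.0.2            # default 56-byte payload
  $ ping -c 100 -i 0.2 -q -s 8192 10.0.0.2    # large payload, as discussed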