[ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-15 Thread Stefan Bauer
OSDs and could reduce latency from 2.5ms to 0.7ms now. :p Cheers Stefan -Original Message- From: Виталий Филиппов  Sent: Tuesday 14 January 2020 10:28 To: Wido den Hollander ; Stefan Bauer CC: ceph-users@lists.ceph.com Subject: Re: [ceph-users] low io with
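For anyone wanting to reproduce the 2.5 ms vs. 0.7 ms comparison, the latency can be checked with stock tools before and after the change. A minimal sketch, assuming a test pool named "rbd" (pool name and runtime are placeholders, not taken from the thread):

    # single-threaded 4 KiB writes; rados bench prints average and max latency per run
    rados bench -p rbd 30 write -b 4096 -t 1
    # per-OSD commit/apply latency in milliseconds, as reported by the cluster itself
    ceph osd perf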

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-14 Thread Stefan Bauer
Thank you all, performance is indeed better now. Can now go back to sleep ;) KR Stefan -Original Message- From: Виталий Филиппов  Sent: Tuesday 14 January 2020 10:28 To: Wido den Hollander ; Stefan Bauer CC: ceph-users@lists.ceph.com Subject: Re: [ceph-users] low io

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-14 Thread Stefan Bauer
Hi Vitaliy, thank you for your time. Do you mean cephx sign messages = false with "disable signatures"? KR Stefan -Original Message- From: Виталий Филиппов  Sent: Tuesday 14 January 2020 10:28 To: Wido den Hollander ; Stefan Bauer CC: ceph-users@list
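For context, "disable signatures" here refers to the cephx message-signing options. A sketch of the relevant ceph.conf lines, assuming they go into the [global] section on all nodes and that daemons and clients are restarted afterwards; note this trades message integrity protection for lower CPU overhead and latency:

    # cephx authentication stays enabled; only per-message signing is turned off
    [global]
    cephx_sign_messages = false
    cephx_require_signatures = false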

Re: [ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-14 Thread Stefan Bauer
Hi Stefan, thank you for your time. "temporary write through" does not seem to be a legit parameter. However, write through is already set:
root@proxmox61:~# echo "temporary write through" > /sys/block/sdb/device/scsi_disk/*/cache_type
root@proxmox61:~# cat /sys/block/sdb/device/scsi_di
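A note on the sysfs knob used above (a sketch, not from the original mails; the accepted strings depend on the kernel): cache_type takes "write back" and "write through", and only newer kernels also accept the "temporary ..." variants, which change the kernel's view of the cache without sending a MODE SELECT to the drive. Checking and setting it could look like:

    # show the current cache mode of every SCSI disk
    grep -H . /sys/class/scsi_disk/*/cache_type
    # disable the volatile write cache on sdb; the kernel translates this into a
    # MODE SELECT for the drive (unlike the "temporary ..." variants)
    echo "write through" > /sys/block/sdb/device/scsi_disk/*/cache_type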

[ceph-users] low io with enterprise SSDs ceph luminous - can we expect more? [klartext]

2020-01-13 Thread Stefan Bauer
Hi, we're playing around with ceph but are not quite happy with the IOs. 3-node ceph / proxmox cluster, each node with:
LSI HBA 3008 controller
4 x MZILT960HAHQ/007 Samsung SSD
Transport protocol: SAS (SPL-3)
40G fibre Intel 520 Network controller on Unifi Switch
Ping roundtrip to partner
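Before blaming Ceph itself, it is worth measuring what a single SSD delivers for sync writes, since that is the path the OSD journal/WAL exercises. A minimal fio sketch (destructive: /dev/sdX is a placeholder for an empty disk, not taken from the thread):

    # WARNING: writes directly to the raw device and destroys its contents
    fio --name=synctest --filename=/dev/sdX --ioengine=libaio --direct=1 \
        --fsync=1 --rw=write --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based

Drives without power-loss protection typically fare far worse in this test than enterprise drives with PLP.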

Re: [ceph-users] how to find the lazy egg - poor performance - interesting observations [klartext]

2019-11-13 Thread Stefan Bauer
    "avgtime": 0.004992133 because the communication partner is slow in writing/commiting? Dont want to follow the red hering :/ We have the following times on our 11 osds. Attached image. -Ursprüngliche Nachricht- Von: Paul Emmerich  Gesendet: Donnerstag 7 Novemb

Re: [ceph-users] how to find the lazy egg - poor performance - interesting observations [klartext]

2019-11-07 Thread Stefan Bauer
Thank you Paul. I'm not sure if these low values will be of any help:

osd  commit_latency(ms)  apply_latency(ms)
  0                   0                  0
  1                   0                  0
  5                   0                  0
  4                   0                  0
  3
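The table above looks like the output of "ceph osd perf"; the commit/apply columns are a rolling snapshot and show zeros on an idle cluster, so they are only informative while writes are in flight. One way to catch a slow OSD is to keep them on screen while load runs from another shell (a sketch):

    # refresh the per-OSD latencies every 2 seconds while a benchmark runs elsewhere
    watch -n 2 ceph osd perf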

[ceph-users] how to find the lazy egg - poor performance - interesting observations [klartext]

2019-11-07 Thread Stefan Bauer
Hi folks, we are running a 3 node proxmox-cluster with - of course - ceph :)
ceph version 12.2.12 (39cfebf25a7011204a9876d2950e4b28aba66d11) luminous (stable)
10G network. iperf reports almost 10G between all nodes. We are using mixed standard SSDs (crucial / samsung). We are aware that
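Given the mixed standard SSDs mentioned here, a quick inventory of the exact drive models behind the OSDs is a useful first step before digging deeper (device names are placeholders; run on each node):

    # list block devices with their model strings
    lsblk -d -o NAME,MODEL,ROTA,SIZE
    # confirm model and firmware of a suspect drive
    smartctl -i /dev/sdb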