Re: [ceph-users] Fwd: High IOWait Issue

2018-03-25 Thread Budai Laszlo
> sdi                                8:128   0 185.8G  0 disk
> ├─sdi1                             8:129   0  16.6G  0 part
> ├─sdi2                             8:130   0  16.6G  0 part
> ├─sdi3                             8:131   0  16.6G  0 part
> ├─sdi4                             8:132   0  16.6G  0 part

Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread Budai Laszlo
Could you post the result of "ceph -s"? Besides the health status there are other details that could help, like the status of your PGs. Also, the result of "ceph-disk list" would be useful to understand how your disks are organized. For instance, with 1 SSD for 7 HDDs the SSD could be the bottleneck

Re: [ceph-users] Fwd: High IOWait Issue

2018-03-24 Thread Budai Laszlo
Hi, what version of Ceph are you using? What is the HW config of your OSD nodes? Have you checked your disks for errors (dmesg, smartctl)? What status is Ceph reporting? (ceph -s) What is the saturation level of your cluster? (ceph df) Kind regards, Laszlo
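The checks listed above can be sketched as a quick triage script. This is a hedged sketch, not part of the original thread: `/dev/sda` is a placeholder device, each step is skipped when the tool is missing, and the `timeout` guard is only there so the script degrades gracefully on a node that cannot reach its monitors.

```shell
#!/bin/sh
# Triage sketch for a high-iowait OSD node (assumptions: root access,
# /dev/sda is a placeholder disk; substitute your own OSD devices).
command -v ceph >/dev/null && timeout 5 ceph -s    # health and PG states
command -v ceph >/dev/null && timeout 5 ceph df    # pool/raw utilization
dmesg 2>/dev/null | grep -iE "i/o error|ata error" | tail -n 20  # disk errors
command -v smartctl >/dev/null && smartctl -H /dev/sda  # SMART health summary
echo "triage done"
```

Run it on each OSD node in turn; a single saturated SSD journal or a failing disk usually shows up in the first two checks.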

Re: [ceph-users] Bluestore bluestore_prefer_deferred_size and WAL size

2018-03-09 Thread Budai Laszlo
that this potentially hugely increases the required size for the WAL, but I'm not sure if that has any implications beyond simply needing a larger WAL/DB device, or if there are other config changes that you'd need to make. Rich On 09/03/18 09:35, Budai Laszlo wrote: Dear all, I am wondering

Re: [ceph-users] Bluestore bluestore_prefer_deferred_size and WAL size

2018-03-09 Thread Budai Laszlo
Dear all, I am wondering whether it helps to increase bluestore_prefer_deferred_size to 4 MB so that the RBD chunks are first written to the WAL, and only later to the spinning disks. Any opinions/experiences about this? Kind regards, Laszlo On 08.03.2018 18:15, Budai Laszlo wrote: Dear all
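A back-of-the-envelope sketch of the proposal, assuming the documented semantics that writes strictly smaller than bluestore_prefer_deferred_size are deferred through the WAL (the exact comparison may vary by release — check your version's source before relying on it):

```shell
#!/bin/sh
# Sketch: which write sizes get deferred at a 4 MiB threshold?
# Assumption: deferred if write size < bluestore_prefer_deferred_size.
threshold=$((4 * 1024 * 1024))   # proposed bluestore_prefer_deferred_size
for write in $((64 * 1024)) $((4 * 1024 * 1024)); do
    if [ "$write" -lt "$threshold" ]; then
        echo "$write bytes: deferred (via WAL)"
    else
        echo "$write bytes: written directly to the data device"
    fi
done
# → 65536 bytes: deferred (via WAL)
# → 4194304 bytes: written directly to the data device
```

Under that assumption a threshold of exactly 4 MiB would still miss full 4 MiB RBD object writes, so it would have to be set above 4 MiB — which is exactly the "larger WAL" concern raised in the reply.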

[ceph-users] Bluestore bluestore_prefer_deferred_size and WAL size

2018-03-08 Thread Budai Laszlo
Dear all, I'm reading about the bluestore_prefer_deferred_size parameter for BlueStore. Are there any hints about its size when using a dedicated SSD for block.wal and block.db? Thank you in advance! Laszlo
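The thread does not settle on numbers, but a ceph.conf sketch for an OSD with block.db/block.wal on a dedicated SSD might look like the following. The sizes are illustrative assumptions only, not recommendations from this thread; Ceph's documentation suggests sizing block.db as a few percent of the data device.

```ini
[osd]
# Illustrative sizes only - tune for your workload and Ceph release.
bluestore_block_db_size  = 32212254720   ; 30 GiB DB partition on the SSD
bluestore_block_wal_size = 2147483648    ; 2 GiB WAL partition on the SSD
# Writes below this size are deferred through the WAL (HDD default: 32 KiB).
bluestore_prefer_deferred_size_hdd = 32768
```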

[ceph-users] Cache tier

2018-03-05 Thread Budai Laszlo
Dear all, I have some questions about cache tiering in Ceph: 1. Can someone share experiences with cache tiering? What are the sensitive things to pay attention to regarding the cache tier? Can one use the same SSD for both cache and 2. Is cache tiering supported with BlueStore? Any advice for using
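For context, a minimal writeback cache-tier setup follows the pattern below. This is a hedged sketch: the pool names "hot" (SSD-backed) and "cold" (HDD-backed) are hypothetical, the 100 GiB target is an arbitrary example, and the whole block only runs when a ceph CLI is present.

```shell
#!/bin/sh
# Hedged sketch of a writeback cache tier; "hot" and "cold" are
# hypothetical pool names - create them first with "ceph osd pool create".
if command -v ceph >/dev/null; then
    ceph osd tier add cold hot                  # attach hot as cold's tier
    ceph osd tier cache-mode hot writeback      # writeback caching
    ceph osd tier set-overlay cold hot          # route client I/O through hot
    ceph osd pool set hot hit_set_type bloom    # hit-set tracking is required
    ceph osd pool set hot target_max_bytes $((100 * 1024 * 1024 * 1024))
fi
echo "cache-tier sketch done"
```

Without hit_set_type and a target_max_bytes / target_max_objects limit the tiering agent cannot flush or evict, which is one of the classic pitfalls asked about above.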

Re: [ceph-users] Luminous and Calamari

2018-03-02 Thread Budai Laszlo
stien VIGNERON, CRIANN, Ingénieur / Engineer, Technopôle du Madrillet, 745 avenue de l'Université, 76800 Saint-Etienne du Rouvray, France, tel. +33 2 32 91 42 91, fax +33 2 32 91 42 92, http://www.criann.fr, mailto:sebastien.vigne...@criann.fr, support: supp...@criann.fr. On 2 March 2018 at 15:06, Bu

[ceph-users] Luminous and Calamari

2018-03-02 Thread Budai Laszlo
Dear all, is it possible to use Calamari with Luminous? (I know about the manager dashboard, but that is "read only"; I need a tool for also managing Ceph.) Kind regards, Laszlo