[ceph-users] Re: Slow ops on OSDs

2020-10-06 Thread Danni Setiawan
We hit a similar issue last week: sluggish disks (10 TB SAS in RAID 0 mode) in half of our nodes were dragging down cluster performance. These disks showed high CPU usage and very high latency. It turned out a *patrol read* process on the RAID card was running automatically every week. …
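For anyone hitting the same symptom: patrol read can usually be inspected and controlled from the RAID controller's CLI. A rough sketch with StorCLI (MegaCLI has equivalent -AdpPR commands); the controller index /c0 and the binary names are assumptions that vary per system:

    # Show current patrol read state and schedule (assumes controller 0)
    storcli64 /c0 show patrolread
    # Disable automatic patrol read entirely
    storcli64 /c0 set patrolread=off
    # Equivalent checks with the older MegaCLI tooling
    MegaCli64 -AdpPR -Info -aALL
    MegaCli64 -AdpPR -Dsbl -aALL

Rather than disabling it outright, some operators reschedule patrol read to a low-traffic window so the media still gets scanned.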

[ceph-users] Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-15 Thread Danni Setiawan
Hi all, I'm trying to measure the performance penalty of HDD OSDs when the WAL/DB sits on a faster device (SSD/NVMe) versus on the same HDD, for different workloads (RBD, RGW with the index bucket in an SSD pool, and CephFS with metadata in an SSD pool). I want to know whether giving up a disk slot for a WAL/DB device…
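For reference, a minimal sketch of how the two OSD layouts under test could be built with ceph-volume; the device paths (/dev/sdb, /dev/nvme0n1p1) are placeholders:

    # HDD OSD with the WAL/DB colocated on the same device
    ceph-volume lvm create --bluestore --data /dev/sdb
    # HDD OSD with the WAL/DB on a faster NVMe partition
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1

(When only --block.db is given, the WAL is placed on the DB device as well, so a separate --block.wal is not needed for this comparison.)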

[ceph-users] Re: Benchmark WAL/DB on SSD and HDD for RGW RBD CephFS

2020-09-16 Thread Danni Setiawan
On 15 Sep 2020 at 06:27, Danni Setiawan <danni.n.setia...@gmail.com> wrote: Hi all, I'm trying to measure the performance penalty of HDD OSDs when the WAL/DB sits on a faster device (SSD/NVMe) versus on the same HDD, for different workloads (RBD, RGW with the index bucket in an SSD…