[ceph-users] Ceph RBD - High IOWait during the Writes

2020-11-10 Thread athreyavc
Hi All, We have recently deployed a new Ceph cluster, Octopus 15.2.4, which consists of 12 OSD nodes (16 cores + 200GB RAM, 30x14TB disks, CentOS 8) and 3 MON nodes (8 cores + 15GB RAM, CentOS 8). We use an erasure-coded pool and RBD block devices. 3 Ceph clients use the RBD devices; each has 25 RBDs and ...
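
The post does not include the commands used to build this layout, but a minimal sketch of an erasure-coded RBD setup on Octopus could look like the following; the pool names, PG counts and the k=4/m=2 profile are assumptions, not taken from the thread.

    # EC profile and data pool (parameters are illustrative only)
    ceph osd erasure-code-profile set ec-profile k=4 m=2 crush-failure-domain=host
    ceph osd pool create rbd-data 128 128 erasure ec-profile
    ceph osd pool set rbd-data allow_ec_overwrites true   # required for RBD on EC pools
    # replicated pool for image headers/metadata (EC pools cannot hold omap)
    ceph osd pool create rbd-meta 64 64 replicated
    ceph osd pool application enable rbd-data rbd
    ceph osd pool application enable rbd-meta rbd
    # image whose data objects land in the EC pool
    rbd create rbd-meta/vol01 --size 1T --data-pool rbd-data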

[ceph-users] Re: Ceph RBD - High IOWait during the Writes

2020-11-10 Thread athreyavc
increase the throughput? Thanks and regards, Athreya. On Tue, Nov 10, 2020 at 7:10 PM Jason Dillaman wrote: > On Tue, Nov 10, 2020 at 1:52 PM athreyavc wrote: > > Hi All, > > We have recently deployed a new Ceph cluster Octopus 15.2.4 which consists ...
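
The preview cuts the throughput question short, but for this kind of tuning discussion a client-side baseline measurement is usually the first step. A minimal sketch, assuming the pool and image names from the example above; adjust names, sizes and thread counts to the actual setup.

    # synthetic 4 KiB write load directly against an RBD image
    rbd bench --io-type write --io-size 4K --io-threads 16 --io-total 1G rbd-meta/vol01
    # raw RADOS write benchmark against the EC data pool (4 KiB objects, 16 concurrent ops)
    rados bench -p rbd-data 30 write -b 4096 -t 16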

[ceph-users] Ceph RBD - High IOWait during the Writes

2020-11-10 Thread athreyavc
Hi, We have recently deployed a Ceph cluster with 12 OSD nodes (16 cores + 200GB RAM + 30 disks of 14TB each) running CentOS 8, and 3 monitor nodes (8 cores + 16GB RAM) running CentOS 8. We are using Ceph Octopus and RBD block devices. We have three Ceph client nodes (16 cores + 30GB RAM, ...
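
The archived preview stops before the iowait numbers, but the usual first checks for this symptom are on the OSD side, to see whether the latency originates at the disks or further up the stack. A minimal sketch; the device names and OSD id are placeholders.

    # per-OSD commit/apply latency as reported by the cluster
    ceph osd perf
    # per-disk service time and utilisation on an OSD node (watch w_await and %util)
    iostat -x 5
    # slow-op history for a single OSD daemon, run on the node hosting it
    ceph daemon osd.0 dump_historic_ops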

[ceph-users] Re: Ceph RBD - High IOWait during the Writes

2020-11-12 Thread athreyavc
From different search results I read, disabling cephx can help. Also, https://static.linaro.org/connect/san19/presentations/san19-120.pdf recommended some settings changes for the bluestore cache: [osd] bluestore cache autotune = 0, bluestore_cache_kv_ratio = 0.2, bluestore_cache_meta_ratio = 0.8, ...
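
Collected into a ceph.conf-style sketch, the settings named in the post could look like the block below. The cephx lines are one reading of "disabling cephx" and are an assumption, not quoted from the thread; they remove authentication cluster-wide, so they are not something to apply casually. The truncated fourth bluestore option cannot be recovered from the archive.

    [global]
    # assumption: "disabling cephx" usually means all three of these
    auth_cluster_required = none
    auth_service_required = none
    auth_client_required = none

    [osd]
    # values as quoted from the linked Linaro SAN19-120 slides
    bluestore_cache_autotune = 0
    bluestore_cache_kv_ratio = 0.2
    bluestore_cache_meta_ratio = 0.8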

[ceph-users] Re: Ceph RBD - High IOWait during the Writes

2020-11-12 Thread athreyavc
> ^ south bridge + RAID controller to disk ops and latency. > -Edward Kalk, Datacenter Virtualization, Performance Engineering, Socket Telecom, Columbia, MO, USA, ek...@socket.net > On Nov 12, 2020, at 4:45 AM, athreyavc wrote: > > Jumbo frame ...
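
The quoted exchange is about jumbo frames and controller latency, and the archive cuts the rest off. For the jumbo-frame part, a quick end-to-end check from any node could look like this; the interface name and peer address are placeholders.

    # confirm the MTU actually configured on the cluster-network interface
    ip link show dev eth0
    # 8972 = 9000-byte MTU minus 20 (IP header) minus 8 (ICMP header); -M do forbids fragmentation
    ping -M do -s 8972 -c 3 192.168.10.11

If the large ping fails with "message too long" while a plain ping works, some hop (switch port, bond, or peer NIC) is still at MTU 1500 and large writes will suffer from fragmentation or drops.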

[ceph-users] Re: Ceph RBD - High IOWait during the Writes

2020-11-17 Thread athreyavc
or concern. Thanks and regards, Athreya. On Thu, Nov 12, 2020 at 1:30 PM athreyavc wrote: > Hi, > Thanks for the email, but we are not using RAID at all; we are using HBAs (LSI HBA 9400-8e). Each HDD is configured as an OSD. > On Thu, Nov 12, 2020 at 12:19 PM Edward Kalk ...
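
With each HDD behind the HBA mapped 1:1 to an OSD, a follow-up that often comes up in threads like this is whether the drives' volatile write cache is enabled, since that changes write latency behaviour considerably. A minimal sketch; /dev/sdc is a placeholder device.

    # query (or toggle) the drive's volatile write cache
    hdparm -W /dev/sdc            # prints write-caching = 0 or 1
    # the SCSI-level view of the same flag, often more reliable behind SAS HBAs
    sdparm --get=WCE /dev/sdc     # sdparm may need to be installed separately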

[ceph-users] ceph-volume - AttributeError: module 'ceph_volume.api.lvm'

2021-08-02 Thread athreyavc
Hi, I am trying to re-add an OSD after replacing the disk. I am running: ceph-volume lvm create --bluestore --osd-id 41 --data ceph-dm-41/block-dm-41 and I get: --> AttributeError: module 'ceph_volume.api.lvm' has no attribute 'is_lv'. My Ceph version is "osd": { "ceph version 15.2.9 ...
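
The archive truncates the version output, so the cause cannot be confirmed here, but this class of AttributeError is worth checking against a mismatch between the ceph-volume package installed on the OSD node and the versions the rest of the cluster is running. A minimal sketch of the checks and of a conventional way to reuse the same OSD id on a fresh disk; OSD id 41 is from the post, the device path is a placeholder.

    # compare what the cluster reports with what the node has installed
    ceph versions
    rpm -qa | grep -E '^ceph'     # CentOS 8, per the earlier posts in this thread

    # conventional flow for reusing OSD id 41 on a replacement disk
    ceph osd destroy 41 --yes-i-really-mean-it
    ceph-volume lvm zap /dev/sdX --destroy
    ceph-volume lvm create --bluestore --osd-id 41 --data /dev/sdX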