[ceph-users] RocksDB device selection (performance requirements)

2019-11-04 Thread Huseyin Cotuk
Hi all, The only recommendation I can find in the documentation about DB device selection is about capacity (4% of the data disk). Are there any suggestions about technical specs such as throughput, IOPS, and the number of DB devices per data disk? While designing a specific infrastructure with filestore, we were
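For what it's worth, a rough sketch of how the 4% capacity rule plays out at provisioning time (device names and sizes below are purely illustrative, not from this thread; verify against your own hardware):

    # 4% of a 12 TB data disk => roughly 480 GB reserved for block.db
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1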

[ceph-users] Balancer configuration fails with Error EINVAL: unrecognized config option 'mgr/balancer/max_misplaced'

2019-11-04 Thread Thomas Schneider
Hi, I want to adjust balancer throttling and executed this command, which returns an error:
root@ld3955:~# ceph config set mgr mgr/balancer/max_misplaced .01
Error EINVAL: unrecognized config option 'mgr/balancer/max_misplaced'
root@ld3955:~# ceph balancer status
{
    "active": true,
    "plans":
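In case it helps, on Nautilus this throttling knob appears to have moved to the global target_max_misplaced_ratio option; something along these lines (please verify the option name against your release first):

    # list recognized option names to confirm what your release expects
    ceph config ls | grep misplaced
    # Nautilus-style replacement for mgr/balancer/max_misplaced
    ceph config set mgr target_max_misplaced_ratio .01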

[ceph-users] Re: mgr daemons becoming unresponsive

2019-11-04 Thread Janek Bevendorff
On 02.11.19 18:35, Oliver Freyermuth wrote:
> Dear Janek, in my case, the mgr daemon itself remains "running"; it just stops reporting to the mon. It even still serves the dashboard, but with outdated information.
This is not so different. The MGRs in my case are running, but stop responding.
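As a stop-gap while debugging (not a fix for the underlying issue), failing over to a standby mgr usually brings reporting back; <active-mgr> below is a placeholder for the daemon name shown in your own cluster status:

    ceph -s                      # shows which mgr is currently active
    ceph mgr fail <active-mgr>   # hand over to a standby mgr daemon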

[ceph-users] Run optimizer to create a new plan on specific pool fails

2019-11-04 Thread Thomas Schneider
Hi, I want to create an optimizer plan for each pool. My cluster has multiple CRUSH roots, and multiple pools, each representing a specific drive type (HDD, SSD, NVMe). Some pools are balanced, some are not. Therefore I want to run the optimizer to create a new plan for a specific pool. However, this fails fo
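For reference, this is the invocation I would expect to work on Nautilus, where the optimize command accepts an optional pool list (plan and pool names below are placeholders):

    # create a plan restricted to a single pool, then inspect and evaluate it
    ceph balancer optimize myplan ssd-pool
    ceph balancer show myplan
    ceph balancer eval myplan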

[ceph-users] Re: Device Health Metrics on EL 7

2019-11-04 Thread Benjeman Meekhof
Hi Oliver, The ceph-osd RPM packages include a config in /etc/sudoers.d/ceph-osd-smartctl that looks something like this:
ceph ALL=NOPASSWD: /usr/sbin/smartctl -a --json /dev/*
ceph ALL=NOPASSWD: /usr/sbin/nvme * smart-log-add --json /dev/*
If you are using SELinux, you will have to adjust capabil

[ceph-users] Ceph + Rook Day San Diego - November 18

2019-11-04 Thread Mike Perez
Hi Cephers, I'm happy to announce the availability of the schedule for Ceph + Rook Day San Diego. Registration for this event will be free until Tuesday 11:59 UTC, so register now: https://ceph.io/cephdays/ceph-rook-day-san-diego-2019/ We still have some open spots in the schedule, but the lineu

[ceph-users] [ceph-user] Upload objects failed on FIPS enable ceph cluster

2019-11-04 Thread Amit Ghadge
Hi All, I'm using ceph-14.2.4 and testing on a FIPS-enabled cluster. Downloading objects works, but Ceph raises a segmentation fault while uploading. Please help me here, and please provide the debugging steps so I can reproduce this in a development environment. Thanks, Amit G
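A common way to capture more detail from the gateway is generic RGW debug logging (not FIPS-specific; the instance name "gateway1" below is a placeholder for your own rgw section):

    # in ceph.conf on the gateway host:
    [client.rgw.gateway1]
        debug rgw = 20
        debug ms = 1
    # then restart the gateway, reproduce the failing upload, and check its log:
    systemctl restart ceph-radosgw@rgw.gateway1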

[ceph-users] Slow write speed on 3-node cluster with 6* SATA Harddisks (~ 3.5 MB/s)

2019-11-04 Thread Hermann Himmelbauer
Hi, I recently upgraded my 3-node cluster to Proxmox 6 / Debian 10 and recreated my Ceph cluster with a new release (14.2.4, BlueStore), basically hoping to gain some I/O speed. The installation went flawlessly, and reading is faster than before (~80 MB/s); however, the write speed is still really sl
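To isolate whether the slowdown comes from RADOS itself or from the VM layer on top, a raw benchmark against a test pool is useful (pool name below is a placeholder):

    rados bench -p testpool 60 write --no-cleanup   # raw write bandwidth
    rados bench -p testpool 60 seq                  # sequential read for comparison
    rados -p testpool cleanup                       # remove the bench objects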

[ceph-users] Re: Slow write speed on 3-node cluster with 6* SATA Harddisks (~ 3.5 MB/s)

2019-11-04 Thread Martin Verges
Hello, hard disks are awfully slow. That's normal and expected, and a result of the random I/O you get in a Ceph cluster. You can speed up raw bandwidth performance using EC, but not on such small clusters and not under high I/O load. As you mentioned Proxmox: when it comes to VM workloads, spinni