Hi all,
The only recommendation I can find about DB device selection in the documentation is about capacity (4% of the data disk). Are there any suggestions about technical specs like throughput, IOPS, and the number of DB devices per data disk?
While designing a specific infrastructure with filestore, we were...
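For reference, the 4% capacity guideline works out roughly like this (a back-of-the-envelope sketch; the disk size is hypothetical):

    # 4% of a 12 TB (12000 GB) data disk as a block.db size, in GB
    echo "12000 * 4 / 100" | bc    # -> 480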
Hi,
I want to adjust the balancer throttling and ran this command, which returns an error:
root@ld3955:~# ceph config set mgr mgr/balancer/max_misplaced .01
Error EINVAL: unrecognized config option 'mgr/balancer/max_misplaced'
root@ld3955:~# ceph balancer status
{
    "active": true,
    "plans": ...
On 02.11.19 18:35, Oliver Freyermuth wrote:
Dear Janek,
in my case, the mgr daemon itself remains "running"; it just stops reporting to the mon. It even still serves the dashboard, but with outdated information.
This is not so different. The MGRs in my case are running, but stop
responding.
Hi,
I want to create an optimizer plan for each pool.
My cluster has multiple CRUSH roots and multiple pools, each representing a specific drive type (HDD, SSD, NVMe).
Some pools are balanced, some are not.
Therefore I want to run the optimizer to create a new plan for a specific pool.
However, this fails for...
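For what it's worth, the balancer module accepts an optional pool list when building a plan, so something along these lines may do what you want (plan and pool names are made up):

    # build a plan restricted to one pool, then inspect it before executing
    ceph balancer optimize plan-nvme nvme-pool
    ceph balancer show plan-nvme
    ceph balancer eval plan-nvme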
Hi Oliver,
The ceph-osd RPM packages include a config in
/etc/sudoers.d/ceph-osd-smartctl that looks something like this:
ceph ALL=NOPASSWD: /usr/sbin/smartctl -a --json /dev/*
ceph ALL=NOPASSWD: /usr/sbin/nvme * smart-log-add --json /dev/*
If you are using SELinux you will have to adjust capabilities...
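A quick way to verify the sudoers entry is working (device name hypothetical):

    # run as root; should print SMART data as JSON without a password prompt
    sudo -u ceph sudo /usr/sbin/smartctl -a --json /dev/sda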
Hi Cephers,
I'm happy to announce the availability of the schedule for Ceph + Rook Day
San Diego. Registration for this event will be free until Tuesday 11:59
UTC, so register now:
https://ceph.io/cephdays/ceph-rook-day-san-diego-2019/
We still have some open spots in the schedule, but the lineup...
Hi All,
I'm using ceph-14.2.4 and testing on a FIPS-enabled cluster. Downloading objects works, but ceph raises a segmentation fault while uploading.
Please help me here, and please provide debugging steps so I can reproduce this in a development environment.
Thanks,
Amit G
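A couple of generic first steps that may help narrow this down (log levels are just examples):

    # raise RGW logging before reproducing the failing upload
    ceph config set client.rgw debug_rgw 20
    ceph config set client.rgw debug_ms 1
    # if the daemon dumped core, pull a backtrace
    coredumpctl gdb radosgw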
Hi,
I recently upgraded my 3-node cluster to Proxmox 6 / Debian 10 and recreated my ceph cluster with a new release (14.2.4, BlueStore), basically hoping to gain some I/O speed.
The installation went flawlessly, and reading is faster than before (~80 MB/s); however, the write speed is still really slow...
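To take the VM stack out of the picture, it can help to benchmark raw RADOS throughput first (pool name hypothetical):

    # 30-second write test, keeping the objects for a follow-up read test
    rados bench -p testpool 30 write --no-cleanup
    rados bench -p testpool 30 seq
    rados -p testpool cleanup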
Hello,
hard disks are awfully slow. That's normal and expected, a result of the random I/O you get in a Ceph cluster.
You can speed up raw bandwidth using EC, but not on such small clusters and not under high I/O load.
Since you mentioned Proxmox: when it comes to VM workloads, spinning...
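For completeness, an EC data pool on a cluster large enough to support it looks roughly like this (profile name and k/m values are illustrative):

    # a 4+2 profile needs at least 6 hosts at the default failure domain
    ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 64 64 erasure ec-4-2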