I requested a “scrubd” back before mgr was a thing, just sayin’ ;) Those of you
who didn’t run, say, Dumpling or Firefly don’t know what you missed. Part of
the problem has always been that OSDs — and not mons — schedule scrubs, so they
are by nature solipsists and cannot orchestrate among each other.
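As a concrete illustration, the scrub scheduling knobs are plain OSD-side
config options rather than anything the mons coordinate. A rough sketch of
where they live (osd.0 and the chosen hours here are just examples):

    # inspect the scrub-related settings a single OSD is running with
    ceph config show osd.0 | grep osd_scrub
    # narrow the window in which OSDs schedule their own scrubs
    ceph config set osd osd_scrub_begin_hour 22
    ceph config set osd osd_scrub_end_hour 6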
Hi all,
I have set up a cluster for use with cephfs. Trying to follow the recommendations for the MDS service, I picked two machines that provide SSD-based
disk space, 2 TB each, to host the cephfs metadata pool.
My ~20 HDD-based OSDs in the cluster have 43 TB each.
I created a crush ru
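For reference, pinning the metadata pool to the SSD device class is usually
done with a replicated rule along these lines; the rule and pool names below
are assumptions, not necessarily what was used here:

    # replicated rule that only selects OSDs with the ssd device class
    ceph osd crush rule create-replicated ssd-only default host ssd
    # point the cephfs metadata pool at that rule
    ceph osd pool set cephfs_metadata crush_rule ssd-only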
Your first assumption was correct. You can set the 'size' parameter of the
pool to 2 (ceph osd pool set <pool> size 2), but you'll also either want to
drop min_size to 1 or accept the fact that you cannot ever have either
metadata OSD go down. It's fine for a toy cluster, but for any
production u
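A minimal sketch of the commands involved, assuming the metadata pool is
named cephfs_metadata (substitute your actual pool name):

    # keep two copies of the metadata, one per SSD host
    ceph osd pool set cephfs_metadata size 2
    # allow I/O to continue with only one replica up (risky outside a toy setup)
    ceph osd pool set cephfs_metadata min_size 1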
On Sun, 19 Jun 2022 at 02:29, Satish Patel wrote:
> Greetings folks,
>
> We are planning to build Ceph storage, mostly CephFS for an HPC workload,
> and in the future we plan to expand to S3-style access, but that is yet to
> be decided. Because we need mass storage, we bought the following HW.
>
> 15