[ceph-users] Re: RFC: (deep-)scrub manager module

2022-06-19 Thread Anthony D'Atri
I requested a “scrubd” back before mgr was a thing, just sayin’ ;) Those of you who didn’t run, say, Dumpling or Firefly don’t know what you missed. Part of the problem has always been that OSDs — and not mons — schedule scrubs, so they are by nature solipsists and cannot orchestrate among each
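(A rough sketch of what per-OSD scrub control looks like from the CLI today, assuming a release with the centralized config subsystem; the PG id below is only an example:)

    ceph config set osd osd_max_scrubs 1          # concurrent scrubs per OSD
    ceph config set osd osd_scrub_begin_hour 22   # restrict scrubbing to a nightly window
    ceph config set osd osd_scrub_end_hour 6
    ceph pg deep-scrub 2.1f                       # ask one PG (example id) to deep-scrub now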

[ceph-users] active+undersized+degraded due to OSD size differences?

2022-06-19 Thread Thomas Roth
Hi all, I have set up a cluster for use with cephfs. Trying to follow the recommendations for the MDS service, I picked two machines which provide SSD-based disk space, 2 TB each, to put the cephfs-metadata pool there. My ~20 HDD-based OSDs in the cluster have 43 TB each. I created a crush ru
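(For readers following along: restricting a pool to the SSD OSDs is usually done with a device-class CRUSH rule; a minimal sketch, with ssd-meta and cephfs_metadata as placeholder names:)

    ceph osd crush rule create-replicated ssd-meta default host ssd   # replicate across hosts, ssd class only
    ceph osd pool set cephfs_metadata crush_rule ssd-meta             # pin the metadata pool to that rule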

[ceph-users] Re: active+undersized+degraded due to OSD size differences?

2022-06-19 Thread Tyler Stachecki
Your first assumption was correct. You can set the 'size' parameter of the pool to 2 (ceph osd pool set <pool> size 2), but you'll also either want to drop min_size to 1 or accept that you can never have either metadata OSD go down. It's fine for a toy cluster, but for any production u
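(Spelled out, assuming the metadata pool is named cephfs_metadata; min_size 1 lets the pool accept writes with a single surviving copy, hence the caveat about production use:)

    ceph osd pool set cephfs_metadata size 2       # only two SSD OSDs available, so two replicas
    ceph osd pool set cephfs_metadata min_size 1   # stay writable if one of them is down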

[ceph-users] Re: Suggestion to build ceph storage

2022-06-19 Thread Christian Wuerdig
On Sun, 19 Jun 2022 at 02:29, Satish Patel wrote: > Greetings folks, > > We are planning to build Ceph storage, mostly for CephFS for an HPC workload, and in the future we plan to expand to S3-style storage, but that is yet to be decided. Because we need mass storage, we bought the following HW. > > 15