I don't have a ton of experience troubleshooting Ceph issues, and I'm running into
an issue where the OSDs are filling up to 100% and then getting marked down in the
cluster. My overall ceph status is in HEALTH_WARN right now and I'm not sure
exactly where I should be looking:
user@ceph01:~$ sudo ceph
Forgot to mention my version:
user@ceph01:~$ sudo ceph --version
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
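For anyone hitting the same "where should I be looking" question, the usual starting points are the health detail and the utilization reports. A minimal sketch follows; none of the commands' output or numbers below come from this cluster:

sudo ceph health detail   # spells out which OSDs are nearfull/full and why HEALTH_WARN is set
sudo ceph osd df tree     # per-OSD utilization (%USE, VAR, PGS) grouped by host
sudo ceph df              # per-pool usage versus remaining raw capacity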
--
Thanks,
Joshua Schaeffer
On 8/20/22 13:49, Anthony D'Atri wrote:
Tiny OSDs? PoC cluster?
It's a PoC of sorts. There is actual data, but it is for a very small project
on older hardware.
`ceph osd df`
user@ceph01:~$ sudo ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
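Ceph tries to head off the "filling up to 100% and then getting downed" situation with cluster-wide ratios: OSDs are flagged at nearfull_ratio, backfill is blocked at backfillfull_ratio, and writes stop at full_ratio. A sketch of how to inspect (and, carefully, temporarily raise) them; the values shown are the stock defaults and example choices, not settings read from this cluster:

sudo ceph osd dump | grep -i ratio        # defaults: full_ratio 0.95, backfillfull_ratio 0.90, nearfull_ratio 0.85
sudo ceph osd set-nearfull-ratio 0.85     # example value only
sudo ceph osd set-full-ratio 0.97         # raising this buys a little time but does not free any space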
If you can get them back online, you could then reweight each one manually
until you see a good balance, then turn on auto-balancing. An entire host
being down could be a problem though. Also, you didn’t mention the highest
size (replication count) you have active.
-Brent
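A sketch of the sequence Brent describes, in case it's useful; the OSD id, the 0.85 weight, and the upmap mode below are placeholder choices, not values taken from this thread:

sudo ceph osd reweight 4 0.85     # nudge an over-full OSD (id 4 here) down so PGs move elsewhere
sudo ceph osd pool ls detail      # shows "replicated size N" per pool, i.e. the replication Brent asks about
sudo ceph balancer mode upmap     # once utilization looks reasonable, let the balancer take over
sudo ceph balancer on
sudo ceph balancer status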
-----Original Message-----
On Fri, 2022-08-19 at 13:04 +0000, Frank Schilder wrote:
> Hi Chris,
>
> looks like your e-mail stampede is over :) I will cherry-pick some
> questions to answer, other things either follow or you will figure it
> out with the docs and trial-and-error. The cluster set-up is actually
> not that bad