Hello,
I'm new to Ceph and I recently inherited a 4-node cluster with 32 OSDs and
about 116TB of raw space. The cluster shows low available space, which I'm
trying to increase by enabling the balancer and lowering priority for the
most-used OSDs. My questions are: is what I did correct, given the current
state of the cluster?
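To clarify what I mean by "enabling the balancer and lowering priority",
this is roughly what I ran; the OSD id and weight below are illustrative,
not the exact values I used:

  # enable the balancer (upmap mode needs all clients on Luminous or newer)
  sudo ceph --cluster xxx balancer mode upmap
  sudo ceph --cluster xxx balancer on
  sudo ceph --cluster xxx balancer status

  # lower the override reweight of the fullest OSD so PGs move off it
  sudo ceph --cluster xxx osd reweight 17 0.90

  # check per-OSD utilization afterwards
  sudo ceph --cluster xxx osd df tree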
Hello Stefan,
Thank you for your answer.
On Fri, Sep 2, 2022 at 5:27 PM Stefan Kooman wrote:
> On 9/2/22 15:55, Oebele Drijfhout wrote:
> > Hello,
> >
> > I'm new to Ceph and I recently inherited a 4-node cluster with 32 OSDs
> > and about 116TB raw space [...]
Hello Mehmet,
On Sat, Sep 3, 2022 at 1:50 PM wrote:
> Is ceph still backfilling? What is the actual output of ceph -s?
>
Yes:
[trui...@ceph02.eun ~]$ sudo ceph --cluster xxx -s
  cluster:
    id:     91ba1ea6-bfec-4ddb-a8b5-9faf842f22c3
    health: HEALTH_WARN
            1 backfillfull osd(s)
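Besides ceph -s, I am watching the backfill itself with the standard
commands below (same anonymized cluster name as above):

  # PGs waiting to backfill vs. actively backfilling
  sudo ceph --cluster xxx pg ls backfill_wait
  sudo ceph --cluster xxx pg ls backfilling

  # which OSD is backfillfull, and per-OSD utilization
  sudo ceph --cluster xxx health detail
  sudo ceph --cluster xxx osd df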
Some further questions:
- What should we do with the stuck PGs and the over-utilized OSD?
- What should we expect w.r.t. load on the cluster?
- Do the 1024 PGs in xxx-pool have any influence given they are empty?
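On the last two questions, this is what I have been looking at myself, so
please correct me if I'm on the wrong track. My understanding is that
backfill pressure on the cluster can be throttled via the OSD recovery
settings; the values below are examples, not a recommendation:

  # throttle backfill/recovery if client I/O suffers
  # (Mimic or later; older clusters would use injectargs instead)
  sudo ceph --cluster xxx config set osd osd_max_backfills 1
  sudo ceph --cluster xxx config set osd osd_recovery_max_active 1

And to verify that the 1024 PGs in xxx-pool really hold no data:

  sudo ceph --cluster xxx osd pool ls detail
  sudo ceph --cluster xxx df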