Hi all,
I've almost got my Ceph cluster back to normal after a triple drive failure,
but it seems my lost+found folder is corrupted.
I've followed the process in
https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/#disaster-recovery-experts
However doing an online scrub, as there is still o
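For reference, the online (forward) scrub referred to here is normally started via `ceph tell`; a minimal sketch, assuming a filesystem named cephfs (the name is a placeholder, not from the original message):

  # start a recursive scrub from the root of the tree, attempting repairs
  ceph tell mds.cephfs:0 scrub start / recursive,repair
  # check on the progress of the running scrub
  ceph tell mds.cephfs:0 scrub status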
> On May 20, 2024, at 2:24 PM, Matthew Vernon wrote:
>
> Hi,
>
> Thanks for your help!
>
> On 20/05/2024 18:13, Anthony D'Atri wrote:
>
>> You do that with the CRUSH rule, not with osd_crush_chooseleaf_type. Set
>> that back to the default value of `1`. This option is marked `dev` for a reason ;)
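A minimal sketch of doing that with a CRUSH rule, using placeholder rule and pool names rather than anything from the thread:

  # create a replicated rule whose failure domain is the rack, under the default root
  ceph osd crush rule create-replicated replicated_rack default rack
  # point an existing pool at the new rule
  ceph osd pool set mypool crush_rule replicated_rack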
Hi,
Thanks for your help!
On 20/05/2024 18:13, Anthony D'Atri wrote:
You do that with the CRUSH rule, not with osd_crush_chooseleaf_type. Set that
back to the default value of `1`. This option is marked `dev` for a reason ;)
OK [though not obviously at
https://docs.ceph.com/en/reef/rados
>
>>> This has left me with a single sad pg:
>>> [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
>>> pg 1.0 is stuck inactive for 33m, current state unknown, last acting []
>>>
>> .mgr pool perhaps.
>
> I think so
>
>>> ceph osd tree shows that CRUSH picked up my racks OK, e
Hi,
On 20/05/2024 17:29, Anthony D'Atri wrote:
On May 20, 2024, at 12:21 PM, Matthew Vernon wrote:
This has left me with a single sad pg:
[WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
pg 1.0 is stuck inactive for 33m, current state unknown, last acting []
.mgr pool perhaps.
> On May 20, 2024, at 12:21 PM, Matthew Vernon wrote:
>
> Hi,
>
> I'm probably Doing It Wrong here, but: my hosts are in racks, and I wanted
> ceph to use that information from the get-go, so I tried to achieve this
> during bootstrap.
>
> This has left me with a single sad pg:
> [WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
Hi,
I'm probably Doing It Wrong here, but: my hosts are in racks, and I
wanted ceph to use that information from the get-go, so I tried to
achieve this during bootstrap.
This has left me with a single sad pg:
[WRN] PG_AVAILABILITY: Reduced data availability: 1 pg inactive
pg 1.0 is stuck inactive for 33m, current state unknown, last acting []
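A minimal sketch of the usual checks for a PG with no acting set (the commands below are generic, not taken from the post):

  # list PGs that are stuck inactive
  ceph pg dump_stuck inactive
  # see which CRUSH rule the .mgr pool is using
  ceph osd pool get .mgr crush_rule
  # inspect the rule definitions CRUSH is actually working with
  ceph osd crush rule dump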
> Hi all,
> Due to many reasons (political, heating problems, lack of space, and so on)
> we have to plan for our Ceph cluster to be hosted externally.
> The planned version to set up is Reef.
> Reading up on the documentation, we found that it is possible to run in
> secure mode.
>
> Our ceph.conf file
Hi all,
Due to many reasons (political, heating problems, lack of space, and so on)
we have to plan for our Ceph cluster to be hosted externally.
The planned version to set up is Reef.
Reading up on the documentation, we found that it is possible to run in
secure mode.
Our ceph.conf file will state bo
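Assuming "secure mode" here means msgr2 on-wire encryption, a minimal ceph.conf sketch that forces it looks something like this; the values are an assumption, not taken from the original file:

  [global]
  # require encrypted (secure) msgr2 connections instead of allowing crc-only
  ms_cluster_mode = secure
  ms_service_mode = secure
  ms_client_mode = secure
  ms_mon_cluster_mode = secure
  ms_mon_service_mode = secure
  ms_mon_client_mode = secure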
Hi everyone,
I'm managing a Ceph Quincy 17.2.5 cluster, waiting to upgrade it to
version 17.2.7, composed and configured as follows:
- 16 identical nodes: 256 GB RAM, 32 CPU cores (64 threads), 12 x rotational
HDDs (block) + 4 x SATA SSDs (RocksDB/WAL)
- Erasure Code 11+4 (Jerasure)
- 10 x S3 RGW on
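For reference, an 11+4 jerasure profile of this shape is typically defined along these lines; the profile name, pool name and failure domain are assumptions, not details from the original message:

  # define a k=11, m=4 erasure-code profile using the jerasure plugin
  ceph osd erasure-code-profile set ec-11-4 k=11 m=4 plugin=jerasure crush-failure-domain=host
  # create a pool that uses the profile
  ceph osd pool create mypool erasure ec-11-4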