I tried with disk based swap on a SATA SSD.

I think that might be the last option. I have already exported all the down
PGs from the OSD that they are waiting on.

Kind Regards

Lee
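
[For reference, a zram-backed swap like the one Alexander describes below
can be set up roughly as follows. This is a sketch, not verified on the
poster's hardware; the 64G size mirrors the figure mentioned in the thread,
and all of it requires root:]

```shell
# Load the zram module with a single device (assumes kernel zram support).
modprobe zram num_devices=1

# Optionally pick a compression algorithm before sizing the device.
echo zstd > /sys/block/zram0/comp_algorithm

# Size the device; 64G matches the amount used in the case described below.
echo 64G > /sys/block/zram0/disksize

# Format and enable it as swap, with higher priority than any disk swap
# so the kernel prefers the compressed RAM device.
mkswap /dev/zram0
swapon -p 100 /dev/zram0

# Verify that the swap device is active.
swapon --show
```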

On Thu, 6 Jan 2022 at 20:00, Alexander E. Patrakov <patra...@gmail.com>
wrote:

> Fri, 7 Jan 2022 at 00:50, Alexander E. Patrakov <patra...@gmail.com>:
>
>> Thu, 6 Jan 2022 at 12:21, Lee <lqui...@gmail.com>:
>>
>>> I've tried adding swap and that fails also.
>>>
>>
>> How exactly did it fail? Did you put it on some disk, or in zram?
>>
>> In the past I had to help a customer who hit memory overuse when
>> upgrading Ceph (due to shallow_fsck), and we were able to fix it by adding
>> 64 GB of zram-based swap on each server (with 128 GB of physical RAM in
>> this type of server).
>>
>>
> On the other hand, if you have some spare disks for temporary storage and
> for new OSDs, and this failed OSD is not part of an erasure-coded pool,
> another approach might be to export all PGs as files onto the temporary
> storage using ceph-objectstore-tool (in the hope that it doesn't suffer
> from the same memory explosion), and then import them all into a new
> temporary OSD.
>
> --
> Alexander E. Patrakov
>
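
[The export/import workflow Alexander outlines would look roughly like the
following. This is a sketch only: the OSD ids (osd.12, osd.99), the PG id
(2.1a), and the /mnt/temp path are all illustrative, and both OSDs must be
stopped while ceph-objectstore-tool runs against their data paths:]

```shell
# Stop the failed OSD before touching its store (osd.12 is an example id).
systemctl stop ceph-osd@12

# Export one down PG to a file on the temporary storage. PG ids can be
# found with e.g. `ceph pg dump_stuck`; repeat for each down PG.
ceph-objectstore-tool \
  --data-path /var/lib/ceph/osd/ceph-12 \
  --pgid 2.1a \
  --op export \
  --file /mnt/temp/2.1a.export

# Import the exported PG into the new, stopped temporary OSD (osd.99 here),
# then start it and let the cluster recover from there.
ceph-objectstore-tool \
  --data-path /var/lib/ceph/osd/ceph-99 \
  --op import \
  --file /mnt/temp/2.1a.export
systemctl start ceph-osd@99
```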
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io