Ah, scratch that: my first paragraph about replicated pools is actually
incorrect. If it's a replicated pool and it shows incomplete, it means the most
recent copy of the PG is missing. So ideally you would recover the PG from the
dead OSDs in any case, if possible.
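In case it helps, a rough sketch of how a PG can be salvaged from a dead OSD
with ceph-objectstore-tool, assuming the OSD's data directory is still
readable. OSD ids, the PG id and file paths below are placeholders, and both
OSDs must be stopped while the tool runs against them:

  # export the PG from the dead OSD's data directory
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-12 --pgid 3.1a --op export --file /tmp/pg3.1a.export

  # import it into a stopped, healthy OSD, then start that OSD again
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-7 --op import --file /tmp/pg3.1a.export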
Matthias Grandl
Head Storage Engineer
the dead OSDs and whether they are at all recoverable.
Matthias Grandl
Head Storage Engineer
matthias.gra...@croit.io
> On 17. Jun 2024, at 16:46, David C. wrote:
>
> Hi Pablo,
>
> Could you tell us a little more about how that happened?
>
> Do you have a min_size >
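For context, the pool settings being asked about can be checked like this
("mypool" is just a placeholder pool name):

  ceph osd pool get mypool size       # replica count
  ceph osd pool get mypool min_size   # minimum replicas required to serve I/O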
--
Matthias Grandl
Head Storage Engineer
matthias.gra...@croit.io
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
center? In that case, once you correct the CRUSH
layout, you would be running misplaced without a way to rebalance the pools
that are using a datacenter CRUSH rule.
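If it helps, a quick way to see which rule a pool is using and how much data
is currently misplaced (pool name is a placeholder):

  ceph osd pool get mypool crush_rule   # which CRUSH rule the pool uses
  ceph osd crush rule dump              # inspect the rule definitions
  ceph status | grep misplaced          # current misplaced object percentage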
Cheers!
--
Matthias Grandl
Head Storage Engineer
matthias.gra...@croit.io <mailto:matthias.gra...@croit.io>
Looking for help with your Ceph cluster? Contact us at https://croit.io
We have also encountered this exact backtrace on 17.2.6, likewise in combination
with Veeam Backups.
I suspect a regression, as we had no issues before the update, and all other
clusters still running 17.2.5 with Veeam Backups don't appear to be affected.
--
Matthias Grandl
matthias.gra...@croit.io
pseudo clean state.
https://github.com/HeinleinSupport/cern-ceph-scripts/blob/master/tools/upmap/upmap-remapped.py
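For anyone who hasn't used it: the script prints "ceph osd pg-upmap-items"
commands that map remapped PGs back to their current OSDs, so the cluster
reports (pseudo) clean and data movement can then be handed back to the
balancer. A typical invocation looks roughly like this, assuming
require-min-compat-client is already luminous or newer and after reviewing the
generated commands:

  ceph osd set-require-min-compat-client luminous   # upmap needs luminous or newer clients
  ./upmap-remapped.py | sh                           # check the output before piping it to sh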
Matthias Grandl
Head of UX
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io