Hi Dave,
It's been a few days and I haven't seen any follow-up on the list, so I'm
wondering if the issue is that there was a typo in your OSD list?
It appears that you have 16 included in the destination again instead of 26?
"24,25,16,27,28"
I'm not familiar with the pgremapper script, so I may be missing something.
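If it helps, here is a quick shell check for IDs that ended up in both the
source and the destination sets (I don't know your exact source OSDs, so the
first list below is only a placeholder):

    # placeholder source set; destination is the list from your mail
    src="14,15,16,17,18"
    dst="24,25,16,27,28"
    # print any OSD ID present in both lists (comm needs sorted input)
    comm -12 <(tr ',' '\n' <<<"$src" | sort) <(tr ',' '\n' <<<"$dst" | sort)

Anything printed there (16 in this made-up example) would be both a source
and a target of the remap, which is probably not what you want.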
On Sat, Nov 4, 2023, 6:44 AM Matthew Booth wrote:
> I have a 3 node ceph cluster in my home lab. One of the pools spans 3
> hdds, one on each node, and has size 2, min size 1. One of my nodes is
> currently down, and I have 160 pgs in 'unknown' state. The other 2
> hosts are up and the cluster ha
Hi Alex,
Thank you very much, yes it was a time sync issue. After fixing time sync the
OSD service started.
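In case anyone else runs into this, the quick checks that confirm it (assuming
chrony is the time source, adjust for ntpd/timedatectl):

    # on each node: is the local clock actually synchronised?
    chronyc tracking
    # from a node with an admin keyring: does Ceph still see clock skew?
    ceph time-sync-status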
regards,
Amudhan
On Sat, Nov 4, 2023 at 9:07 PM Alex Gorbachev wrote:
> Hi Amudhan,
>
> Have you checked the time sync? This could be an issue:
>
> https://tracker.ceph.com/issues/17170
Hi,
this is another example of why min_size 1 / size 2 is a bad choice (if you
value your data). There have been plenty of discussions on this list about
that, so I'm not going into detail here. I'm not familiar with Rook, but
activating existing OSDs usually works fine [1].
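Outside of Rook, re-activating existing OSDs usually amounts to something
like the following (just a sketch, the id/fsid depend on your deployment):

    # scan LVM-based OSDs on this host and start them all
    ceph-volume lvm activate --all
    # or activate a single OSD by its id and fsid
    ceph-volume lvm activate <osd-id> <osd-fsid>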
Regards,
Eugen
[