[ceph-users] Re: Recreate Destroyed OSD

2024-10-31 Thread Tim Holloway
I migrated from Gluster when I found out it's going unsupported shortly. I'm really not big enough for Ceph proper, but there were only so many supported distributed filesystems with triple redundancy. Where I got into trouble was that I started off with Octopus, and Octopus had some teething p

[ceph-users] Re: MDS and stretched clusters

2024-10-31 Thread Sake Ceph
We're looking for multiple MDS daemons to be active in zone A and standby(-replay) in zone B. This scenario would also benefit people who have more powerful hardware in zone A than in zone B. Kind regards, Sake > On 31-10-2024 15:50 CET, Adam King wrote: > > > Just noticed this threa
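
For reference, the knobs that exist today look roughly like this (a sketch only; "cephfs" and the "mds" host label are placeholders, and none of these commands by themselves pin the active ranks to zone A, which is the behaviour being asked for in this thread):

  # run two active ranks and keep warm standbys via standby-replay
  ceph fs set cephfs max_mds 2
  ceph fs set cephfs allow_standby_replay true
  # place MDS daemons on labelled hosts in both zones
  ceph orch apply mds cephfs --placement="4 label:mds"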

[ceph-users] Re: Recreate Destroyed OSD

2024-10-31 Thread Eugen Block
I completely understand your point of view. Our own main cluster is also a bit "wild" in its OSD layout, which is why its OSDs are "unmanaged" as well. When we adopted it via cephadm, I started to create suitable OSD specs for all those hosts and OSDs, and I gave up. :-D But since we sometimes

[ceph-users] Re: Recreate Destroyed OSD

2024-10-31 Thread Tim Holloway
I have been slowly migrating towards spec files, as I prefer declarative management as a rule. However, I think that we may have a dichotomy in the user base. On the one hand, there are users with dozens/hundreds of servers/drives of basically identical character. On the other, I'm one who's running few

[ceph-users] Re: Recreate Destroyed OSD

2024-10-31 Thread Eugen Block
Hi, the preferred method to deploy OSDs in cephadm-managed clusters is spec files; see this part of the docs [0] for more information. I would just not use the '--all-available-devices' flag, except in test clusters, or if you're really sure that this is what you want. If you use 'ceph o
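
For illustration, a minimal OSD spec of the kind referred to here could look like the following (the service_id, host pattern and device filters are made-up examples, not taken from the thread; the dry-run previews what cephadm would do before anything is applied):

  # contents of osd-spec.yml (hypothetical example)
  service_type: osd
  service_id: example_drive_group
  placement:
    host_pattern: 'osd-host-*'
  spec:
    data_devices:
      rotational: 1
    db_devices:
      rotational: 0

  # preview, then apply
  ceph orch apply -i osd-spec.yml --dry-run
  ceph orch apply -i osd-spec.yml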

[ceph-users] Recreate Destroyed OSD

2024-10-31 Thread Dave Hall
Hello. Sorry if it appears that I am reposting the same issue under a different topic. However, I feel that the problem has moved and I now have different questions. At this point I have, I believe, removed all traces of OSD.12 from my cluster - based on steps in the Reef docs at https://docs.ce
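
For anyone following along, the removal/recreation path on a cephadm cluster is roughly the following (a sketch, not a quote of the Reef docs; "osd-host-1" and "/dev/sdX" are placeholders, and osd.12 just mirrors the ID mentioned above):

  # schedule removal and wipe the device so it can be reused
  ceph orch osd rm 12 --zap
  ceph orch osd rm status
  # clean up anything that lingers in the CRUSH map or auth database
  ceph osd purge 12 --yes-i-really-mean-it
  ceph auth rm osd.12
  # recreate the OSD on a clean device
  ceph orch daemon add osd osd-host-1:/dev/sdX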

[ceph-users] Re: Recreate Destroyed OSD

2024-10-31 Thread Tim Holloway
As I understand it, the manual OSD setup is only for legacy (non-container) OSDs. Directory locations are wrong for managed (containerized) OSDs, for one. Actually, the whole manual setup docs ought to be moved out of the mainline documentation. In their present arrangement, they make legacy
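
To make the directory point concrete: a legacy (package-based) OSD keeps its data under /var/lib/ceph/osd/, while a cephadm-managed OSD lives under a per-cluster fsid directory (osd.12 below is just an example ID):

  # legacy, non-containerized OSD
  ls -d /var/lib/ceph/osd/ceph-12
  # cephadm-managed (containerized) OSD, nested under the cluster fsid
  ls -d /var/lib/ceph/*/osd.12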

[ceph-users] Re: Squid 19.2.0 balancer causes restful requests to be lost

2024-10-31 Thread Laura Flores
Hi Chris, As other users have pointed out, we are fixing an issue tracked in https://tracker.ceph.com/issues/68657 that seems related to what you're experiencing. However, can you raise a new tracker describing your problem so we can confirm? Can you please include: 1. Steps to reproduce (includi

[ceph-users] Re: MDS and stretched clusters

2024-10-31 Thread Adam King
Just noticed this thread. A couple of questions: is what we want to have MDS daemons in, say, zone A and zone B, with the ones in zone A prioritized to be active and the ones in zone B remaining standby unless absolutely necessary (i.e. all the ones in zone A are down), or is it that we want to have some subse

[ceph-users] Re: KRBD: downside of setting alloc_size=4M for discard alignment?

2024-10-31 Thread Friedrich Weber
Hi Ilya, Thank you for your illuminating response! I thought I had checked `ceph df` during my experiments before, but apparently not carefully enough. :) On 25/10/2024 18:43, Ilya Dryomov wrote: > "rbd du" can be very imprecise even with --exact flag: one can > construct an image that would use
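
For anyone who wants to repeat the comparison, a rough sketch of the experiment (pool and image names are placeholders; whether alloc_size=4M is a good idea is exactly the question of this thread):

  # map with a 4M allocation granularity (krbd 'alloc_size' map option, in bytes)
  rbd map -o alloc_size=4194304 rbdpool/testimage
  # after discarding from the guest/filesystem, compare the accounting
  rbd du --exact rbdpool/testimage
  ceph df detail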

[ceph-users] Re: 9 out of 11 missing shards of shadow object in ERC 8:3 pool.

2024-10-31 Thread Robert Kihlberg
Thanks Josh and Eugen, I did not manage to trace this object to an S3 object. Instead I read all files in the suspected S3 bucket and actually hit a bad one. Since we had a known good mirror I deleted the broken S3 object (which succeeded fine) and uploaded the good one (also succeeded). Data wise
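
One way to do the brute-force read described here, assuming the awscli works against your RGW endpoint (the endpoint URL and bucket name below are placeholders):

  aws --endpoint-url https://rgw.example.com s3api list-objects-v2 \
      --bucket suspect-bucket --query 'Contents[].Key' --output text \
    | tr '\t' '\n' \
    | while read -r key; do
        aws --endpoint-url https://rgw.example.com s3 cp "s3://suspect-bucket/$key" - \
          > /dev/null || echo "READ FAILED: $key" >> bad_objects.txt
      done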

[ceph-users] Re: "ceph orch" not working anymore

2024-10-31 Thread Eugen Block
Thanks for the update. Too bad you didn't find a way around it. I guess it would require a real deep dive into the systems to understand what really happened there, which unfortunately can be a bit difficult via email. And of course, there's a chance you might hit this issue again, which I hope