I migrated from Gluster when I found out it was shortly going to be unsupported.
I'm really not big enough for Ceph proper, but there were only so many
supported distributed filesystems with triple redundancy.
Where I got into trouble was that I started off with Octopus and Octopus
had some teething p
We're looking for multiple MDS daemons to be active in zone A and
standby(-replay) in zone B.
This scenario would also benefit people who have more powerful hardware in
zone A than in zone B.
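For reference, the standby-replay part of this is configurable today; the zone-based priority is the missing piece. A minimal sketch, assuming a filesystem named 'cephfs' and two active ranks:

  ceph fs set cephfs max_mds 2                  # two active MDS ranks
  ceph fs set cephfs allow_standby_replay true  # standbys tail the active MDS journals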
Kind regards,
Sake
> On 31-10-2024 15:50 CET, Adam King wrote:
>
>
> Just noticed this threa
I completely understand your point of view. Our own main cluster is
also a bit "wild" in its OSD layout, which is why its OSDs are
"unmanaged" as well. When we adopted it via cephadm, I started to
create suitable OSD specs for all those hosts and OSDs, and I gave up.
:-D But since we sometimes
I have been slowly migrating towards spec files as I prefer declarative
management as a rule.
However, I think we may have a dichotomy in the user base.
On the one hand, there are users with dozens or hundreds of servers/drives of
basically identical character.
On the other, I'm one who's running few
Hi,
spec files are the preferred method to deploy OSDs in cephadm-managed
clusters; see this part of the docs [0] for more information. I
would avoid the '--all-available-devices' flag, except in test
clusters, or unless you're really sure that this is what you want.
If you use 'ceph o
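To illustrate the spec-file route, a minimal sketch; the service_id, the 'osd' host label and the rotational filter are placeholders, not recommendations:

  # osd_spec.yaml
  service_type: osd
  service_id: default_hdd
  placement:
    label: osd
  spec:
    data_devices:
      rotational: 1

  # preview the resulting OSDs, then apply
  ceph orch apply -i osd_spec.yaml --dry-run
  ceph orch apply -i osd_spec.yaml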
Hello.
Sorry if it appears that I am reposting the same issue under a different
topic. However, I feel that the problem has moved and I now have different
questions.
At this point I have, I believe, removed all traces of OSD.12 from my
cluster - based on steps in the Reef docs at
https://docs.ce
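For anyone following the same steps, a rough way to double-check that nothing still references the OSD (assuming the ID in question is 12):

  ceph osd tree | grep -w osd.12   # should return nothing
  ceph auth ls | grep -w osd.12    # no leftover auth key
  ceph orch ps | grep -w osd.12    # no daemon or placement left behind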
As I understand it, the manual OSD setup is only for legacy
(non-container) OSDs. Directory locations are wrong for managed
(containerized) OSDs, for one.
Actually, the manual setup docs as a whole ought to be moved out of the
mainline documentation. In their present arrangement, they make legacy
Hi Chris,
As other users have pointed out, we are fixing an issue tracked in
https://tracker.ceph.com/issues/68657 that seems related to what you're
experiencing. However, can you raise a new tracker describing your problem
so we can confirm?
Can you please include:
1. Steps to reproduce (includi
Just noticed this thread. A couple of questions. Is the goal to have MDS
daemons in, say, zone A and zone B, with the ones in zone A prioritized to
be active and the ones in zone B remaining standby unless absolutely necessary
(i.e. all the ones in zone A are down), or is it that we want to have some subse
Hi Ilya,
Thank you for your illuminating response!
I thought I had checked `ceph df` during my experiments before, but
apparently not carefully enough. :)
On 25/10/2024 18:43, Ilya Dryomov wrote:
> "rbd du" can be very imprecise even with --exact flag: one can
> construct an image that would use
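For reference, the two views being compared, with placeholder pool/image names:

  rbd du --exact rbd/myimage   # per-image usage, computed object by object
  ceph df detail               # pool-level STORED vs. USED (raw) figures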
Thanks Josh and Eugen,
I did not manage to trace this object back to an S3 object. Instead, I read all
files in the suspected S3 bucket and
actually hit a bad one. Since we had a known-good mirror, I deleted the
broken S3 object (which succeeded)
and uploaded the good one (which also succeeded). Data wise
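A rough sketch of that scan, using awscli with placeholder bucket/key names (keys containing whitespace would need extra care):

  aws s3 ls s3://suspect-bucket --recursive | awk '{print $NF}' | while read -r key; do
    aws s3 cp "s3://suspect-bucket/$key" - > /dev/null 2>&1 || echo "read failed: $key"
  done
  # then replace the broken object from the known-good mirror
  aws s3 rm s3://suspect-bucket/BROKEN_KEY
  aws s3 cp /path/from/mirror/GOOD_COPY s3://suspect-bucket/BROKEN_KEY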
Thanks for the update. Too bad you didn't find a way around it. I
guess it would require a real deep dive into the systems to understand
what really happened there, which unfortunately can be a bit difficult
via email. And of course, there's a chance you might hit this issue
again, which I hope