Hey Joffrey,
try to switch back to the wpq scheduler in ceph.conf:
osd_op_queue = wpq
...and restart all OSDs.
I also had issues where recovery was very, very slow (~10 KB/s).
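As a sketch of the advice above: instead of editing ceph.conf on every host, the same setting can be applied through the centralized config database (assuming a reasonably recent Ceph release; the restart command is for a package-based install).

```shell
# Sketch: set the wpq op queue scheduler cluster-wide.
ceph config set osd osd_op_queue wpq
# osd_op_queue is only read at startup, so restart the OSDs on each host:
systemctl restart ceph-osd.target
```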
Best Regards,
Alex Walender
On 17.10.24 at 11:44, Joffrey wrote:
Hi,
This is my cluster:
cluster:
id:
Dave,
If there's one bitter lesson I learned from IBM's OS/2, it was that one
should never store critical information in two different repositories.
There Should Be Only One; you may replicate it, but at the end of the
day, if you don't have a single point of authority, you'll suffer.
Regards,
On Wednesday, October 30, 2024 2:00:56 PM EDT Darrell Enns wrote:
> Is there a simple way to deploy a custom (in-house) mgr module to an
> orchestrator managed cluster? I assume the module code would need to be
> included in the mgr container image. However, there doesn't seem to be a
> straightforward way to do this without having the module merged to
> upstream ceph.
Build your own image based on the Ceph container image.
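A hedged sketch of that approach (the image tag and the module name "my_module" are placeholders; mgr modules are conventionally installed under /usr/share/ceph/mgr/):

```dockerfile
# Sketch: extend the stock Ceph image with an in-house mgr module.
FROM quay.io/ceph/ceph:v19
COPY my_module/ /usr/share/ceph/mgr/my_module/
```

Point your orchestrator at the resulting image so the mgr daemons run with the module available.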
Joachim Kraftmayer
CEO
joachim.kraftma...@clyso.com
www.clyso.com
Hohenzollernstr. 27, 80801 Munich
Utting a. A. | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE275430677
Darrell Enns wrote on Wed., Oct 30, 2024, 19:01:
> Is there a simple way to deploy a custom (in-house) mgr module to an
> orchestrator managed cluster?
Speaking abstractly, I can see 3 possible approaches.
1. You can create a separate container and invoke it from the mgr
container as a micro-service. As to how, I don't know. This is likely
the cleanest approach.
2. You can create a Dockerfile based on the stock mgr but with your
extensions added
Hello Eugen,
thanks a lot. We got our downtime window today to work on the cluster.
However, nothing worked, even with Ceph 19.
None of the ceph orch commands work.
Error ENOENT: No orchestrator configured (try `ceph orch set backend`)
This has nothing to do with osd_remove_queue.
Getting back the
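For reference, that error indicates no orchestrator backend is set; a hedged sketch of re-enabling the cephadm backend (assuming this is a cephadm-deployed cluster):

```shell
# Sketch: re-enable the cephadm orchestrator backend.
ceph mgr module enable cephadm
ceph orch set backend cephadm
# Verify the backend is active:
ceph orch status
```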
Is there a simple way to deploy a custom (in-house) mgr module to an
orchestrator managed cluster? I assume the module code would need to be
included in the mgr container image. However, there doesn't seem to be a
straightforward way to do this without having the module merged to
upstream ceph.
I've just upgraded a test cluster from 18.2.4 to 19.2.0. Package
install on centos 9 stream. Very smooth upgrade. Only one problem so far...
The MGR RESTful API calls work fine, except whenever the balancer kicks
in to find any new plans. During the few seconds that the balancer takes
to run,
Hi,
Laura posted [0],[1] two days ago that she likely found the root cause
of the balancer crashing the MGR. It sounds like what you're
describing could be related to that.
[0]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/STR2UCS2KDZQAXOLH3GPCCWN4GBR3CJG/
[1] https://t
Dear Ceph Community,
I hope this message finds you well.
I am encountering an out-of-memory (OOM) issue with one of my Ceph OSDs,
which is repeatedly getting killed by the OOM killer on my system. Below
are the relevant details from the log:
*OOM Log*:
[Wed Oct 30 13:14:48 2024]
oom-kill:constra
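As a hedged aside (not stated in the thread): BlueStore OSD memory use is steered by osd_memory_target, which is worth inspecting, and possibly lowering, while investigating OOM kills. The value below is only an example.

```shell
# Sketch: check the current OSD memory target...
ceph config get osd osd_memory_target
# ...and lower it to 4 GiB for all OSDs while investigating:
ceph config set osd osd_memory_target 4294967296
```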
On Wed, Oct 30, 2024, 8:24 AM Chris Palmer wrote:
> I've just upgraded a test cluster from 18.2.4 to 19.2.0. Package
> install on centos 9 stream. Very smooth upgrade. Only one problem so far...
>
> The MGR RESTful API calls work fine, except whenever the balancer kicks
> in to find any new plans.
On 10/30/24 14:58, Tim Holloway wrote:
Speaking abstractly, I can see 3 possible approaches.
...
2. You can create a Dockerfile based on the stock mgr but with your
extensions added. The main problem with this is that from what I can
see, the cephadm tool has the names and repositories of the
Hi Mosharaf,
read this article to determine whether you are facing this issue:
https://docs.clyso.com/blog/osds-with-unlimited-ram-growth/
Regards, Joachim
www.clyso.com
On Wed., Oct 30, 2024 at 0