As a workaround you can use:
ceph config set mgr mgr/prometheus/exclude_perf_counters false
However, I understand that deploying a ceph-exporter daemon on each host is
the proper fix. Perhaps you are still missing some configuration for it?
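If it helps, deploying the exporter cluster-wide with the orchestrator is roughly this (a sketch; verify the resulting placement afterwards with ceph orch ls):
ceph orch apply ceph-exporter                    # deploy the ceph-exporter service (default placement: all hosts)
ceph orch ps --daemon-type ceph-exporter         # confirm a daemon is running on every host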
On Thu, 5 Sept 2024 at 08:25, Sake Ceph wrote:
> Hi Frank,
>
Hi,
regarding the scraping endpoints, I wonder if it would make sense to
handle it the same way as with the dashboard redirect:
ceph config get mgr mgr/dashboard/standby_behaviour
redirect
If you try to access the dashboard via one of the standby MGRs, you're
redirected to the active one.
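For reference, that behaviour is controlled by a single mgr option (a sketch; "error" is the other accepted value):
ceph config set mgr mgr/dashboard/standby_behaviour redirect   # standby MGRs redirect to the active dashboard
ceph config set mgr mgr/dashboard/standby_behaviour error      # or: standby MGRs return an error instead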
Hi guys,
what distro would you prefer, and why, for production Ceph? We use
Ubuntu on most of our Ceph clusters and some are Debian. Now we are
thinking about unifying it by using only Debian or Ubuntu.
I personally prefer Debian mainly for its stability and easy
upgrade-in-place. What are
Didn't you already get the answer from the Reddit thread?
https://www.reddit.com/r/ceph/comments/1f88u6m/prefered_distro_for_ceph/
I always point here:
https://docs.ceph.com/en/latest/start/os-recommendations/ and we are
running very well with Ubuntu with, and without, the orchestrator.
On Thu, 5
I would like to stay away from using the workaround.
First I redeployed prometheus and later ceph-exporter, but still no
data. After the deployment of ceph-exporter I saw the following message (twice,
once for each host running prometheus): Reconfiguring daemon
prometheus..
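For reference, the redeploy steps were roughly these (a sketch; the exact service names come from ceph orch ls):
ceph orch redeploy prometheus
ceph orch redeploy ceph-exporter
ceph orch ps --daemon-type ceph-exporter   # check that the exporters actually came up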
You mention
Hi,
Port 8765 is the "service discovery" endpoint (an internal server that runs in
the mgr... you can change the port by changing the cephadm
variable service_discovery_port). Normally it is opened on the
active mgr, and the service is used by the Prometheus server to get the
targets by using the http
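For example, moving it would look roughly like this (a sketch; I'm assuming the option lives under mgr/cephadm, so please verify the exact name with ceph config ls first):
ceph config set mgr mgr/cephadm/service_discovery_port 8765
ceph mgr fail                              # fail over the active mgr so the new port takes effect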
Hi, I never tried anything other than Debian.
On 9/5/24 12:33 PM, Boris wrote:
Didn't you already get the answer from the Reddit thread?
https://www.reddit.com/r/ceph/comments/1f88u6m/prefered_distro_for_ceph/
I always point here:
https://docs.ceph.com/en/latest/start/os-recommendations/ and we
To add to this, I've noticed that for RGW with the s3website API enabled it's even
worse: sometimes it needs a hard reset. Tested on 17.2.6.
Ondra
> On 26. 8. 2024, at 4:48, Huy Nguyen wrote:
>
> Hi community,
> I'm using ceph v18.2.4. Each time I commit a period, all of my radosgw
> instances pa
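For context, the period commit mentioned above is the usual multisite step, roughly (a sketch):
radosgw-admin period update --commit       # commit the staged period changes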
Have been running on CentOS/RHEL for ~5 years and have not had any issues either.
From: Roberto Maggi @ Debian
Sent: Thursday, September 5, 2024 12:58 PM
To: ceph-users@ceph.io
Subject: [EXTERNAL] [ceph-users] Re: Prefered distro for Ceph
Hi, I never tried anything e
I personally prefer Ubuntu.
I like RPM / YUM better than APT / DEB, but Ubuntu provides a far richer set of
prebuilt packages, obviating the mess that is EPEL and most of the need to
compile and package myself. Ubuntu's kernels are also far more current than
those of the RHEL family. I've bee
On 05/09/2024 at 12:25:24+0200, Denis Polom wrote:
Hi,
>
> what distro would you prefer and why for the production Ceph? We use Ubuntu
> on most of our Ceph clusters and some are Debian. Now we are thinking about
> unifying it by using only Debian or Ubuntu.
>
> I personally prefer Debian mainly
Hi,
On 05/09/2024 12:49, Redouane Kachach wrote:
The port 8765 is the "service discovery" (an internal server that runs in
the mgr... you can change the port by changing the
variable service_discovery_port of cephadm). Normally it is opened in the
active mgr and the service is used by prometheu
On 05/09/2024 15:03, Matthew Vernon wrote:
Hi,
On 05/09/2024 12:49, Redouane Kachach wrote:
The port 8765 is the "service discovery" (an internal server that runs in
the mgr... you can change the port by changing the
variable service_discovery_port of cephadm). Normally it is opened in the
act
I started with Arch Linux & Mimic, but it was all manual deployment and
development. It takes too much time and requires a lot of knowledge.
My custom distro worked flawlessly for 4-5 years until I hit a rolling-release
update problem at one point.
These days I use Ubuntu for easy setup and enjoy the pre-tested
The bare metal has to run *something*, whether Ceph is run from packages or
containers.
>> what distro would you prefer and why for the production Ceph? We use Ubuntu
>> on most of our Ceph clusters and some are Debian. Now we are thinking about
>> unifying it by using only Debian or Ubuntu.
>>
On 05/09/2024 at 11:06:27-0400, Anthony D'Atri wrote:
Hi,
> The bare metal has to run *something*, whether Ceph is run from packages or
> containers.
Yes, absolutely. But if you use podman/docker you don't really have to care
about compatibility problems between your Linux flavor and Ceph (
Now you've got me worried. As I said, there is absolutely no traffic
using port 8765 on my LAN.
Am I missing a service? Since my distro is based on stock Prometheus,
I'd have to assume that the port 8765 server would be part of the Ceph
generic container image and isn't being switched on for some
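A quick way to check whether anything is listening at all (a sketch; run it on the host with the active mgr):
ss -lntp | grep 8765        # the cephadm service-discovery listener, if present
ceph mgr services           # URIs published by mgr modules (dashboard, prometheus, ...)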
Not at all, you're doing the right thing. That's exactly how I would do things
if I were setting out to deploy Ceph on bare metal today. Pick a very stable
underlying distribution and run Ceph in containers. That's exactly what I'm
doing on a massive scale, and it's been one of the best decision
I am trying to configure ceph for the first time manually. I followed the
instructions and at this point I have only ceph-mon and ceph-mgr installed.
Here is my ceph status:
root@ceph-n1:~# ceph status
  cluster:
    id:     a93114e4-b0af-4b56-b019-0900310a14f8
    health: HEALTH_WARN
Hello Ceph Users,
* Problem: we get the following errors when using krbd; we are using RBD
for VMs.
* Workaround: switching to librbd makes the errors disappear.
* Software:
** Kernel: 6.8.8-2 (parameters: intel_iommu=on iommu=pt
pcie_aspm.policy=performance)
** Ceph: 18.2.2
Description/Detail
Ceph reef 18.2.4
We have a pool with size 3 (2 copies in the first DC, 1 copy in the second) replicated
between datacenters. When we put a host into maintenance in a different datacenter,
some data becomes unavailable - why? How can we prevent or fix this?
2 nodes in each DC + witness
pool 13 'VolumesStandardW2' repli
Dear team,
Hi, I hit a Swift tempurl problem in my production Ceph cluster: RGW in 18.2.1
cannot handle list_bucket correctly, which led to an RGW crash.
I created an issue: https://tracker.ceph.com/issues/67825
Can someone help to solve the issue?
Best regards
Henry
Hi,
I'm using Ceph v18.2.4, non-cephadm. Over time my RGW cluster receives more
requests, and when it reaches a certain threshold I manually scale out more
radosgw instances to handle the increased traffic.
- Is this a normal practice?
- Are there any tunings I can do with RGW to make it han
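For reference, per-instance concurrency is usually adjusted with options along these lines before scaling out (a sketch; placeholder values, not recommendations):
ceph config set client.rgw rgw_thread_pool_size 512            # worker threads per radosgw instance
ceph config set client.rgw rgw_max_concurrent_requests 2048    # cap on in-flight requests (beast frontend)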
As far as I understood your requirements, both crush rules are wrong.
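To see what the rules actually do, dumping them is the quickest check (a sketch; the pool name is taken from your earlier mail):
ceph osd pool get VolumesStandardW2 crush_rule   # which rule the pool uses
ceph osd crush rule dump                         # full definition of every rule
ceph osd pool get VolumesStandardW2 min_size     # PGs go inactive when surviving copies drop below min_size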
Joachim Kraftmayer
CEO
joachim.kraftma...@clyso.com
www.clyso.com
Hohenzollernstr. 27, 80801 Munich
Utting a. A. | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE2754306
wrote on Fri, 6 Sept 2024, 10:57:
> Ceph reef 18.2.
David, I agree.
I always recommend using the stable Linux distro you normally use and running
Ceph in containers.
David Orman wrote on Fri, 6 Sept 2024, 09:33:
> Not at all, you're doing the right thing. That's exactly how I would do
> things if I were setting out to deploy Ceph on bare metal tod
Turns out that cluster didn't have new snapshots enabled, so the
tracker issue is invalid. I'd still like to point out that the
error messages in the dashboard and CLI could be improved; they don't
really give any clue why it's failing. The actual root cause was
hidden at debug level 5:
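For anyone hitting the same thing, bumping the log level and reverting it afterwards is roughly this (a sketch; I'm assuming the relevant daemon is the mgr):
ceph config set mgr debug_mgr 5/5    # raise mgr log verbosity
ceph config rm mgr debug_mgr         # revert to the default afterwards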