Hello list members,
after a subsequent installation of the Ceph (17.2.4) monitoring stack we got this
error: The mgr/prometheus module at ceph1n020.int.infoserve.de:9283 is
unreachable (and also for the second prometheus module). The prometheus
module is indeed activated...
[root@ceph1n020 ~]# ss -ant |e
Hi,
Did you set an IPv4 address for "mgr/prometheus/server_addr" in the config?
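For example (a minimal check with the standard ceph CLI):

    # Show the bind address the prometheus module is configured with
    ceph config get mgr mgr/prometheus/server_addr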
k
> On 10 Oct 2022, at 16:56, Ackermann, Christoph wrote:
>
> Hello list members,
>
> after a subsequent installation of the Ceph (17.2.4) monitoring stack we got this
> error: The mgr/prometheus module at ceph1n020.int.infoserve.de:9283 is ...
That output suggests that the mgr is configured to only listen on the
loopback address.
I don't think that's a default... does a `ceph config dump | grep mgr`
suggest it's been configured that way?
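If it helps, a quick way to compare the configured and the actual bind address
(a sketch; port 9283 assumes the module's default):

    # What the cluster config says about the module
    ceph config dump | grep prometheus
    # What the mgr host is actually listening on
    ss -tln 'sport = :9283'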
On 2022-10-10 10:56, Ackermann, Christoph wrote:
> Hello list members,
> after a subsequent installa...
Well, we have a well-running Ceph base system which I pimped this morning
using the cephadm method for the monitoring add-on:
https://docs.ceph.com/en/quincy/cephadm/services/monitoring/#deploying-monitoring-with-cephadm
All three managers can be accessed via their IPv4 address from other hosts. The
configuration ...
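(As a quick reachability check from another host, something like this should
return metrics, assuming the default port 9283:)

    # The prometheus module serves its data under /metrics
    curl -s http://ceph1n020.int.infoserve.de:9283/metrics | head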
Oh, see this...
mgr advanced mgr/prometheus/server_addr localhost
BANG!
On Mon, 10 Oct 2022 at 16:24, Ackermann, Christoph <c.ackerm...@infoserve.de> wrote:
> Well, we have a well-running Ceph base system which I pimped this morning
> using the cephadm method for moni...
Hello all,
setting "*ceph config set mgr mgr/prometheus/server_addr 0.0.0.0*" as
described in the manual config documentation and restarting all manager
daemons solved the problem so far. :-)
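In full, the fix was roughly (a sketch; `ceph orch restart mgr` assumes a
cephadm-managed cluster):

    # Bind the prometheus module to all interfaces instead of localhost
    ceph config set mgr mgr/prometheus/server_addr 0.0.0.0
    # Restart all mgr daemons so the module rebinds
    ceph orch restart mgr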
Thanks and best regards,
Christoph Ackermann
On Mon, 10 Oct 2022 at 16:25, Ackermann, Christoph wrote: ...
Hello Yoann,
On Fri, Oct 7, 2022 at 10:51 AM Yoann Moulin wrote:
>
> Hello,
>
> >> Is 256 a good value in our case? We have 80 TB of data with more than 300M
> >> files.
> >
> > You want at least enough PGs that each of the OSDs hosts a portion of the
> > OMAP data. You want to spread out OMAP to ...
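(As an illustration of raising the PG count on a metadata pool; a sketch, where
the pool name cephfs_metadata is hypothetical:)

    # Check the current PG count of the metadata pool
    ceph osd pool get cephfs_metadata pg_num
    # Raise it so the OMAP data spreads across more OSDs
    ceph osd pool set cephfs_metadata pg_num 256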
Hello,
I am using Pacific 16.2.10 on Rocky Linux 8.6.
After setting upmap_max_deviation to 1 on the ceph balancer in ceph-mgr, I
achieved a near-perfect balance of PGs and space on my OSDs. This is great.
However, I started getting the following errors in my ceph-mon logs every
three minutes, ...
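(For context, a sketch of the balancer setting mentioned above, assuming the
standard mgr config key:)

    # Allow at most 1 PG of deviation between OSDs when balancing
    ceph config set mgr mgr/balancer/upmap_max_deviation 1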
Hi,
Here's a similar bug: https://tracker.ceph.com/issues/47361
Back then, upmap would generate mappings that invalidate the crush rule. I
don't know if that is still the case, but indeed you'll want to correct
your rule.
Something else you can do before applying the new crush map is to use
osdmaptool ...
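(A sketch of such an offline dry run; file names are placeholders:)

    # Grab the current osdmap from the cluster
    ceph osd getmap -o osdmap.bin
    # Test a corrected crush map against it without touching the cluster
    osdmaptool osdmap.bin --import-crush newcrush.bin --test-map-pgs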
Are there any suggestions/tips on how we can debug this type of
multisite replication issue?
At: 10/04/22 19:08:56 UTC-4:00 To: ceph-users@ceph.io
Subject: [ceph-users] Re: multisite replication issue with Quincy
We are able to consistently reproduce the replication issue now.
The following ...
Hi Igor,
the problem of the OSD crashes was resolved after migrating just a little bit of
the metadata pool to other disks (we decided to evacuate the small OSDs onto
larger disks to make space). Therefore, I don't think it's an LVM or disk issue.
The cluster is working perfectly now after migrating ...
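(For reference, draining an OSD onto the rest of the crush subtree can be done
with standard commands; a sketch, where osd.12 stands in for one of the small
OSDs:)

    # Move all PGs off the OSD; data backfills to the remaining disks
    ceph osd crush reweight osd.12 0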
Good morning,
please, I have a concern about an RBD mirrored image.
I have two clusters, A and B, both on 16.2.10, and I have implemented one-way
RBD mirroring (A to B).
When client01 writes data to an image on cluster A, it is successfully mirrored
to the image on cluster B.
My issue is that I want to ...
Hi,
No, you must stop the image on the primary site (A) and make the image on
the non-primary site (B) primary. It's possible to clone a snapshot, though.
See
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/P6BHPUZEMSCK4NJY5BZSYOB5XBWVT424/
https://lists.ceph.io/hyperkitty/list/c
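A sketch of the demote/promote sequence (pool/image names are placeholders):

    # On cluster A: demote the image so it is no longer primary
    rbd mirror image demote mypool/myimage
    # On cluster B: promote the replica to primary
    rbd mirror image promote mypool/myimage
    # Alternatively, clone a snapshot on B for independent read/write access
    rbd clone mypool/myimage@snap1 mypool/myclone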