Hi,
The port 8765 is the "service discovery" (an internal server that runs in
the active mgr... you can change the port by changing the cephadm
variable service_discovery_port). Normally it is opened on the
active mgr and the service is used by prometheus (the server) to get the
targets by using the http
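For reference, a hedged sketch of changing that port (assuming the usual
mgr/cephadm option path; the port value is illustrative):
> ceph config set mgr mgr/cephadm/service_discovery_port 9999
> ceph mgr fail    # restart the active mgr so the change takes effect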
3, Matthew Vernon wrote:
> > > Hi,
> > >
> > > On 05/09/2024 12:49, Redouane Kachach wrote:
> > >
> > > > The port 8765 is the "service discovery" (an internal server that
> > > > runs in
> > > > the mgr... you can
Seems like a BUG in cephadm: the ceph-exporter, when deployed, doesn't
specify its port, which is why it's not being opened automatically. You can
see that in the cephadm logs (the ports list is empty):
2024-09-09 04:39:48,986 7fc2993d7740 DEBUG Loaded deploy configuration:
{'fsid': '250b9d7c-6e65-11ef-8e0
Hi Yuri,
I've just backported to reef several fixes that I introduced over the last
months for the rook orchestrator. Most of them are fixes for dashboard
issues/crashes that only happen in Rook environments. The PR [1] has all
the changes and was merged into reef this morning. We really
need the
>> > On Mon, Nov 13, 2023 at 12:14 PM Yuri Weinstein
>> wrote:
>> >>
>> >> Redouane
>> >>
>> >> What would be a sufficient level of testing (teuthology suite(s))
>> >> assuming this PR is approved to be added?
>> >
Looks good to me. Testing went OK without any issues.
Thanks,
Redo.
On Tue, Mar 5, 2024 at 5:22 PM Travis Nielsen wrote:
> Looks great to me, Redo has tested this thoroughly.
>
> Thanks!
> Travis
>
> On Tue, Mar 5, 2024 at 8:48 AM Yuri Weinstein wrote:
>
>> Details of this release are summariz
Dear ceph community,
As you are aware, cephadm has become the default tool for installing Ceph
on bare-metal systems. Currently, during the bootstrap process of a new
cluster, if the user interrupts the process manually or if there are any
issues causing the bootstrap process to fail, cephadm leav
part of the learning experience. So my
> answer to "how do I start over" would be "go figure it out, it's an
> important lesson".
>
> Best regards,
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
Hi Malte,
Did you try:
ceph mgr module disable cephadm; ceph mgr module enable cephadm --force;
Can you see any error in the mgr logs?
For this, just try to find the mgr systemd service by running, on the node
where your active mgr is running:
> systemctl | grep mgr
then:
> journalctl -f -u
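A hedged illustration of the full command (the unit name depends on your
fsid and the daemon id, so this is only an assumption about its shape):
> journalctl -f -u ceph-<fsid>@mgr.<daemon-id>.service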
So basically it's failing here:
> self.to_remove_osds.load_from_store()
This function is responsible for loading specs from the mon-store. The
information is stored in JSON format, and it seems the
stored JSON for the OSD(s) is not valid for some reason. You can see what's
stored in the mon-store
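A hedged way to inspect it (the key name mgr/cephadm/osd_remove_queue is my
assumption about where the removal queue lives):
> ceph config-key get mgr/cephadm/osd_remove_queue | python3 -m json.tool
If the stored JSON is invalid, the json.tool step will fail with a parse
error showing where it breaks.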
The Ceph dashboard should automatically get the Prometheus user/password so
there's no need to configure anything there. If you want to change the
default user/password then you should follow the instructions from the docs
as pointed out by Eugen.
BTW: when security is enabled that will affect the whole mo
Yeah... that's right, the way certificates are managed changed and there's no
documentation on how to set the new ones, mainly because it's not easy to do
that manually. I'm
working on some detailed instructions (hosted in the repo below) to help
with that. I tested the script on my test cluster and it worked
Just FYI: cephadm does support providing/using a custom template (see the
docs on [1]). For example using the following cmd you can override the
prometheus template:
> ceph config-key set mgr/cephadm/services/prometheus/prometheus.yml
After changing the template you have to reconfigure the servi
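For example, a hedged sketch (the template file name is illustrative):
> ceph config-key set mgr/cephadm/services/prometheus/prometheus.yml -i my_prometheus.yml
> ceph orch reconfig prometheus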
Just to comment on the ceph.target: technically, in a containerized ceph, a
node can host daemons from *many ceph clusters* (each with its own
ceph_fsid).
The ceph.target is a global unit and it's the root for all the clusters
running on the node. There's another target which is specific to
each clu
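A hedged illustration of the two layers (<fsid> is a placeholder):
> systemctl status ceph.target            # root target for all clusters on the node
> systemctl status ceph-<fsid>.target     # target for one specific cluster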
> Though this command works as well
> (trying to override the defaults):
>
> ceph config-key set
> mgr/cephadm/services/prometheus/alerting/ceph_alerts.yml -i
> ceph_alerts.yml
>
> The default 30% value is not overridden. So the question is, how to
> actually change the o
You are getting the duplicated option because of "ssl: true"... try disabling
ssl, since you are passing the arguments and certificates by hand!
Another option is to have cephadm generate the certificates for you by
setting the `generate_cert` field in the spec to true. But I'm not sure if
that works fo
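A minimal sketch of an RGW spec using that field (service_id and placement
are illustrative, and I'm assuming the usual spec layout):
service_type: rgw
service_id: myrgw
placement:
  count: 1
spec:
  ssl: true
  generate_cert: true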
g the certificates by
yourself and having cephadm do that for you (unless your Org has strict
rules around that!)?
On Fri, Jan 17, 2025 at 10:19 AM Redouane Kachach
wrote:
> I see, unfortunately I can't see an easy way to avoid that. With the
> current code you will get either p
y complicate using this also.
>
> I'm probably going to shy away from this for now. I still think a sensible
> fix here is to not add the defaulted ssl_certificate option if it's passed
> as an extra_frontend_arg, which allows my use case without a mainline
> change to behaviou
rivate/server.key
>
> So I don't have a combination that allows me to do just HTTPS with my
> cert/key provided as a file path.
>
> Thanks,
> Alex
>
> --
> *From:* Redouane Kachach
> *Sent:* Thursday, January 16, 2025 8:00 PM
> *To:* Alex
at 5:22 PM Florian Haas wrote:
> On 02/01/2025 16:37, Redouane Kachach wrote:
> > Just to comment on the ceph.target. Technically in a containerized ceph a
> > node can host daemons from *many ceph clusters* (each with its own
> > ceph_fsid).
> >
> > The ceph.target
*Sent:* Thursday, January 16, 2025 5:59 PM
> *To:* Redouane Kachach
> *Cc:* ceph-users
> *Subject:* Re: [EXTERNAL] Re: [ceph-users] Cephadm: Specifying RGW Certs
> & Keys By Filepath
>
> Amazing. How did I miss that.
>
> Dropping "ssl: true" and adding
From the stack-trace it seems the Grafana certificates are broken.
Maybe the recommendations from this thread can help:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/RX7BREBAQBWFVZZ6ADXC33PZNNT5IY5H/
Best,
Redo.
On Tue, Mar 4, 2025 at 1:20 PM Laimis Juzeliūnas <
laimis.juzeliu...@ox
s deterministic.
>
> The linked certmgr is something that will come in tentacle? I've stumbled
> upon it in the /latest documentation, but it looks like it is not yet in
> squid.
>
> Am Di., 8. Juli 2025 um 15:49 Uhr schrieb Redouane Kachach <
> rkach...@redhat.com>:
>
Hi,
The changes in the PR should already be in Tentacle, so the fix will come
with the release.
In addition, I'd recommend in general putting any service-related config in
the service-spec. Setting the key-store by hand directly is not a good idea
as cephadm
will not be aware of those changes and ca
Hello Ali,
You can set configuration by including a config section in your yaml as
follows:
config:
  param_1: val_1
  ...
  param_N: val_N
this is equivalent to calling the following ceph cmd:
> ceph config set
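For example, a hedged sketch (the option name and value are illustrative):
service_type: mon
placement:
  count: 3
config:
  mon_cluster_log_level: debug
which would be equivalent to:
> ceph config set mon mon_cluster_log_level debug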
Best Regards,
Redo.
On Fri, Jul 15, 2022 at 2:45 PM Ali Akil w
s added to ceph.conf.
>
> Best Regards,
> Ali
> On 15.07.22 15:21, Redouane Kachach Elhichou wrote:
>
> Hello Ali,
>
> You can set configuration by including a config section in your yaml as
> follows:
>
> config:
> param_1: val_1
> ...
Great, thanks for sharing your solution.
It would be great if you could open a tracker describing the issue so it
can be fixed later in the cephadm code.
Best,
Redo.
On Tue, Jul 19, 2022 at 9:28 AM Robert Reihs wrote:
> Hi,
> I think I found the problem. We are using ipv6 only, and the config ceph
?
>
> Best,
>
> Luis Domingues
> Proton AG
>
>
> --- Original Message ---
> On Friday, July 15th, 2022 at 17:06, Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>
> > This section could be added to any service spec. cephadm will pa
--- Original Message ---
> On Tuesday, July 19th, 2022 at 13:47, Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>
> > Did you try the *rm* option? Both ceph config and ceph config-key support
> > removing config keys:
> >
> > From:
> >
>
Great, thank you.
Best,
Redo.
On Thu, Jul 21, 2022 at 2:01 PM Robert Reihs wrote:
> Bug Reported:
> https://tracker.ceph.com/issues/56660
> Best
> Robert Reihs
>
> On Tue, Jul 19, 2022 at 11:44 AM Redouane Kachach Elhichou <
> rkach...@redhat.com> wrote:
>
>
Hello,
As of this PR https://github.com/ceph/ceph/pull/47098 grafana cert/key are
now stored per-node. So instead of *mgr/cephadm/grafana_crt* they are
stored per-node as:
*mgr/cephadm/{hostname}/grafana_crt*
*mgr/cephadm/{hostname}/grafana_key*
In order to see the config entries that have been
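To list the per-host entries, a hedged example:
> ceph config-key ls | grep grafana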
Glad it helped you to fix the issue. I'll open a tracker to fix the docs.
On Wed, Oct 5, 2022 at 3:52 PM E Taka <0eta...@gmail.com> wrote:
> Thanks, Redouane, that helped! The documentation should of course also be
> updated in this context.
>
> Am Mi., 5. Okt. 2022 um 15:
Currently the generated template is the same for all the hosts and there's
no way to have a dedicated template for a specific host AFAIK.
On Tue, Oct 25, 2022 at 12:45 PM Lasse Aagren wrote:
> The context provided, when parsing the template:
>
>
> https://github.com/ceph/ceph/blob/v16.2.10/src/p
If you are running quincy and using cephadm then you can have more
instances of prometheus (and other monitoring daemons) running in HA mode
by increasing the number of daemons as in [1]:
from a cephadm shell (to run 2 instances of prometheus and alertmanager):
> ceph orch apply prometheus --plac
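A hedged sketch of the full commands (counts are illustrative):
> ceph orch apply prometheus --placement="count:2"
> ceph orch apply alertmanager --placement="count:2"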
https://prometheus.io/docs/prometheus/2.28/configuration/configuration/#http_sd_config
On Tue, Nov 8, 2022 at 4:47 PM Eugen Block wrote:
> I somehow missed the HA part in [1], thanks for pointing that out.
>
>
> Zitat vo
Normally it should work; another way to do it is basically by just entering
the container using podman commands (or docker).
For this, just run:
> podman ps | grep mds | awk '{print $1}'   (to get the container ID)
> podman exec -it <container-id> /bin/sh
That should work if the container is running.
Regar
Sometimes some ceph-volume commands hang when trying to access a device.
Please take a look at the solution/steps provided by Adam in the thread
titled "Issue adding host with cephadm - nothing is deployed" to check
whether cephadm is waiting for some ceph-volume command to complete.
Regard
Hello Dmitriy,
You have to provide a valid IP during the bootstrap: --mon-ip <ip>
<ip> must be a valid IP from some interface on the current node.
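For example (the IP is illustrative):
> cephadm bootstrap --mon-ip 192.168.1.10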
Regards,
Redouane.
On Thu, May 26, 2022 at 2:14 AM Dmitriy Trubov
wrote:
> Hi,
>
> I'm trying to install ansible octopus with cephadm.
>
> Here is
To see what cephadm is doing you can check the logs in:
*/var/log/ceph/cephadm.log* (here you can see what the cephadm running on
each host is doing), and you can also check what the cephadm (mgr module) is
doing by checking the logs of the mgr container:
> podman logs -f `podman ps | grep
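A hedged completion of that pipeline (the grep pattern is illustrative):
> podman logs -f $(podman ps | grep mgr | awk '{print $1}')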
From the error message:
2022-06-25 21:51:59,798 7f4748727b80 INFO /usr/bin/ceph-mon: stderr too many
arguments:
[--default-log-to-journald=true,--default-mon-cluster-log-to-journald=true]
it seems that you are not using the cephadm that corresponds to your ceph
version. Please try to get cephad