>
> Isn't this one of the reasons containers were pushed, so that the
> packaging isn't as big a deal?
> Is it the continued push to support lots of distros without using
> containers that is the problem?
>
Apache httpd and OpenSSH sshd are able to support lots of distros without
using containers.
This looks like an old traceback you would get if you ended up with a
service type that shouldn't be there somehow. The things I'd probably check
are that "cephadm ls" on either host definitely doesn't report any strange
things that aren't actually daemons in your cluster, such as
"cephadm.". Anothe
Dear Ceph users,
I'm setting up a cluster. At the moment I have 56 OSDs for a total
available space of 109 TiB, and an erasure-coded pool with a total
occupancy of just 90 GB. The autoscale mode for the pool is set to "on",
but I still have just 32 PGs. As far as I understand (admittedly not
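(One note while this waits for replies: the autoscaler bases pg_num on the pool's current usage unless it is told the expected eventual size, so a nearly empty pool sitting at 32 PGs is expected. A sketch of how to check and hint it; the pool name and ratio below are placeholders:)
$ ceph osd pool autoscale-status                      # shows SIZE, TARGET RATIO and the proposed PG_NUM
$ ceph osd pool set <ec_pool> target_size_ratio 0.8   # tell it this pool will eventually hold ~80% of the data
# on newer releases (Quincy, recent Pacific) you can instead flag it as a bulk pool:
$ ceph osd pool set <ec_pool> bulk true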
Hi Adam,
In "cephadm ls" I found the following service, but I believe it was there
before also.
{
    "style": "cephadm:v1",
    "name": "cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d",
    "fsid": "f270ad9e-1f6f-11ed-b6f8-a539d87379ea",
    "systemd_unit":
Okay, I'm wondering if this is an issue with a version mismatch: having
previously had a 16.2.10 mgr and now having a 15.2.17 one that doesn't
expect this sort of thing to be present. Either way, I'd think just
deleting this
cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f6
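(That entry lives as a plain path under /var/lib/ceph/<fsid>/ on whichever host reports it, so removing it is just deleting the path and re-checking; the path below is assembled from the fsid and name above, so double-check it on the host:)
# on the host whose "cephadm ls" shows the stray entry
$ rm /var/lib/ceph/f270ad9e-1f6f-11ed-b6f8-a539d87379ea/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d
$ cephadm ls | grep 'cephadm\.'    # should come back empty now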
Hello,
I'm new to Ceph and I recently inherited a 4-node cluster with 32 OSDs and
about 116 TB raw space, which shows low available space. I'm trying to
increase it by enabling the balancer and lowering the priority for the
most-used OSDs. My questions are: is what I did correct with the current sta
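(While waiting for answers, the usual knobs to check are below; the mode shown is only an example of a common choice, not a verdict on your cluster:)
$ ceph balancer status      # confirm it is on and which mode it uses
$ ceph balancer mode upmap  # upmap usually evens out PG placement best (needs all clients >= Luminous)
$ ceph balancer on
$ ceph osd df tree          # watch %USE / STDDEV even out over time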
Hi Adam,
I have deleted the file located here: rm
/var/lib/ceph/f270ad9e-1f6f-11ed-b6f8-a539d87379ea/cephadm.7ce656a8721deb5054c37b0cfb90381522d521dde51fb0c5a2142314d663f63d
But I am still getting the same error. Do I need to do anything else?
On Fri, Sep 2, 2022 at 9:51 AM Adam King wrote:
> Okay, I'
Maybe also a "ceph orch ps --refresh"? It might still have the old cached
daemon inventory from before you removed the files.
On Fri, Sep 2, 2022 at 9:57 AM Satish Patel wrote:
> Hi Adam,
>
> I have deleted file located here - rm
> /var/lib/ceph/f270ad9e-1f6f-11ed-b6f8-a539d87379ea/cephadm.7ce656
I can see that in the output but I'm not sure how to get rid of it.
root@ceph1:~# ceph orch ps --refresh
NAME                HOST   STATUS        REFRESHED  AGE  VERSION  IMAGE NAME  IMAGE ID  CONTAINER ID
alertmanager.ceph1  ceph1  running (9h)
Hi Adam,
Wait..wait.. now it's working suddenly without doing anything.. very odd
root@ceph1:~# ceph orch ls
NAME          RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME                               IMAGE ID
alertmanager  1/1      5s ago     2w   count:1    quay.io/prometheus/alertmanager:v0.20.0
Let's come back to the original question: how to bring back the second mgr?
root@ceph1:~# ceph orch apply mgr 2
Scheduled mgr update...
Nothing happened with the above command, and the logs say nothing beyond:
2022-09-02T14:16:20.407927+ mgr.ceph1.smfvfd (mgr.334626) 16939 : cephadm [INF] refreshing ceph2 fa
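(One note on the command itself: "ceph orch apply mgr 2" only sets a target count and leaves host selection to the scheduler; pinning the placement explicitly is sometimes clearer. Host names below are taken from this thread, so adjust as needed:)
$ ceph orch apply mgr --placement="ceph1 ceph2"
$ ceph orch ps --daemon-type mgr --refresh    # check that a second mgr gets scheduled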
It looks like I did it with the following command:
$ ceph orch daemon add mgr ceph2:10.73.0.192
Now I can see two mgrs with the same version, 15.x.
root@ceph1:~# ceph orch ps --daemon-type mgr
NAME  HOST  STATUS  REFRESHED  AGE  VERSION  IMAGE NAME  IMAGE ID  CONTAINER ID
mgr.ce
Hi Adam,
I ran the following command to upgrade, but it looks like nothing is
happening:
$ ceph orch upgrade start --image quay.io/ceph/ceph:v16.2.10
The status message is empty..
root@ceph1:~# ceph orch upgrade status
{
    "target_image": "quay.io/ceph/ceph:v16.2.10",
    "in_progress": true,
    "
Hmm, at this point maybe we should just try manually upgrading the mgr
daemons and go from there. First, just stop the upgrade with "ceph orch
upgrade stop". Then figure out which of the two mgr daemons is the
standby (it should say which one is active in the "ceph -s" output) and do
a "ceph
Hi Adam,
As you said, I did the following:
$ ceph orch daemon redeploy mgr.ceph1.smfvfd quay.io/ceph/ceph:v16.2.10
I noticed the following line in the logs, but then no activity at all; the
standby mgr is still running the older version.
2022-09-02T15:35:45.753093+ mgr.ceph2.huidoh (mgr.344392) 2226 : cephadm [IN
Hi Kevin,
> Isn't this one of the reasons containers were pushed, so that the
> packaging isn't as big a deal?
>
Yes, but the Ceph community has a strong commitment to providing distro
packages for those users who are not interested in moving to containers.
> Is it the continued push to support lots of distros without using
> containers that is the problem?
Hmm, okay. It seems like cephadm is stuck in general rather than it being an
issue specific to the upgrade. I'd first make sure the orchestrator isn't
paused (just running "ceph orch resume" should be enough; it's idempotent).
Beyond that, there was someone else who had an issue with things getting
stuck t
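(As a rough checklist; the "ceph mgr fail" step is just the generic way to give the cephadm module a fresh start by failing over to the standby mgr, not something read from your logs:)
$ ceph orch resume    # harmless if the orchestrator wasn't paused
$ ceph orch status    # confirm the backend is cephadm and it isn't paused
$ ceph mgr fail       # fail over to the standby mgr so cephadm starts fresh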
Adam,
I have enabled debug and my logs are flooded with the following. I am going
to try some stuff from the mailing list thread you provided and see..
root@ceph1:~# tail -f
/var/log/ceph/f270ad9e-1f6f-11ed-b6f8-a539d87379ea/ceph.cephadm.log
2022-09-02T18:38:21.754391+ mgr.ceph2.huidoh (mgr.344392) 211198 :
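(For anyone reproducing this: the debug logging referred to here is the cephadm cluster-log level, toggled like so; option and command names as in the cephadm troubleshooting docs:)
$ ceph config set mgr mgr/cephadm/log_to_cluster_level debug
$ ceph -W cephadm --watch-debug                                # follow the debug-level messages
$ ceph config set mgr mgr/cephadm/log_to_cluster_level info    # turn it back down afterwards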
Do you think this is because I have only a single MON daemon running? I
have only two nodes.
On Fri, Sep 2, 2022 at 2:39 PM Satish Patel wrote:
> Adam,
>
> I have enabled debug and my logs flood with the following. I am going to
> try some stuff from your provided mailing list and see..
>
> roo
I don't think the number of mons should have any effect on this. Looking at
your logs, the interesting thing is that all the messages are so close
together. Was this before having stopped the upgrade?
On Fri, Sep 2, 2022 at 2:53 PM Satish Patel wrote:
> Do you think this is because I have only a
Yes, I have stopped the upgrade, and those logs are from before the upgrade.
On Fri, Sep 2, 2022 at 3:27 PM Adam King wrote:
> I don't think the number of mons should have any effect on this. Looking
> at your logs, the interesting thing is that all the messages are so close
> together. Was this before having stopp
Adam,
Someone on Google suggested a manual upgrade using the following method, and
it seems to work, but now I am stuck on the MON redeploy.. haha
Go to the mgr container host, edit the /var/lib/ceph/$fsid/mgr.$whatever/unit.run
file, change the image to ceph/ceph:v16.2.10 for both mgrs, and restart the
mgr service using systemctl r
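(In concrete terms, roughly this on each mgr host. The systemd unit names follow cephadm's ceph-<fsid>@<daemon> pattern and are assembled here from the fsid and mgr names seen earlier in this thread, so verify them with "systemctl list-units 'ceph-*'" first:)
# after editing the image in /var/lib/ceph/<fsid>/mgr.<name>/unit.run:
$ systemctl restart ceph-f270ad9e-1f6f-11ed-b6f8-a539d87379ea@mgr.ceph1.smfvfd.service
$ systemctl restart ceph-f270ad9e-1f6f-11ed-b6f8-a539d87379ea@mgr.ceph2.huidoh.service   # on the other host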
On Mon, Aug 29, 2022 at 12:49 AM Burkhard Linke wrote:
>
> Hi,
>
>
> some years ago we changed our setup from an IPoIB cluster network to a
> single-network setup, which is a similar operation.
>
>
> The OSDs use the cluster network for heartbeats and backfill
> operations; both use standard tcp c
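(For completeness, the config side of dropping a separate cluster network is small; most of the work is restarting the OSDs so they rebind. A sketch, assuming the setting lives in the config database rather than a local ceph.conf; the service name is a placeholder:)
$ ceph config get osd cluster_network     # see what is currently configured
$ ceph config rm global cluster_network   # remove it so OSDs fall back to the public network
# then restart OSDs host by host and let peering settle, e.g. under cephadm:
$ ceph orch restart osd.<your_osd_service>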
On Sun, Aug 28, 2022 at 12:19 PM Vladimir Brik wrote:
>
> Hello
>
> Is there a way to query or get an approximate value of an
> MDS's cache hit ratio without using the "dump loads" command
> (which seems to be a relatively expensive operation) for
> monitoring and such?
Unfortunately, I'm not seeing o
We partly rolled our own with AES-GCM. See
https://docs.ceph.com/en/quincy/rados/configuration/msgr2/#connection-modes
and https://docs.ceph.com/en/quincy/dev/msgr2/#frame-format
-Greg
On Wed, Aug 24, 2022 at 4:50 PM Jinhao Hu wrote:
>
> Hi,
>
> I have a question about the MSGR protocol Ceph used
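(For reference, the connection modes from the first link above are selected with the ms_*_mode options; a minimal sketch of pinning everything to the AES-GCM "secure" mode. Whether to keep "crc" as a fallback is a policy decision, not something this thread prescribes:)
$ ceph config set global ms_cluster_mode secure        # daemon-to-daemon traffic
$ ceph config set global ms_service_mode secure        # what daemons will accept from clients
$ ceph config set global ms_client_mode secure         # what clients will use when connecting
$ ceph config set global ms_mon_cluster_mode secure    # and the mon-specific equivalents
$ ceph config set global ms_mon_service_mode secure
$ ceph config set global ms_mon_client_mode secure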
Folks,
I have created a new lab using cephadm and installed a new 1 TB spinning
disk, which I am trying to add to the cluster, but somehow Ceph is not
detecting it.
$ parted /dev/sda print
Model: ATA WDC WD10EZEX-00B (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table
It is detecting the disk, but it contains a partition table so it
can’t use it. Wipe the disk properly first.
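(With cephadm, the usual way to do that is the orchestrator's zap; the hostname is assumed from the prompt earlier in the thread, and all of these are destructive, so make sure /dev/sda really is the new disk:)
$ ceph orch device zap ceph1 /dev/sda --force
# or directly on the host:
$ wipefs --all /dev/sda
$ sgdisk --zap-all /dev/sda
$ ceph orch device ls --refresh    # the disk should now show up as available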
Quoting Satish Patel:
Folks,
I have created a new lab using cephadm and installed a new 1 TB spinning
disk, which I am trying to add to the cluster, but somehow Ceph is not
detecting it.