After disabling the telemetry mgr module, it resumed and went through.
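For reference, the commands involved are roughly the following (a sketch, assuming it was the orchestrator-driven upgrade that was stuck):
```
# disable the telemetry mgr module that was blocking progress
ceph mgr module disable telemetry

# check whether the upgrade picks up again, and resume it if it was paused
ceph orch upgrade status
ceph orch upgrade resume
```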
Yours,
bbk
Ah, ok, it was not clear to me that skipping a minor version when doing a major
upgrade was supported.
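In other words, the upgrade can target the desired release directly. A sketch of how that is typically started with the orchestrator (the 17.2.5 target is taken from later in this thread):
```
ceph orch upgrade start --ceph-version 17.2.5
# or, pinning an explicit image:
ceph orch upgrade start --image quay.io/ceph/ceph:v17.2.5
```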
The unit file tells me:
```
# cat /var/lib/ceph/6d0ecf22-9155-4684-971a-2f6cde8628c8/mgr.pamir.ajvbug/unit.run
set -e
/usr/bin/install -d -m0770 -o 167 -g 167 /var/run/ceph/6d0ecf22-9155-4684-971a-2f6cde8628c8
# mgr.pamir.ajvbug
! /usr/bin/podman rm -f ceph-6d0ecf22-9155-4684-971a-2f6cde8628c8-
```
Sorry, my replies from the web interface didn't go through yet.
Thank you both! With your help I was able to get it up and running on 17.2.5.
Yours,
bbk
On Tue, 2023-03-14 at 09:44 -0400, Adam King wrote:
> That's very odd, I haven't seen this before. What container image is t
still running... somehow. I installed the latest 17.2.X cephadm on all nodes and rebooted the nodes, but this didn't help.
Does anyone have a hint?
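One way to see which image each daemon is actually running (a sketch; the daemon names and fields are just examples):
```
# orchestrator view of every daemon and the image it runs
ceph orch ps

# on the node itself, list the containers podman actually has running
podman ps --format '{{.Names}} {{.Image}}'
```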
Yours,
bbk
Yours,
bbk
On Wed, 2022-02-02 at 12:42 +0100, Robert Sander wrote:
> On 02.02.22 12:15, Manuel Holtgrewe wrote:
> >
> > Would this also work when renaming hosts at the same time?
> >
> > - remove host from ceph orch
> >
```
XXX\n"
}
```
Deploy the OSD daemon:
```
cephadm deploy --fsid 6d0ecf22-9155-4684-971a-2f6cde8628c8 --osd-fsid [ID] --name osd.[ID] --config-json osd.[ID].json
```
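The truncated osd.[ID].json above appears to be the file passed via --config-json. A minimal sketch of its shape, with the keyring redacted and placeholder values throughout:
```
# osd.[ID].json -- hypothetical example, not the original file
{
    "config": "<contents of the cluster ceph.conf>",
    "keyring": "[osd.[ID]]\n\tkey = XXX\n"
}
```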
Yours,
bbk
On Thu, 2021-12-09 at 18:35 +0100, bbk wrote:
> After reading my mail it may not be clear that I reinstalled the OS of a
After reading my mail it may not be clear that I reinstalled the OS of a node with OSDs.
On Thu, 2021-12-09 at 18:10 +0100, bbk wrote:
> Hi,
>
> the last time I reinstalled a node with OSDs, I added the disks
> with the following command. But unfortunately this time I ran in
```
e8628c8
Using recent ceph image quay.io/ceph/ceph@sha256:2f7f0af8663e73a422f797de605e769ae44eb0297f2a79324739404cc1765728
[ceph: root@hobro /]#
```
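That prompt looks like the output of entering a cephadm shell on the host; a sketch of the invocation, with the fsid taken from earlier in the thread:
```
cephadm shell --fsid 6d0ecf22-9155-4684-971a-2f6cde8628c8
```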
Yours,
bbk
up. I don't think this is related.
Yours,
bbk
start ceph-$CLUSTERID@osd.$ID
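For context, cephadm names its per-daemon systemd units ceph-<fsid>@<type>.<id>, so the full command here is presumably along these lines (a sketch, not the exact line from the original mail):
```
systemctl status ceph-$CLUSTERID@osd.$ID
systemctl start ceph-$CLUSTERID@osd.$ID
```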
Yours,
bbk
```
...
INFO:cephadm:Chowning /var/lib/ceph/6d0ecf22-9155-4684-971a-2f6cde8628c8/osd.0/block...
INFO:cephadm:Disabling host unit ceph-volume@ lvm unit...
INFO:cephadm:Moving logs...
INFO:cephadm:Creating new units...
```
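This output matches what cephadm prints while adopting a legacy daemon into containers; a sketch of the command that produces it (the OSD id osd.0 is taken from the log above):
```
cephadm adopt --style legacy --name osd.0
```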
Yours,
bbk
```
m", line 2916, in command_adopt
    command_adopt_ceph(daemon_type, daemon_id, fsid);
  File "/usr/sbin/cephadm", line 2979, in command_adopt_ceph
    os.rmdir(data_dir_src)
OSError: [Errno 39] Directory not empty: '//var/lib/ceph/osd/ceph-0'
```
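A possible way past this, as a sketch only: the adopt step has already copied the data at this point, and rmdir fails merely because something is left behind in the legacy directory, so one could inspect what remains and move it aside (directory name taken from the error above):
```
# see what cephadm left behind in the legacy data dir
ls -la /var/lib/ceph/osd/ceph-0/

# confirm the daemon was actually adopted before touching anything
cephadm ls | grep osd.0

# if only stale leftovers remain, move the legacy dir out of the way
mv /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0.old
```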
Yours,
bbk