That is correct, just omit the wal_devices and the WALs will be placed on
the db_devices automatically.
Quoting "Fox, Kevin M":
I haven't done it, but had to read through the documentation a couple months
ago and what I gathered was:
1. if you have a db device specified but no wal device, it will put the wal on
the same volume as the db (see the spec sketch below).
2. the recommendation seems to be to not have a separate volume for db and wal
if only ...
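To illustrate point 1, here is a minimal OSD service spec sketch (the
service_id, host pattern, and device filters are placeholders, not taken from
the thread) that sets db_devices but omits wal_devices, so the WAL is
colocated with the DB:

  service_type: osd
  service_id: osd_example          # hypothetical spec name
  placement:
    host_pattern: '*'              # placeholder: match all hosts
  spec:
    data_devices:
      rotational: 1                # spinning disks hold the data
    db_devices:
      rotational: 0                # SSDs hold the DB; the WAL goes here too
    # wal_devices omitted on purpose: the WAL lands on the db_devices

Applied with something like "ceph orch apply -i osd_spec.yml".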
> >>> of the images but it clearly must be set somewhere or we wouldn't be
> >>> trying to pull that repeatedly. Never seen an issue like this before.
> >>> This is a total long shot, but you could try setting "ceph config se
- Original Message -
> From: "Adam King"
> Cc: "ceph-users"
> Sent: Tuesday, 1 February, 2022 18:12:16
> Subject: [ceph-users] Re: cephadm trouble
> Hi!
> YES! HERE IT IS!
>
> global basic container_image
> quay.io/ceph/ceph@sha256:2f7f0af8663e73a422f797de605e769ae44eb02
Questions:
1. How did it get there?
2. How to delete it? As far as I understand, this field is not editable.
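On question 2: a config option set at the "global" level can normally be
removed with "ceph config rm" (a sketch, assuming the key is exactly the one
shown in the dump above):

  # remove the stray global container_image entry
  ceph config rm global container_image

  # check which image the mgr resolves afterwards
  ceph config get mgr container_image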
- Original Message -
> From: "Adam King"
> To: "Fyodor Ustinov"
> Cc: "ceph-users"
> Sent: Tuesday, 1 February, 2022 17:45:13
> Subject: [ceph-users] Re: cephadm trouble
Hi!
No more ideas? :(
- Original Message -
> From: "Fyodor Ustinov"
> To: "Adam King"
> Cc: "ceph-users"
> Sent: Friday, 28 January, 2022 23:02:26
> Subject: [ceph-users] Re: cephadm trouble
> Hi!
Hmm, I'm not seeing anything that could be a cause in any of that output. I
did notice, however, from your "ceph orch ls" output that none of your
services have been refreshed since the 24th. Cephadm typically tries to
refresh these things every 10 minutes so that signals something is quite
wrong.
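If it helps while debugging, the refresh can also be nudged by hand and the
cephadm log checked for whatever is blocking the loop (a sketch, assuming a
reasonably current release):

  # force an immediate re-scan instead of waiting for the 10-minute cycle
  ceph orch ps --refresh
  ceph orch ls --refresh

  # show recent messages from the cephadm module
  ceph log last cephadm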
Hi!
I think this happened after I tried to recreate the osd with the command "ceph
orch daemon add osd s-8-2-1:/dev/bcache0"
It looks like cephadm believes "s-8-2-1:/dev/bcache0" is a container image
for some daemon. Can you provide the output of "ceph orch ls --format
yaml", "ceph orch upgrade status", "ceph config get mgr container_image",
and the values for monitoring stack container images (format is "ceph
config get
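For reference, the monitoring stack images cephadm uses normally live in mgr
config keys along these lines (key names assumed from current cephadm
defaults, not quoted from the thread):

  ceph config get mgr mgr/cephadm/container_image_prometheus
  ceph config get mgr mgr/cephadm/container_image_grafana
  ceph config get mgr mgr/cephadm/container_image_alertmanager
  ceph config get mgr mgr/cephadm/container_image_node_exporter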