Hi Adam,
Big thanks for the responses and for clarifying the global usage of the --image
parameter. Even though I gave --image during bootstrap, only the mgr and mon
daemons on the bootstrap host are getting created with that image and the
rest of the daemons are created from the daemon-base image, as I mention
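For reference, a minimal sketch of pinning the image cluster-wide after
bootstrap (assuming the global container_image option is honoured by your
release; the tag below is only a placeholder):

# ceph config set global container_image quay.io/ceph/ceph:v16.2.7
# ceph orch ps --format yaml | grep -i image

The second command is just to verify which image each daemon actually runs.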
This probably doesn’t solve your overall immediate problem, but these PRs that
should be in Quincy enable Lua scripting to override any user-supplied storage
class on upload. This is useful in contexts where user / client behavior is
difficult to enforce but the operator wishes to direct object
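Not an authoritative recipe, but roughly how the hook is meant to be used
once those PRs are in, assuming the field names from the RGW Lua docs
(Request.RGWOp, Request.HTTP.StorageClass) and a placeholder class "COLD":

# cat > storageclass.lua <<'EOF'
-- override whatever storage class the client asked for on object uploads
if Request.RGWOp == "put_obj" then
  Request.HTTP.StorageClass = "COLD"
end
EOF
# radosgw-admin script put --infile=./storageclass.lua --context=preRequest

The script is uploaded into the preRequest context, so it runs before the
operation is executed.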
Hi Frederic
For your point 3, the default_storage_class from the user info is apparently
ignored.
Setting it on Nautilus 14.2.15 had no impact and objects were still stored with
STANDARD.
Another issue is that some clients, like s3cmd, explicitly use STANDARD by
default.
And even afte
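Being explicit on the client side is one hedged workaround; with s3cmd that
would look roughly like this (bucket and class names are placeholders):

# s3cmd put --storage-class=COLD ./backup.tar s3://mybucket/

Of course that only helps where you control the client configuration.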
Hey ceph-users,
I am debugging a mgr pg_autoscaler WARN which states a target_size_bytes
on a pool would overcommit the available storage.
There is only one pool with a value for target_size_bytes (=5T) defined,
and that apparently would consume more than the available storage:
--- cut ---
# c
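For context, the value was set and is being checked roughly like this (pool
name is a placeholder; target_size_bytes takes a suffixed size like 5T as in
the autoscaler docs):

# ceph osd pool set mypool target_size_bytes 5T
# ceph osd pool autoscale-status
# ceph df

If I read the autoscale-status columns right, TARGET SIZE next to SIZE is
where the overcommit warning should come from.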
Hello Everyone,
We are encountering an issue where the OS hanging on an OSD host causes the
cluster to stop ingesting data.
Below are the Ceph cluster details:
Ceph Object Storage v14.2.22
No. of monitor nodes: 5
No. of RGW nodes: 5
No. of OSDs: 252 (all NVMe)
OS: CentOS 7.9
kernel: 3.10.0-1160.45.1.el7
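In case it is relevant, this is roughly how the heartbeat / mark-down
settings can be checked and how a hung OSD can be marked down by hand (the
osd id is a placeholder):

# ceph config get osd osd_heartbeat_grace
# ceph config get mon mon_osd_down_out_interval
# ceph osd down 42

The last command only marks the OSD down in the map; it does not stop the
daemon, and a still-responsive OSD will mark itself up again.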
Hi,
I've found a solution for getting rid of the stale pg_temp. I've scaled
the pool up to 128 PGs (thus "covering" the pg_temp). Afterwards the
remapped PG was gone. I'm currently scaling back down to 32; no extra PG
(either regular or temp) so far.
The pool is almost empty, so playing ar
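For completeness, the scaling itself was nothing more than the usual pg_num
changes (pool name is a placeholder; on Nautilus pgp_num follows along
automatically):

# ceph osd pool set mypool pg_num 128
  ... wait until the bogus remapped PG / pg_temp entry is gone ...
# ceph osd pool set mypool pg_num 32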
Hi,
On 2/2/22 14:39, Konstantin Shalygin wrote:
Hi,
The cluster is Nautilus 14.2.22
For a long time we have had a bogus "1 remapped PG" without any actual 'remapped' PGs
# ceph pg dump pgs_brief | awk '{print $2}' | grep active | sort | uniq -c
dumped pgs_brief
15402 active+clean
6 active+cle
Hi,
The cluster is Nautilus 14.2.22
For a long time we have had a bogus "1 remapped PG" without any actual 'remapped' PGs
# ceph pg dump pgs_brief | awk '{print $2}' | grep active | sort | uniq -c
dumped pgs_brief
15402 active+clean
6 active+clean+scrubbing
# ceph osd dump | grep pg_temp
pg_temp
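A hedged way to look at the stale entry more directly than grepping the
plain dump (assuming jq is available; $2 is the state column of pgs_brief,
as in the awk above):

# ceph osd dump --format json | jq '.pg_temp'
# ceph pg dump pgs_brief 2>/dev/null | awk '$2 ~ /remapped/' | wc -l

The first shows the leftover pg_temp mapping itself, the second confirms
that no PG is actually in a remapped state.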
Well, looks like not many people have tried this.
And to me it looks like a bug/omission in "ceph orch apply rgw".
After digging through the setup I figured out that the unit.run file for the
new rgw.zone21 process/container doesn't get the --rgw-zonegroup (or
--rgw-region) parameter for radosgw
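As a stopgap (not a fix for the orchestrator), setting the defaults on the
period should let radosgw come up in the right zone/zonegroup even without
the command-line flags; a sketch, where zg21 is a placeholder zonegroup name
and rgw.zone21 is the service name from above:

# radosgw-admin zonegroup default --rgw-zonegroup=zg21
# radosgw-admin zone default --rgw-zone=zone21
# radosgw-admin period update --commit
# ceph orch restart rgw.zone21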
On 02.02.22 12:15, Manuel Holtgrewe wrote:
Would this also work when renaming hosts at the same time?
- remove host from ceph orch
- reinstall host with different name/IP
- add back host into ceph orch
- use ceph osd activate
as above?
That could also work as long as the OSDs are still in th
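Spelled out with placeholder names and IP, and assuming you mean the
cephadm variant of activation (ceph cephadm osd activate), that would be
roughly:

# ceph orch host rm oldhost
  ... reinstall the machine under the new name/IP and add the cephadm key ...
# ceph orch host add newhost 192.0.2.23
# ceph cephadm osd activate newhost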
Thank you for the information. I will try this.
Would this also work when renaming hosts at the same time?
- remove host from ceph orch
- reinstall host with different name/IP
- add back host into ceph orch
- use ceph osd activate
as above?
On Mon, Jan 31, 2022 at 10:44 AM Robert Sander wrote: