[ceph-users] Quincy: Stuck on image permissions

2023-02-12 Thread hicks
Hello guys, could someone help me with this? We've been long-time CEPH users... running several Mimic + Pacific CEPH clusters. Dozens of disks per cluster, typically. BUT... now I have this brand new Quincy cluster and I'm not able to give CLIENT (Quincy on Rocky 8) rw access to ONE IMAGE on Quin

[ceph-users] Re: Ceph Quincy On Rocky 8.x - Upgrade To Rocky 9.1

2023-02-12 Thread hicks
We're running a Quincy cluster on Rocky 9... it's in podman, and you can also install ceph-common (the Quincy version) from packages.
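
A minimal sketch of getting the Quincy ceph-common onto Rocky 9 from the upstream repo (repo URL and key are the standard download.ceph.com locations; adjust if you prefer a distro-provided repo):

  # /etc/yum.repos.d/ceph.repo
  [ceph-quincy]
  name=Ceph Quincy (el9)
  baseurl=https://download.ceph.com/rpm-quincy/el9/x86_64/
  enabled=1
  gpgcheck=1
  gpgkey=https://download.ceph.com/keys/release.asc

  dnf install -y epel-release   # upstream recommends EPEL on EL for dependencies
  dnf install -y ceph-common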

[ceph-users] Re: Extremely need help. Openshift cluster is down :c

2023-02-12 Thread kreept . sama
Hello Eugen, yes I have. It's from object a ... debug 2023-02-12T07:12:55.469+ 7f66af51e700 1 mds.gml-okd-cephfs-a asok_command: status {prefix=status} (starting...) debug 2023-02-12T07:13:05.453+ 7f66af51e700 1 mds.gml-okd-cephfs-a asok_command: status {prefix=status} (starting...) debug
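
Those asok_command lines are just the MDS answering periodic "status" queries (likely liveness probes), not errors by themselves. Assuming a standard Rook/ODF setup with the rook-ceph-tools deployment (the namespace here is a guess, adjust to yours), the overall filesystem state is quicker to read from:

  oc -n openshift-storage rsh deploy/rook-ceph-tools
  ceph -s
  ceph fs status
  ceph health detail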

[ceph-users] Re: Quincy: Stuck on image permissions

2023-02-12 Thread Jakub Chromy
Hello, looks like I've found it -- THE NAMESPACES :) I love it. Thanks! On 11/02/2023 21:37, hi...@cgi.cz wrote: Hello guys, could someone help me with this? We've been long-time CEPH users... running several Mimic + Pacific CEPH clusters. Dozens of disks per cluster, typically. BUT... now I
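
For anyone landing here with the same problem, a rough sketch of scoping a client to an RBD namespace on Quincy (pool "rbd", namespace "tenant1" and client "tenant1" are made-up names):

  rbd namespace create --pool rbd --namespace tenant1
  rbd create --pool rbd --namespace tenant1 --size 10G disk1
  ceph auth get-or-create client.tenant1 \
      mon 'profile rbd' \
      osd 'profile rbd pool=rbd namespace=tenant1'
  # verify from the client
  rbd --id tenant1 --pool rbd --namespace tenant1 ls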

[ceph-users] Subject: OSDs added, remapped pgs and objects misplaced cycling up and down

2023-02-12 Thread Chris Dunlop
Hi, ceph-16.2.9 I've added some new osds - some added to existing hosts and some on newly-commissioned hosts. The new osds were added to the data side of an existing EC 8+3 pool. I've been waiting for the system to finish remapping / backfilling for some time. Originally the number of remap
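
For watching this kind of backfill, a few read-only checks (nothing here changes cluster state):

  ceph -s                    # recovery / misplaced object summary
  ceph pg stat               # one-line count of pg states
  ceph osd pool ls detail    # per-pool pg_num vs pg_num_target
  ceph balancer status       # whether the balancer is also moving pgs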

[ceph-users] recovery for node disaster

2023-02-12 Thread farhad kh
I have a cluster of three nodes, with three replicas per pool on cluster nodes:

  HOST             ADDR             LABELS      STATUS
  apcepfpspsp0101  192.168.114.157  _admin mon
  apcepfpspsp0103  192.168.114.158  mon _admin
  apcepfpspsp0105  192.168.114.159  mon _admin
  3 hosts in cluster
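
With size=3 pools and the usual host failure domain, losing one of the three hosts should leave the data available (degraded) as long as min_size (typically 2) is still met. Assuming a default cephadm/CRUSH layout, the relevant settings can be confirmed with:

  ceph osd tree              # one OSD subtree per host
  ceph osd pool ls detail    # size / min_size / crush_rule per pool
  ceph osd crush rule dump   # failure domain of each rule ("type": "host")
  ceph status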

[ceph-users] Re: Subject: OSDs added, remapped pgs and objects misplaced cycling up and down

2023-02-12 Thread Alexandre Marangone
This could be the pg autoscaler since you added new OSDs. You can run ceph osd pool ls detail and check the pg_num and pg_target numbers iirc to confirm On Sun, Feb 12, 2023 at 20:24 Chris Dunlop wrote: > Hi, > > ceph-16.2.9 > > I've added some new osds - some added to existing hosts and some on
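
If it is the autoscaler, the field to watch is pg_num_target in the pool listing; a quick way to confirm and, if needed, pause it while backfill settles (<pool> is a placeholder):

  ceph osd pool ls detail | grep pg_num
  ceph osd pool autoscale-status
  ceph osd pool set <pool> pg_autoscale_mode off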

[ceph-users] Re: Subject: OSDs added, remapped pgs and objects misplaced cycling up and down

2023-02-12 Thread Chris Dunlop
On Sun, Feb 12, 2023 at 20:24 Chris Dunlop wrote: Is this "sawtooth" pattern of remapped pgs and misplaced objects a normal consequence of adding OSDs? On Sun, Feb 12, 2023 at 10:02:46PM -0800, Alexandre Marangone wrote: This could be the pg autoscaler since you added new OSDs. You can run ce