Hi Eugen and Anthony,
Thanks for your input, it's much appreciated.
I had not spotted the rados purge command so I'll file that one away
for the future.
I agree that in this case deleting the pool seems like the best option,
and I've done a test on our dev cluster with a pool of 2.5 TB and a few
hundre
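For reference, a minimal sketch of the pool-deletion step itself
("mypool" is a placeholder name; pool deletion is disabled by default,
hence the temporary mon setting):

  ceph config set mon mon_allow_pool_delete true
  ceph osd pool rm mypool mypool --yes-i-really-really-mean-it
  ceph config set mon mon_allow_pool_delete false   # re-enable the guard

The pool name has to be given twice plus the confirmation flag.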
> Once stopped:
>
> ceph osd crush remove osd.2
> ceph auth del osd.2
> ceph osd rm osd.2
While I can't help you with the is-it-gone-or-not part of your
journey, the three commands above are correct, but they can also be
done in a single step with "ceph osd purge osd.2". So just adding this
in case anyone else i
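For anyone searching the archives later: with the OSD already stopped,
the one-step equivalent is roughly

  ceph osd purge osd.2 --yes-i-really-mean-it

which removes the OSD from the CRUSH map, deletes its auth key and
removes the OSD entry, i.e. the three commands quoted above.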
Hi,
can you show the output of 'ceph orch ls osd --export'? I would look
in the cephadm.log and ceph-volume.log on that node as well as in the
active mgr log. If you already have an osd service that would pick up
this osd, and you zapped it, its creation might have been interrupted.
If yo
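A hedged example of where to look, assuming default cephadm log
locations (these can differ per release and host; <fsid> is just a
placeholder for the cluster fsid directory):

  ceph orch ls osd --export        # OSD service specs the orchestrator knows about
  ceph log last cephadm            # recent cephadm channel entries from the active mgr
  less /var/log/ceph/cephadm.log   # on the affected node
  less /var/log/ceph/<fsid>/ceph-volume.log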
Hi Rich,
I agree with the general advice. From what I recall, removing a pool as a whole
will trigger less load on a cluster than removing all objects in that pool.
Also, make sure you know about osd_delete_sleep [1]. It could help you regulate
the PG deletion process.
Regards,
Frédéric.
[1]
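A minimal sketch of adjusting that option at runtime; the values are
purely illustrative, not recommendations:

  ceph config set osd osd_delete_sleep 2       # seconds to sleep between removal transactions
  ceph config set osd osd_delete_sleep_hdd 5   # device-class specific variant

There are also _ssd and _hybrid variants of the option.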
I am trying to use the RGW Cloud-Sync module to replicate data from an on-prem
RGW cluster to AWS. This is a multi-tenant cluster with many tenants each who
have multiple buckets. I only want to replicate a single bucket from a single
tenant and am having problems creating a tier config that doe
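Not an answer, but for context, a rough sketch of the kind of tier
config involved. The zone name, endpoint and especially the per-bucket
profile / tenant notation below are assumptions and may well be part of
exactly what needs figuring out here:

  radosgw-admin zone modify --rgw-zone=cloud-sync \
    --tier-config=connection.endpoint=https://s3.amazonaws.com,connection.access_key=<access>,connection.secret=<secret>,target_path=<target-path>
  radosgw-admin zone modify --rgw-zone=cloud-sync \
    --tier-config=profiles[0].source_bucket=<bucket>
  radosgw-admin period update --commit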
I will provide you any info you need, just gimme a sign.
My original post was about 19.2.0. I have since downgraded to 18.2.4 (a full
reinstall, as this is a completely new cluster I want to run) and it's the
same story:
Mar 06 09:37:41 node1.ec.mts conmon[10588]: failed to collect metrics:
Mar 06 09:37:41 nod
A bit more detail: I've now noticed that ceph health detail reports
[WRN] CEPHADM_APPLY_SPEC_FAIL: Failed to apply 1 service(s):
osd.node1.ec.all_disks
osd.node1.ec.all_disks: Expecting value: line 1 column 2311 (char 2310)
Okay, I checked my spec but I don't see anything suspicious in it.
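For comparison, a known-good minimal OSD spec; the service_id, host
name and device selection below are placeholders. Parse errors like
"Expecting value" can point at a stray character or quoting problem
somewhere in the spec that ends up as invalid JSON:

  service_type: osd
  service_id: all_disks
  placement:
    hosts:
      - node1.ec.mts
  spec:
    data_devices:
      all: true

If the spec is saved to a file (e.g. osd_spec.yaml), "ceph orch apply
-i osd_spec.yaml --dry-run" can be used to preview what cephadm would
do with it before applying.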
From the stack-trace it seems Grafana certificates are broken.
Maybe the recommendations from the thread can help:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/RX7BREBAQBWFVZZ6ADXC33PZNNT5IY5H/
Best,
Redo.
On Tue, Mar 4, 2025 at 1:20 PM Laimis Juzeliūnas <
laimis.juzeliu...@ox
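If it does turn out to be the Grafana certificate, a rough sketch of
replacing it under cephadm, assuming you have a cert/key pair at hand;
the config-key paths are the ones cephadm has used for Grafana but may
differ between releases:

  ceph config-key set mgr/cephadm/grafana_crt -i grafana.crt
  ceph config-key set mgr/cephadm/grafana_key -i grafana.key
  ceph orch reconfig grafana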
Hi Janne,
That's it.
Once the rebalancing finished, the space looked as expected.
Thank you.
From: "Janne Johansson"
Sent: 2025/02/27 16:52:20
To: quag...@bol.com.br
Cc: ceph-users@ceph.io
Subject: [ceph-users] Re: Free space
On Thu, 27 Feb 2025 at 18:48, quag...@bol.com.b
Hi Rich,
I waited for other users/operators to chime in because it's been a
while since we deleted a large pool last time in a customer cluster. I
may misremember, so please take that with a grain of salt. But the
pool deletion I am referring to was actually on Nautilus as well. In a
smal
I couldn't see the zone entry in the 'zonegroup get' response.
Updating the period and committing it separately worked fine.
Thank you so much!
Best regards,
Mahnoosh
On Wed, 5 Mar 2025, 19:35 Shilpa Manjrabad Jagannath,
wrote:
> On Wed, Mar 5, 2025 at 9:34 AM Adam Prycki wrote:
>
> > Hi,
> >
>
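I.e., roughly:

  radosgw-admin period update
  radosgw-admin period commit

instead of a single "radosgw-admin period update --commit".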
I did. It says more or less the same
Mar 06 10:44:05 node1.ec.mts conmon[10588]: 2025-03-06T10:44:05.769+
7faca5624640 -1 log_channel(cephadm) log [ERR] : Failed to apply
osd.node1.ec.mts_all_disks spec
DriveGroupSpec.from_json(yaml.safe_load('''service_type: osd
Mar 06 10:44:05 node1.ec.m
Thanks for the help, buddy! I really appreciate it! I'll try to wait,
maybe someone else will jump in.
On 05/03/2025 20:45, Frédéric Nass wrote:
Hi Florian,
Point 1 is certainly a bug regarding the choice of terms in the response
(confusion between file and directory).
Well... no, I don't think so. Rather, I'd guess it's simply a result of
setfattr returning ENOTEMPTY (errno 39), which the sh
On 06/03/2025 09:46, Florian Haas wrote:
On 05/03/2025 20:45, Frédéric Nass wrote:
Hi Florian,
Point 1 is certainly a bug regarding the choice of terms in the response
(confusion between file and directory).
Well... no, I don't think so. Rather, I'd guess it's simply a result of
setfattr retu
If using the autoscaler there may be some knock-on PG splitting, but that
should be throttled automatically. Do be sure that your mon DBs are on SSDs.
> On Mar 6, 2025, at 7:26 AM, Eugen Block wrote:
>
> Hi Rich,
>
> I waited for other users/operators to chime in because it's been a while
>
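To see what the autoscaler intends to do, something like the following
can help; both commands are standard, the pool name is a placeholder:

  ceph osd pool autoscale-status
  ceph osd pool set <pool> pg_autoscale_mode off   # optionally pin a pool's PG count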
Alright, now `ceph orch device ls` shows the disk as locked:
HOST       PATH       TYPE  DEVICE ID                              SIZE   AVAILABLE  REFRESHED  REJECT REASONS
node-osd1  /dev/sdah  hdd   LENOVO_ST18000NM004J_ZR5F5TH0W413B814  18.0T  No         10m ago    locked
I also noti