Hi,
On 3/6/25 at 23:04, Gustavo Garcia Rondina wrote:
ceph osd out osd.2
ceph osd ok-to-stop osd.2
Once Ok to stop, then:
ceph orch daemon stop osd.2
Once stopped:
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd rm osd.2
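To verify the removal afterwards, a couple of quick checks (nothing cluster-specific assumed beyond the osd.2 ID used above):

ceph osd tree | grep osd.2   # should return nothing once the OSD is gone
ceph status                  # confirm the cluster settles back to HEALTH_OK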
As this is a cluster
Hi, when trying to deploy a cluster with cephadm version 19.2.1 and Docker
version 28.0.1 I get this error:
---
# cephadm --image opkbhfpsksp0101.p.fnst/ceph/ceph:v19.2.1 bootstrap \
    --mon-ip 10.248.35.143 --registry-json /root/reg.json \
    --allow-fqdn-hostname --initial-dashboard-user admin
-
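Since the error text is cut off above, one generic sanity check (just a sketch; the registry path comes from the bootstrap command and the credentials from /root/reg.json) is to confirm the host can log in to and pull from the private registry before bootstrapping:

docker login opkbhfpsksp0101.p.fnst          # use the credentials from /root/reg.json
docker pull opkbhfpsksp0101.p.fnst/ceph/ceph:v19.2.1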
I don't have a good idea right now, I literally took the same spec
file as yours and it works fine for me in a tiny lab cluster. Maybe
someone else has a good idea.
Quoting Alex from North:
I did. It says more or less the same
Mar 06 10:44:05 node1.ec.mts conmon[10588]:
2025-03-06T10
Sorry, I forgot to help you with the troubleshooting.
Usually, when slow requests appear in the Ceph status output, running 'ceph
tell osd.X dump_historic_ops' allows you to see which I/O operations took a
long time to execute. The value of X is reported by the Ceph status. Does 'ceph
health d
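As an illustration, a minimal sketch (osd.28 is just an example ID taken from later in this thread; substitute the OSD flagged in your own status output):

ceph health detail                    # lists which OSDs are affected
ceph tell osd.28 dump_historic_ops    # recent slowest ops with per-step timestamps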
Ok, thank you. I'll try to understand if there's something I can do
about it.
Nicola
Hi Nicola,
from my experience, "stalled" reads often indicate low-level/hardware
issues - you might want to check the kernel log on the relevant nodes;
most likely some relevant errors have been reported there.
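For example, a rough sketch (the device name /dev/sdX is a placeholder to adapt):

journalctl -k | grep -iE 'error|timeout|reset'   # kernel messages since boot
dmesg -T | grep -i sd                            # per-disk messages with readable timestamps
smartctl -a /dev/sdX                             # SMART health of the suspect DB device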
Thanks,
Igor
On 07.03.2025 19:05, Nicola Mori wrote:
Dear Ceph users,
after upgrading from 1
Dear Ceph users,
after upgrading from 19.2.0 to 19.2.1 (via cephadm) my cluster started
showing some warnings never seen before:
29 OSD(s) experiencing slow operations in BlueStore
13 OSD(s) experiencing stalled read in db device of BlueFS
I searched for these messages but didn't fi
On 3/7/25 at 15:40, Gustavo Garcia Rondina wrote:
# ceph orch ls osd --export
service_type: osd
service_id: osd_spec
service_name: osd.osd_spec
placement:
  host_pattern: node-osd5
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  filter_logic: AND
  objectstore:
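As an aside, to preview what the orchestrator would do with such a spec before applying it, a dry run is possible (a sketch; spec.yml here is just a local copy of the export above):

ceph orch apply -i spec.yml --dry-run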
Hi Eugen,
>From: Eugen Block
>Sent: Friday, March 7, 2025 1:21 AM
>
>
>can you show the output of 'ceph orch ls osd --export'?
# ceph orch ls osd --export
service_type: osd
service_id: osd_spec
service_name: osd.osd_spec
placement:
  host_pattern: node-osd5
spec:
  data_devices:
    rotational:
>From: Robert Sander
>Sent: Friday, March 7, 2025 9:11 AM
>
>> # ceph orch ls osd --export
>> service_type: osd
>> service_id: osd_spec
>> service_name: osd.osd_spec
>> placement:
>>   host_pattern: node-osd5
>> spec:
>>   data_devices:
>>     rotational: 1
>>   db_devices:
>>     rotational:
Hi Robert
>From: Robert Sander
>Sent: Friday, March 7, 2025 7:02 AM
>
>For the original issue: Do you have an active (managed) OSD service in
>the orchestrator?
In fact, I do - I hadn't noticed it before; the cluster was configured
by a contractor a while ago and my expertise in Ceph still leav
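For reference, a rough sketch of how to check for and pause such a managed service (osd.osd_spec is the service name from the export earlier in the thread; setting unmanaged: true in the spec is the documented way to stop automatic OSD deployment):

ceph orch ls osd                 # shows the OSD service and whether it is managed
# add "unmanaged: true" at the top level of the spec file, then re-apply it:
ceph orch apply -i osd_spec.yml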
>From: Janne Johansson
>Sent: Friday, March 7, 2025 1:10 AM
>
>While I can't help you with the is-it-gone-or-not part of your
>journey, the three commands above are correct, but they can also be done
>in a single step with "ceph osd purge osd.2". So I'm just adding this in
>case anyone else is doing those 3. Forget
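For the record, the single-step variant looks like this (osd.2 as in the example above; purge combines the crush remove, auth del and osd rm steps):

ceph osd purge osd.2 --yes-i-really-mean-it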
Dear Ceph Community,
We need guidance on the best practices for deployment. Specifically, we would
like to know:
What are the recommended methods for deploying Ceph in Kubernetes?
What are the best practices for a production deployment?
What network configurations are recommended for a stable
Understood, Frédéric. What should I look at to try to understand the
underlying reason for the warning and eventually fix it? Moreover, could
this be the reason why the listing of directory contents gets stalled?
Thank you,
Nicola
ceph health detail actually gives me the number of affected OSDs, e.g.:
[WRN] DB_DEVICE_STALLED_READ_ALERT: 13 OSD(s) experiencing stalled read
in db device of BlueFS
osd.28 observed stalled read indications in DB device
I tried to inspect the output of ceph tell osd.28 dump_historic_ops
On Fri, 7 Mar 2025 at 17:05, Nicola Mori wrote:
>
> Dear Ceph users,
>
> after upgrading from 19.2.0 to 19.2.1 (via cephadm) my cluster started
> showing some warnings never seen before:
>
> 29 OSD(s) experiencing slow operations in BlueStore
> 13 OSD(s) experiencing stalled read in db
Hi Nicola,
A quick look at the Ceph repository points to this commit [1] and this part [2]
of the documentation.
I doubt your cluster is actually performing worse than it did before the
update. It's just that now it's letting you know when performance isn't optimal
(when queries exceed the default
I think it's enough as the default threshold is 1s if I read correctly (on my
phone right now).
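If the warnings turn out to be mostly noise, the thresholds should be tunable; a sketch, assuming the option names introduced together with these alerts are bluestore_slow_ops_warn_threshold and bdev_stalled_read_warn_threshold (please verify against the docs referenced above):

ceph config get osd bluestore_slow_ops_warn_threshold
ceph config set osd bluestore_slow_ops_warn_threshold 5    # e.g. warn only above 5 slow ops
ceph config get osd bdev_stalled_read_warn_threshold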
Regards
Frédéric.
From: Nicola Mori
Sent: Friday, March 7, 2025 18:13
To: ceph-users
Subject: [ceph-users] Re: BlueStore and BlueFS warnings after upgrade to 19.2.1