Hi Özkan,
> ... The client is actually at idle mode and there is no reason to fail at
> all. ...
If you re-read my message, you will notice that I wrote that
it's not the client failing, it's a false-positive error flag that
is not cleared for idle clients.
You seem to be encountering exactly this.
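
A quick way to confirm that the flag is stale for an idle client is to
compare the raised health check against the live MDS session list. A
minimal sketch in Python, assuming the warning is one of the
MDS_CLIENT_* checks; "mds.0" is a placeholder for your MDS daemon name:

#!/usr/bin/env python3
import json
import subprocess

def ceph(*args):
    # Run a ceph CLI command and parse its JSON output.
    out = subprocess.check_output(["ceph", *args, "--format", "json"])
    return json.loads(out)

# Print any MDS client-related health checks currently raised.
health = ceph("health", "detail")
for check_id, check in health.get("checks", {}).items():
    if check_id.startswith("MDS_CLIENT"):
        print(check_id, "-", check["summary"]["message"])

# List live MDS sessions to see whether the flagged client is idle.
for s in ceph("tell", "mds.0", "session", "ls"):
    meta = s.get("client_metadata", {})
    print("client", s.get("id"), "on", meta.get("hostname", "?"))

If the check is still raised while the session shows no activity, that
matches the false-positive behaviour described above.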
Hi,
Through an idiotic mistake on my part while replacing some disks in our
data centre, our cluster ended up completely powered off!
I have been using ceph for many years (since firefly) but only recently
upgraded to reef and moved to the cephadm / podman setup. I am trying to
figure
Hello Carl,
What do you mean by powered off? Is the OS booted up and online? Did your disk
replacement involve the OS disks or the disks to which the OSDs are deployed?
If your OSes are online, all of the daemons should come back online automatically.
Sometimes when my OSDs are not coming online and assuming rest
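
If daemons do not come back after a full power-off, the first thing to
check under cephadm is whether the orchestrator and systemd agree. A
small sketch using standard commands (nothing here is specific to
Carl's cluster):

#!/usr/bin/env python3
import subprocess

# Cluster-wide view: which daemons the orchestrator thinks are running.
subprocess.run(["ceph", "orch", "ps"], check=True)

# Host-local view: the systemd units cephadm created on this node
# (named ceph-<fsid>@<daemon-type>.<id>).
subprocess.run(["systemctl", "list-units", "ceph*"], check=True)

# If units are down, starting ceph.target brings up all daemons on this
# host; uncomment once you have confirmed the OSD disks are back.
# subprocess.run(["systemctl", "start", "ceph.target"], check=True)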
Thank you, Frank.
My focus is actually on performance tuning.
After your mail, I started investigating the client side.
I think the kernel tunings work well now; after applying them,
I haven't seen the warning again.
Now I will continue with the performance tuning.
I decided to distribute subvolumes across mu
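
The thread does not say which kernel tunings were applied, so purely as
a hypothetical example, here is how one might verify the network buffer
sysctls that often come up in Ceph client tuning:

#!/usr/bin/env python3
import subprocess

# Hypothetical example set; the actual tunings from this thread are unknown.
SYSCTLS = [
    "net.core.rmem_max",
    "net.core.wmem_max",
    "net.ipv4.tcp_rmem",
    "net.ipv4.tcp_wmem",
]

for name in SYSCTLS:
    # 'sysctl -n' prints only the current value of the parameter.
    value = subprocess.check_output(["sysctl", "-n", name], text=True).strip()
    print(f"{name} = {value}")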
Hi! I'm new to ceph and I'm struggling to map my existing storage
knowledge onto ceph...
So, I will state my understanding of the context and the question;
please correct me on anything that I got wrong :)
So, files (or pieces of files) are put in PGs that are given sections
o
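
To make the object -> PG -> OSD mapping concrete, you can ask the
cluster directly where a given object would land. A minimal sketch;
"mypool" and "myobject" are placeholders:

#!/usr/bin/env python3
import subprocess

# Conceptually (simplified; Ceph actually uses the rjenkins hash and CRUSH):
#   pg   = hash(object_name) % pg_num   # object -> PG within the pool
#   osds = crush(pool_rules, pg)        # PG -> OSD set, computed, not stored
#
# 'ceph osd map' performs the real calculation and prints the PG id and
# the up/acting OSD set for the object, without creating it.
subprocess.run(["ceph", "osd", "map", "mypool", "myobject"], check=True)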
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io