[ceph-users] Re: No MDS No FS after update and restart - respectfully request help to rebuild FS and maps

2022-03-14 Thread GoZippy
Ran sudo systemctl status ceph\*.service ceph\*.target on all monitor nodes from the CLI. All showed: root@node7:~# sudo systemctl status ceph\*.service ceph\*.target ● ceph-mds.target - ceph target allowing to start/stop all ceph-mds@.service instances at once Loaded: loaded (/lib/systemd/syste
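A minimal sketch of that status check (the node and unit names are illustrative, not taken from the report):

    # Show the state of every Ceph unit and target on this node
    sudo systemctl status 'ceph*.service' 'ceph*.target'

    # Narrow it down to the MDS target and its instances
    sudo systemctl status ceph-mds.target
    systemctl list-units 'ceph-mds@*'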

[ceph-users] Ceph-CSI and OpenCAS

2022-03-14 Thread Martin Plochberger
Hello, ceph-users community I have watched the recording of "Ceph Performance Meeting 2022-03-03" (in the Ceph channel, link https://www.youtube.com/watch?v=syq_LTg25T4) about OpenCAS and block caching yesterday and it was really informative to me (I especially liked the part where the filtering o

[ceph-users] Re: Scrubbing

2022-03-14 Thread Ray Cunningham
Thank you Dennis! We have made most of these changes and are waiting to see what happens. Thank you, Ray -Original Message- From: Denis Polom Sent: Saturday, March 12, 2022 1:40 AM To: ceph-users@ceph.io Subject: [ceph-users] Re: Scrubbing Hi, I had a similar problem on my large clus
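The exact changes are cut off above; as an illustration only, scrub throttling on a large cluster usually means adjusting options along these lines (the values here are placeholders, not the settings from the thread):

    # Limit concurrent scrubs per OSD and throttle scrub I/O
    ceph config set osd osd_max_scrubs 1
    ceph config set osd osd_scrub_sleep 0.1

    # Confine scrubbing to an off-peak window
    ceph config set osd osd_scrub_begin_hour 22
    ceph config set osd osd_scrub_end_hour 6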

[ceph-users] Re: 3 OSDs can not be started after a server reboot - rocksdb Corruption

2022-03-14 Thread Sebastian Mazza
Hello Igor, I'm glad I could be of help. Thank you for your explanation! > And I was right this is related to deferred write procedure and apparently > fast shutdown mode. Does that mean I can prevent the error in the meantime, before you can fix the root cause, by disabling osd_fast_shutdow

[ceph-users] Re: Ceph-CSI and OpenCAS

2022-03-14 Thread Mark Nelson
Hi Martin, I believe RH's reference architecture team has deployed ceph with CAS (and perhaps open CAS when it was open sourced), but I'm not sure if there's been any integration work done yet with ceph-csi. Theoretically it should be fairly easy though since the OSD will just treat it as ge
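For background (not from the thread itself): OpenCAS exposes the cached volume as a plain block device, which is why the OSD side needs no special support. A rough sketch with the casadm CLI, using illustrative device names:

    # Start a write-back cache (instance 1) on a fast NVMe device
    sudo casadm -S -i 1 -d /dev/nvme0n1 -c wb

    # Attach the slow backing disk as a core; this exposes /dev/cas1-1
    sudo casadm -A -i 1 -d /dev/sdb

    # An OSD can then be deployed on /dev/cas1-1 like any other block device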

[ceph-users] Re: 3 OSDs can not be started after a server reboot - rocksdb Corruption

2022-03-14 Thread Igor Fedotov
Hi Sebastian, the proper parameter name is 'osd fast shutdown'. As with any other OSD config parameter, you can adjust it either in ceph.conf or with the 'ceph config set osd.N osd_fast_shutdown false' command; I'd recommend the latter form. And yeah, from my last experiments it looks like setting
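Spelled out, the two forms that message describes (osd.N below is a placeholder for a concrete OSD id):

    # Centralized config database (the recommended form above)
    ceph config set osd.3 osd_fast_shutdown false

    # Equivalent ceph.conf stanza
    [osd.3]
        osd fast shutdown = false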

[ceph-users] Re: 3 OSDs can not be started after a server reboot - rocksdb Corruption

2022-03-14 Thread Sebastian Mazza
Hi Igor, great that you were able to reproduce it! I did read your comments at issue #54547. Am I right that I probably have hundreds of corrupted objects on my EC pools (CephFS and RBD)? But I only ever noticed it when a RocksDB was damaged. A deep scrub should find the other errors, right?
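For reference (not part of the original message): a deep scrub can be forced instead of waiting for the schedule, e.g.:

    # Deep-scrub a single placement group
    ceph pg deep-scrub 2.1f

    # Or queue deep scrubs for all PGs on one OSD
    ceph osd deep-scrub osd.5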

[ceph-users] Re: How often should I scrub the filesystem ?

2022-03-14 Thread Milind Changire
I've created a tracker https://tracker.ceph.com/issues/54557 to track this issue. Thanks Chris, for bringing this to my attention. Regards, Milind On Sun, Mar 13, 2022 at 1:11 AM Chris Palmer wrote: > Hi Miland (or anyone else who can help...) > > Reading this thread made me realise I had over
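For anyone following the thread, a CephFS forward scrub is driven through the MDS; a minimal sketch, with 'cephfs' standing in for the actual filesystem name:

    # Start a recursive forward scrub from the filesystem root
    ceph tell mds.cephfs:0 scrub start / recursive

    # Check on its progress
    ceph tell mds.cephfs:0 scrub status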