[ceph-users] MDS Abort during FS scrub

2024-05-24 Thread Malcolm Haak
When running a CephFS scrub, the MDS crashes with the following backtrace: -1> 2024-05-25T09:00:23.028+1000 7ef2958006c0 -1 /usr/src/debug/ceph/ceph-18.2.2/src/mds/MDSRank.cc: In function 'void MDSRank::abort(std::string_view)' thread 7ef2958006c0 time 2024-05-25T09:00:23.031373+1000 /usr/src
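For reference, a recursive scrub of the kind described here is typically started through the MDS admin interface; a minimal sketch, with <fs_name> as a placeholder for the filesystem name:

    ceph tell mds.<fs_name>:0 scrub start / recursive
    ceph tell mds.<fs_name>:0 scrub status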

[ceph-users] Re: Cephfs over internet

2024-05-21 Thread Malcolm Haak
Yeah, you really want to do this over a VPN. Performance is going to be average at best; it would probably be faster to re-export it as NFS/SMB and push that across the internet. On Mon, May 20, 2024 at 11:37 PM Marc wrote: > Hi all, > Due to so many reasons (political, heating problems, l
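A minimal sketch of the NFS re-export approach, assuming the built-in NFS-Ganesha orchestration in recent Ceph releases (cluster id, host, pseudo-path, and filesystem name are all placeholders):

    ceph nfs cluster create mynfs "1 gateway-host"
    ceph nfs export create cephfs --cluster-id mynfs --pseudo-path /cephfs --fsname myfs

Clients would then mount the pseudo-path over NFSv4, ideally still tunnelled through a VPN rather than exposed directly.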

[ceph-users] lost+found is corrupted.

2024-05-20 Thread Malcolm Haak
Hi all, I've almost got my Ceph cluster back to normal after a triple drive failure, but it seems my lost+found folder is corrupted. I've followed the process in https://docs.ceph.com/en/latest/cephfs/disaster-recovery-experts/#disaster-recovery-experts However, doing an online scrub, as there is still o
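The linked disaster-recovery page centres on the offline journal and data-scan tools; a condensed sketch of that sequence, with rank and pool names as placeholders (every step there carries strong warnings and is meant for experts):

    cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries summary
    cephfs-journal-tool --rank=<fs_name>:0 journal reset
    cephfs-data-scan scan_extents <data_pool>
    cephfs-data-scan scan_inodes <data_pool>
    cephfs-data-scan scan_links

An online scrub with repair, as attempted here, would look something like:

    ceph tell mds.<fs_name>:0 scrub start / recursive,repair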

[ceph-users] PGs stuck incomplete on EC pool after multiple drive failures

2024-03-28 Thread Malcolm Haak
Hello all. I have a cluster with ~80TB of spinning disk; its primary role is CephFS. Recently I had a multiple-drive failure (it was not simultaneous), but it's left me with 20 incomplete PGs. I know this data is toast, but I need to be able to get what isn't toast out of the CephFS. Well out of t
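Diagnosis of incomplete PGs usually starts with listing them and inspecting their peering state; a sketch, with the PG id as a placeholder:

    ceph pg ls incomplete
    ceph pg <pgid> query

For an EC pool, min_size defaults to k+1; lowering it to k is one commonly discussed way to let PGs with just enough surviving shards go active, at the cost of redundancy during recovery:

    ceph osd pool set <ec_pool> min_size <k>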

[ceph-users] OSD has RocksDB corruption that crashes ceph-bluestore-tool repair

2023-12-17 Thread Malcolm Haak
Hello all, I had an OSD go offline due to UWE. When restarting the OSD service, to try to at least drain the data that wasn't damaged, the ceph-osd process would crash. I then attempted to repair it using ceph-bluestore-tool. I can run fsck and it will complete without iss
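For reference, both invocations described here run against a stopped OSD's data directory; a minimal sketch, with the OSD id as a placeholder:

    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-<id>
    ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-<id>

fsck only checks consistency, while repair also attempts to fix what it finds, which is why the two can behave differently on a damaged store.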