[ceph-users] Remove orphaned ceph volumes

2022-03-16 Thread Chris Page
Hi, We had to recreate our Ceph cluster and it seems some legacy data was left over. I think this is causing our valid OSDs to hang for 15-20 minutes before starting up on a machine reboot. When checking /var/log/ceph/ceph-volume.log I can see the following - [2022-03-08 09:32:10,581][ceph_volu...

[ceph-users] Re: Remove orphaned ceph volumes

2022-03-16 Thread Chris Page
This is now resolved. I simply found the old systemd files inside /etc/systemd/system/multi-user.target.wants and disabled them, which automatically cleaned them up. Thanks! On Wed, 16 Mar 2022 at 09:30, Chris Page wrote: > Hi, > > We had to recreate our Ceph cluster and it seems so...
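The fix described in this reply can be sketched as a small shell helper. This is a sketch under the thread's own assumption that the leftover units appear as `ceph-volume@*.service` links under `/etc/systemd/system/multi-user.target.wants`; the unit name in the trailing comment is a hypothetical example, since the real names embed your OSD ids and FSIDs.

```shell
# List any ceph-volume unit links left behind in a systemd .wants directory.
find_orphaned_units() {
  local unit
  for unit in "$1"/ceph-volume@*.service; do
    [ -e "$unit" ] || continue   # glob did not match: nothing left over
    basename "$unit"
  done
}

# On a real node you would then disable each listed unit, which removes
# its symlink (unit name below is a hypothetical example):
#   find_orphaned_units /etc/systemd/system/multi-user.target.wants
#   systemctl disable 'ceph-volume@lvm-8-<fsid>.service'
```

Disabling the unit (rather than deleting the symlink by hand) keeps systemd's own state consistent, which matches the "automatically cleaned them up" behaviour described above.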

[ceph-users] Ceph OSD's take 10+ minutes to start on reboot

2022-03-16 Thread Chris Page
Hi, I'm having an issue on one of my nodes where all of its OSDs take a long time to come back online (between 10 and 15 minutes). In the Ceph log, it sits on: bluestore(/var/lib/ceph/osd/ceph-8) _open_db_and_around read-only:0 repair:0 Until eventually something changes which allows the start...

[ceph-users] Re: Ceph OSD's take 10+ minutes to start on reboot

2022-03-16 Thread Chris Page
> > Thanks Igor, So I stuck the debugging up to 5 and rebooted, and suddenly the OSDs are coming back in no time again. Might this be because they were so recently rebooted? I've added the log with debug below: 2022-03-16T14:31:30.031+ 7f739fd28f00 1 bluestore(/var/lib/ceph/osd/ceph-9) _m...
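For reference, the debug level discussed here can be changed per OSD with `ceph config set`. The helper below only prints the commands so they can be reviewed before running against a live cluster; OSD id 9 comes from the log excerpt, and including `debug_bluefs` alongside `debug_bluestore` is my assumption, not something the thread states.

```shell
# Print (not run) the commands to change bluestore debug logging on one OSD.
bluestore_debug_cmds() {
  local id=$1 level=$2
  echo "ceph config set osd.$id debug_bluestore $level"
  echo "ceph config set osd.$id debug_bluefs $level"
}

bluestore_debug_cmds 9 5   # raise while investigating the slow start
bluestore_debug_cmds 9 1   # lower again afterwards (adjust to your previous value)
```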

[ceph-users] Re: Ceph OSD's take 10+ minutes to start on reboot

2022-03-16 Thread Chris Page
...er nodes? You haven't seen any issues with them or you > just haven't tried? > > And you missed my question about Ceph version? > > > Thanks, > > Igor > > > On 3/16/2022 5:37 PM, Chris Page wrote: > >> Thanks Igor, > > > > So I stuck the...

[ceph-users] Re: Ceph OSD's take 10+ minutes to start on reboot

2022-03-16 Thread Chris Page
...rectory 2022-03-16T15:48:29.345+ 7f0a29778700 0 log_channel(cluster) log [DBG] : purged_snaps scrub starts 2022-03-16T15:48:29.349+ 7f0a29778700 0 log_channel(cluster) log [DBG] : purged_snaps scrub ok On Wed, 16 Mar 2022 at 15:33, Chris Page wrote: > > Hi Igor, > > > Curi...

[ceph-users] Re: Ceph OSD's take 10+ minutes to start on reboot

2022-03-18 Thread Chris Page
Hi, Following up from this, is it just normal for them to take a while? I notice that once I have restarted an OSD, the 'meta' value drops right down to empty and slowly builds back up. The restarted OSDs start with just 1 GB or so of metadata and increase over time to 160/170 GB of metadata. So p...

[ceph-users] Re: Ceph OSD's take 10+ minutes to start on reboot

2022-03-22 Thread Chris Page
db_statistics { "rocksdb_compaction_statistics": "", "": "", "": "** Compaction Stats [default] **", "": "Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop", ...

[ceph-users] Bad CRC in data messages logging out to syslog

2022-04-25 Thread Chris Page
Hi, Every now and then I am getting the following logs - pve01 2022-04-25T16:41:03.109+0100 7ff35b6da700 0 bad crc in data 3860390385 != exp 919468086 from v1:10.0.0.111:0/873787122 pve01 2022-04-25T16:41:04.361+0100 7fb0e2feb700 0 bad crc in data 1141834112 != exp 797386370 from v1:10.0.0.111:...
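A quick way to see how often each peer triggers these messages is to count them by source address. A sketch using standard tools, assuming the messages keep the `from v1:<ip>:...` shape shown above; the log path in the usage comment is a placeholder, not a path the thread confirms.

```shell
# Count "bad crc in data" messages per peer IP, reading log lines on stdin.
count_bad_crc_peers() {
  grep 'bad crc in data' \
    | grep -o 'from v1:[0-9.]*' \
    | sed 's/^from v1://' \
    | sort | uniq -c | sort -rn
}

# Usage on a node (path is a placeholder for your actual Ceph log file):
#   count_bad_crc_peers < /var/log/ceph/ceph.log
```

If one peer dominates the counts, that narrows the problem to a single host or NIC rather than a cluster-wide issue.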