This helped: https://tracker.ceph.com/issues/44509
$ systemctl stop ceph-osd@68
$ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 --devs-source /var/lib/ceph/osd/ceph-68/block --dev-target /var/lib/ceph/osd/ceph-68/block.db bluefs-bdev-migrate
$ systemctl start ceph-osd@68
Thanks a lot for your help.
One more question:
How do I get rid of the bluestore spillover message?
osd.68 spilled over 64 KiB metadata from 'db' device (13 GiB used of
50 GiB) to slow device
I tried an offline compaction, which did not help.
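For reference, and assuming osd.68's path from the commands above, compaction is usually triggered either online via ceph tell or offline via ceph-kvstore-tool with the OSD stopped (a sketch, not necessarily the exact variant that was tried):

$ ceph tell osd.68 compact
# or, offline, with the OSD stopped:
$ systemctl stop ceph-osd@68
$ ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-68 compact
$ systemctl start ceph-osd@68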
On Mon, 17 May 2021 at 15:56, Boris Behrens wrote:
> I have no idea
I have no idea why, but it worked.
As the fsck went well, I just redid the bluefs-bdev-new-db and now the OSD
is back up, with a block.db device.
Thanks a lot
On Mon, 17 May 2021 at 15:28, Igor Fedotov wrote:
> If you haven't had successful OSD.68 starts with standalone DB I think
> it'
If you haven't had any successful OSD.68 starts with the standalone DB, I think
it's safe to revert the previous DB addition and just retry it.
At first I'd suggest running the bluefs-bdev-new-db command only and then doing
fsck again. If that's OK, proceed with bluefs-bdev-migrate followed by another
fsck, and then finalize.
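Spelled out against osd.68's paths from earlier in the thread, that sequence might look roughly like this (a sketch; /dev/vg-db/db-68 stands in for whatever LV or partition is meant to hold the new DB):

$ systemctl stop ceph-osd@68
$ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 --dev-target /dev/vg-db/db-68 bluefs-bdev-new-db
$ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 fsck
$ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 --devs-source /var/lib/ceph/osd/ceph-68/block --dev-target /var/lib/ceph/osd/ceph-68/block.db bluefs-bdev-migrate
$ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 fsck
$ systemctl start ceph-osd@68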
See my last mail :)
On Mon, 17 May 2021 at 14:52, Igor Fedotov wrote:
> Would you try fsck without standalone DB?
>
> On 5/17/2021 3:39 PM, Boris Behrens wrote:
> > Here is the new output. I kept both for now.
> >
> > [root@s3db10 export-bluefs2]# ls *
> > db:
> > 018215.sst 018444.sst 0
The FSCK looks good:
[root@s3db10 export-bluefs2]# ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 fsck
fsck success
On Mon, 17 May 2021 at 14:39, Boris Behrens wrote:
> Here is the new output. I kept both for now.
>
> [root@s3db10 export-bluefs2]# ls *
> db:
> 018215.sst 018444.ss
Would you try fsck without standalone DB?
On 5/17/2021 3:39 PM, Boris Behrens wrote:
Here is the new output. I kept both for now.
[root@s3db10 export-bluefs2]# ls *
db:
018215.sst 018444.sst 018839.sst 019074.sst 019210.sst 019381.sst
019560.sst 019755.sst 019849.sst 019888.sst 01995
Here is the new output. I kept both for now.
[root@s3db10 export-bluefs2]# ls *
db:
018215.sst 018444.sst 018839.sst 019074.sst 019210.sst 019381.sst
019560.sst 019755.sst 019849.sst 019888.sst 019958.sst 019995.sst
020007.sst 020042.sst 020067.sst 020098.sst 020115.sst
018216.sst
On 5/17/2021 2:53 PM, Boris Behrens wrote:
> Like this?
Yeah.
so the DB dir structure is more or less OK, but db/CURRENT looks corrupted. It
should contain something like: MANIFEST-020081
Could you please remove (or even just rename) the block.db symlink and do the
export again? Preferably to preserv
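The rename might be as simple as this (a sketch, keeping the old name so the symlink can be restored afterwards):

$ mv /var/lib/ceph/osd/ceph-68/block.db /var/lib/ceph/osd/ceph-68/block.db.bak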
Like this?
[root@s3db10 export-bluefs]# ls *
db:
018215.sst 018444.sst 018839.sst 019074.sst 019174.sst 019372.sst
019470.sst 019675.sst 019765.sst 019882.sst 019918.sst 019961.sst
019997.sst 020022.sst 020042.sst 020061.sst 020073.sst
018216.sst 018445.sst 018840.sst 019075.sst
You might want to check the file structure at the new DB using
ceph-bluestore-tool's bluefs-export command:
ceph-bluestore-tool --path <osd path> --command bluefs-export --out <target dir>
<target dir> needs to have enough free space to fit the DB data.
Once exported - does <target dir> contain a valid BlueFS directory
structure - multiple .sst files, etc.?
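Filled in for osd.68 as used elsewhere in the thread (a sketch; /root/export-bluefs is an assumed target directory, and --out-dir is the option name given in the ceph-bluestore-tool man page):

$ mkdir /root/export-bluefs
$ ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-68 --out-dir /root/export-bluefs bluefs-export
$ ls /root/export-bluefs/*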
Hi Igor,
I posted it on pastebin: https://pastebin.com/Ze9EuCMD
Cheers
Boris
On Mon, 17 May 2021 at 12:22, Igor Fedotov wrote:
> Hi Boris,
>
> could you please share full OSD startup log and file listing for
> '/var/lib/ceph/osd/ceph-68'?
>
>
> Thanks,
>
> Igor
>
> On 5/17/2021 1:09 PM,
Hi Boris,
could you please share full OSD startup log and file listing for
'/var/lib/ceph/osd/ceph-68'?
Thanks,
Igor
On 5/17/2021 1:09 PM, Boris Behrens wrote:
Hi,
sorry for replying to this old thread:
I tried to add a block.db to an OSD but now the OSD can not start with the
error:
Mai
Hi,
sorry for replying to this old thread:
I tried to add a block.db to an OSD but now the OSD can not start with the
error:
Mai 17 09:50:38 s3db10.fra2.gridscale.it ceph-osd[26038]: -7> 2021-05-17
09:50:38.362 7fc48ec94a80 -1 rocksdb: Corruption: CURRENT file does not end
with newline
Mai 17 09:5
Eugene,
Thanks for your help. The info is really helpful. In my case, the OSDs were
encrypted so the process is a bit more involved but I managed to g
/usr/bin/ceph --cluster ceph --name client.osd-lockbox.${OSD_FSID} --keyring $OSD_PATH/lockbox.keyring config-key get dm-crypt/osd/$OSD_FSID/luks
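For context, that lockbox key lookup is typically combined with cryptsetup to open the encrypted data device, roughly like this (a sketch; /dev/sdX1, the mapper name and the OSD_FSID/OSD_PATH variables are placeholders, not values from the thread):

# fetch the dm-crypt key from the mon config-key store and feed it to cryptsetup
$ /usr/bin/ceph --cluster ceph --name client.osd-lockbox.${OSD_FSID} --keyring $OSD_PATH/lockbox.keyring config-key get dm-crypt/osd/$OSD_FSID/luks | cryptsetup --key-file - luksOpen /dev/sdX1 ceph-data-$OSD_FSID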
Don’t forget to change the lv tags and make sure ceph-bluestore-tool
show-label has the right labels. This has been discussed multiple
times [1].
[1]
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GSFUUIMYDPSFM2HHO25TCTPLTXBS3O2K/
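A rough sketch of what that tag update and label check might look like for osd.68 (VG/LV names and the uuid are placeholders; ceph.db_device and ceph.db_uuid are the tags ceph-volume keeps on the block LV for a separate DB device):

$ lvs -o lv_tags /dev/ceph-block/osd-block-68
$ lvchange --addtag ceph.db_device=/dev/ceph-db/osd-db-68 /dev/ceph-block/osd-block-68
$ lvchange --addtag ceph.db_uuid=<uuid of the db LV> /dev/ceph-block/osd-block-68
$ ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-68/block
$ ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-68/block.db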
Quoting t...@postix.net:
Hey all,
I