Hi Reed,
you might want to use the bluefs-bdev-migrate command, which simply moves
BlueFS files from a source path to a destination, i.e. from the main device
to the DB device in your case.
It needs neither OSD redeployment nor creation of an additional/new device.
It doesn't guarantee that spillover won't reoccur one day, though.
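
For reference, a rough sketch of what that migration could look like with ceph-bluestore-tool (the OSD id and the standard /var/lib/ceph/osd paths are placeholders; the OSD needs to be stopped first):

$ systemctl stop ceph-osd@<id>
$ ceph-bluestore-tool bluefs-bdev-migrate \
      --path /var/lib/ceph/osd/ceph-<id> \
      --devs-source /var/lib/ceph/osd/ceph-<id>/block \
      --dev-target /var/lib/ceph/osd/ceph-<id>/block.db   # move BlueFS data off the main device onto the DB device
$ systemctl start ceph-osd@<id>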
Thanks Igor,
I did see that L4 sizing and thought it seemed auspicious.
Though after looking at a couple of other OSDs with this, I think the sum of
L0-L4 appears to match a rounded-off version of the metadata size reported
in ceph osd df tree.
So I'm not sure if that's actually showing
hmm, RocksDB reports 13GB at L4:
"": "Level Files Size Score Read(GB) Rn(GB) Rnp1(GB)
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec)
CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop",
"":
"
Thanks for sticking with me Igor.
Attached is the ceph-kvstore-tool stats output.
Hopefully something interesting in here.
Thanks,
Reed
[Attachment: kvstoretool.log (binary data)]
On Jun 12, 2020, at 6:56 AM, Igor Fedotov wrote:
Hi Reed,
thanks for the log.
Nothing much of interest there though. Just a regular SST file that
RocksDB instructed to be put on the "slow" device. Presumably it belongs to a
higher level, hence the desire to put it that "far". Or (which is less
likely) RocksDB lacked free space when doing compaction.
Hi,
We had this issue in a 14.2.8 cluster, although it appeared after
resizing the DB device to a larger one.
After some time (weeks), the spillover was gone...
Cheers
Eneko
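
In case it helps anyone following along: after enlarging the underlying DB partition/LV, the extra space normally has to be claimed with bluefs-bdev-expand. A minimal sketch, assuming the standard OSD path layout and a stopped OSD (<id> is a placeholder):

$ systemctl stop ceph-osd@<id>
$ ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-<id>   # let BlueFS use the enlarged DB device
$ systemctl start ceph-osd@<id>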
On 6/6/20 at 0:07, Reed Dier wrote:
I'm going to piggy back on this somewhat.
I've battled RocksDB spillovers over the
Reed,
No, "ceph-kvstore-tool stats" isn't be of any interest.
For the sake of better issue understanding it might be interesting to
have bluefs log dump obtained via ceph-bluestore-tool's bluefs-log-dump
command. This will give some insight what RocksDB files are spilled
over. It's still not
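
A rough sketch of how that dump could be captured (assumptions: the OSD is stopped first, the standard OSD path layout is used, and the output is simply redirected to a file):

$ systemctl stop ceph-osd@<id>
$ ceph-bluestore-tool bluefs-log-dump --path /var/lib/ceph/osd/ceph-<id> > /tmp/osd-<id>-bluefs-log.txt   # dump the BlueFS log
$ systemctl start ceph-osd@<id>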
The WAL/DB was part of the OSD deployment.
The OSD is running 14.2.9.
Would grabbing the ceph-kvstore-tool bluestore-kv stats as in
that ticket be of any use here?
Thanks,
Reed
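
For what it's worth, a sketch of how those stats could be pulled (assumption: ceph-kvstore-tool needs exclusive access to the store, so the OSD is stopped while it runs; <id> is a placeholder):

$ systemctl stop ceph-osd@<id>
$ ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<id> stats   # RocksDB statistics for this OSD's store
$ systemctl start ceph-osd@<id>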
> On Jun 5, 2020, at 5:27 PM, Igor Fedotov wrote:
This might help - see comment #4 at https://tracker.ceph.com/issues/44509
And just for the sake of information collection - what Ceph version is
used in this cluster?
Did you set up the DB volume along with the OSD deployment, or was it added
later as was done in the ticket above?
Thanks,
Igor
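
A sketch of how both questions could be checked from the cluster/OSD host (<id> is a placeholder; show-label is assumed to be run against the standard OSD path):

$ ceph versions                                                       # Ceph versions per daemon type across the cluster
$ ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-<id>   # lists labels for block / block.db / block.wal, if present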
I'm going to piggyback on this somewhat.
I've battled RocksDB spillovers over the course of the life of the cluster
since moving to BlueStore; however, I have always been able to compact it well
enough.
But now I am stumped at getting this to compact via $ceph tell osd.$osd
compact, which has
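
For the archives, a sketch of the compaction commands involved: the online one referenced above, plus an offline alternative (an assumption, not something stated in this thread) that requires the OSD to be stopped; <id> is a placeholder:

$ ceph tell osd.<id> compact                                          # online compaction through the OSD
$ systemctl stop ceph-osd@<id>
$ ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-<id> compact  # offline compaction of the same RocksDB store
$ systemctl start ceph-osd@<id>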