> -----Original Message-----
> From: Konstantin Shalygin <k0...@k0ste.ru>
> Sent: 22 February 2019 14:23
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] [Bluestore] Some of my osd's uses BlueFS slow storage for db - why?
> 
> BlueStore/RocksDB will only put the next level up of the DB on flash if the
> whole level will fit. These sizes are roughly 3GB, 30GB and 300GB; anything
> in between those sizes is pointless. Only ~3GB of SSD will ever be used out
> of a 28GB partition, and likewise a 240GB partition is also pointless as
> only ~30GB will be used.
> 
> I'm currently running 30GB partitions on my cluster with a mix of 6, 8 and
> 10TB disks. The 10TB disks are about 75% full and use around 14GB of DB;
> this is mainly on 3x replica RBD (4MB objects).
> 
> Nick
> 
> Can you explain more? Do you mean that I should increase my 28GB partition
> to 30GB and that will do the trick?
> What is your db_slow size? Should we monitor it? Do you monitor it? How?

Yes, I was in a similar situation initially: I had deployed my OSDs with 25GB
DB partitions, and once 3GB of DB was used everything else was going into the
slow DB on disk. From memory, 29GB was just enough to make the DB fit on
flash, but 30GB is a safe round figure to aim for (a rough sketch of where
these thresholds come from is below). With a 30GB DB partition, most RBD-type
workloads should keep all DB data on flash, even for fairly large disks
running erasure coding.

Nick
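
A back-of-the-envelope sketch (mine, not from the thread) of where the
"roughly 3GB / 30GB / 300GB" figures come from. It assumes BlueStore's default
RocksDB tuning of max_bytes_for_level_base = 256 MiB and a level size
multiplier of 10; deployments can override both via bluestore_rocksdb_options,
so treat the numbers as approximate.

    # Cumulative RocksDB level sizes under the assumed defaults
    # (max_bytes_for_level_base = 256 MiB, level multiplier = 10).
    LEVEL_BASE_GIB = 256 / 1024.0   # L1 target size in GiB (assumed default)
    MULTIPLIER = 10                 # level size multiplier (assumed default)

    def cumulative_level_sizes(num_levels):
        """Yield (level, level size, cumulative size) in GiB."""
        total = 0.0
        size = LEVEL_BASE_GIB
        for level in range(1, num_levels + 1):
            total += size
            yield level, size, total
            size *= MULTIPLIER

    for level, size, total in cumulative_level_sizes(4):
        print(f"L{level}: {size:8.2f} GiB  (L1..L{level} total: {total:8.2f} GiB)")

    # Prints approximately:
    #   L1:     0.25 GiB  (L1..L1 total:     0.25 GiB)
    #   L2:     2.50 GiB  (L1..L2 total:     2.75 GiB)
    #   L3:    25.00 GiB  (L1..L3 total:    27.75 GiB)
    #   L4:   250.00 GiB  (L1..L4 total:   277.75 GiB)

BlueFS only keeps a level on the fast device if the whole level fits, so a DB
partition just under one of these cumulative sizes gains nothing over the
previous tier; add a little headroom for the WAL and compaction and you land
on the ~3/30/300GB rules of thumb (and on the observation above that 29GB was
just enough while 30GB is a safe round figure).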

> 
> 
> k
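
On the question above of how to keep an eye on db_slow: a minimal sketch (not
from the thread) of one way to check per-OSD BlueFS usage, assuming it is run
on the OSD host with access to the admin socket. The "bluefs" section of
"ceph daemon osd.<id> perf dump" exposes db_used_bytes and slow_used_bytes
counters on Luminous and later, as far as I know; anything above zero in
slow_used_bytes means RocksDB has spilled onto the slow device.

    import json
    import subprocess

    GIB = 1024 ** 3

    def bluefs_usage(osd_id):
        """Return the BlueFS usage counters for one local OSD, in bytes."""
        out = subprocess.check_output(
            ["ceph", "daemon", f"osd.{osd_id}", "perf", "dump"]
        )
        bluefs = json.loads(out)["bluefs"]
        return {
            "db_used": bluefs["db_used_bytes"],
            "db_total": bluefs["db_total_bytes"],
            "slow_used": bluefs["slow_used_bytes"],    # > 0 means spillover to the slow device
            "slow_total": bluefs["slow_total_bytes"],
        }

    for osd_id in (0, 1, 2):  # adjust to the OSD ids present on this host
        u = bluefs_usage(osd_id)
        print(
            f"osd.{osd_id}: db {u['db_used'] / GIB:.1f}/{u['db_total'] / GIB:.1f} GiB, "
            f"slow {u['slow_used'] / GIB:.1f}/{u['slow_total'] / GIB:.1f} GiB"
        )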
