I'm sorry, I mixed up some information. The actual ratio I have now
is 0.0005% (*100 MB of metadata for 20 TB of data*).
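
Just to spell out the arithmetic behind that figure, taking 20 TB as
20 * 1024 * 1024 MB:

  100 MB / 20,971,520 MB  ≈  0.0000048  ≈  0.0005%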


Regards,

Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*

On Wed, May 9, 2018 at 11:32 AM, Webert de Souza Lima <webert.b...@gmail.com> wrote:

> Hello,
>
> Currently I run Jewel + Filestore for CephFS, with SSD-only pools used
> for cephfs-metadata and HDD-only pools for cephfs-data. The current
> metadata/data ratio is something like 0.25% (50 GB of metadata for 20 TB of data).
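>
> (For context, that ratio is just read off the per-pool usage reported by
> the standard Ceph CLI; dividing the metadata pool's USED by the data
> pool's USED in
>
>   ceph df detail
>
> gives roughly the figure above.)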
>
> Regarding the Bluestore architecture, assuming I have:
>
>  - SSDs for WAL+DB
>  - spinning disks for Bluestore data.
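>
> (For concreteness, I picture each such OSD being created more or less like
> this with ceph-volume on Luminous or later; device paths are placeholders,
> and with only --block.db given the WAL is kept on the same device as the DB:
>
>   # HDD carries the Bluestore data; an SSD partition/LV carries the DB (and WAL)
>   ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1
>
> i.e. each OSD is one spinning disk plus a slice of a shared SSD.)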
>
> Would you still recommend storing metadata on SSD-only OSD nodes?
> If not, is it recommended to *dedicate* some OSDs (spindle + SSD for
> WAL/DB) to cephfs-metadata?
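>
> (By "dedicate" I mean something roughly like pinning the metadata pool to a
> subset of OSDs via device classes, assuming Luminous or later; the rule and
> pool names here are only illustrative:
>
>   # replicated CRUSH rule restricted to SSD-class OSDs
>   ceph osd crush rule create-replicated ssd-only default host ssd
>   # point the CephFS metadata pool at that rule
>   ceph osd pool set cephfs_metadata crush_rule ssd-only
>
> A custom device class could mark dedicated spindle+SSD OSDs the same way.)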
>
> If I just have two pools (metadata and data), both sharing all the OSDs in
> the cluster, would that be enough for heavy-write cases?
>
> Assuming min_size=2, size=3.
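>
> (i.e. both pools configured along these lines, with cephfs_data and
> cephfs_metadata standing in for whatever the pools are actually called:
>
>   ceph osd pool set cephfs_data size 3
>   ceph osd pool set cephfs_data min_size 2
>   ceph osd pool set cephfs_metadata size 3
>   ceph osd pool set cephfs_metadata min_size 2
>
> so each object is kept in three copies and I/O pauses if fewer than two are available.)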
>
> Thanks for your thoughts.
>
> Regards,
>
> Webert Lima
> DevOps Engineer at MAV Tecnologia
> *Belo Horizonte - Brasil*
> *IRC NICK - WebertRLZ*
>