I would suggest 5 x 70GB logical volumes on each SSD, with the RocksDB and
WAL together in the same partition/LV.
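
A minimal sketch of that layout with plain LVM follows; the device and VG
names (/dev/sdk, ceph-db-0) are placeholders for your actual SSDs:

  # One VG per SSD, five 70GB LVs per VG (DB and WAL share each LV).
  pvcreate /dev/sdk
  vgcreate ceph-db-0 /dev/sdk
  for i in 0 1 2 3 4; do
      lvcreate -L 70G -n db-$i ceph-db-0
  done
  # Repeat for the second SSD, e.g. /dev/sdl -> ceph-db-1.

That is 5 x 70GB = 350GB per 480GB SSD, which leaves some headroom, and 70GB
comfortably covers the 2 x 30GB (DB) + 1GB (WAL) figure from the thread below.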

Unless you have a plan to add some faster NVMe for the WALs in the future,
in which case you might want them in a separate partition to make it easier
to move the WAL to the NVMe.
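
On the ceph-ansible question further down: as far as I know, the lvm scenario
either takes a plain list of raw devices (and lets "ceph-volume lvm batch"
carve them up for you) or takes pre-created VGs/LVs referenced in lvm_volumes.
A rough sketch of the latter for one OSD, with all names being placeholders;
check the docs for your ceph-ansible version:

  osd_objectstore: bluestore
  lvm_volumes:
    - data: /dev/sda      # 6TB data disk
      db: db-0            # pre-created LV on the SSD (see above)
      db_vg: ceph-db-0

So with lvm_volumes the playbook does not create the DB LVs for you; prepare
the VGs and LVs first.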

Darren

On 04/09/2019, 10:12, "Yoann Moulin" <yoann.mou...@epfl.ch> wrote:

    On 04/09/2019 at 11:01, Lars Täuber wrote:
    > Wed, 4 Sep 2019 10:32:56 +0200
    > Yoann Moulin <yoann.mou...@epfl.ch> ==> ceph-users@ceph.io :
    >> Hello,
    >>
    >>> Tue, 3 Sep 2019 11:28:20 +0200
    >>> Yoann Moulin <yoann.mou...@epfl.ch> ==> ceph-users@ceph.io :  
    >>>> Is it better to put all WAL on one SSD and all DBs on the other one?
    >>>> Or put WAL and DB of the first 5 OSDs on the first SSD and the 5
    >>>> others on the second one.
    >>>
    >>> I don't know if this has a relevant impact on the latency/speed of
    >>> the Ceph system, but we use LVM on top of a SW RAID 1 over two SSDs
    >>> for the WAL & DB on this RAID 1.
    >>
    >> What is the recommended size for the WAL and DB in my case?
    >>
    >> I have :
    >>
    >> 10x 6TB disk OSDs (data)
    >>  2x 480GB SSD
    >>
    >> Best,
    > 
    > I'm still unsure about the size of the block.db and the WAL.
    > This seems to be relevant:
    > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/035086.html
    > 
    > But it is also said that the pure WAL needs just 1 GB of space:
    > http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-August/036509.html
    > 
    > So the conclusion would be to use 2*X (DB) + 1GB (WAL) if you put both
    > on the same partition/LV, with X being one of 3GB, 30GB or 300GB.
    > 
    > You have 10 OSDs. That means you should have 10 partitions/LVs for the
    > DBs & WALs.
    
    So I don't have enough space on the SSDs to do RAID 1; I must use one SSD
    for five disks.
    
    5x 64GB (DB) + 5x 2GB (WAL) should be good, shouldn't it?
    
    And I still don't know whether the ceph-ansible playbook can manage the
    LVM setup or whether I need to prepare all the VGs and LVs beforehand.
    
    Best,
    
    -- 
    Yoann Moulin
    EPFL IC-IT

_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
