>>>>> Tue, 3 Sep 2019 11:28:20 +0200
>>>>> Yoann Moulin <yoann.mou...@epfl.ch> ==> ceph-users@ceph.io :
>>>>>> Is it better to put all WALs on one SSD and all DBs on the other one?
>>>>>> Or put WAL and DB of the first 5 OSDs on the first SSD and the 5
>>>>>> others on the second one?
>>>>>
>>>>> I don't know if this has a relevant impact on the latency/speed of the
>>>>> Ceph system, but we use LVM on top of a SW RAID 1 over two SSDs for
>>>>> WAL & DB.
>>>>
>>>> What is the recommended size for WAL and DB in my case?
>>>>
>>>> I have:
>>>>
>>>> 10x 6TB disk OSDs (data)
>>>> 2x 480GB SSDs
>>>>
>>>> Best,
>>>
>>> I'm still unsure about the size of the block.db and the WAL.
>>> This seems to be relevant:
>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-May/035086.html
>>>
>>> But it is also said that the pure WAL needs just 1 GB of space:
>>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-August/036509.html
>>>
>>> So the conclusion would be to use 2*X (DB) + 1GB (WAL) if you put both
>>> on the same partition/LV, with X being one of 3GB, 30GB or 300GB.
>>>
>>> You have 10 OSDs. That means you should have 10 partitions/LVs for DBs
>>> & WALs.
>>
>> So, I don't have enough space on the SSDs to do RAID 1; I must use 1 SSD
>> for 5 disks.
>>
>> 5x64GB + 5x2GB should be good, shouldn't it?
>
> I'd put both on one LV/partition.
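If I understand the rule correctly (taking X = 30GB, the middle value from
the links above, with DB and WAL on the same LV), the rough arithmetic for
my setup would be:

    2 * 30GB (DB) + 1GB (WAL) = 61GB per OSD, so a 64GB LV leaves some headroom
    5 OSDs per SSD * 64GB = 320GB, which fits on a 480GB SSD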
You mean one LV/partition for both the WAL and the DB? I didn't know it was
possible to put both on the same partition. In that case, I can split each
SSD into 64GB partitions, one per OSD, holding both the WAL and the DB.

>> And I still don't know whether the ceph-ansible playbook can manage the
>> LVM setup or whether I need to prepare all the VGs and LVs beforehand.
>>
>
> I did this manually before running the ansible-playbook.
>
> host_vars/host3.yml
> lvm_volumes:
>   - data: /dev/sdb
>     db: '1'
>     db_vg: host-3-db
>   - data: /dev/sdc
>     db: '2'
>     db_vg: host-3-db
>   - data: /dev/sde
>     db: '3'
>     db_vg: host-3-db
>   - data: /dev/sdf
>     db: '4'
>     db_vg: host-3-db

OK, thanks for the help.

Best,

--
Yoann Moulin
EPFL IC-IT
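P.S. For the archives, the manual VG/LV preparation I have in mind for one
of my SSDs, modelled on the host_3 example above. /dev/sdk is just a
placeholder for the SSD device and ceph-db-1 for the VG name; I haven't run
this yet, so treat it as a sketch:

    # one VG on the SSD, then one 64GB LV per OSD, each holding DB + WAL
    pvcreate /dev/sdk
    vgcreate ceph-db-1 /dev/sdk
    for i in 1 2 3 4 5; do
        lvcreate -L 64G -n db-$i ceph-db-1
    done

In host_vars, each OSD would then get db: 'db-N' and db_vg: ceph-db-1,
following the quoted example.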