Re: [ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-21 Thread Mark Nelson
On 09/21/2017 03:19 AM, Maged Mokhtar wrote: On 2017-09-21 10:01, Dietmar Rieder wrote: Hi, I'm in the same situation (NVMes, SSDs, SAS HDDs). I asked myself the same questions. For now I decided to use the NVMes as wal and db devices for the SAS HDDs, and on the SSDs I colocate wal and db ...

Re: [ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-21 Thread Dietmar Rieder
On 09/21/2017 10:19 AM, Maged Mokhtar wrote: > On 2017-09-21 10:01, Dietmar Rieder wrote: >> Hi, I'm in the same situation (NVMes, SSDs, SAS HDDs). I asked myself the same questions. >> For now I decided to use the NVMes as wal and db devices for the SAS HDDs, and on the SSDs I colocate ...

Re: [ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-21 Thread Дробышевский, Владимир
I believe you should use only one big partition in the case of one device per OSD. And in the case of using additional device(s) for wal/db, the block.db size is set to 1% of the main partition by default (at least according to the ceph-disk sources; it just takes bluestore_block_size, divides it by 100 and uses it ...
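
To make the arithmetic concrete, here is a minimal Python sketch of the sizing rule described above (it only mirrors the 1% behaviour as reported from the ceph-disk sources; the helper name and the 10 TB figure are made up for illustration):

    # Minimal sketch of the default sizing described above (not the actual
    # ceph-disk code): block.db defaults to 1% of bluestore_block_size.
    def default_block_db_size(bluestore_block_size: int) -> int:
        return bluestore_block_size // 100

    # Example: a 10 TB data device would get a ~100 GB block.db by this rule.
    print(default_block_db_size(10 * 10**12))  # -> 100000000000 (bytes)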

Re: [ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-21 Thread Maged Mokhtar
On 2017-09-21 10:01, Dietmar Rieder wrote: > Hi, I'm in the same situation (NVMes, SSDs, SAS HDDs). I asked myself the same questions. > For now I decided to use the NVMes as wal and db devices for the SAS HDDs, and on the SSDs I colocate wal and db. > However, I'm still wondering how ...

Re: [ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-21 Thread Graeme Seaton
Hi, This is the approach I've also taken. As for sizing, I simply divided the NVMe into a partition per HDD and colocate the WAL/DB in that partition. My understanding is that Bluestore will simply use the extra space for smaller reads/writes until it reaches capacity, at which point it spills over ...
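
For reference, a partition-per-HDD layout like this is normally set up when the OSD is prepared. A hedged example with ceph-disk (the device and partition names are placeholders; check them against your own layout before running anything):

    # One NVMe partition per HDD, used for that OSD's block.db; when no
    # separate --block.wal is given, the WAL lives on the DB device too.
    # /dev/sdb and /dev/nvme0n1p1 are placeholder names.
    ceph-disk prepare --bluestore /dev/sdb --block.db /dev/nvme0n1p1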

Re: [ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-21 Thread Dietmar Rieder
Hi, I'm in the same situation (NVMes, SSDs, SAS HDDs). I asked myself the same questions. For now I decided to use the NVMes as wal and db devices for the SAS HDDs, and on the SSDs I colocate wal and db. However, I'm still wondering how (to what size) and whether I should change the default sizes of ...
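
If you do decide to change them, the defaults in question are ordinary config options that ceph-disk reads when new OSDs are prepared. A hedged ceph.conf sketch, with placeholder values rather than recommendations:

    [osd]
    # Placeholder values, not recommendations; sizes are in bytes.
    bluestore_block_db_size  = 32212254720   # 30 GiB per OSD for block.db
    bluestore_block_wal_size = 2147483648    # 2 GiB per OSD for block.wal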

Re: [ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-20 Thread Alejandro Comisario
But, for example, on the same server I have three disk technologies to deploy pools on: SSD, SAS and SATA. The NVMes were bought with only the journals for SATA and SAS in mind, since the journals for the SSDs were colocated. But now, in exactly the same scenario, should I trust the NVMe for the SSD pool? Are there ...

Re: [ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-20 Thread Andras Pataki
Is there any guidance on the sizes of the WAL and DB devices when they are separated out to an SSD/NVMe? I understand that there probably isn't a one-size-fits-all number, but perhaps something as a function of cluster/usage parameters like OSD size and usage pattern (amount of writes, number/size ...
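
No definitive formula comes out of this thread, but the two rules of thumb it does mention (split the fast device evenly into one partition per HDD, and ceph-disk's reported 1% default) are easy to compare for a given layout. A small Python sketch, purely illustrative (the helper name and the example numbers are made up):

    # Compare an even per-HDD split of a shared fast device with the 1%
    # default rule mentioned elsewhere in this thread. Illustrative only.
    def per_osd_db_share(fast_dev_bytes, num_hdds, hdd_bytes):
        even_split  = fast_dev_bytes // num_hdds   # one partition per HDD
        one_percent = hdd_bytes // 100             # ceph-disk's reported default
        return even_split, one_percent

    # e.g. a 400 GB fast device shared by four 10 TB HDD OSDs:
    print(per_osd_db_share(400 * 10**9, 4, 10 * 10**12))  # -> 100 GB vs. 100 GB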

Re: [ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-20 Thread Nigel Williams
On 21 September 2017 at 04:53, Maximiliano Venesio wrote: > Hi guys, I'm reading different documents about bluestore, and none of them recommends using NVRAM to store the bluefs db; nevertheless, the official documentation says that it is better to put the block.db on the faster device. > ...

[ceph-users] Bluestore disk colocation using NVRAM, SSD and SATA

2017-09-20 Thread Maximiliano Venesio
Hi guys, I'm reading different documents about bluestore, and none of them recommends using NVRAM to store the bluefs db; nevertheless, the official documentation says that it is better to put the block.db on the faster device. In my cluster I have NVRAM devices of 400GB, SSD disks for high performance ...