You referenced parallel writes for journal and data, which is the default
for btrfs but not XFS. Now you are mentioning multiple parallel writes to
the drive, which of course will occur.

Also, our Dell 400 GB NVMe drives do not top out at around 5-7 sequential
writes as you mentioned. Those would be 5-7 random writes from the drive's
perspective, and the NVMe drives can handle many times that.

I would park it at 5-6 partitions per NVMe, with the journal on the same
disk. Frequently I want more concurrent operations rather than all-out
throughput.
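
For reference, a rough sketch of how that layout might be carved out with
sgdisk and ceph-disk (the device name, partition numbers and sizes below
are only placeholders, adjust to taste for a 400 GB drive):

  # one data partition and one small journal partition per OSD
  sgdisk --new=1:0:+64G --change-name=1:"ceph data"    /dev/nvme0n1
  sgdisk --new=2:0:+10G --change-name=2:"ceph journal" /dev/nvme0n1
  # ...repeat for the remaining data/journal pairs...

  # prepare and activate one OSD per pair, journal on the same disk
  ceph-disk prepare --fs-type xfs /dev/nvme0n1p1 /dev/nvme0n1p2
  ceph-disk activate /dev/nvme0n1p1

The journal stays a raw partition on the same device, so each OSD gets its
own data filesystem plus its own journal slice.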
On Thu, Feb 4, 2016 at 6:49 AM Sascha Vogt <sascha.v...@gmail.com> wrote:

> On 03.02.2016 at 17:24, Wade Holler wrote:
> > AFAIK when using XFS, parallel write as you described is not enabled.
> Not sure I'm getting this. If I have multiple OSDs on the same NVMe
> (separated by different data-partitions) I have multiple parallel writes
> (one "stream" per OSD), or am I mistaken?
>
> > Regardless, in a way, the NVMe drives are so fast that it shouldn't
> > matter much whether you use a partitioned journal or another layout.
> Thanks, does anyone have benchmarks on this? How about the size of the
> journal?
>
> > What I would be more interested in is your replication size on the cache
> > pool.
> >
> > This might sound crazy, but if your KVM instances are really that
> > short-lived, could you get away with size=2 on the cache pool from an
> > availability perspective?
> :) We are already on min_size=1, size=2 - we even ran for a while with
> min_size=1, size=1, so we cannot squeeze out much more on that end.
>
> Greetings
> -Sascha-
>
> PS: Thanks a lot already for all the answers!
