Hello Oliver,

As far as I know, you can use the same DB device for about 4 or 5 OSDs;
you just need to keep an eye on the free space. I'm also building a
bluestore cluster, and our DB and WAL will live on the same 480 GB SSD,
serving 4 OSD HDDs of 4 TB each. As for the sizes, it's mostly a gut
feeling, because I haven't found any clear rule yet on how to estimate
the requirements.
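
For what it's worth, all I've done so far is split the SSD evenly among
the OSDs it serves, so in our case:

    480 GB SSD / 4 OSDs ≈ 120 GB of DB+WAL space per OSD

Treat that as a rough starting point rather than a rule.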

* The one thing that took me a while to figure out is that you should
create an XFS partition beforehand when using ceph-deploy (see the
example below); if you don't, it simply fails with a RuntimeError that
gives no hint about what's actually going on.
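
In case it saves you some time, something along these lines should do
the trick before calling ceph-deploy (device name and size are only
examples from my layout, adjust to yours):

$ sgdisk --new=1:0:+120G /dev/nvme0n1   # creates /dev/nvme0n1p1; pick a size that fits your split
$ mkfs.xfs /dev/nvme0n1p1               # pre-formatting is what avoided the RuntimeError for me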

So, answering your question, you could do something like:
$ ceph-deploy osd create --bluestore --data=/dev/sdb --block-db /dev/nvme0n1p1 $HOSTNAME
$ ceph-deploy osd create --bluestore --data=/dev/sdc --block-db /dev/nvme0n1p1 $HOSTNAME
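
Afterwards you can double-check where each DB ended up with the
standard tooling, e.g.:

$ ceph-volume lvm list   # on the OSD host: shows the data and block.db device of each OSD
$ ceph osd tree          # confirm the new OSDs are up and in the right place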

On Fri, May 11, 2018 at 10:35 AM Oliver Schulz <oliver.sch...@tu-dortmund.de>
wrote:

> Dear Ceph Experts,
>
> I'm trying to set up some new OSD storage nodes, now with
> bluestore (our existing nodes still use filestore). I'm
> a bit unclear on how to specify WAL/DB devices: Can
> several OSDs share one WAL/DB partition? So, can I do
>
>      ceph-deploy osd create --bluestore --osd-db=/dev/nvme0n1p2
> --data=/dev/sdb HOSTNAME
>
>      ceph-deploy osd create --bluestore --osd-db=/dev/nvme0n1p2
> --data=/dev/sdc HOSTNAME
>
>      ...
>
> Or do I need to use osd-db=/dev/nvme0n1p2 for data=/dev/sdb,
> osd-db=/dev/nvme0n1p3 for data=/dev/sdc, and so on?
>
> And just to make sure - if I specify "--osd-db", I don't need
> to set "--osd-wal" as well, since the WAL will end up on the
> DB partition automatically, correct?
>
>
> Thanks for any hints,
>
> Oliver
-- 

João Paulo Sacchetto Ribeiro Bastos
+55 31 99279-7092
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
