ceph-iscsi doesn't support round-robin multi-pathing, so you need at least
one LUN per gateway to make use of all of them.
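
As a rough illustration only (the gateway names, pool name and image size
below are made-up examples, not from this thread), creating one image per
gateway with the python-rbd bindings could look something like this:

import rados
import rbd

GATEWAYS = ["gw1", "gw2"]      # hypothetical iSCSI gateway names
POOL = "vmware-ssd"            # hypothetical pool holding the images
IMAGE_SIZE = 2 * 1024 ** 4     # 2 TiB per image; size to taste

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx(POOL)
    try:
        for gw in GATEWAYS:
            # One image per gateway: each image is later exported as its
            # own LUN, with its active path owned by "its" gateway, so
            # every gateway carries traffic despite failover-only pathing.
            rbd.RBD().create(ioctx, "vmware-%s-lun0" % gw, IMAGE_SIZE)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

Each image would then be exported as a LUN through ceph-iscsi (e.g. via
gwcli), with different images owned by different gateways.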

Please see https://docs.ceph.com for basics about RBDs and pools.
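
For the "one pool per storage class" layout mentioned in the quoted reply
below, a minimal sketch using python-rados mon commands might look like
the following (pool names, PG count and CRUSH rule names are assumptions,
not recommendations; the same can of course be done with the ceph CLI):

import json
import rados

def mon_cmd(cluster, **kwargs):
    # Send one JSON-encoded mon command and raise on a non-zero return.
    ret, out, err = cluster.mon_command(json.dumps(kwargs), b"")
    if ret != 0:
        raise RuntimeError(err)
    return out

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    for dev_class in ("ssd", "hdd"):
        rule = "replicated-%s" % dev_class
        pool = "vmware-%s" % dev_class
        # CRUSH rule limited to one device class
        mon_cmd(cluster, prefix="osd crush rule create-replicated",
                name=rule, root="default", type="host",
                **{"class": dev_class})
        # Pool on top of that rule, tagged for RBD use
        mon_cmd(cluster, prefix="osd pool create", pool=pool, pg_num=128)
        mon_cmd(cluster, prefix="osd pool set", pool=pool,
                var="crush_rule", val=rule)
        mon_cmd(cluster, prefix="osd pool application enable",
                pool=pool, app="rbd")
finally:
    cluster.shutdown()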

Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Dec 5, 2019 at 5:04 PM Philip Brown <pbr...@medata.com> wrote:

> Interesting.
> I thought that when you defined a pool, and then defined an RBD within
> that pool, any auto-replication stayed within that pool?
> So what kind of "load balancing" do you mean?
> I'm confused.
>
> ----- Original Message -----
> From: "Paul Emmerich" <paul.emmer...@croit.io>
> To: "Philip Brown" <pbr...@medata.com>
> Cc: "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Wednesday, December 4, 2019 12:05:47 PM
> Subject: Re: [ceph-users] best pool usage for vmware backing
>
> One pool per storage class (e.g., SSD and HDD), and at least one RBD
> per gateway per pool for load balancing (with a failover-only load
> balancing policy).
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Wed, Dec 4, 2019 at 8:51 PM Philip Brown <pbr...@medata.com> wrote:
> >
> > Let's say that you had roughly 60 OSDs that you wanted to use to provide
> > storage for VMware, through RBDs served over iSCSI.
> >
> > Target VM types are completely mixed: web front ends, app tier, a few
> > databases, and the kitchen sink.
> > Estimated number of VMs: 50-200
> >
> > How would people recommend the storage be divided up?
> >
> > The big questions are:
> >
> > * 1 pool, or multiple, and why?
> >
> > * many RBDs, few RBDs, or a single RBD per pool? Why?
> >
> >
> > --
> > Philip Brown | Sr. Linux System Administrator | Medata, Inc.
> > 5 Peters Canyon Rd Suite 250
> > Irvine CA 92606
> > Office 714.918.1310 | Fax 714.918.1325
> > pbr...@medata.com | www.medata.com
> > _______________________________________________
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
