No, you obviously don't need multiple pools for load balancing.
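
The balancing comes from having multiple RBD images (LUNs) in the same
pool, not from multiple pools. Rough sketch, assuming the gwcli syntax
from the ceph-iscsi docs (exact arguments can vary between versions, and
the pool/image names below are just placeholders):

  # on one of the iSCSI gateways
  gwcli
  /> cd /disks
  /disks> create pool=rbd image=vmware_lun0 size=500G
  /disks> create pool=rbd image=vmware_lun1 size=500G
  /disks> create pool=rbd image=vmware_lun2 size=500G
  /disks> create pool=rbd image=vmware_lun3 size=500G

ceph-iscsi assigns an owning gateway to each exported image (the ALUA
active/optimized path), so with the failover-only path policy on the
ESXi side the LUNs, and therefore the I/O, end up spread across the
gateways. The pool only controls where the data lives (replication rule,
device class), not which gateway serves it.
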
-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Thu, Dec 5, 2019 at 6:46 PM Philip Brown <pbr...@medata.com> wrote:

> Hmm...
>
> I reread the docs in and around
> https://docs.ceph.com/docs/master/rbd/iscsi-targets/
>
> and it mentions iSCSI multipathing through multiple Ceph storage
> gateways... but it doesn't seem to say anything about needing multiple POOLS.
>
> When you wrote,
> " 1 pool per storage class (e.g., SSD and HDD), at least one RBD per
> gateway per pool for load balancing (failover-only load balancing
> policy)."
>
>
> you seemed to imply that we need to set up multiple pools to get "load
> balancing", but that advice seems to be missing some details.
>
> Let me see if I can infer the missing details from your original post.
>
> Perhaps you are suggesting that we use storage pools to emulate the old
> dual-controller hardware RAID array best practice of assigning half the
> LUNs to one controller and half to the other, for "load balancing".
> Except in this case, we would tell half our VMs to use pool1 and half
> our VMs to use pool2, and then somehow (?) assign a preference for pool1
> to use Ceph gateway1, and for pool2 to prefer Ceph gateway2.
>
> Is that what you are saying?
>
> It would make sense...
> except that I don't see anything in the docs that says how to make such an
> association between pools and a theoretical preferred iSCSI gateway.
>
>
>
>
>
>
> ----- Original Message -----
> From: "Paul Emmerich" <paul.emmer...@croit.io>
> To: "Philip Brown" <pbr...@medata.com>
> Cc: "ceph-users" <ceph-users@lists.ceph.com>
> Sent: Thursday, December 5, 2019 8:16:09 AM
> Subject: Re: [ceph-users] best pool usage for vmware backing
>
> ceph-iscsi doesn't support round-robin multi-pathing, so you need at least
> one LUN per gateway to utilize all of them.
>
> Please see https://docs.ceph.com for basics about RBDs and pools.
>
> Paul
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
>
> On Thu, Dec 5, 2019 at 5:04 PM Philip Brown <pbr...@medata.com> wrote:
>
> > Interesting.
> > I thought that when you defined a pool, and then defined an RBD within that
> > pool, any auto-replication stayed within that pool?
> > So what kind of "load balancing" do you mean?
> > I'm confused.
> >
> >
> >
> >
> > ----- Original Message -----
> > From: "Paul Emmerich" <paul.emmer...@croit.io>
> > To: "Philip Brown" <pbr...@medata.com>
> > Cc: "ceph-users" <ceph-users@lists.ceph.com>
> > Sent: Wednesday, December 4, 2019 12:05:47 PM
> > Subject: Re: [ceph-users] best pool usage for vmware backing
> >
> > 1 pool per storage class (e.g., SSD and HDD), at least one RBD per
> > gateway per pool for load balancing (failover-only load balancing
> > policy).
> >
> > Paul
> >
> > --
> > Paul Emmerich
> >
> > Looking for help with your Ceph cluster? Contact us at https://croit.io
> >
> > croit GmbH
> > Freseniusstr. 31h
> > 81247 München
> > www.croit.io
> > Tel: +49 89 1896585 90
> >
> > On Wed, Dec 4, 2019 at 8:51 PM Philip Brown <pbr...@medata.com> wrote:
> > >
> > > Let's say that you have roughly 60 OSDs that you want to use to provide
> > > storage for VMware, through RBDs served over iSCSI.
> > >
> > > Target VM types are completely mixed: web front ends, app tier, a few
> > > databases, and the kitchen sink.
> > > Estimated number of VMs: 50-200
> > >
> > > How would people recommend the storage be divided up?
> > >
> > > The big questions are:
> > >
> > > * 1 pool, or multiple, and why?
> > >
> > > * many RBDs, few RBDs, or a single RBD per pool? Why?
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > > Philip Brown | Sr. Linux System Administrator | Medata, Inc.
> > > 5 Peters Canyon Rd Suite 250
> > > Irvine CA 92606
> > > Office 714.918.1310 | Fax 714.918.1325
> > > pbr...@medata.com | www.medata.com
> > > _______________________________________________
> > > ceph-users mailing list
> > > ceph-users@lists.ceph.com
> > > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
