Interesting. I thought that when you defined a pool, and then defined an RBD image within that pool, any replication stayed within that pool? So what kind of "load balancing" do you mean? I'm confused.
----- Original Message -----
From: "Paul Emmerich" <[email protected]>
To: "Philip Brown" <[email protected]>
Cc: "ceph-users" <[email protected]>
Sent: Wednesday, December 4, 2019 12:05:47 PM
Subject: Re: [ceph-users] best pool usage for vmware backing

1 pool per storage class (e.g., SSD and HDD), at least one RBD per gateway per pool for load balancing (failover-only load balancing policy).

Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Wed, Dec 4, 2019 at 8:51 PM Philip Brown <[email protected]> wrote:
>
> Let's say that you had roughly 60 OSDs that you wanted to use to provide
> storage for VMware, through RBDs served over iSCSI.
>
> Target VM types are completely mixed: web front ends, app tier, a few
> databases, and the kitchen sink.
> Estimated number of VMs: 50-200
>
> How would people recommend the storage be divided up?
>
> The big questions are:
>
> * 1 pool, or multiple, and why?
>
> * many RBDs, few RBDs, or a single RBD per pool? Why?
>
> --
> Philip Brown | Sr. Linux System Administrator | Medata, Inc.
> 5 Peters Canyon Rd Suite 250
> Irvine CA 92606
> Office 714.918.1310 | Fax 714.918.1325
> [email protected] | www.medata.com
> _______________________________________________
> ceph-users mailing list
> [email protected]
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
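For what it's worth, Paul's suggested layout could be sketched with standard Ceph CLI commands roughly like this. This is an untested sketch, not a recipe: the pool names, PG counts, gateway names, and image sizes are all placeholders you would tune for your ~60 OSDs, and it assumes your OSDs already report `ssd`/`hdd` device classes.

```shell
# One CRUSH rule per device class (assumes OSDs carry ssd/hdd device classes)
ceph osd crush rule create-replicated rule-ssd default host ssd
ceph osd crush rule create-replicated rule-hdd default host hdd

# One pool per storage class; PG counts here are purely illustrative
ceph osd pool create vmware-ssd 256 256 replicated rule-ssd
ceph osd pool create vmware-hdd 512 512 replicated rule-hdd
ceph osd pool application enable vmware-ssd rbd
ceph osd pool application enable vmware-hdd rbd

# At least one RBD image per iSCSI gateway per pool; with two gateways
# (hypothetical names gw1/gw2) that gives four LUNs:
rbd create vmware-ssd/gw1-lun0 --size 2T
rbd create vmware-ssd/gw2-lun0 --size 2T
rbd create vmware-hdd/gw1-lun0 --size 4T
rbd create vmware-hdd/gw2-lun0 --size 4T
```

The idea behind "one RBD per gateway per pool" with a failover-only policy is that each gateway is configured as the preferred (active) path for "its" images, so I/O load is spread across gateways in normal operation while each LUN can still fail over to the surviving gateway.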
