Hi, this looks reasonable to me, but I would prefer B. In that case the operator can configure the hard limit. I don't think we need more granularity, or need to expose it via the API.
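
To make that concrete, here is a rough sketch of how such an operator-tunable limit could be registered and checked at attach time. The option name, default, and helper below are only illustrative assumptions on my part, not actual Nova code:

    # Hypothetical sketch of option B: an operator-tunable hard limit,
    # checked when a volume attach is requested. All names are illustrative.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.IntOpt('max_volumes_per_instance',
                    default=26,
                    min=1,
                    help='Maximum number of volumes that can be attached '
                         'to a single instance on this compute node.')],
        group='compute')


    def check_volume_attach_limit(num_attached):
        # Reject the attach if it would exceed the configured per-node limit.
        if num_attached >= CONF.compute.max_volumes_per_instance:
            raise ValueError(
                'Attach limit of %d volumes per instance reached'
                % CONF.compute.max_volumes_per_instance)

Operators could then tune the value in nova.conf per compute node, which matches the per-node, per-hardware tuning Dan describes below.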
Belmiro

On Fri, Jun 8, 2018 at 3:46 PM Dan Smith <d...@danplanet.com> wrote:
>
> > Some ideas that have been discussed so far include:
>
> FYI, these are already in my order of preference.
>
> > A) Selecting a new, higher maximum that still yields reasonable
> > performance on a single compute host (64 or 128, for example). Pros:
> > helps prevent the potential for poor performance on a compute host
> > from attaching too many volumes. Cons: doesn't let anyone opt-in to a
> > higher maximum if their environment can handle it.
>
> I prefer this because I think it can be done per virt driver, for
> whatever actually makes sense there. If powervm can handle 500 volumes
> in a meaningful way on one instance, then that's cool. I think libvirt's
> limit should likely be 64ish.
>
> > B) Creating a config option to let operators choose how many volumes
> > allowed to attach to a single instance. Pros: lets operators opt-in to
> > a maximum that works in their environment. Cons: it's not discoverable
> > for those calling the API.
>
> This is a fine compromise, IMHO, as it lets operators tune it per
> compute node based on the virt driver and the hardware. If one compute
> is using nothing but iSCSI over a single 10g link, then they may need to
> clamp that down to something more sane.
>
> Like the per virt driver restriction above, it's not discoverable via
> the API, but if it varies based on compute node and other factors in a
> single deployment, then making it discoverable isn't going to be very
> easy anyway.
>
> > C) Create a configurable API limit for maximum number of volumes to
> > attach to a single instance that is either a quota or similar to a
> > quota. Pros: lets operators opt-in to a maximum that works in their
> > environment. Cons: it's yet another quota?
>
> Do we have any other quota limits that are per-instance like this would
> be? If not, then this would likely be weird, but if so, then this would
> also be an option, IMHO. However, it's too much work for what is really
> not a hugely important problem, IMHO, and both of the above are
> lighter-weight ways to solve this and move on.
>
> --Dan