Mathieu,

Blame it on my scattered brain, but I'm now curious: how would this be approached in practice? I.e., how would ram_weight_multiplier enable the scenario I mentioned in my earlier post?
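For my own understanding, here's a rough sketch of how I read the weigher's behavior (simplified; the real RAMWeigher normalizes weights across hosts, and the host names and numbers below are made up). The sign of the multiplier just flips which host wins:

```python
# Simplified sketch of nova's RAMWeigher scoring: weight is the
# multiplier times a host's free RAM, and the scheduler prefers
# the highest-weighted host.
def ram_weight(free_ram_mb, ram_weight_multiplier):
    return ram_weight_multiplier * free_ram_mb

# Hypothetical hosts with free RAM in MB.
hosts = {"host-a": 120000, "host-b": 8000}

def pick(multiplier):
    # Pick the host with the highest weight.
    return max(hosts, key=lambda h: ram_weight(hosts[h], multiplier))

print(pick(1.0))   # positive multiplier: most free RAM wins (spread)
print(pick(-1.0))  # negative multiplier: least free RAM wins (stack)
```

So with ram_weight_multiplier = -1.0, small instances keep landing on already-loaded hosts until they fill up, leaving emptier hosts free for the large-capacity instance types. Is that the idea?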
//adam

*Adam Lawson*
AQORN, Inc.
427 North Tatnall Street
Ste. 58461
Wilmington, Delaware 19801-2230
Toll-free: (844) 4-AQORN-NOW ext. 101
International: +1 302-387-4660
Direct: +1 916-246-2072

On Thu, Mar 3, 2016 at 10:43 AM, Silence Dogood <[email protected]> wrote:
> cool!
>
> On Thu, Mar 3, 2016 at 1:39 PM, Mathieu Gagné <[email protected]> wrote:
>> On 2016-03-03 12:50 PM, Silence Dogood wrote:
>>> We did some early affinity work and discovered some interesting problems
>>> with affinity and scheduling. =/ by default openstack used to ( may
>>> still ) deploy nodes across hosts evenly.
>>>
>>> Personally, I think this is a bad approach. Most cloud providers stack
>>> across a couple racks at a time, filling them then moving to the next.
>>> This allows older equipment to age out instances more easily for removal
>>> / replacement.
>>>
>>> The problem then is, if you have super large capacity instances they can
>>> never be deployed once you've got enough tiny instances deployed across
>>> the environment. So now you are fighting with the scheduler to ensure
>>> you have deployment targets for specific instance types ( not very
>>> elastic / ephemeral ). goes back to the wave scheduling model being
>>> superior.
>>>
>>> Anyways we had the braindead idea of locking whole physical nodes out
>>> from the scheduler for a super ( full node ) instance type. And I
>>> suppose you could do this with AZs or regions if you really needed to.
>>> But, it's not a great approach.
>>>
>>> I would say that you almost need a wave style scheduler to do this sort
>>> of affinity work.
>>>
>>
>> You can already do it with the RAMWeigher using the
>> ram_weight_multiplier config:
>>
>>     Multiplier used for weighing ram. Negative
>>     numbers mean to stack vs spread.
>>
>> Default is 1.0, which means spread.
>>
>> --
>> Mathieu
_______________________________________________
OpenStack-operators mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
