Sorry, that was "I prefer #3" (not #2) at the end there. Keyboard failure ;)

On 11/19/13 10:27 AM, "Joshua Harlow" <harlo...@yahoo-inc.com> wrote:
> Personally I would prefer #3 from the below. #2 I think will still have
> to deal with consistency issues; just switching away from a DB doesn't
> make magical ponies and unicorns appear (in fact it can potentially make
> the problem worse if it's done incorrectly, and it's pretty easy to get
> it wrong IMHO). #1 could also work, but then you hit a vertical scaling
> limit (works if you paid Oracle for their DB or IBM for DB2, I suppose).
> I prefer #2 since I think it is honestly needed under all solutions.
>
> On 11/19/13 9:29 AM, "Chris Friesen" <chris.frie...@windriver.com> wrote:
>
>> On 11/18/2013 06:47 PM, Joshua Harlow wrote:
>>> An idea related to this: what would need to be done to make the DB have
>>> the exact state that a compute node is going through (so that the
>>> scheduler would not make unreliable/racy decisions, even when there are
>>> multiple schedulers)? It's not like we are dealing with a system which
>>> cannot know the exact state (as long as the compute nodes are connected
>>> to the network and a network partition does not occur).
>>
>> How would you synchronize the various schedulers with each other?
>> Suppose you have multiple scheduler nodes all trying to boot multiple
>> instances each.
>>
>> Even if at the start of the process each scheduler has a perfect view
>> of the system, each scheduler would need to have a view of what every
>> other scheduler is doing in order to not make racy decisions.
>>
>> I see a few options:
>>
>> 1) Push scheduling down into the database itself. Implement scheduler
>> filters as SQL queries or stored procedures.
>>
>> 2) Get rid of the DB for scheduling. It looks like people are working
>> on this: https://blueprints.launchpad.net/nova/+spec/no-db-scheduler
>>
>> 3) Do multi-stage scheduling. Do a "tentative" schedule, then try to
>> update the DB to reserve all the necessary resources. If that fails,
>> someone got there ahead of you, so try again with the new data.
>>
>> Chris
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
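For what it's worth, option #3 amounts to optimistic concurrency control: schedule against a snapshot, then commit the reservation only if the row hasn't changed underneath you. Here's a minimal sketch of that retry loop. All names (`HostTable`, `try_reserve`, `schedule`) are hypothetical, and the in-memory table with a version column stands in for what would really be a conditional DB UPDATE (a WHERE clause on the expected values acting as the compare-and-swap).

```python
import threading

class HostTable:
    """Hypothetical stand-in for the scheduler's resource table.

    Each host row carries free RAM and a version counter; a reservation
    succeeds only if the version is unchanged since the tentative
    schedule was computed.
    """

    def __init__(self, hosts):
        self._lock = threading.Lock()
        self._hosts = {name: {"free_mb": free, "version": 0}
                       for name, free in hosts.items()}

    def snapshot(self):
        # A point-in-time read, like each scheduler's view of the system.
        with self._lock:
            return {n: dict(row) for n, row in self._hosts.items()}

    def try_reserve(self, host, mem_mb, expected_version):
        # Atomically claim memory iff the row is unchanged (compare-and-swap).
        with self._lock:
            row = self._hosts[host]
            if row["version"] != expected_version or row["free_mb"] < mem_mb:
                return False  # another scheduler got there first
            row["free_mb"] -= mem_mb
            row["version"] += 1
            return True


def schedule(table, mem_mb, max_retries=5):
    """Multi-stage scheduling: tentative pick, then optimistic reserve."""
    for _ in range(max_retries):
        view = table.snapshot()  # stage 1: tentative schedule on a snapshot
        candidates = [h for h, r in view.items() if r["free_mb"] >= mem_mb]
        if not candidates:
            return None
        best = max(candidates, key=lambda h: view[h]["free_mb"])
        if table.try_reserve(best, mem_mb, view[best]["version"]):
            return best  # stage 2: reservation committed
        # Stale view: someone beat us to it, so retry with fresh data.
    return None
```

With two schedulers racing for the same host, both would pick it from their snapshots, but only one `try_reserve` wins; the loser re-reads and retries, which is exactly the "try again with the new data" step, with no coordination between schedulers beyond the atomic update itself.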