Joe,

Sure, we will.

Mike,

Thanks for sharing information about the scalability problems; the presentation was great. Also, could you say whether you think 150 req/sec is a big load for Qpid or RabbitMQ? I think it is just nothing.
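For what it's worth, a back-of-envelope sketch (plain Python; the numbers are the ones from this thread, 10k nodes reporting once a minute) of the message rate the periodic updates imply:

```python
# Back-of-envelope estimate of the periodic-update message rate,
# using the figures discussed in this thread.

nodes = 10_000
update_interval_s = 60  # one status update per node per minute

updates_per_sec = nodes / update_interval_s
print(f"{updates_per_sec:.0f} updates/sec")  # -> 167 updates/sec
```

That is the same order of magnitude as the 150 req/sec figure above, which is why I don't expect it to stress the broker.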
Best regards,
Boris Pavlovic
---
Mirantis Inc.

On Wed, Jul 24, 2013 at 12:17 AM, Joe Gordon <joe.gord...@gmail.com> wrote:

> On Tue, Jul 23, 2013 at 1:09 PM, Boris Pavlovic <bo...@pavlovic.me> wrote:
>
>> Ian,
>>
>> There are serious scalability and performance problems with DB usage in
>> the current scheduler. Rapid updates plus joins make the current solution
>> absolutely unscalable.
>>
>> The Bluehost example shows, for me personally, a trivial thing: it just
>> won't work.
>>
>> Tomorrow we will add another graphic: avg user req/sec in the current
>> approach and in ours.
>
> Will you be releasing your code to generate the results? Without that the
> graphic isn't very useful.
>
>> I hope it will help you to better understand the situation.
>>
>> Joshua,
>>
>> Our current discussion is about whether we can safely remove information
>> about compute nodes from Nova. Both our approach and yours remove data
>> from the Nova DB.
>>
>> Your approach also adds:
>> 1) network load
>> 2) latency
>> 3) one more service (memcached)
>>
>> So I am not sure that it is better than just sending the information
>> directly to the scheduler.
>>
>> Best regards,
>> Boris Pavlovic
>> ---
>> Mirantis Inc.
>>
>> On Tue, Jul 23, 2013 at 11:56 PM, Joe Gordon <joe.gord...@gmail.com> wrote:
>>
>>> On Jul 23, 2013 3:44 PM, "Ian Wells" <ijw.ubu...@cack.org.uk> wrote:
>>> >
>>> > > * periodic updates can overwhelm things. Solution: remove unneeded
>>> > > updates; most scheduling data only changes when an instance does
>>> > > some state change.
>>> >
>>> > It's not clear that periodic updates do overwhelm things, though.
>>> > Boris ran the tests. Apparently 10k nodes updating once a minute
>>> > extend the read query by ~10% (the main problem being that the read
>>> > query is abysmal in the first place). I don't know how much of the
>>> > rest of the infrastructure was involved in his test, though
>>> > (RabbitMQ, Conductor).
>>>
>>> A great OpenStack-at-scale talk that covers the scheduler:
>>> http://www.bluehost.com/blog/bluehost/bluehost-presents-operational-case-study-at-openstack-summit-2111
>>>
>>> > There are reasonably solid reasons why we would want an alternative
>>> > to the DB backend, but I'm not sure the update rate is one of them.
>>> > If we were going for an alternative, the obvious candidate to my mind
>>> > would be something like ZooKeeper (particularly since in some setups
>>> > it's already a channel between the compute hosts and the control
>>> > server).
>>> > --
>>> > Ian.
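To make the "rapid updates + joins" concern concrete, here is a minimal sketch (Python with the stdlib sqlite3 module; the table and column names are illustrative, not Nova's actual schema) of the read pattern being criticized: every scheduling decision reads all compute nodes joined against a per-node stats table, so the per-request cost grows with node count, while periodic writes contend with those reads.

```python
import sqlite3

# Illustrative schema (NOT Nova's actual tables): one row per compute
# node, plus a key/value stats table joined on every scheduler read.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE compute_nodes (
    id INTEGER PRIMARY KEY, host TEXT, free_ram_mb INTEGER);
CREATE TABLE compute_node_stats (
    node_id INTEGER, key TEXT, value TEXT);
""")

# 10k nodes, each with a stats row, mirroring the scale in this thread.
db.executemany("INSERT INTO compute_nodes VALUES (?, ?, ?)",
               [(i, f"host{i}", 4096) for i in range(10_000)])
db.executemany("INSERT INTO compute_node_stats VALUES (?, ?, ?)",
               [(i, "num_instances", "3") for i in range(10_000)])

# The read every scheduling request performs: a full scan plus a join.
rows = db.execute("""
    SELECT n.host, n.free_ram_mb, s.key, s.value
    FROM compute_nodes n
    JOIN compute_node_stats s ON s.node_id = n.id
""").fetchall()
print(len(rows))  # 10000 joined rows read per scheduling decision
```

Reading the full joined state on every request is what makes the read query "abysmal" independently of the update rate; sending the state directly to the scheduler avoids that read path entirely.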
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev