Good points, Soren, and this is why I was suggesting we not solve this problem with a cache, but instead use an eventually consistent replication stream of the aggregate data.
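For concreteness, the DNS-style lookup Ed describes below might look roughly like this sketch. This is not real Nova code -- the class and method names are invented for illustration -- but it shows the mechanics Soren is objecting to: a cold lookup fans out to every child zone, and results are cached at each hop on the way back up.

```python
class Zone:
    """A node in the zone hierarchy (hypothetical, for illustration)."""

    def __init__(self, name, children=None, local_instances=None):
        self.name = name
        self.children = children or []
        # Instances hosted directly in this zone: id -> instance data.
        self.local_instances = local_instances or {}
        # Cache of learned routes: id -> child zone that can resolve it.
        self.cache = {}

    def lookup(self, instance_id):
        # 1. Hosted here?
        if instance_id in self.local_instances:
            return self.local_instances[instance_id]
        # 2. Cache hit: we already know which child resolves this id.
        if instance_id in self.cache:
            return self.cache[instance_id].lookup(instance_id)
        # 3. Cache miss: ask every child. This is the fan-out problem --
        #    each cold lookup touches the entire subtree below this zone.
        for child in self.children:
            result = child.lookup(instance_id)
            if result is not None:
                # Every upstream zone learns the route for next time.
                self.cache[instance_id] = child
                return result
        return None


# Tiny three-level hierarchy: root -> mid -> leaf.
leaf = Zone("leaf", local_instances={12345: "instance-12345"})
mid = Zone("mid", children=[leaf])
root = Zone("root", children=[mid])

# First lookup walks the whole tree; afterwards root and mid both
# have a cached route to 12345.
root.lookup(12345)
```

Note that this only pays off for repeated lookups of the same id; the replication-stream approach avoids the cold-miss fan-out entirely by pushing aggregate data up front.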
-Eric

On Wed, Feb 16, 2011 at 11:12:50PM +0100, Soren Hansen wrote:
> 2011/2/16 Ed Leafe <e...@leafe.com>:
> > This was one of the issues we discussed during the sprint planning. I
> > believe (check with cyn) that the consensus was to use a caching strategy
> > akin to DNS: e.g., if zone A got a request for instance ID=12345, it would
> > check to see if it had id 12345 in its cache. If not, it would ask all of
> > its child nodes if they knew about that instance. That would repeat until
> > the instance was found, at which point every upstream server would now know
> > about where to reach 12345.
>
> Has any formal analysis been done as to how this would scale?
>
> I have a couple of problems with this approach:
>
>  * Whenever I ask something for information and I get out-of-date,
>    cached data back I feel like I'm back in 2003. And 2003 sucked, I
>    might add.
>  * Doesn't this caching strategy only help if people are asking for
>    the same stuff over and over? It doesn't sound very awesome if 100
>    requests for new stuff coming in at roughly the same time causes a
>    request to be sent to every single compute node (or wherever the data
>    actually resides). I'm assuming breadth-first search here, of course.
>
> --
> Soren Hansen
> Ubuntu Developer    http://www.ubuntu.com/
> OpenStack Developer http://www.openstack.org/

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp