Can someone explain _why_ we need caching? With our approach to pagination, without caching, the answer is always correct: each query returns the next {limit} values whose ID is >= {start-id}.
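As a concrete sketch of the marker-based scheme described above (the table name, schema, and helper are illustrative, not anything from this thread):

```python
import sqlite3

def list_page(conn, start_id=0, limit=100):
    """Keyset pagination: return the next `limit` rows whose id >= start_id.

    Because the predicate is on a stable, indexed key (id), each page is
    correct at the moment it is served, even if rows change between requests.
    """
    rows = conn.execute(
        "SELECT id, name FROM instances WHERE id >= ? ORDER BY id LIMIT ?",
        (start_id, limit),
    ).fetchall()
    # The caller resumes from one past the last id seen.
    next_marker = rows[-1][0] + 1 if rows else None
    return rows, next_marker

# Illustrative setup with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO instances VALUES (?, ?)",
                 [(i, f"vm-{i}") for i in range(1, 8)])

page1, marker = list_page(conn, start_id=0, limit=3)       # ids 1, 2, 3
page2, marker = list_page(conn, start_id=marker, limit=3)  # ids 4, 5, 6
```

Each page reflects the table as it is at query time; no server-side state is held between requests.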
I agree that in practice this means that there's no way to guarantee you get all values while they're changing behind the scenes, but this is a shortcoming of pagination, not caching. Caching doesn't solve this, it just creates thornier edge cases. The solution here is a more sensible ordering than 'last modified', and I question the value of pagination (other than for compatibility).

Justin

---
Justin Santa Barbara
Founder, FathomDB


On Wed, Mar 16, 2011 at 11:14 AM, Andrew Shafer <and...@cloudscaling.com> wrote:

> Global temporal consistency is a myth.
>
> If you decide not to cache and support pagination then querying every zone
> for every page is potentially as misleading as caching because what should
> be on each page could change for every request.
>
> +1 for cache with ttl
>
>
> On Wed, Mar 16, 2011 at 11:58 AM, Paul Voccio <paul.voc...@rackspace.com> wrote:
>
>> Ed,
>>
>> I would agree. The caching would go with the design tenet #7: Accept
>> eventual consistency and use it where it is appropriate.
>>
>> If we're ok with accepting that the list may or may not always be up to
>> date and feel its appropriate, we should be good with the caching.
>>
>> pvo
>>
>>
>> On 3/16/11 11:45 AM, "Ed Leafe" <ed.le...@rackspace.com> wrote:
>>
>> > On Mar 16, 2011, at 12:23 PM, Paul Voccio wrote:
>> >
>> >> Not only is this expensive, but there is no way I can see at the moment
>> >> to do pagination, which is what makes this really expensive. If someone
>> >> asked for an entire list of all their instances and it was > 10,000 then
>> >> I would think they're ok with waiting while that response is gathered
>> >> and returned. However, since the API spec says we should be able to do
>> >> pagination, this is where asking each zone for all its children every
>> >> time gets untenable.
>> >
>> > This gets us into the caching issues that were discussed at the last
>> > summit. We could run the query and then cache the results at the
>> > endpoint, but this would require accepting some level of staleness of the
>> > results. The cache would handle the paging, and some sort of TTL would
>> > have to be established as a balance between performance and staleness.
>> >
>> > -- Ed Leafe
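The "cache with ttl" approach discussed in the quoted thread can be sketched roughly like this (the class, the TTL value, and the fetch function are all illustrative assumptions, not an actual OpenStack implementation):

```python
import time

class TTLCache:
    """Cache one full query result and serve pages out of it until it expires.

    Serving every page from a single snapshot trades staleness (bounded by
    ttl) for consistent paging: all pages reflect the same point in time,
    at the cost of possibly showing out-of-date data.
    """

    def __init__(self, fetch_all, ttl=30.0, now=time.monotonic):
        self.fetch_all = fetch_all   # the expensive query across all zones
        self.ttl = ttl
        self.now = now
        self._snapshot = None
        self._expires_at = 0.0

    def page(self, offset=0, limit=100):
        # Refresh the snapshot only when it is missing or the TTL has lapsed.
        if self._snapshot is None or self.now() >= self._expires_at:
            self._snapshot = self.fetch_all()
            self._expires_at = self.now() + self.ttl
        return self._snapshot[offset:offset + limit]

# Illustrative use: pages stay mutually consistent while the snapshot lives.
cache = TTLCache(lambda: list(range(10)), ttl=30.0)
first = cache.page(offset=0, limit=4)   # [0, 1, 2, 3]
second = cache.page(offset=4, limit=4)  # [4, 5, 6, 7]
```

The ttl value is exactly the knob Ed describes: larger means cheaper and staler, smaller means fresher and closer to re-querying every zone per page.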
_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp