So on review of a few code paths I see that the default search depth (request_node_count) is `2 x <num_replicas>`, which is smallish on single-replica systems. There are some error-limiting situations that might have allowed the write to go deeper than the expected 2 nodes (the primary and one handoff), and that deeper placement may not have been reflected in the proxy worker that later serviced the read (or the error limits may have aged out by then).
This inconsistency in search depth based on the per-worker error limiting may be something worth looking into generally - but it's probably mostly hidden on clusters that are going 3 or more nodes deep into handoffs in the default case.

If you're able to reliably reproduce the failure you might try increasing your [app:proxy_server] request_node_count setting to a fixed value of perhaps 3, and see if that works better for you (see the sketch at the bottom of this mail).

-Clay

On Tue, May 24, 2016 at 5:14 PM, Clay Gerrard <clay.gerr...@gmail.com> wrote:
>
> On Tue, May 24, 2016 at 4:56 PM, Shrinand Javadekar <shrin...@maginatics.com> wrote:
>
>> Sorry... I missed the first question
>
> no worries
>
>> Yes, I was running on a single replica system.
>
> Ah! That's great information. I have *zero* experience with single
> replica systems. The logs should be even *more* interesting. FWIW I have
> no idea what your expectations should be for a single replica Swift
> cluster. But if you can produce scenarios which reliably fail to meet your
> expectations quality bug reports are greatly appreciated!
>
> GL!
>
> -Clay
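Here's the rough sketch of the request_node_count override mentioned above. It assumes the conventional /etc/swift/proxy-server.conf layout and the standard [app:proxy-server] section name (yours may differ), so treat it as a starting point rather than a drop-in config:

    [app:proxy-server]
    use = egg:swift#proxy
    # default is "2 * replicas", which on a single-replica ring means the
    # proxy only ever tries 2 nodes (primary + one handoff); pinning an
    # absolute count lets the read search one node deeper
    request_node_count = 3

Remember to restart (or reload) the proxy servers after changing it so the new value actually takes effect.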