Sorry... I missed the first question

>> Are you running only a single replica!?  Was the object data *only* on the 
>> second handoff?!  If the original PUT request did not return success it's 
>> much more likely that you would have an unspecified behavior on the read 
>> path.
Yes, I was running a single-replica system. The object was *only*
found on the second handoff node (expected, I guess, since the number
of replicas is 1). The original PUT request returned SUCCESS. I only
try to read the object if the original PUT succeeded.
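
In case anyone wants to double-check the same thing on their own
cluster: the primary and handoff devices for an object can be listed
straight from the object ring. A minimal sketch, assuming Swift is
importable on the node, the object ring is at
/etc/swift/object.ring.gz, and using placeholder
account/container/object names:

    from swift.common.ring import Ring

    # Load the object ring; adjust the path if your rings live elsewhere.
    ring = Ring('/etc/swift/object.ring.gz')

    # Primary assignment(s) for the object. With one replica this
    # list has a single entry.
    part, primaries = ring.get_nodes('AUTH_test', 'cont', 'obj')
    for node in primaries:
        print('primary: %(ip)s:%(port)s/%(device)s' % node)

    # Handoff candidates, yielded in the stable order the servers try
    # them; the "second handoff" is the second node printed here.
    for i, node in enumerate(ring.get_more_nodes(part)):
        if i >= 4:
            break
        print('handoff %d: %s:%s/%s'
              % (i + 1, node['ip'], node['port'], node['device']))

The swift-get-nodes tool that ships with Swift reports much the same
information for a given ring and account/container/object.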


On Tue, May 24, 2016 at 4:51 PM, Shrinand Javadekar
<shrin...@maginatics.com> wrote:
> Thanks for the detailed explanation...
>
>>>
>>>
>>> 1. So when the replicator catches up, it will move the object back to
>>> the correct location. Is that right?
>>
>>
>> The read path will find the object on any primary or any handoff location.
>> The replicator *will* copy the data files to the primary and delete it from
>> the handoff once it's successfully in sync.  But GETs for the object will be
>> able to find the object during that entire process.  Having data written to
>> a handoff location does not mean it is inaccessible - quite the opposite -
>> stable handoff ordering is the mechanism that enables data to be accessible
>> during failure of primary storage devices.
>
> This is unlike what I've seen in this setup. I have some code that
> tried to read the object 5 times from Swift with exponential backoff.
> But it failed with a 404 on all occasions, which is why it gave up. I
> also tried manually using the swift command line tool and got back an
> object "not found" error.
> The object was found on the *second* handoff node, not the first. Does
> that matter?
>
> The replicator eventually did transfer the blob to the original node.
> After that things were just fine...
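
For completeness, the retry behaviour mentioned above was essentially
this kind of loop. The sketch below is illustrative rather than the
exact code that hit the 404s; it assumes python-swiftclient and an
already-authenticated storage URL and token:

    import time

    from swiftclient import client
    from swiftclient.exceptions import ClientException

    def get_with_backoff(storage_url, token, container, obj, attempts=5):
        """GET an object, retrying 404s with exponential backoff."""
        delay = 1
        for attempt in range(attempts):
            try:
                # Returns (headers, body) on success.
                return client.get_object(storage_url, token, container, obj)
            except ClientException as err:
                if err.http_status != 404:
                    raise
                time.sleep(delay)
                delay *= 2
        raise ClientException('still 404 after %d attempts' % attempts)

As far as I know, python-swiftclient's Connection class has its own
retries parameter, but it does not retry on 404, so an explicit loop
like this is still needed for that case.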
