Luke,

As a test I’ve already increased all timeouts to 5 minutes, but the failure 
occurs in under 1 minute, so it doesn’t appear to be timeout related. I’ll 
switch the logs to tcplog tomorrow and let you know if I find anything.
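
For reference, the change is just swapping the log option in the relevant 
frontend (the section name below is illustrative, not my actual config):

    frontend s3_in
        option tcplog    # replacing: option httplog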

Thanks

John Fanjoy
Systems Engineer
jfan...@inetu.net





On 1/13/16, 6:05 PM, "Luke Bakken" <lbak...@basho.com> wrote:

>haproxy ships with some "short" default timeouts. If Cyberduck is able
>to upload these files faster than aws-sdk, it may be doing so within
>the default haproxy timeouts.
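>
>For reference, those timeouts live in the defaults (or per-proxy)
>section of the config; a typical shipped configuration looks something
>like this (values illustrative):
>
>    defaults
>        timeout connect 5s      # TCP connect to the backend
>        timeout client  50s     # client-side inactivity
>        timeout server  50s     # server-side inactivity
>
>A multi-part PUT that stalls for longer than the client/server
>inactivity timeouts will be cut off mid-transfer.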
>
>You can also look at haproxy's log to see if you find any TCP
>connections that it has closed.
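>
>If you're logging with tcplog, the two-character termination-state
>field in each line tells you which side closed the session; per the
>haproxy documentation, "cD"/"sD" mean a client- or server-side timeout
>expired mid-transfer, while "CD"/"SD" mean that side aborted. Something
>like this should surface them (log path depends on your syslog setup):
>
>    grep -E ' [csCS]D ' /var/log/haproxy.log
>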
>--
>Luke Bakken
>Engineer
>lbak...@basho.com
>
>
>On Wed, Jan 13, 2016 at 3:02 PM, John Fanjoy <jfan...@inetu.net> wrote:
>> Luke,
>>
>> I may be able to do that. The only problem is that without haproxy I have 
>> no way to inject the CORS headers the browser requires, but I may be able 
>> to write up a small nodejs app to get past that and see whether this is 
>> somehow related to haproxy. The fact that these errors are not present 
>> when using Cyberduck, which is also talking to haproxy, leads me to 
>> believe that’s not the cause, but it’s definitely worth testing.
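>>
>> If I go that route, I’m thinking of something as simple as this 
>> (untested sketch; port 8080 assumes the default riak-s2 listener, and a 
>> real version would also need to answer OPTIONS preflights):
>>
>>     // tiny nodejs shim: proxy to riak-s2, adding CORS headers on the way out
>>     var http = require('http');
>>     http.createServer(function (req, res) {
>>       var proxied = http.request({
>>         host: '127.0.0.1', port: 8080,
>>         method: req.method, path: req.url, headers: req.headers
>>       }, function (upstream) {
>>         var headers = upstream.headers;
>>         headers['access-control-allow-origin'] = '*';
>>         headers['access-control-allow-headers'] = '*';
>>         headers['access-control-expose-headers'] = 'ETag';
>>         res.writeHead(upstream.statusCode, headers);
>>         upstream.pipe(res);
>>       });
>>       req.pipe(proxied);
>>     }).listen(9090);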
>>
>> --
>> John Fanjoy
>> Systems Engineer
>> jfan...@inetu.net
>>
>>
>>
>>
>>
>> On 1/13/16, 5:55 PM, "Luke Bakken" <lbak...@basho.com> wrote:
>>
>>>John -
>>>
>>>The following error indicates that the connection was unexpectedly
>>>closed by something outside of Riak while the chunk is uploading:
>>>
>>>{badmatch,{error,closed}}
>>>
>>>Is it possible to remove haproxy to test using the aws-sdk?
>>>
>>>That is my first thought as to the cause of this issue, especially
>>>since writing to S3 works with the same code.
>>>
>>>--
>>>Luke Bakken
>>>Engineer
>>>lbak...@basho.com
>>>
>>>On Wed, Jan 13, 2016 at 2:46 PM, John Fanjoy <jfan...@inetu.net> wrote:
>>>> Luke,
>>>>
>>>> Yes on both parts. To confirm Cyberduck was using multi-part I actually 
>>>> tailed the console.log while it was uploading the file, and it uploaded 
>>>> the file in approx. 40 parts; afterwards the parts were reassembled as 
>>>> you would expect. The aws-sdk for javascript has an object called 
>>>> ManagedUpload which automatically switches to multi-part when the input 
>>>> is larger than the partSize option (default 5 MB). I have confirmed that 
>>>> it is splitting the files up, but so far I’ve only ever seen one part get 
>>>> successfully uploaded before the others failed, at which point it removes 
>>>> the upload (DELETE call) automatically. I also verified that the 
>>>> javascript I have in place works with an actual AWS S3 bucket, to rule 
>>>> out coding issues on my end, and the same >400 MB file was successfully 
>>>> uploaded to the bucket I created there without issue.
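>>>>
>>>> For reference, the upload call is essentially the following (the 
>>>> endpoint and bucket are placeholders, not my real values):
>>>>
>>>>     // aws-sdk for javascript (v2): s3.upload() wraps ManagedUpload
>>>>     var s3 = new AWS.S3({
>>>>       endpoint: 'https://s3.example.com',  // haproxy front end (placeholder)
>>>>       s3ForcePathStyle: true
>>>>     });
>>>>     s3.upload(
>>>>       { Bucket: 'test-bucket', Key: file.name, Body: file },
>>>>       { partSize: 5 * 1024 * 1024, queueSize: 4 },  // multi-part above partSize
>>>>       function (err, data) { console.log(err || data); }
>>>>     );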
>>>>
>>>> A few things worth mentioning that I missed before. I am running riak-s2 
>>>> behind haproxy, which is handling SSL and adding the CORS headers for 
>>>> browser-based requests. I have tested smaller files (~4-5 MB) and GET 
>>>> requests using the browser client, and everything works with my current 
>>>> haproxy configuration, but the larger files are failing, usually after 
>>>> one part has been successfully uploaded. I can also list bucket contents 
>>>> and delete existing contents. The only feature that does not appear to be 
>>>> working is multi-part uploads. We are running CentOS 7 (kernel version 
>>>> 3.10.0-327.4.4.el7.x86_64). Please let me know if you have any further 
>>>> questions.
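>>>>
>>>> For context, the CORS side of my haproxy config boils down to injecting 
>>>> response headers, roughly like this (simplified, haproxy 1.6-style 
>>>> syntax, not my exact config):
>>>>
>>>>     http-response set-header Access-Control-Allow-Origin "*"
>>>>     http-response set-header Access-Control-Allow-Headers "*"
>>>>     # ETag must be exposed or the browser SDK cannot read part ETags
>>>>     http-response set-header Access-Control-Expose-Headers "ETag"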
>>>>
>>>> --
>>>> John Fanjoy
>>>> Systems Engineer
>>>> jfan...@inetu.net