Luke,

Yes on both parts. To confirm that CyberDuck was using multi-part uploads, I tailed its console.log while it was uploading the file; it uploaded the file in approximately 40 parts, and afterwards the parts were reassembled as you would expect.

The AWS SDK for JavaScript has an object called ManagedUpload which automatically switches to multi-part when the input is larger than the maximum part size (default 5 MB). I have confirmed that it is splitting the files up, but so far I have only ever seen one part upload successfully before the others fail, at which point the SDK removes the upload (DELETE call) automatically. I also verified that the JavaScript I have in place does work with an actual AWS S3 bucket, to rule out coding issues on my end: the same >400 MB file was uploaded to the bucket I created there without issue.
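For anyone following along, the part arithmetic alone is informative. A small sketch — the `expectedPartCount` helper and the ~10 MB guess about CyberDuck's part size are mine, not taken from either tool:

```javascript
// ManagedUpload splits the Body into partSize chunks. partSize
// defaults to 5 MB, which is also the smallest part size S3 accepts.
function expectedPartCount(fileSizeBytes, partSizeBytes) {
  return Math.ceil(fileSizeBytes / partSizeBytes);
}

const FIVE_MB = 5 * 1024 * 1024;

// A 400 MB file at the default 5 MB part size -> 80 parts.
console.log(expectedPartCount(400 * 1024 * 1024, FIVE_MB)); // 80

// CyberDuck reassembled the same file from ~40 parts, which suggests
// it picked a larger (~10 MB) part size rather than the S3 minimum.
console.log(expectedPartCount(400 * 1024 * 1024, 10 * 1024 * 1024)); // 40

// The upload call itself looks roughly like this (not run here --
// AWS.S3.ManagedUpload needs the aws-sdk v2 package and a live
// endpoint; the bucket and key names below are placeholders):
//
//   const upload = new AWS.S3.ManagedUpload({
//     partSize: FIVE_MB,
//     queueSize: 1, // serialize parts: separates concurrency issues
//                   // from per-part failures while debugging
//     params: { Bucket: 'my-bucket', Key: file.name, Body: file }
//   });
//   upload.send((err, data) => console.log(err || data));
```

Setting `queueSize: 1` is a cheap diagnostic: if the serialized upload succeeds where the default (4 concurrent parts) fails, the problem is in how concurrent part PUTs are handled, not in the parts themselves.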
A few things worth mentioning that I missed before. I am running riak-s2 behind HAProxy, which is terminating SSL and enabling CORS for browser-based requests. I have tested smaller files (~4-5 MB) and GET requests using the browser client, and everything works with my current HAProxy configuration, but the larger files are failing, usually after one part has uploaded successfully. I can also list bucket contents and delete existing contents. The only feature that does not work appears to be multi-part uploads. We are running CentOS 7 (kernel version 3.10.0-327.4.4.el7.x86_64).

Please let me know if you have any further questions.

--
John Fanjoy
Systems Engineer
jfan...@inetu.net

On 1/13/16, 5:33 PM, "Luke Bakken" <lbak...@basho.com> wrote:

>Hi John,
>
>Does CyberDuck automatically enable multi-part uploads for files
>greater than a certain size? Does the aws-sdk support multi-part
>uploads?
>--
>Luke Bakken
>Engineer
>lbak...@basho.com
>
>
>On Wed, Jan 13, 2016 at 1:24 PM, John Fanjoy <jfan...@inetu.net> wrote:
>> Hello Everyone,
>>
>> I have been doing some searching around to see if anyone has come
>> across the specific issue I am having, and I can't find anything. I
>> have a project that provides a file upload interface through a
>> website and puts the uploaded object into riak-cs. The front end uses
>> the JavaScript aws-sdk 2.1.29 (tried .19 as well) with a patch I
>> found in the list archives to fix up some URL-encoding issues with
>> upload IDs. Everything seems to work fine when the object is less
>> than 5 MB (the max part size); however, when the object is larger,
>> the ManagedUpload functionality of the S3 object fails after the
>> first part. I've tested the code I'm using against a vanilla AWS
>> bucket, and a 400+ MB file was uploaded without issue. I tested the
>> same file with riak-cs using the aws-sdk and CyberDuck. CyberDuck was
>> able to upload the file without any failure, but it has not been
>> successful even once using the JavaScript library.
>> I have turned riak-cs debug logging on, and I am getting some errors
>> logged, but I'm not really sure how useful they are:
>>
>> ```
>> 2016-01-13 15:34:51.648 [error] <0.1255.0> Webmachine error at path
>> "/buckets/three2016com-devopsdevel/objects/DeSMan_Ep1_Create_Website_720p_DRAFT%2520%25281%2529.mov/uploads/kL9dRGLaQCiZBNnPSpu5mw=="
>> :
>> {error,{error,{badmatch,{error,closed}},[{webmachine_request,recv_unchunked_body,3,[{file,"src/webmachine_request.erl"},{line,490}]},{riak_cs_wm_object_upload_part,accept_streambody,4,[{file,"src/riak_cs_wm_object_upload_part.erl"},{line,308}]},{riak_cs_wm_object_upload_part,accept_body,2,[{file,"src/riak_cs_wm_object_upload_part.erl"},{line,224}]},{riak_cs_wm_common,accept_body,2,[{file,"src/riak_cs_wm_common.erl"},{line,342}]},{webmachine_resource,resource_call,3,[{file,"src/webmachine_re..."},...]},...]}}
>> in webmachine_request:recv_unchunked_body/3 line 490
>> ```
>>
>> This error does not show up when using CyberDuck or when performing
>> an upload that is <5 MB using JavaScript. I've tried riak-cs 1.5 and
>> now riak-s2 2.1.0 without any success. Any help you can offer would
>> be greatly appreciated. If you have any questions for me or need
>> additional information, let me know and I'll fill in any holes I
>> left.
>>
>> Thank you in advance,
>>
>> John Fanjoy
>>
>> _______________________________________________
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
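One thing worth double-checking, given the `{error,closed}` in the log above: that error means the connection was closed while riak-cs was still reading the part body, which is what the backend sees when a proxy in front of it times out a long-running PUT. A hypothetical HAProxy fragment showing the directives involved — values and layout are placeholders, not a recommended configuration:

```
# Hypothetical fragment -- values are illustrative only.
defaults
    mode http
    timeout connect 5s
    # A single multi-part PUT can run for minutes on a slow uplink.
    # If "timeout client" or "timeout server" fires mid-body, HAProxy
    # closes the connection and the backend's body read fails.
    timeout client  5m
    timeout server  5m
    # Applies while waiting for complete request headers, so it can
    # stay short without affecting body transfer.
    timeout http-request 30s
```

If smaller parts complete quickly enough to stay under the configured timeouts while larger concurrent parts do not, that would also explain why only the first part of a multi-part upload ever succeeds.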