> The s3 signer signs the "canonicalizedResource", which already has the
> query parameters encoded, so I tried replacing the "%3D" with "=" and
> it works now.

Yay! The culprit is here. Most clients mistakenly encode the multipart
uploadId even though it is already supposed to be URL-encoded. This is
the case for #1063, too. Maybe Riak CS can be aligned with how S3
behaves in order to accommodate most S3 clients - stay tuned to that
issue, please. Anyway, thank you for reporting!
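
To illustrate the mismatch, here is a minimal sketch (not taken from the
SDK internals; the bucket, key, and uploadId are made up): with the old S3
scheme the client signs a string that contains the canonicalizedResource,
and if the uploadId query pair gets percent-encoded again while that
resource string is built, the signature no longer matches what the server
computes and the request fails with Access Denied.

// ----------------------------------

'use strict';

// uploadId as returned by InitiateMultipartUpload (already URL-safe).
var uploadId = 'VXBsb2FkSWQtZXhhbXBsZQ';
var resource = '/test/myKey';

// What the server reconstructs and signs:
var expected = resource + '?uploadId=' + uploadId;

// What a client produces if it encodes the query string once more
// ("=" becomes "%3D"):
var doubleEncoded = resource + '?' + encodeURIComponent('uploadId=' + uploadId);

// The workaround described above: put the "=" back.
var patched = doubleEncoded.replace(/%3D/g, '=');

console.log(expected === doubleEncoded); // false -> signature mismatch
console.log(expected === patched);       // true  -> signatures agree again

// ----------------------------------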



On Thu, Feb 26, 2015 at 9:54 PM, Patrick F. Marques
<patrickfmarq...@gmail.com> wrote:
> Hi,
>
> thanks for your help Uenishi.
>
> I'm using Riak 1.5.2 and AWS Node.js SDK 2.1.14, and the example code I'm
> running is below.
> I have been trying with and without forcing a signing version. With some
> debugging I found that the default is to use the s3 signer.... If I force v2 I
> get another error, "Cannot set property 'Timestamp' of undefined", which is
> thrown by the v2.js signer code. I made a simple fix, but then every request
> returns "Access Denied".
>
> The s3 signer signs the "canonicalizedResource", which already has the
> query parameters encoded, so I tried replacing the "%3D" with "=" and
> it works now.
>
>
> // ----------------------------------
>
> 'use strict';
>
> var fs = require('fs');
> var path = require('path');
> var zlib = require('zlib');
>
> var config = {
>     accessKeyId: 'WDH-HCBBZONGEY2PADRC',
>     secretAccessKey: '9nJpf_C3hoaGrMBbvWH_pJ7qQT5ijrQKrN2XVg==',
>     // region: 'eu'
>
>     httpOptions: {
>         proxy: 'http://192.168.56.100:8080'
>     },
>
>     signatureVersion: 'v2'
> };
>
> var bigfile = path.join('./', 'bigfile');
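> // Gzip the file on the fly; the final size is unknown when the upload starts.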
> var body = fs.createReadStream(bigfile).pipe(zlib.createGzip());
>
> var AWS = require('aws-sdk');
> var s3 = new AWS.S3(new AWS.Config(config));
>
> var params = {
>     Bucket: 'test',
>     Key: 'myKey',
>     Body: body
> };
>
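> // upload() manages the multipart upload of the stream and emits progress events.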
> s3.upload(params).
>     on('httpUploadProgress', function(evt) { console.log(evt); }).
>     send(function(err, data) {
>         console.log(err, data);
>     });
>
> // ----------------------------------
>
> Best Regards,
> Patrick Marques
>
>
> On Thu, Feb 26, 2015 at 6:47 AM, Kota Uenishi <k...@basho.com> wrote:
>>
>> Hi,
>>
>> My 6th sense says you're hitting this problem:
>> https://github.com/basho/riak_cs/issues/1063
>>
>> Could you give me an example of code, or a debug print from the Node.js
>> client, that includes the string to sign before it is signed with the secret key?
>>
>> Otherwise, maybe that client is just using v4 authentication, which we
>> haven't supported yet. To avoid it, please try v2 authentication.
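>>
>> For example, something along these lines might force v2 signing (just a
>> sketch; the endpoint and credentials are placeholders, not values from
>> this thread):
>>
>> // ----------------------------------
>> var AWS = require('aws-sdk');
>>
>> var s3 = new AWS.S3({
>>     accessKeyId: 'YOUR-ACCESS-KEY',
>>     secretAccessKey: 'YOUR-SECRET-KEY',
>>     endpoint: 'http://riak-cs.example.com:8080', // your Riak CS endpoint
>>     s3ForcePathStyle: true,
>>     signatureVersion: 'v2'
>> });
>> // ----------------------------------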
>>
>> 2015/02/26 9:06 "Patrick F. Marques" <patrickfmarq...@gmail.com>:
>>>
>>> Hi everyone,
>>>
>>> I'm trying to use the AWS SDK as an S3 client for Riak CS to upload large
>>> objects whose size I usually don't know in advance. For that purpose I'm
>>> trying to use multipart upload, as in the SDK example
>>> https://github.com/aws/aws-sdk-js/blob/master/doc-src/guide/node-examples.md#amazon-s3-uploading-an-arbitrarily-sized-stream-upload.
>>> The problem is that I'm always getting Access Denied.
>>>
>>> I've been trying some other clients but also without success.
>>>
>>> Best regards,
>>> Patrick Marques
>>>
>>>
>>>
>



-- 
Kota UENISHI / @kuenishi
Basho Japan KK

_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
