Re: Riak-CS Node.js client

2015-02-26 Thread Patrick F. Marques
Hi,

thanks for your help Uenishi.

I'm using Riak CS 1.5.2 and AWS Node.js SDK 2.1.14, and the example code I'm
running is below.
I have been trying with and without forcing a signature version. With some
debugging I found that the default is to use the s3 signer. If I force v2, I
get another error, "Cannot set property 'Timestamp' of undefined", which is
thrown by the v2.js signer code; I made a simple fix, but then every request
returns "Access Denied".

The s3 signer signs the "canonicalizedResource", which has the query
parameters already encoded, so I tried replacing the "%3D" with "=" and now
it works.


// --

'use strict';

var fs = require('fs');
var path = require('path');
var zlib = require('zlib');

var config = {
  accessKeyId: 'WDH-HCBBZONGEY2PADRC',
  secretAccessKey: '9nJpf_C3hoaGrMBbvWH_pJ7qQT5ijrQKrN2XVg==',
  // region: 'eu'

  httpOptions: {
    proxy: 'http://192.168.56.100:8080'
  },

  signatureVersion: 'v2'
};

var bigfile = path.join('./', 'bigfile');
var body = fs.createReadStream(bigfile).pipe(zlib.createGzip());

var AWS = require('aws-sdk');
var s3 = new AWS.S3(new AWS.Config(config));

var params = {
  Bucket: 'test',
  Key: 'myKey',
  Body: body
};

s3.upload(params)
  .on('httpUploadProgress', function(evt) { console.log(evt); })
  .send(function(err, data) {
    console.log(err, data);
  });

// --
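
The "%3D" replacement I mentioned was roughly the following (just a sketch:
AWS.Signers.S3 and its canonicalizedResource method are SDK internals, so
the names may differ between SDK versions):

var AWS = require('aws-sdk');

// Undo the extra percent-encoding of "=" in the resource string that the
// s3 signer includes in the string to sign; replacing "%3D" with "=" makes
// the signature match what Riak CS expects.
var originalCanonicalizedResource = AWS.Signers.S3.prototype.canonicalizedResource;
AWS.Signers.S3.prototype.canonicalizedResource = function() {
  var resource = originalCanonicalizedResource.call(this);
  return resource.replace(/%3D/g, '=');
};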

Best regards,
Patrick Marques


On Thu, Feb 26, 2015 at 6:47 AM, Kota Uenishi  wrote:

> Hi,
>
> My 6th sense says you're hitting this problem:
> https://github.com/basho/riak_cs/issues/1063
>
> Could you give me a code example or a debug print from the Node.js client
> that includes the source string before it is signed with the secret key?
>
> Otherwise, maybe the client is just using v4 authentication, which we
> haven't supported yet. To avoid that, please try v2 authentication.
> 2015/02/26 9:06 "Patrick F. Marques" :
>
>> Hi everyone,
>>
>> I'm trying to use the AWS SDK as an S3 client for Riak CS to upload large
>> objects whose size I usually don't know in advance. For that purpose I'm
>> trying to use multipart upload, as in the SDK example
>> https://github.com/aws/aws-sdk-js/blob/master/doc-src/guide/node-examples.md#amazon-s3-uploading-an-arbitrarily-sized-stream-upload
>> .
>> The problem is that I'm always getting Access Denied.
>>
>> I've also tried some other clients, but without success.
>>
>> Best regards,
>> Patrick Marques
>>
>>
>>


Re: Riak-CS Node.js client

2015-02-26 Thread Kota Uenishi
> The s3 signer signs the "canonicalizedResource", which has the query
> parameters already encoded, so I tried replacing the "%3D" with "=" and now
> it works.

Yay! That's the culprit. Most clients mistakenly encode the multipart
uploadId even though it is already supposed to be URL-encoded. This is
the case for #1063, too. Maybe Riak CS can be aligned with how S3
behaves so that most S3 clients work - stay tuned to that issue, please.
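
To make the mismatch concrete, with a made-up uploadId whose base64 padding
ends in "=":

// uploadId as returned by Initiate Multipart Upload: "VXBsb2FkSWQ="
//
// Resource string the client's s3 signer signs (padding percent-encoded):
//   /test/myKey?uploadId=VXBsb2FkSWQ%3D
// Resource string Riak CS verifies against (decoded form):
//   /test/myKey?uploadId=VXBsb2FkSWQ=
//
// The two strings differ, so the signatures differ and the request is
// rejected with "Access Denied".
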
Anyway, thank you for reporting!






-- 
Kota UENISHI / @kuenishi
Basho Japan KK



repair-2i fails: Error: index_scan_timeout

2015-02-26 Thread Jason Greathouse
We have a 5-node cluster running 2.0.0beta1. Our 2i indexes seem to return
different responses depending on which node you hit, so I'm trying to
rebuild them.

I'm trying to run riak-admin repair-2i and it fails with Error:
index_scan_timeout.

An example:
# riak-admin repair-2i 0
Will repair 2i on these partitions:
0
Watch the logs for 2i repair progress reports

console.log:
2015-02-26 14:55:32.099 [info] <0.7489.15>@riak_kv_2i_aae:init:139 Starting 2i repair at speed 100 for partitions [0]
2015-02-26 14:55:32.100 [info] <0.7491.15>@riak_kv_2i_aae:repair_partition:259 Acquired lock on partition 0
2015-02-26 14:55:32.100 [info] <0.7491.15>@riak_kv_2i_aae:repair_partition:261 Repairing indexes in partition 0
2015-02-26 14:55:32.100 [info] <0.7491.15>@riak_kv_2i_aae:create_index_data_db:326 Creating temporary database of 2i data in /data/riak/anti_entropy/2i/tmp_db
2015-02-26 14:55:32.114 [info] <0.7491.15>@riak_kv_2i_aae:create_index_data_db:363 Grabbing all index data for partition 0
2015-02-26 15:00:32.118 [error] <0.2701.0> gen_server <0.2701.0> terminated with reason: bad argument in call to eleveldb:async_get(#Ref<0.0.77.168699>, <<>>, <<131,104,2,109,0,0,0,12,80,68,45,101,118,101,110,116,98,97,115,101,109,0,0,0,22,48,48,48,48,48,...>>, []) in eleveldb:get/3 line 150
2015-02-26 15:00:32.119 [error] <0.2701.0> CRASH REPORT Process <0.2701.0> with 0 neighbours exited with reason: bad argument in call to eleveldb:async_get(#Ref<0.0.77.168699>, <<>>, <<131,104,2,109,0,0,0,12,80,68,45,101,118,101,110,116,98,97,115,101,109,0,0,0,22,48,48,48,48,48,...>>, []) in eleveldb:get/3 line 150 in gen_server:terminate/6 line 744
2015-02-26 15:00:32.121 [error] <0.2696.0> Supervisor {<0.2696.0>,poolboy_sup} had child riak_core_vnode_worker started with {riak_core_vnode_worker,start_link,undefined} at <0.2701.0> exit with reason bad argument in call to eleveldb:async_get(#Ref<0.0.77.168699>, <<>>, <<131,104,2,109,0,0,0,12,80,68,45,101,118,101,110,116,98,97,115,101,109,0,0,0,22,48,48,48,48,48,...>>, []) in eleveldb:get/3 line 150 in context child_terminated
2015-02-26 15:00:32.237 [info] <0.7489.15>@riak_kv_2i_aae:next_partition:160 Finished 2i repair:
Total partitions: 1
Finished partitions: 1
Speed: 100
Total 2i items scanned: 0
Total tree objects: 0
Total objects fixed: 0
With errors:
Partition: 0
Error: index_scan_timeout

I found a previous post about this:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2014-August/015655.html

I tried setting aae_2i_batch_size to 10 in advanced.config; the server
started, but I'm not sure how to confirm the setting took effect.
If it did take effect, it didn't seem to help.
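
For reference, what I put in advanced.config looks roughly like this (I'm
assuming riak_kv is the right application section for this key):

[
  {riak_kv, [
    {aae_2i_batch_size, 10}
  ]}
].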

Any other suggestions?
Let me know if you need more details.

Thanks,

*Jason Greathouse*
Sr. Systems Engineer



Using Riak for Data with many Entities and Relationships

2015-02-26 Thread Matt Brooks
I am designing a web application that, for the purpose of this
conversation, deals with three main entities:

   - Users
   - Groups
   - Tasks

Users are members of groups, and tasks belong to groups.

Early in the development of the application, Neo4j was used to store the
data. Users would have a *MEMBER_OF* relationship to a group, and tasks
would have a *BELONGS_TO* relationship to a group. Neo4j was nice for
access control because I could add permissions to the MEMBER_OF
relationship. It was also nice for the simple BELONGS_TO relationship.
Neo4j separates entities and relationships nicely.

After reading about Riak and reminiscing about my use of MongoDB in the
past, I began to think about using Riak to store my data instead of Neo4j.
Storing the users, groups, and tasks seems trivial enough. But storing the
relationships seems a bit tougher.

 I am planning on storing the entities in three buckets:

   - user
   - group
   - task

...where each of the buckets has the entity's ID as the key and a map of
the relevant information as the value.
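
Concretely, the stored values might look something like this (the field
names are just placeholders):

// bucket "user",  key "u-123"
var user = { name: 'Ada Lovelace', email: 'ada@example.com' };

// bucket "group", key "g-7"
var group = { name: 'Platform Team' };

// bucket "task",  key "t-42"
var task = { title: 'Rotate API keys', status: 'open' };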

What I am struggling with now is modeling in Riak the relationships I so
easily modeled in Neo4j. I have a few ideas:

   1. Store both user IDs and task IDs in lists inside of the group
   information. The user ID list would also include permissions for the users.
   2. Store group IDs in a list inside of the user information and task IDs
   in a list inside of the group information.
   3. Use a *user-group* bucket and a *group-task* bucket. The *user-group*
   bucket would have user IDs as the keys and a list of maps as the value.
   The maps in question would hold a group ID and permission information
   for the group. The *group-task* bucket would be similar to the
   *user-group* bucket, but instead of a list of maps, it would simply have
   a list of task IDs.
   4. Use Riak's links for both user membership and tasks belonging to
   groups. A given user would have *member* links to groups, and a given
   group would have *task* links to tasks. Permissions for a given user ID
   would be stored in the group somewhere.

None of the four entirely satisfies me.

Number one makes it really *hard* and inefficient to ask the DB for the
groups that a user is a member of (I would have to go through every single
group and check if the user ID is in the member list). The same issue
occurs with tasks.

Number two makes it really *easy* to go from user to group to tasks, but
makes it difficult to go from group back to users. What if I wanted to ask,
*"What users are members of group X?"*

Number three works in a way similar to relational databases, and does a
good job of separating relationships from entities. This has the same
issues mentioned in number two.
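
To make number three concrete, the value shapes might look something like
this (all IDs and permission names are made up):

// bucket "user-group", key = user ID, value = list of {group, permissions} maps
var userGroups = [
  { groupId: 'g-7',  permissions: ['read', 'write'] },
  { groupId: 'g-12', permissions: ['read'] }
];

// bucket "group-task", key = group ID, value = list of task IDs
var groupTasks = ['t-42', 't-43', 't-44'];

// Going user -> groups -> tasks is one read per hop, but answering
// "which users are members of group g-7?" still means reading every
// user-group value, which is the same problem as in number two.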

Number four seems to be the one that might be considered idiomatic Riak
usage, but it completely separates permissions from the *member*
relationship a user has with a group, since links don't support complex
properties.

What do you think about the 4 models mentioned? Any ideas about how I can
model this data in Riak effectively?

-- 
Matt Brooks