Re: Riak-CS Node.js client

2015-03-09 Thread Patrick F. Marques
Thanks! I'll stay tuned ;)
Until then I will patch my code and pray not to introduce any issues here.
The patch is below for anyone to try (it just sends the decoded string to be
signed).

From a439f6126317a2b66fc08baf31b24b47e8ec4ed9 Mon Sep 17 00:00:00 2001
From: "Patrick F. Marques" 
Date: Thu, 26 Feb 2015 14:59:14 +
Subject: [PATCH] [fix]

---
 lib/signers/s3.js |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/signers/s3.js b/lib/signers/s3.js
index 2f49fff..604e8a0 100644
--- a/lib/signers/s3.js
+++ b/lib/signers/s3.js
@@ -73,7 +73,7 @@ AWS.Signers.S3 = inherit(AWS.Signers.RequestSigner, {

     var headers = this.canonicalizedAmzHeaders();
     if (headers) parts.push(headers);
-    parts.push(this.canonicalizedResource());
+    parts.push(decodeURIComponent(this.canonicalizedResource()));
 
     return parts.join('\n');
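
To see what the patch changes, here is a minimal sketch of v2 signing (base64
HMAC-SHA1 over the StringToSign); the secret is a placeholder and the resource
is illustrative, but the '%3D' vs '=' difference is exactly what makes the
server reject the request:

// The signature is computed over the raw StringToSign, so an encoded
// uploadId ('%3D%3D') and a decoded one ('==') sign differently.
var crypto = require('crypto');

function sign(secret, stringToSign) {
  return crypto.createHmac('sha1', secret).update(stringToSign, 'utf8').digest('base64');
}

var secret = 'EXAMPLE_SECRET'; // placeholder credentials
var base = 'PUT\n\n\n\nx-amz-date:Wed, 11 Feb 2015 15:36:00 GMT\n';
var encoded = base + '/testje/8192k?uploadId=TXR2AuCeRDWwc2bviLPcOg%3D%3D';
var decoded = base + '/testje/8192k?uploadId=TXR2AuCeRDWwc2bviLPcOg==';

console.log(sign(secret, encoded) === sign(secret, decoded)); // false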

On Thu, Feb 26, 2015 at 2:13 PM, Kota Uenishi  wrote:

> > The s3 signer signs the "canonicalizedResource", which already has the
> > query parameters encoded, so I tried replacing the "%3D" with "=" and it
> > works.
>
> Yay! The culprit is here. Most clients mistakenly encode the multipart
> uploadId although it is already supposed to be url-encoded. This is
> the case for #1063, too. Maybe Riak CS can be aligned with how S3
> behaves to save most S3 clients - stay tuned to that issue, please.
> Anyway, thank you for reporting!
>
>
>
> On Thu, Feb 26, 2015 at 9:54 PM, Patrick F. Marques
>  wrote:
> > Hi,
> >
> > Thanks for your help, Uenishi.
> >
> > I'm using Riak 1.5.2 and AWS Node.js SDK 2.1.14, and the example code I'm
> > running is below.
> > I have been trying with and without forcing a signing version. With some
> > debugging I found that the default is to use the s3 signer. If I force v2
> > I get another error, "Cannot set property 'Timestamp' of undefined", which
> > is thrown by the v2.js signer code; I made a simple fix, but then every
> > request returns "Access Denied".
> >
> > The s3 signer signs the "canonicalizedResource", which already has the
> > query parameters encoded, so I tried replacing the "%3D" with "=" and it
> > works.
> >
> >
> > // --
> >
> > 'use strict';
> >
> > var fs = require('fs');
> > var path = require('path');
> > var zlib = require('zlib');
> >
> > var config = {
> >   accessKeyId: 'WDH-HCBBZONGEY2PADRC',
> >   secretAccessKey: '9nJpf_C3hoaGrMBbvWH_pJ7qQT5ijrQKrN2XVg==',
> >   // region: 'eu'
> >
> >   httpOptions: {
> >     proxy: 'http://192.168.56.100:8080'
> >   },
> >
> >   signatureVersion: 'v2'
> > };
> >
> > var bigfile = path.join('./', 'bigfile');
> > var body = fs.createReadStream(bigfile).pipe(zlib.createGzip());
> >
> > var AWS = require('aws-sdk');
> > var s3 = new AWS.S3(new AWS.Config(config));
> >
> > var params = {
> >   Bucket: 'test',
> >   Key: 'myKey',
> >   Body: body
> > };
> >
> > s3.upload(params)
> >   .on('httpUploadProgress', function (evt) { console.log(evt); })
> >   .send(function (err, data) {
> >     console.log(err, data);
> >   });
> >
> > // --
> >
> > Best Regards,
> > Patrick Marques
> >
> >
> > On Thu, Feb 26, 2015 at 6:47 AM, Kota Uenishi  wrote:
> >>
> >> Hi,
> >>
> >> My 6th sense says you're hitting this problem:
> >> https://github.com/basho/riak_cs/issues/1063
> >>
> >> Could you give me an example of code or a debug print from the Node.js
> >> client that includes the source string before it is signed by the secret
> >> key?
> >>
> >> Otherwise maybe that client is just using v4 authentication, which we
> >> haven't supported yet. To avoid it, please try v2 authentication.
> >>
> >> 2015/02/26 9:06 "Patrick F. Marques" :
> >>>
> >>> Hi everyone,
> >>>
> >>> I'm trying to use the AWS SDK as an S3 client for Riak CS to upload
> >>> large objects whose size I usually don't know. For that purpose I'm
> >>> trying to use multipart upload as in the SDK example
> >>> https://github.com/aws/aws-sdk-js/blob/master/doc-src/guide/node-examples.md#amazon-s3-uploading-an-arbitrarily-sized-stream-upload.
> >>> The problem is that I'm always getting Access Denied.
> >>>
> >>> I've been trying some other clients but also without success.
> >>>
> >>> Best regards,
> >>> Patrick Marques
> >>>
> >>>
> >>>
> >>>
> >
>
>
>
> --
> Kota UENISHI / @kuenishi
> Basho Japan KK
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Query on Riak Search in a cluster of 3 nodes behind ELB is giving different result everytime

2015-03-09 Thread Santi Kumar
Hi Zeeshan,

We have typically seen this issue when lots of indexes have been created on
that instance. On a t2.medium machine we already have around 512+ indexes
created in the data folder. In that case, creating any new index takes time,
and the association of the index to the bucket fails even after the
FetchIndex operation returns success, as shown in the code below.

Is there any limit on the number of indexes? Could something related to file
system handles be causing this issue?

while (!isCreated) {

    FetchIndex fetchIndex = new FetchIndex.Builder(indexName).build();

    RiakFuture<YzFetchIndexOperation.Response, String> fetchIndexFuture =
        client.executeAsync(fetchIndex);

    try {
        // Wait for the fetch to finish, then look for our index in the
        // response.
        fetchIndexFuture.await();

        com.basho.riak.client.core.operations.YzFetchIndexOperation.Response
            response = fetchIndexFuture.get();

        List<YokozunaIndex> indexes = response.getIndexes();

        for (YokozunaIndex index : indexes) {
            if (indexName.equals(index.getName())) {
                isCreated = true;
                logger.info("Index " + indexName + " created");
                continue;
            }
        }
    } catch (Exception e) {
        logger.warn("Unable to get " + indexName + ", still trying");
        isCreated = false;
    }
}
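
One way to rule out load-balancer effects is to ask each node directly
whether it knows the index yet, instead of going through the ELB. A minimal
sketch (in Node.js for brevity; the hostnames, port, and index name are
placeholders, and a 200 may reflect cluster metadata rather than the node's
local Solr core):

// Query every node's /search/index/<name> HTTP endpoint; compare the
// status codes to see whether the index has reached all nodes.
var http = require('http');
var nodes = ['riak1.example.com', 'riak2.example.com', 'riak3.example.com'];

nodes.forEach(function (host) {
  http.get({ host: host, port: 8098, path: '/search/index/myindex' }, function (res) {
    console.log(host + ': HTTP ' + res.statusCode); // 200 once the index is known
    res.resume();
  }).on('error', function (err) {
    console.log(host + ': ' + err.message);
  });
});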

On Fri, Mar 6, 2015 at 2:11 AM, Zeeshan Lakhani  wrote:

> Hello Baskar, Santi,
>
> 2-15 minutes is a long while, and we’ve not seen index
> creation/propagation be so slow. I’d definitely take a closer look at how
> you’re creating these indexes dynamically on the fly, as index creation is
> typically a more straightforward admin task.
>
> We’ve added defaults to solrconfig.xml to handle most typical use-cases.
> You can read more about solrconfig.xml at
> http://wiki.apache.org/solr/SolrConfigXml#mainIndex_Section. You may want
> to take another look and optimize/improve your schema design to prevent
> such issues. You can read more about Solr’s performance factors here ->
> http://wiki.apache.org/solr/SolrPerformanceFactors.
>
> Thanks.
>
>
> Zeeshan Lakhani
> programmer |
> software engineer at @basho |
> org. member/founder of @papers_we_love | paperswelove.org
> twitter => @zeeshanlakhani
>
> On Mar 5, 2015, at 3:00 PM, Baskar Srinivasan  wrote:
>
> Hello Zeeshan,
>
> Thanks for the pointer regarding waiting for index creation in each node
> in the cluster.
>
> Presently, when the indices get created on one node, it takes a full 2-15
> minutes for it to get created on other nodes in the cluster. Following are
> the timestamps on 3 nodes for a single index:
>
> #Create index request from our server via load balancer
>
> 11:16:52.999 [http-bio-8080-exec-3] INFO  c.v.s.u.RiakClientUtil - Created
> index for bsr-test-fromlocal-1-Access_index
> #1st node, immediate creation (12 secs) once call is issued from our server
>
> 2015-03-05 19:17:04.135 [info] <0.17388.104>@yz_index:local_create:189
> Created index bsr-test-fromlocal-1-Access_index with schema
>
> #2nd node, takes another 4 minutes for creation request to propagate
>
> 2015-03-05 19:21:17.879 [info] <0.20606.449>@yz_index:local_create:189
> Created index bsr-test-fromlocal-1-Access_index
>
> #3rd node, takes 15 minutes for creation request to propagate
>
> 2015-03-05 19:32:32.172 [info] <0.14715.94>@yz_index:local_create:189
> Created index bsr-test-fromlocal-1-Access_index
>
> Is there a solr config we can tune to make the 2nd and 3rd node
> propagation more immediate in the order of < 60 seconds?
>
> Thanks,
>
> Baskar
>
> On Thu, Mar 5, 2015 at 9:11 AM, Zeeshan Lakhani 
> wrote:
>
>> Hello Santi, Baskar. Please keep your messages on the user group mailing
>> list, btw. Thanks.
>>
>> Here’s an example of our testing harness’s wait_for_index function,
>> https://github.com/basho/yokozuna/blob/develop/riak_test/yz_rt.erl#L420.
>> We check for the index on each of the nodes, which is an approach you can
>> take.
>>
>> And, as I mentioned, I’m currently working on making Index creation
>> synchronous to make this easier.
>>
>> If your logs are not pointing to any errors, and given that your
>> bucket/index contains so few objects, I'd delete or mv the search-root
>> index directory (./data/yz/<>) and let AAE resync the data, which
>> should then give you consistent results.
>>
>> Thanks.
>>
>
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Query on Riak Search in a cluster of 3 nodes behind ELB is giving different result everytime

2015-03-09 Thread Zeeshan Lakhani
Hey Santi, Baskar,

Are you noticing increased CPU load as you create more and more indexes?
Running `riak-admin top -interval 2` a few times may bring something to light.

I’d see how you could increase resources or think more critically on how you’re 
indexing data for Solr. Does the data share most fields? Can you reuse indexes 
for some of the data and filter certain queries?

You may also want to look at this thread,
https://groups.google.com/forum/#!topic/nosql-databases/9ECQpVS0QjE,
which discusses modeling Riak Search data and the issues you'll have with the
overhead of gossiping so much metadata and what Solr can handle.

Zeeshan Lakhani
programmer | 
software engineer at @basho | 
org. member/founder of @papers_we_love | paperswelove.org
twitter => @zeeshanlakhani

> On Mar 9, 2015, at 8:25 AM, Santi Kumar  wrote:
>
> [...]
Re: Query on Riak Search in a cluster of 3 nodes behind ELB is giving different result everytime

2015-03-09 Thread Baskar Srinivasan
Hello Zeeshan,

We create a new set of buckets/indices when a new tenant is created in a
multi-tenancy environment. An alternative approach for us is to use a single
set of indices/buckets and filter by a tenant identifier. Before moving to
the second approach we want to confirm whether we should expect significant
delays (several minutes) in index propagation as the number of indices in the
system grows.

Regards,
Baskar

On Mon, Mar 9, 2015 at 7:02 AM, Zeeshan Lakhani  wrote:

> Hey Santi, Baskar,
>
> Are you noticing increased CPU load as you create more and more indexes?
> Running `riak-admin top -interval 2` a few times may bring something to
> light.
>
> I’d see how you could increase resources or think more critically on how
> you’re indexing data for Solr. Does the data share most fields? Can you
> reuse indexes for some of the data and filter certain queries?
>
> You may also want to look at this thread,
> https://groups.google.com/forum/#!topic/nosql-databases/9ECQpVS0QjE,
> which discusses modeling Riak Search data and the issues you'll have with
> the overhead of gossiping so much metadata and what Solr can handle.
>
> Zeeshan Lakhani
> programmer |
> software engineer at @basho |
> org. member/founder of @papers_we_love | paperswelove.org
> twitter => @zeeshanlakhani
>
> On Mar 9, 2015, at 8:25 AM, Santi Kumar  wrote:
>
> [...]

Re: Query on Riak Search in a cluster of 3 nodes behind ELB is giving different result everytime

2015-03-09 Thread Zeeshan Lakhani
The second approach would most probably cut down on index creation time.
However, you should definitely spend a little time testing it out and
benchmarking accordingly. And, as I mentioned, please take a look at CPU load
as indexes are created, as well as experiment with solrconfig and increased
JVM heap memory settings for your use-case.
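
For the shared-index approach, a minimal sketch of a filtered query (Node.js
against Riak's HTTP search endpoint; the host, index name, and field names
are placeholders, and this assumes query parameters are passed through to
Solr):

// One shared index where every document carries a tenant id field; each
// query adds a Solr filter query (fq) so a tenant only sees its own data.
var http = require('http');
var path = '/search/query/shared_index?wt=json' +
  '&q=' + encodeURIComponent('name_s:foo') +
  '&fq=' + encodeURIComponent('tenant_id_s:tenant42');

http.get({ host: 'riak1.example.com', port: 8098, path: path }, function (res) {
  res.pipe(process.stdout); // Solr-style JSON results
});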

Thanks.

Zeeshan Lakhani
programmer | 
software engineer at @basho | 
org. member/founder of @papers_we_love | paperswelove.org
twitter => @zeeshanlakhani

> On Mar 9, 2015, at 10:13 AM, Baskar Srinivasan  wrote:
> 
> Hello Zeeshan,
> 
> We create a new set of buckets/indices when a new tenant is created in a
> multi-tenancy environment. An alternative approach for us is to use a single
> set of indices/buckets and filter by a tenant identifier. Before moving to
> the second approach we want to confirm whether we should expect significant
> delays (several minutes) in index propagation as the number of indices in
> the system grows.
> 
> Regards,
> Baskar
> 
> On Mon, Mar 9, 2015 at 7:02 AM, Zeeshan Lakhani  wrote:
>
> [...]

Re: RiakCS auto expiry of objects

2015-03-09 Thread Seema Jethani
Hi Niels

Object lifecycle management is not yet supported in Riak CS. It is however
a feature that we plan on supporting in the future. We will be planning our
upcoming CS releases shortly and I will keep you posted on the timeline.

Let me know if you have any questions or concerns. Thanks!

-- 
Seema Jethani
Director of Product Management, Basho 
4083455739 | @seemaj 
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakCS auto expiry of objects

2015-03-09 Thread Shawn Debnath
I would love to see this make it into the next (major?) Riak CS release. It
would put it on par with Riak's Bitcask auto-expiry.

On 3/9/15, 11:31 AM, "Seema Jethani" <se...@basho.com> wrote:

[...]
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakCS large file uploads fail with 403/AccessDenied and 400/InvalidDigest

2015-03-09 Thread Kota Uenishi
Sorry for the late reply. I thought I had already replied to you, but that was
in a very similar thread where I think you're hitting the same problem as this:

http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-February/016845.html

Riak CS often includes `=' in the uploadId of multipart uploads while S3
doesn't (the official documents don't specify the format).
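
As a quick illustration (Node.js; the uploadId below is taken from the debug
logs quoted later in this message): the uploadId already contains base64 `='
padding, so a client that URL-encodes it again signs a different string than
the server does.

// '==' becomes '%3D%3D' after encoding -- the two sides then compute
// different signatures and the server answers 403 AccessDenied.
var uploadId = 'TXR2AuCeRDWwc2bviLPcOg==';
console.log(encodeURIComponent(uploadId)); // TXR2AuCeRDWwc2bviLPcOg%3D%3D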

On Thu, Feb 12, 2015 at 12:41 AM, Niels O  wrote:
> Hello everyone,
>
> I have just installed Riak CS and have the s3cmd and Node.js (the official
> Amazon) clients working.
>
> With the same credentials (access key & secret) I CAN upload big files with
> s3cmd but I CANNOT with the AWS/S3 Node.js client (downloading very big
> files is no problem, b.t.w.).
>
>
> With the Node.js client:
>
> - up to 992 KiB (I tested in 32 KiB increments) everything works
> - starting at 1024 KiB I get [400 InvalidDigest: The Content-MD5 you
> specified was invalid.]
> - from 8192 KiB and beyond I get [403 AccessDenied] back from Riak CS.
>
> This while -again- with s3cmd I am able to upload files of over 1 GiB in
> size easily... same machine, same creds.
>
> Any ideas?
>
>
>
>
>
> (below is some Riak CS debug logging from both the 400 and 403) ...
>
>
> 400 - InvalidDigest:
>
> 2015-02-11 16:34:16.911 [debug]
> <0.17889.18>@riak_cs_s3_auth:calculate_signature:129 STS:
> ["PUT","\n","0BsQLab2tMEzr8IWoS2m5w==","\n","application/octet-stream","\n","\n",[["x-amz-date",":",<<"Wed,
> 11 Feb 2015 15:34:16 GMT">>,"\n"]],["/testje/4096k",[]]]
> 2015-02-11 16:34:17.854 [debug]
> <0.23568.18>@riak_cs_put_fsm:is_digest_valid:326 Calculated =
> <<"pIFX5fpeo7+sPPNjtSBWBg==">>, Reported = "0BsQLab2tMEzr8IWoS2m5w=="
> 2015-02-11 16:34:17.860 [debug] <0.23568.18>@riak_cs_put_fsm:done:303
> Invalid digest in the PUT FSM
>
>
> 403 - AccessDenied
>
> 2015-02-11 16:36:00.448 [debug]
> <0.22889.18>@riak_cs_s3_auth:calculate_signature:129 STS:
> ["POST","\n",[],"\n","application/octet-stream","\n","\n",[["x-amz-date",":",<<"Wed,
> 11 Feb 2015 15:36:00 GMT">>,"\n"]],["/testje/8192k","?uploads"]]
> 2015-02-11 16:36:00.484 [debug]
> <0.23539.18>@riak_cs_s3_auth:calculate_signature:129 STS:
> ["PUT","\n","sq5d2PIhC7I1xxT8Rp9cVg==","\n","application/octet-stream","\n","\n",[["x-amz-date",":",<<"Wed,
> 11 Feb 2015 15:36:00
> GMT">>,"\n"]],["/testje/8192k","?partNumber=1&uploadId=TXR2AuCeRDWwc2bviLPcOg=="]]
> 2015-02-11 16:36:00.484 [debug]
> <0.23539.18>@riak_cs_wm_common:post_authentication:471 bad_auth
> 2015-02-11 16:36:00.494 [debug]
> <0.23543.18>@riak_cs_s3_auth:calculate_signature:129 STS:
> ["DELETE","\n",[],"\n","application/octet-stream","\n","\n",[["x-amz-date",":",<<"Wed,
> 11 Feb 2015 15:36:00
> GMT">>,"\n"]],["/testje/8192k","?uploadId=TXR2AuCeRDWwc2bviLPcOg=="]]
> 2015-02-11 16:36:00.494 [debug]
> <0.23543.18>@riak_cs_wm_common:post_authentication:471 bad_auth
>
>
>



-- 
Kota UENISHI / @kuenishi
Basho Japan KK

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: RiakCS large file uploads fail with 403/AccessDenied and 400/InvalidDigest

2015-03-09 Thread Shunichi Shinohara
Hi Niels,

Thank you for your interest in Riak CS.

Some questions about the 400 - InvalidDigest:

- Can you confirm which MD5 was correct for this log entry (a way to check
  it is sketched below)?
  2015-02-11 16:34:17.854 [debug] <0.23568.18>@riak_cs_put_fsm:is_digest_valid:326
  Calculated = <<"pIFX5fpeo7+sPPNjtSBWBg==">>,
  Reported = "0BsQLab2tMEzr8IWoS2m5w=="
- What was the Transfer-Encoding? I want to confirm chunked encoding
  was NOT used.
- Hopefully a packet capture (e.g. in pcap format) will be helpful for
  debugging.
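
A minimal sketch for checking the first point (Node.js; the file path is a
placeholder for whatever bytes the client actually sent). Content-MD5 is the
base64 encoding of the raw MD5 digest, so it can be recomputed locally and
compared against both values in the log:

// Recompute Content-MD5 for the uploaded bytes and compare it with the
// Calculated/Reported values in the Riak CS debug log.
var crypto = require('crypto');
var fs = require('fs');

var md5 = crypto.createHash('md5')
  .update(fs.readFileSync('bigfile')) // placeholder path
  .digest('base64');
console.log(md5);

Note that if the body is gzipped on the fly (as in the Node.js example earlier
in this digest), the digest has to be computed over the compressed bytes,
which are not known until the stream ends.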

Thanks,
Shino

On Tue, Mar 10, 2015 at 10:25 AM, Kota Uenishi  wrote:
> Sorry for the late reply. I thought I had already replied to you, but that
> was in a very similar thread where I think you're hitting the same problem
> as this:
>
> http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-February/016845.html
>
> Riak CS often includes `=' in the uploadId of multipart uploads while S3
> doesn't (the official documents don't specify the format).
>
> On Thu, Feb 12, 2015 at 12:41 AM, Niels O  wrote:
>>
>> [...]
>
>
>
> --
> Kota UENISHI / @kuenishi
> Basho Japan KK
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com