Riak as Binary File Store

2012-05-29 Thread Praveen Baratam
Hello Everybody!

I have read abundantly over the web that Riak is very well suited to store
and retrieve small binary objects such as images, docs, etc.

In our scenario we are planning to use Riak to store uploads to our portal
which is a Social Network. Uploads are mostly images with maximum size of 2
MB and typical size ranges between few KBs to few 100 KBs.

Does this usage pattern fit Riak? What are the caveats if any?

Thank you!
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak as Binary File Store

2012-05-29 Thread Justin Sheehy
Hi, Praveen.

Nothing about what you have said would cause a problem for Riak. Go for it!

Justin



On May 29, 2012, at 8:36 AM, Praveen Baratam  wrote:

> Hello Everybody!
> 
> I have read abundantly over the web that Riak is very well suited to store 
> and retrieve small binary objects such as images, docs, etc.
> 
> In our scenario we are planning to use Riak to store uploads to our portal 
> which is a Social Network. Uploads are mostly images with maximum size of 2 
> MB and typical size ranges between few KBs to few 100 KBs.
> 
> Does this usage pattern fit Riak? What are the caveats if any?
> 
> Thank you!
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak as Binary File Store

2012-05-29 Thread Shuhao Wu
It'll be interesting if you can write a filesystem on top of Riak.

That would be a cool project to see on github :P

Shuhao


On Tue, May 29, 2012 at 8:36 AM, Praveen Baratam
wrote:

> Hello Everybody!
>
> I have read abundantly over the web that Riak is very well suited to store
> and retrieve small binary objects such as images, docs, etc.
>
> In our scenario we are planning to use Riak to store uploads to our portal
> which is a Social Network. Uploads are mostly images with maximum size of 2
> MB and typical size ranges between few KBs to few 100 KBs.
>
> Does this usage pattern fit Riak? What are the caveats if any?
>
> Thank you!
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak as Binary File Store

2012-05-29 Thread Alvaro Videla
Like this perhaps: https://github.com/johnthethird/riak-fuse *cough* *cough*

On Tue, May 29, 2012 at 2:49 PM, Shuhao Wu  wrote:

> It'll be interesting if you can write a filesystem on top of Riak.
>
> That would be a cool project to see on github :P
>
> Shuhao
>
>
> On Tue, May 29, 2012 at 8:36 AM, Praveen Baratam <
> praveen.bara...@gmail.com> wrote:
>
>> Hello Everybody!
>>
>> I have read abundantly over the web that Riak is very well suited to
>> store and retrieve small binary objects such as images, docs, etc.
>>
>> In our scenario we are planning to use Riak to store uploads to our
>> portal which is a Social Network. Uploads are mostly images with maximum
>> size of 2 MB and typical size ranges between few KBs to few 100 KBs.
>>
>> Does this usage pattern fit Riak? What are the caveats if any?
>>
>> Thank you!
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Upgrade from 0.14 to 1.1 riak problems!

2012-05-29 Thread Denis Barishev

Hello everybody!

Maybe some of you have faced this problem before. I would be glad to
hear any ideas.

I'm trying to perform a rolling upgrade of nodes one by one.

I stop one node and update its configuration file for compatibility
with Riak 0.14. Namely, I set the following options in the riak_kv section:

{legacy_keylisting, true},
{mapred_system, legacy},
{vnode_vclocks, false},

Then I install Riak 1.1 and start it, but with no success; as far as I can
tell, the newly upgraded node can't bring up its vnodes...

The error log tells me the following:

11:20:36.096 [error] Supervisor riak_kv_sup had child 
riak_kv_vnode_master started with 
riak_core_vnode_master:start_link(riak_kv_vnode, riak_kv_legacy_vnode, 
riak_kv) at <0.491.0> exit with reason bad argument in call to 
ets:lookup(riak_core_node_watcher, {by_node,'riak@192.168.154.50'}) in 
riak_core_node_watcher:internal_get_services/1 in context child_terminated
11:20:36.104 [error] gen_server riak_kv_vnode_master terminated with 
reason: bad argument in call to ets:lookup(riak_core_node_watcher, 
{by_node,'riak@192.168.154.50'}) in 
riak_core_node_watcher:internal_get_services/1



11:20:36.132 [error] CRASH REPORT Process riak_kv_vnode_master with 0 
neighbours crashed with reason: bad argument in call to 
ets:lookup(riak_core_node_watcher, {by_node,'riak@192.168.154.50'}) in 
riak_core_node_watcher:internal_get_services/1
11:20:36.132 [error] Supervisor riak_kv_sup had child 
riak_kv_vnode_master started with 
riak_core_vnode_master:start_link(riak_kv_vnode, riak_kv_legacy_vnode, 
riak_kv) at <0.1474.0> exit with reason bad argument in call to 
ets:lookup(riak_core_node_watcher, {by_node,'riak@192.168.154.50'}) in 
riak_core_node_watcher:internal_get_services/1 in context child_terminated
11:20:36.133 [error] gen_server riak_kv_vnode_master terminated with 
reason: bad argument in call to ets:lookup(riak_core_node_watcher, 
{by_node,'riak@192.168.154.50'}) in 
riak_core_node_watcher:internal_get_services/1
11:20:36.134 [error] CRASH REPORT Process riak_kv_vnode_master with 0 
neighbours crashed with reason: bad argument in call to 
ets:lookup(riak_core_node_watcher, {by_node,'riak@192.168.154.50'}) in 
riak_core_node_watcher:internal_get_services/1
11:20:36.134 [error] Supervisor riak_kv_sup had child 
riak_kv_vnode_master started with 
riak_core_vnode_master:start_link(riak_kv_vnode, riak_kv_legacy_vnode, 
riak_kv) at <0.1475.0> exit with reason bad argument in call to 
ets:lookup(riak_core_node_watcher, {by_node,'riak@192.168.154.50'}) in 
riak_core_node_watcher:internal_get_services/1 in context child_terminated
11:20:36.136 [error] Supervisor riak_kv_sup had child 
riak_kv_vnode_master started with 
riak_core_vnode_master:start_link(riak_kv_vnode, riak_kv_legacy_vnode, 
riak_kv) at <0.1475.0> exit with reason reached_max_restart_intensity in 
context shutdown



P.S. I have already tuned all the limits and open-port values...

Waiting for your help, and
thank you!

Denis

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Link walking with a java client

2012-05-29 Thread Brian Roach
Deepak -

I'll take a look at it this week, but more than likely it's a bug.

Link walking is a REST-only operation as far as Riak’s interfaces are
concerned. Link walking in the protocol buffers Java client is a hack that
issues two m/r jobs to the protocol buffers interface (the first constructs the
inputs to the second by walking the links, the second returns the data).
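
For anyone who wants to avoid the hack entirely, here is a rough, untested
sketch of doing the same thing explicitly with a link phase in a single
MapReduce job (this assumes the 1.0.x Java client's MapReduce builder and
reuses the bucket/tag/key names from Deepak's example; adjust host and port
to your setup):

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.query.MapReduceResult;
    import com.basho.riak.client.query.functions.NamedJSFunction;

    public class LinkWalkViaMapReduce {
        public static void main(String[] args) throws Exception {
            IRiakClient client = RiakFactory.pbcClient("localhost", 8081);
            try {
                // Follow the "myPref" links from usersMine/user2, then map the
                // linked objects' values back to the caller in one m/r job.
                MapReduceResult result = client.mapReduce()
                    .addInput("usersMine", "user2")
                    .addLinkPhase("userPreferences", "myPref", false)
                    .addMapPhase(new NamedJSFunction("Riak.mapValues"), true)
                    .execute();
                System.out.println(result.getResultRaw());
            } finally {
                client.shutdown();
            }
        }
    }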

Thanks,
Brian Roach

On May 27, 2012, at 7:19 AM, Deepak Balasubramanyam wrote:

> This looks like a bug. The code to walk links via a HTTP client works 
> perfectly. The same code fails when the PB client is used. The POJO attached 
> in this email reproduces the problem.
> 
> I searched the email archives and existing issues and found no trace of this 
> problem. Please run the POJO by swapping the clients returned from the 
> getClient() method to reproduce the problem. I can create a bug report once 
> someone from the dev team confirms this really is a bug.
> 
> Riak client pom:
> <dependency>
>   <groupId>com.basho.riak</groupId>
>   <artifactId>riak-client</artifactId>
>   <version>1.0.5</version>
> </dependency>
> 
> Riak server version - 1.1.2. Built from source. 
> 4 nodes running on 1 machine. OS - Linux mint.
> 
> On Sun, May 27, 2012 at 10:05 AM, Deepak Balasubramanyam 
>  wrote:
> Hi,
> 
> I have a cluster that contains 2 buckets. A bucket named 'usersMine' contains 
> the key 'user2', which is linked to several keys (about 10) under a bucket 
> named userPreferences. The relationship exists under the name 'myPref'. A 
> user and a preference have String values.
> 
> I can successfully traverse the link over HTTP using the following URL - 
> 
> curl -v localhost:8091/riak/usersMine/user2/_,myPref,1
> 
>    
> > User-Agent: curl/7.21.6 (i686-pc-linux-gnu) libcurl/7.21.6 OpenSSL/1.0.0e 
> > zlib/1.2.3.4 libidn/1.22 librtmp/2.3
> > Host: localhost:8091
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
> < Expires: Sun, 27 May 2012 04:12:39 GMT
> < Date: Sun, 27 May 2012 04:02:39 GMT
> < Content-Type: multipart/mixed; boundary=IYGfKNqjGdco9ddfyjRP1Utzfi2
> < Content-Length: 3271
> < 
> 
> --IYGfKNqjGdco9ddfyjRP1Utzfi2
> Content-Type: multipart/mixed; boundary=3YVES0x2tFnUDOdTzfn1OGS6uMt
> 
> --3YVES0x2tFnUDOdTzfn1OGS6uMt
> X-Riak-Vclock: a85hYGBgzGDKBVIcMRuuc/nvy7mSwZTImMfKUKpodpIvCwA=
> Location: /riak/userPreferences/preference3004
> Content-Type: text/plain; charset=UTF-8
> Link: ; rel="up"
> Etag: 5GucnGSk4TjQc8BO1eNLyI
> Last-Modified: Sun, 27 May 2012 03:54:29 GMT
> 
> junk
> 
> <<  Truncated  >>
> junk
> --3YVES0x2tFnUDOdTzfn1OGS6uMt--
> 
> --IYGfKNqjGdco9ddfyjRP1Utzfi2--
>    
> 
> However when I use the java client to walk the link, I get a 
> ClassCastException.
> 
>   
> Java code:
>   
> private void getAllLinks()
> {
> String user="user2";
> IRiakClient riakClient = null;
> try
> {
> long past = System.currentTimeMillis();
> riakClient = RiakFactory.pbcClient("localhost",8081);
> Bucket userBucket = riakClient.fetchBucket("usersMine").execute();
> DefaultRiakObject user1 =(DefaultRiakObject) 
> userBucket.fetch(user).execute();
> List links = user1.getLinks();
> System.out.println(links.size());
> WalkResult execute = 
> riakClient.walk(user1).addStep("userPreferences", "myPref",true).execute();
> Iterator iterator = execute.iterator();
> while(iterator.hasNext())
> {
> Object next = iterator.next();
> System.out.println(next);
> }
> long now = System.currentTimeMillis();
> System.out.println("Retrieval in " + (now-past) + " ms");
> }
> catch (Exception e)
> {
> e.printStackTrace();
> }
> finally
> {
> if(riakClient != null)
> {
> riakClient.shutdown();
> }
> }
> }
> 
>    
> Stack:
>    
> java.lang.ClassCastException: java.lang.String cannot be cast to 
> java.util.List
>   at 
> com.basho.riak.client.raw.pbc.PBClientAdapter.linkWalkSecondPhase(PBClientAdapter.java:380)
>   at 
> com.basho.riak.client.raw.pbc.PBClientAdapter.linkWalk(PBClientAdapter.java:325)
>   at com.basho.riak.client.query.LinkWalk.execute(LinkWalk.java:63)
>   at 
> com.chatterbox.persistence.riak.RiakTest.getAllLinks(RiakTest.java:81)
>   at com.chatterbox.persistenc

Re: Riak crashed with MANIFEST not found

2012-05-29 Thread Nam Nguyen
Thank you very much Justin.

Here's another command to hopefully speed up the handoff process.

On any of the nodes, attach to the Erlang console, then:

rp([{N, rpc:call(N, application, get_env, [riak_core, handoff_concurrency])} || 
N <- [node() | nodes()]]).

This command will show the current handoff_concurrency number. The result could 
be:

[{'riak@10.20.2.243',{ok,1}},   
 {'riak@10.20.2.242',{ok,1}},   
 {'riak@10.20.2.244',{ok,1}},   
 {'riak@10.20.2.245',{ok,1}}]

Then, to change handoff_concurrency to, say, 4, issue this command:

rp(rpc:multicall([node() | nodes()], application, set_env, [riak_core, 
handoff_concurrency, 4])).

Justin was very helpful in guiding me through the process; he showed me these
commands. Thanks again, Justin!

Cheers,
Nam


On May 25, 2012, at 8:51 PM, jshoffstall wrote:

> Nam,
> 
> To recap the upshot of our offline chat tonight:
> Though the leave operations on your cluster progressed fine, in the future I
> would just take the damaged nodes down, do the repair like I mentioned in
> the earlier post in this thread, and bring the nodes back up. No membership
> changes should be required.
> 
> Justin Shoffstall
> Developer Advocate | Basho Technologies, Inc.
> 
> 
> --
> View this message in context: 
> http://riak-users.197444.n3.nabble.com/Riak-crashed-with-MANIFEST-not-found-tp4015987p4016240.html
> Sent from the Riak Users mailing list archive at Nabble.com.
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak as Binary File Store

2012-05-29 Thread Vlad Gorodetsky
I've read somewhere here on the mailing list that storing blobs larger
than 50KB isn't recommended.
Is that correct? If so, is it something specific to the storage backend?

~Vlad

On Tue, May 29, 2012 at 3:51 PM, Alvaro Videla  wrote:
> Like this perhaps: https://github.com/johnthethird/riak-fuse *cough* *cough*
>
>
> On Tue, May 29, 2012 at 2:49 PM, Shuhao Wu  wrote:
>>
>> It'll be interesting if you can write a filesystem on top of Riak.
>>
>> That would be a cool project to see on github :P
>>
>> Shuhao
>>
>>
>> On Tue, May 29, 2012 at 8:36 AM, Praveen Baratam
>>  wrote:
>>>
>>> Hello Everybody!
>>>
>>> I have read abundantly over the web that Riak is very well suited to
>>> store and retrieve small binary objects such as images, docs, etc.
>>>
>>> In our scenario we are planning to use Riak to store uploads to our
>>> portal which is a Social Network. Uploads are mostly images with maximum
>>> size of 2 MB and typical size ranges between few KBs to few 100 KBs.
>>>
>>> Does this usage pattern fit Riak? What are the caveats if any?
>>>
>>> Thank you!
>>>
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>>
>>
>>
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>>
>
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Riak as Binary File Store

2012-05-29 Thread Mark Phillips
On Tue, May 29, 2012 at 12:51 PM, Vlad Gorodetsky  wrote:

> I've read somewhere here on the mailing list that storing blobs that
> are more than 50KB isn't recommended.
> Is that correct? If so, is it something specific to storage backend?
>
>
Riak can probably handle objects up to about 10 MB. That said, depending on
your hardware and network, the practical limit is probably smaller than
that. Also, keep n_val in mind when writing large values (e.g., a 5 MB object
with an n_val of 3 means 15 MB across the wire).
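
For reference, a minimal sketch of storing and fetching one such upload with
the Java client (my own example, assuming the 1.0.x riak-java-client API and
the default PB port; bucket, key, and file names are made up):

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.IRiakObject;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.bucket.Bucket;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class StoreUpload {
        public static void main(String[] args) throws Exception {
            IRiakClient client = RiakFactory.pbcClient("localhost", 8087);
            try {
                // A typical portal upload: a few KB to 2 MB of image data.
                byte[] image = Files.readAllBytes(Paths.get("avatar.jpg"));
                Bucket uploads = client.fetchBucket("uploads").execute();
                // With the default n_val of 3, a 2 MB object means roughly
                // 6 MB written across the cluster.
                uploads.store("user42-avatar", image).execute();

                IRiakObject fetched = uploads.fetch("user42-avatar").execute();
                System.out.println(fetched.getValue().length + " bytes");
            } finally {
                client.shutdown();
            }
        }
    }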

Mark


> ~Vlad
>
> On Tue, May 29, 2012 at 3:51 PM, Alvaro Videla 
> wrote:
> > Like this perhaps: https://github.com/johnthethird/riak-fuse *cough* *cough*
> >
> >
> > On Tue, May 29, 2012 at 2:49 PM, Shuhao Wu  wrote:
> >>
> >> It'll be interesting if you can write a filesystem on top of Riak.
> >>
> >> That would be a cool project to see on github :P
> >>
> >> Shuhao
> >>
> >>
> >> On Tue, May 29, 2012 at 8:36 AM, Praveen Baratam
> >>  wrote:
> >>>
> >>> Hello Everybody!
> >>>
> >>> I have read abundantly over the web that Riak is very well suited to
> >>> store and retrieve small binary objects such as images, docs, etc.
> >>>
> >>> In our scenario we are planning to use Riak to store uploads to our
> >>> portal which is a Social Network. Uploads are mostly images with
> maximum
> >>> size of 2 MB and typical size ranges between few KBs to few 100 KBs.
> >>>
> >>> Does this usage pattern fit Riak? What are the caveats if any?
> >>>
> >>> Thank you!
> >>>
> >>> ___
> >>> riak-users mailing list
> >>> riak-users@lists.basho.com
> >>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >>>
> >>
> >>
> >> ___
> >> riak-users mailing list
> >> riak-users@lists.basho.com
> >> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >>
> >
> >
> > ___
> > riak-users mailing list
> > riak-users@lists.basho.com
> > http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> >
>
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client Riak Builders...

2012-05-29 Thread Brian Roach
Guido -

The real fix is to enhance the client to support a Collection; I'll add an
issue for this on GitHub.

What you would need to do right now is write your own Converter (which would 
really just be a modification of our JSONConverter if you're using JSON) that 
does this for you. 

If you look at the source for JSONConverter you'll see where the indexes are
processed. As it is, the index processing is handled by the
RiakIndexConverter class, which is where the limitation of requiring the
annotated field to be a String comes from (it's actually buried lower than
that in the underlying annotation processing, but that's the starting point for
the problem). The actual RiakIndexes class that encapsulates the data and
exists in the IRiakObject doesn't have this problem.
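
To illustrate that last point, a small, untested sketch (assuming the 1.0.x
RiakObjectBuilder API; bucket, key, and index names are made up, and I believe
repeated addIndex calls accumulate into a set) of putting multiple values on
one index name directly on an IRiakObject, bypassing the annotation:

    import com.basho.riak.client.IRiakObject;
    import com.basho.riak.client.builders.RiakObjectBuilder;

    public class MultiValuedIndexSketch {
        public static void main(String[] args) {
            // Two values for the same secondary index on one object; the
            // RiakIndexes inside the IRiakObject keeps a set per index name.
            IRiakObject obj = RiakObjectBuilder.newBuilder("users", "user42")
                .withContentType("application/json")
                .withValue("{\"name\":\"guido\"}")
                .addIndex("numbers", 7)
                .addIndex("numbers", 42)
                .build();

            System.out.println(obj.allIntIndexes()); // expected: numbers -> [7, 42]
        }
    }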

The catch is that you'll need to do all the reflection ugliness yourself, as 
that's the part that's broken (the annotation processing). 

Basically, in JSONConverter.fromDomain() you would need to replace
RiakIndexes indexes = riakIndexConverter.getIndexes(domainObject);
with your own annotation processing. The same would need to be done in 
JSONConverter.toDomain() at
riakIndexConverter.populateIndexes(…) 

Obviously this is not ideal and I'm considering it a bug; I'll put this toward
the top of the list of things I'm working on right now.

Thanks,
Brian Roach

 


On May 28, 2012, at 8:03 AM, Guido Medina wrote:

> Hi,
> 
>  I'm looking for a work around @RiakIndex annotation to support multiple 
> values per index name, since the annotation is limited to one single value 
> per annotated property (no collection support), I would like to know if there 
> is a way of using the DomainBucketBuilder, mutation & conflict resolver and 
> at the same time has access to a method signature like addIndex(String or 
> int)...addIndex(String or int)...build() same as you can do with 
> RiakObjectBuilder which lacks support for conflict resolution and mutation 
> style.
> 
> Regards,
> 
> Guido.
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client Riak Builders...

2012-05-29 Thread Guido Medina
I will submit a pull request. I fixed it: I enabled @RiakIndex for
collection fields AND methods (String, Integer, or a Collection of either).
It is working in our code, but I still need to test it more before
making it final.


I will share the details tomorrow; I have already created a fork from your
master branch.


Now you can have something like:

@RiakIndex
@JsonIgnore
Collection getNumbers()

This also works as an index; with no getter (as of 1.0.6-SNAPSHOT) it will only
be that, an index:


@RiakIndex
Collection numbers;
That will act as an index and be ignored as a property, which is the intention
of the index: to be a dynamically calculated value (or values), not a property
that requires the caller to call a post-construct.


And of course, all subclasses of a collection apply.
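
For clarity, a hypothetical domain class illustrating the proposed usage
(this describes the fork's behaviour above, not the released 1.0.5 client;
the class, field, and index names are made up):

    import com.basho.riak.client.convert.RiakIndex;
    import com.basho.riak.client.convert.RiakKey;
    import java.util.Collection;

    public class UserPrefs {
        @RiakKey
        private String userId;

        // The whole collection becomes the values of one secondary index and
        // is not serialized as a JSON property of the stored object.
        @RiakIndex(name = "numbers")
        private Collection<Integer> numbers;
    }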

Thanks for the answer,

Guido.

-Original Message- 
From: Brian Roach

Sent: Tuesday, May 29, 2012 6:09 PM
To: Guido Medina
Cc: riak-users@lists.basho.com
Subject: Re: Java Client Riak Builders...

Guido -

The real fix is to enhance the client to support a Collection, I'll add an 
issue for this in github.


What you would need to do right now is write your own Converter (which would 
really just be a modification of our JSONConverter if you're using JSON) 
that does this for you.


If you look at the source for JSONConverter you'll see where the indexes are 
processed.  As it is, the index processing is handled by the 
RiakIndexConverter class which is where the limitation of requiring the 
annotated field to be a String is coming from (it's actually buried lower 
than that in the underlying annotation processing, but that's the starting 
point for the problem). The actual RiakIndexes class that encapsulates the 
data and exists in the IRiakObject doesn't have this problem.


The catch is that you'll need to do all the reflection ugliness yourself, as 
that's the part that's broken (the annotation processing).


Basically, in JSONConverter.fromDomain() you would need to replace
RiakIndexes indexes = riakIndexConverter.getIndexes(domainObject);
with your own annotation processing. The same would need to be done in 
JSONConverter.toDomain() at

riakIndexConverter.populateIndexes(…)

Obviously this is not ideal and I'm considering it a bug; I'll put this 
toward to top of the list of things I'm working on right now.


Thanks,
Brian Roach




On May 28, 2012, at 8:03 AM, Guido Medina wrote:


Hi,

 I'm looking for a work around @RiakIndex annotation to support multiple 
values per index name, since the annotation is limited to one single value 
per annotated property (no collection support), I would like to know if 
there is a way of using the DomainBucketBuilder, mutation & conflict 
resolver and at the same time has access to a method signature like 
addIndex(String or int)...addIndex(String or int)...build() same as you 
can do with RiakObjectBuilder which lacks support for conflict resolution 
and mutation style.


Regards,

Guido.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client Riak Builders...

2012-05-29 Thread Brian Roach
Guido - 

Thanks, looking forward to it. 

Also as an FYI, on Friday I fixed the bug that was causing the requirement of 
the @JsonIgnore for Riak annotated fields without getters. 

- Brian Roach

On May 29, 2012, at 11:52 AM, Guido Medina wrote:

> I will request a "pull request", I fixed it, I enabled @RiakIndex for 
> collection fields AND methods (String, Integer or Collection of any of 
> those), on our coding is working, but still I need to test it more before 
> making it final.
> 
> I will share the details tomorrow, I already created a fork from your master 
> branch.
> 
> Now you can have something like:
> 
> @RiakIndex
> @JsonIgnore
> Collection getNumbers()
> 
> Also this works as index and with no getter (as of 1.0.6-SNAPSHOT) will only 
> be that, an index:
> 
> @RiakIndex
> Collection numbers;
> That will act as index and be ignored as property which is the intention of 
> the index, to be a dynamic calculated value(s) and not as property which 
> requires the caller to call a post-construct.
> 
> And of course, all subclasses of a collection apply.
> 
> Thanks for the answer,
> 
> Guido.
> 
> -Original Message- From: Brian Roach
> Sent: Tuesday, May 29, 2012 6:09 PM
> To: Guido Medina
> Cc: riak-users@lists.basho.com
> Subject: Re: Java Client Riak Builders...
> 
> Guido -
> 
> The real fix is to enhance the client to support a Collection, I'll add an 
> issue for this in github.
> 
> What you would need to do right now is write your own Converter (which would 
> really just be a modification of our JSONConverter if you're using JSON) that 
> does this for you.
> 
> If you look at the source for JSONConverter you'll see where the indexes are 
> processed.  As it is, the index processing is handled by the 
> RiakIndexConverter class which is where the limitation of requiring the 
> annotated field to be a String is coming from (it's actually buried lower 
> than that in the underlying annotation processing, but that's the starting 
> point for the problem). The actual RiakIndexes class that encapsulates the 
> data and exists in the IRiakObject doesn't have this problem.
> 
> The catch is that you'll need to do all the reflection ugliness yourself, as 
> that's the part that's broken (the annotation processing).
> 
> Basically, in JSONConverter.fromDomain() you would need to replace
> RiakIndexes indexes = riakIndexConverter.getIndexes(domainObject);
> with your own annotation processing. The same would need to be done in 
> JSONConverter.toDomain() at
> riakIndexConverter.populateIndexes(…)
> 
> Obviously this is not ideal and I'm considering it a bug; I'll put this 
> toward to top of the list of things I'm working on right now.
> 
> Thanks,
> Brian Roach
> 
> 
> 
> 
> On May 28, 2012, at 8:03 AM, Guido Medina wrote:
> 
>> Hi,
>> 
>> I'm looking for a work around @RiakIndex annotation to support multiple 
>> values per index name, since the annotation is limited to one single value 
>> per annotated property (no collection support), I would like to know if 
>> there is a way of using the DomainBucketBuilder, mutation & conflict 
>> resolver and at the same time has access to a method signature like 
>> addIndex(String or int)...addIndex(String or int)...build() same as you can 
>> do with RiakObjectBuilder which lacks support for conflict resolution and 
>> mutation style.
>> 
>> Regards,
>> 
>> Guido.
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
> 
> 
> ___
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client Riak Builders...

2012-05-29 Thread Guido Medina
Also, the coming Riak client version removed the embedded JSON package and
pulled in an old implementation from the main Maven repo. I think the intent
was to use this version: https://github.com/douglascrockford/JSON-java, which
has a lot of performance improvements but no Maven artifact. The old one:


   
   <dependency>
       <groupId>org.json</groupId>
       <artifactId>json</artifactId>
       <version>20090211</version>
   </dependency>

uses a lot of StringBuffer instead of the StringBuilder and StringWriter
classes introduced later on.


I'm wondering how one benchmarks against the other.

Regards,

Guido.

-Original Message- 
From: Brian Roach

Sent: Tuesday, May 29, 2012 7:05 PM
To: Guido Medina
Cc: riak-users@lists.basho.com
Subject: Re: Java Client Riak Builders...

Guido -

Thanks, looking forward to it.

Also as an FYI, on Friday I fixed the bug that was causing the requirement 
of the @JsonIgnore for Riak annotated fields without getters.


- Brian Roach

On May 29, 2012, at 11:52 AM, Guido Medina wrote:

I will request a "pull request", I fixed it, I enabled @RiakIndex for 
collection fields AND methods (String, Integer or Collection of any of 
those), on our coding is working, but still I need to test it more before 
making it final.


I will share the details tomorrow, I already created a fork from your 
master branch.


Now you can have something like:

@RiakIndex
@JsonIgnore
Collection getNumbers()

Also this works as index and with no getter (as of 1.0.6-SNAPSHOT) will 
only be that, an index:


@RiakIndex
Collection numbers;
That will act as index and be ignored as property which is the intention 
of the index, to be a dynamic calculated value(s) and not as property 
which requires the caller to call a post-construct.


And of course, all subclasses of a collection apply.

Thanks for the answer,

Guido.

-Original Message- From: Brian Roach
Sent: Tuesday, May 29, 2012 6:09 PM
To: Guido Medina
Cc: riak-users@lists.basho.com
Subject: Re: Java Client Riak Builders...

Guido -

The real fix is to enhance the client to support a Collection, I'll add an 
issue for this in github.


What you would need to do right now is write your own Converter (which 
would really just be a modification of our JSONConverter if you're using 
JSON) that does this for you.


If you look at the source for JSONConverter you'll see where the indexes 
are processed.  As it is, the index processing is handled by the 
RiakIndexConverter class which is where the limitation of requiring the 
annotated field to be a String is coming from (it's actually buried lower 
than that in the underlying annotation processing, but that's the starting 
point for the problem). The actual RiakIndexes class that encapsulates the 
data and exists in the IRiakObject doesn't have this problem.


The catch is that you'll need to do all the reflection ugliness yourself, 
as that's the part that's broken (the annotation processing).


Basically, in JSONConverter.fromDomain() you would need to replace
RiakIndexes indexes = riakIndexConverter.getIndexes(domainObject);
with your own annotation processing. The same would need to be done in 
JSONConverter.toDomain() at

riakIndexConverter.populateIndexes(…)

Obviously this is not ideal and I'm considering it a bug; I'll put this 
toward to top of the list of things I'm working on right now.


Thanks,
Brian Roach




On May 28, 2012, at 8:03 AM, Guido Medina wrote:


Hi,

I'm looking for a work around @RiakIndex annotation to support multiple 
values per index name, since the annotation is limited to one single 
value per annotated property (no collection support), I would like to 
know if there is a way of using the DomainBucketBuilder, mutation & 
conflict resolver and at the same time has access to a method signature 
like addIndex(String or int)...addIndex(String or int)...build() same as 
you can do with RiakObjectBuilder which lacks support for conflict 
resolution and mutation style.


Regards,

Guido.

___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com



___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Java Client Riak Builders...

2012-05-29 Thread Brian Roach

Actually, it was on purpose. As a sort of "step 1" toward getting rid of it, the
goal was just to get the code out of our repo and use Maven to pull it in. As
you note, and as far as I could find, the latest development on GitHub is not
being published to Maven Central.

Long term I want to eliminate it completely and use Jackson for everything. 

Thanks,
- Roach


On May 29, 2012, at 1:07 PM, Guido Medina wrote:

> Also, the coming riak client version removed the embedded json package from 
> it and put an old implementation from the main maven repo, I think that what 
> was meant to do was to put this version: 
> https://github.com/douglascrockford/JSON-java which has lot of performance 
> improvements but no maven repo, the old:
> 
>   
>   org.json
>   json
>   20090211
>   
> 
> uses lot of StringBuffer instead of StringBuilders and StringWritters 
> introduced later on.
> 
> I'm wondering about the benchmark of one vs the other.
> 
> Regards,
> 
> Guido.
> 
> -Original Message- From: Brian Roach
> Sent: Tuesday, May 29, 2012 7:05 PM
> To: Guido Medina
> Cc: riak-users@lists.basho.com
> Subject: Re: Java Client Riak Builders...
> 
> Guido -
> 
> Thanks, looking forward to it.
> 
> Also as an FYI, on Friday I fixed the bug that was causing the requirement of 
> the @JsonIgnore for Riak annotated fields without getters.
> 
> - Brian Roach
> 
> On May 29, 2012, at 11:52 AM, Guido Medina wrote:
> 
>> I will request a "pull request", I fixed it, I enabled @RiakIndex for 
>> collection fields AND methods (String, Integer or Collection of any of 
>> those), on our coding is working, but still I need to test it more before 
>> making it final.
>> 
>> I will share the details tomorrow, I already created a fork from your master 
>> branch.
>> 
>> Now you can have something like:
>> 
>> @RiakIndex
>> @JsonIgnore
>> Collection getNumbers()
>> 
>> Also this works as index and with no getter (as of 1.0.6-SNAPSHOT) will only 
>> be that, an index:
>> 
>> @RiakIndex
>> Collection numbers;
>> That will act as index and be ignored as property which is the intention of 
>> the index, to be a dynamic calculated value(s) and not as property which 
>> requires the caller to call a post-construct.
>> 
>> And of course, all subclasses of a collection apply.
>> 
>> Thanks for the answer,
>> 
>> Guido.
>> 
>> -Original Message- From: Brian Roach
>> Sent: Tuesday, May 29, 2012 6:09 PM
>> To: Guido Medina
>> Cc: riak-users@lists.basho.com
>> Subject: Re: Java Client Riak Builders...
>> 
>> Guido -
>> 
>> The real fix is to enhance the client to support a Collection, I'll add an 
>> issue for this in github.
>> 
>> What you would need to do right now is write your own Converter (which would 
>> really just be a modification of our JSONConverter if you're using JSON) 
>> that does this for you.
>> 
>> If you look at the source for JSONConverter you'll see where the indexes are 
>> processed.  As it is, the index processing is handled by the 
>> RiakIndexConverter class which is where the limitation of requiring the 
>> annotated field to be a String is coming from (it's actually buried lower 
>> than that in the underlying annotation processing, but that's the starting 
>> point for the problem). The actual RiakIndexes class that encapsulates the 
>> data and exists in the IRiakObject doesn't have this problem.
>> 
>> The catch is that you'll need to do all the reflection ugliness yourself, as 
>> that's the part that's broken (the annotation processing).
>> 
>> Basically, in JSONConverter.fromDomain() you would need to replace
>> RiakIndexes indexes = riakIndexConverter.getIndexes(domainObject);
>> with your own annotation processing. The same would need to be done in 
>> JSONConverter.toDomain() at
>> riakIndexConverter.populateIndexes(…)
>> 
>> Obviously this is not ideal and I'm considering it a bug; I'll put this 
>> toward to top of the list of things I'm working on right now.
>> 
>> Thanks,
>> Brian Roach
>> 
>> 
>> 
>> 
>> On May 28, 2012, at 8:03 AM, Guido Medina wrote:
>> 
>>> Hi,
>>> 
>>> I'm looking for a work around @RiakIndex annotation to support multiple 
>>> values per index name, since the annotation is limited to one single value 
>>> per annotated property (no collection support), I would like to know if 
>>> there is a way of using the DomainBucketBuilder, mutation & conflict 
>>> resolver and at the same time has access to a method signature like 
>>> addIndex(String or int)...addIndex(String or int)...build() same as you can 
>>> do with RiakObjectBuilder which lacks support for conflict resolution and 
>>> mutation style.
>>> 
>>> Regards,
>>> 
>>> Guido.
>>> 
>>> ___
>>> riak-users mailing list
>>> riak-users@lists.basho.com
>>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>> 
>> 
>> ___
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http

anyone seen that same error with leveldb ?

2012-05-29 Thread benoit ciceron

hello devs,
good folks on IRC suggested pinging the experts ;)
I cannot get basho_bench to run for more than a few hours with leveldb:
https://gist.github.com/22b8d49dd1d22553d85c . Has anyone been able to? The
same cluster is fine when I use bitcask for even longer periods.
Note: the Riak and PB ports are still responding at this point. Swapping is
turned off.
ben-



peace . period.
  ___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


preflist_exhausted error...

2012-05-29 Thread Sati, Mohit
Hello All,

I have built a new 4-node cluster using Riak version riak-1.1.2-1.el6.x86_64. I
read several of the posts, and in spite of doing the following, I'm still
getting the preflist... error.

1. New riak version with mapred_builtins.js

-rw-r--r-- 1 root root 2936 Apr 17 05:25 
/usr/lib64/riak/lib/riak_kv-1.1.2/priv/mapred_builtins.js

2. change settings for one node :

{map_js_vm_count, 24 },
{reduce_js_vm_count, 18 },
..
..
{js_max_vm_mem, 48},

3. Verify the settings

 rpc:multicall(riak_kv_js_manager,pool_size,[riak_kv_js_map]) .
{[24,24,24,24],[]}
rpc:multicall(riak_kv_js_manager,pool_size,[riak_kv_js_reduce]) .
{[18,18,18,18],[]}
4. run the map query:

For one day's data it is getting 1850 keys with a file size of 600K


Error message for more than 1 day's data

{"phase":0,"error":"[preflist_exhausted]","input":"{ok,{r_object,<

Can anyone please help?

I also built secondary indexes but it did not help


Thanks
Mohit
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Problems with bitcask, file merge errors, too many 0 byte files

2012-05-29 Thread Jacob Chapel
Our Riak server which is running 1.0.2 at the moment using bitcask backend
and search is crashing often and when restarted will crash again
immediately due to system_limit error.

2012-05-29 19:28:54.808 [error] <0.1001.0>@riak_kv_vnode:init:245 Failed to
start riak_kv_bitcask_backend Reason:
{{badmatch,{error,system_limit}},[{bitcask,scan_key_files,3},{bitcask,init_keydir,2},{bitcask,open,2},{riak_kv_bitcask_backend,start,2},{riak_kv_vnode,init,1},{riak_core_vnode,init,1},{gen_fsm,init_it,6},{proc_lib,init_p_do_apply,3}]}

Before, we were getting emfile errors, so we upped the ulimit for open
files which helped. Soon after (about a week) it crashed again but due to
the above error. After looking into it and asking on IRC, there wasn't much
information but looked to be due to a ton of 0 (zero) byte files in the
bitcask folder. In fact when counting, at first there were over 20k 0 byte
files. Someone who had similar issues was instructed to delete them, and
common sense says that 0 byte files don't hold any data. So I backed them
up (just in case) and removed them from the bitcask folder. That allowed
the server to startup and run again.

Fast forward a few days to now, it appears to have crashed due to the same
issue. Having over 20k new 0 byte files. I asked on IRC again but not much
could be helped since they didn't know. I backed up and removed the files
again and it runs. How can I prevent these files?

Also, as this sample log output shows:
https://gist.github.com/a2e3c473e1d582bd87a2

We are getting a lot of file merge errors and child processes dying
randomly (not sure how to read the error).

I am not really sure where to go from here; we can't keep removing 0-byte
files to keep the server up, and I am sure there is some setting or
configuration problem that just isn't apparent. Help would be very much
appreciated.

Jacob Chapel
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Problems with bitcask, file merge errors, too many 0 byte files

2012-05-29 Thread Ryan Zezeski
Jacob,

I only glanced at this but have some comments inline.

On Tue, May 29, 2012 at 8:44 PM, Jacob Chapel wrote:

> Our Riak server which is running 1.0.2 at the moment using bitcask backend
> and search is crashing often and when restarted will crash again
> immediately due to system_limit error.
>
> 2012-05-29 19:28:54.808 [error] <0.1001.0>@riak_kv_vnode:init:245 Failed
> to start riak_kv_bitcask_backend Reason:
> {{badmatch,{error,system_limit}},[{bitcask,scan_key_files,3},{bitcask,init_keydir,2},{bitcask,open,2},{riak_kv_bitcask_backend,start,2},{riak_kv_vnode,init,1},{riak_core_vnode,init,1},{gen_fsm,init_it,6},{proc_lib,init_p_do_apply,3}]}
>

The system_limit, I imagine, is related to the 20k 0-byte files causing you
to reach the open-file limit.  That is, it is not the cause but a symptom of
the problem.


>
> Before, we were getting emfile errors, so we upped the ulimit for open
> files which helped. Soon after (about a week) it crashed again but due to
> the above error. After looking into it and asking on IRC, there wasn't much
> information but looked to be due to a ton of 0 (zero) byte files in the
> bitcask folder. In fact when counting, at first there were over 20k 0 byte
> files. Someone who had similar issues was instructed to delete them, and
> common sense says that 0 byte files don't hold any data. So I backed them
> up (just in case) and removed them from the bitcask folder. That allowed
> the server to startup and run again.
>
>
The fact that you have many 0 byte files is an indication that bitcask is
crashing a lot.  You haven't noticed this probably because Riak has been
working fine until now.  Although, had you peeked at your logs you probably
would have noticed lots of vnode/bitcask crashes.


> Fast forward a few days to now, it appears to have crashed due to the same
> issue. Having over 20k new 0 byte files. I asked on IRC again but not much
> could be helped since they didn't know. I backed up and removed the files
> again and it runs. How can I prevent these files?
>
> Also, as this sample log output shows:
> https://gist.github.com/a2e3c473e1d582bd87a2
>

This, I believe, is the heart of your problem.  It looks like you have
corrupted data files.  IIRC bitcask merging is still susceptible to
crashing when dealing with corrupted data files.  So every time a merge is
triggered the bitcask instance crashes and restarts causing a new 0 byte
file to be created.  Once this happens enough times you have enough of
these files to reach the system limit.

Just confirmed, I'm pretty sure this is the issue you are running into.

https://issues.basho.com/show_bug.cgi?id=1160


>
> We are getting a lot of file merge errors and child processes dying
> randomly (not sure how to read the error).
>
> I am not really sure where to go from here, we can't keep removing 0 byte
> files to keep the server up, and I am sure there is some setting or
> configuration problem that just isn't apparent. Help would be very much
> appreciated.
>

Depending on how many partitions have corrupt data files you could delete
the bitcask data, kill the owning vnode, and perform list-keys +
read-repair to repair the replicas.  However, looking at your log it seems
like you have a lot of partitions with corrupt data files (I'm guessing
this was because of your previous emfile issue).  If you feel brave you
could probably remove the bad bits with a hex editor but I imagine we have
some code to automate this somewhere.  I gotta go right now but perhaps
someone else can point you to an easy fix.
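
For what it's worth, a rough sketch of the list-keys + read-repair sweep with
the Java client (my own sketch, assuming the 1.0.x client API and that a plain
fetch of each key with default quorums is enough to trigger read repair on the
emptied partitions; the bucket name is made up):

    import com.basho.riak.client.IRiakClient;
    import com.basho.riak.client.RiakFactory;
    import com.basho.riak.client.bucket.Bucket;

    public class ReadRepairSweep {
        public static void main(String[] args) throws Exception {
            IRiakClient client = RiakFactory.pbcClient("localhost", 8087);
            try {
                Bucket bucket = client.fetchBucket("my_bucket").execute();
                int touched = 0;
                // Listing keys is expensive; run this off-peak. Each fetch
                // makes Riak compare the replicas and rewrite missing ones.
                for (String key : bucket.keys()) {
                    bucket.fetch(key).execute();
                    touched++;
                }
                System.out.println("touched " + touched + " keys");
            } finally {
                client.shutdown();
            }
        }
    }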

-Z
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Link walking with a java client

2012-05-29 Thread Deepak Balasubramanyam
Brian,

Yes, I read about the hack somewhere in your documentation. My understanding
is that the link-walking operation will cease to work via HTTP after the links
on a node grow beyond a particular number. This happens because an HTTP
header is used to send link-related data, and there are limits on how
much data a header can hold.

I was interested in exploring whether the PB client can overcome this
limitation. The use case I am interested in is this: if a single node
links to 10k nodes, how much time would it take a link walker to visit all
the nodes via the PB client? What sort of client would you recommend for
something like this?

I know that linking is only meant to be used as a lightweight feature.
Would 10k links to a node be considered lightweight?

Thanks
Deepak

On Tue, May 29, 2012 at 9:33 PM, Brian Roach  wrote:

> Deepak -
>
> I'll take a look at it this week, but more than likely it's a bug.
>
> Link walking is a REST only operation as far as Riak’s interfaces are
> concerned. Link Walking in the protocol buffers Java client is a hack that
> issues two m/r jobs to the protocol buffers interface (the first constructs
> the inputs to the second by walking the links, the second returns the data).
>
> Thanks,
> Brian Roach
>
> On May 27, 2012, at 7:19 AM, Deepak Balasubramanyam wrote:
>
> > This looks like a bug. The code to walk links via a HTTP client works
> perfectly. The same code fails when the PB client is used. The POJO
> attached in this email reproduces the problem.
> >
> > I searched the email archives and existing issues and found no trace of
> this problem. Please run the POJO by swapping the clients returned from the
> getClient() method to reproduce the problem. I can create a bug report once
> someone from the dev team confirms this really is a bug.
> >
> > Riak client pom:
> >   
> >   com.basho.riak
> >   riak-client
> >   1.0.5
> >   
> >
> > Riak server version - 1.1.2. Built from source.
> > 4 nodes running on 1 machine. OS - Linux mint.
> >
> > On Sun, May 27, 2012 at 10:05 AM, Deepak Balasubramanyam <
> deepak.b...@gmail.com> wrote:
> > Hi,
> >
> > I have a cluster that contains 2 buckets. A bucket named 'usersMine'
> contains the key 'user2', which is linked to several keys (about 10) under
> a bucket named userPreferences. The relationship exists under the name
> 'myPref'. A user and a preference have String values.
> >
> > I can successfully traverse the link over HTTP using the following URL -
> >
> > curl -v localhost:8091/riak/usersMine/user2/_,myPref,1
> >
> >  
> 
> > > User-Agent: curl/7.21.6 (i686-pc-linux-gnu) libcurl/7.21.6
> OpenSSL/1.0.0e zlib/1.2.3.4 libidn/1.22 librtmp/2.3
> > > Host: localhost:8091
> > > Accept: */*
> > >
> > < HTTP/1.1 200 OK
> > < Server: MochiWeb/1.1 WebMachine/1.9.0 (someone had painted it blue)
> > < Expires: Sun, 27 May 2012 04:12:39 GMT
> > < Date: Sun, 27 May 2012 04:02:39 GMT
> > < Content-Type: multipart/mixed; boundary=IYGfKNqjGdco9ddfyjRP1Utzfi2
> > < Content-Length: 3271
> > <
> >
> > --IYGfKNqjGdco9ddfyjRP1Utzfi2
> > Content-Type: multipart/mixed; boundary=3YVES0x2tFnUDOdTzfn1OGS6uMt
> >
> > --3YVES0x2tFnUDOdTzfn1OGS6uMt
> > X-Riak-Vclock: a85hYGBgzGDKBVIcMRuuc/nvy7mSwZTImMfKUKpodpIvCwA=
> > Location: /riak/userPreferences/preference3004
> > Content-Type: text/plain; charset=UTF-8
> > Link: ; rel="up"
> > Etag: 5GucnGSk4TjQc8BO1eNLyI
> > Last-Modified: Sun, 27 May 2012 03:54:29 GMT
> >
> > junk
> >
> > <<  Truncated  >>
> > junk
> > --3YVES0x2tFnUDOdTzfn1OGS6uMt--
> >
> > --IYGfKNqjGdco9ddfyjRP1Utzfi2--
> >  
> 
> >
> > However when I use the java client to walk the link, I get a
> ClassCastException.
> >
> >  
> 
> > Java code:
> >  
> 
> > private void getAllLinks()
> > {
> > String user="user2";
> > IRiakClient riakClient = null;
> > try
> > {
> > long past = System.currentTimeMillis();
> > riakClient = RiakFactory.pbcClient("localhost",8081);
> > Bucket userBucket =
> riakClient.fetchBucket("usersMine").execute();
> > DefaultRiakObject user1 =(DefaultRiakObject)
> userBucket.fetch(user).execute();
> > List links = user1.getLinks();
> > System.out.println(links.size());
> > WalkResult execute =
> riakClient.walk(user1).addStep("userPreferences", "myPref",true).execute();
> > Iterator iterator = execute.iterator();
> > while(iterator.hasNext())
> > {
> > Object next = iterator.next();
> > System.out.println(next);
> > }
> > long now = System.cur