Hello Everybody!
I have read abundantly over the web that Riak is very well suited to store
and retrieve small binary objects such as images, docs, etc.
In our scenario we are planning to use Riak to store uploads to our portal
which is a Social Network. Uploads are mostly images with maximum siz
Hi, Praveen.
Nothing about what you have said would cause a problem for Riak. Go for it!
Justin
On May 29, 2012, at 8:36 AM, Praveen Baratam wrote:
> Hello Everybody!
>
> I have read abundantly over the web that Riak is very well suited to store
> and retrieve small binary objects such as
It'll be interesting if you can write a filesystem on top of Riak.
That would be a cool project to see on github :P
Shuhao
On Tue, May 29, 2012 at 8:36 AM, Praveen Baratam
wrote:
> Hello Everybody!
>
> I have read abundantly over the web that Riak is very well suited to store
> and retrieve sm
Like this perhaps: https://github.com/johnthethird/riak-fuse *cough* *cough*
On Tue, May 29, 2012 at 2:49 PM, Shuhao Wu wrote:
> It'll be interesting if you can write a filesystem on top of Riak.
>
> That would be a cool project to see on github :P
>
> Shuhao
>
>
> On Tue, May 29, 2012 at 8:36 A
Hello everybody!
Maybe some of you have faced my problem before. I would be glad to
receive any ideas.
I'm trying to perform a rolling upgrade of nodes one by one.
I'm stopping one node and updating its configuration file to maintain
compatibility with riak 0.14. Namely, I set
the following options in r
Deepak -
I'll take a look at it this week, but more than likely it's a bug.
Link walking is a REST-only operation as far as Riak’s interfaces are
concerned. Link walking in the protocol buffers Java client is a hack that
issues two m/r jobs to the protocol buffers interface (the first construct
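For reference, a minimal sketch of plain link walking over the HTTP interface
from an Erlang shell; the host, bucket, key, and tag below are hypothetical,
and the URL pattern is /riak/Bucket/Key/LinkBucket,Tag,Keep:

%% Fetch the objects linked from people/praveen under riaktag "friend";
%% "_" matches any link bucket and the trailing 1 means "keep the results".
inets:start(),
{ok, {_Status, _Headers, Body}} =
    httpc:request("http://127.0.0.1:8098/riak/people/praveen/_,friend,1"),
io:format("~s~n", [Body]).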
Thank you very much Justin.
Here's another command to hopefully speed up the handoff process.
On any of the nodes, attach to the Erlang console, then:
rp([{N, rpc:call(N, application, get_env, [riak_core, handoff_concurrency])} ||
N <- [node() | nodes()]]).
This command will show the current handoff_concurrency on each node.
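If it turns out to be too low, one way (untested here, and the value 8 is only
an example) to raise it across the cluster is the same rpc trick:

%% Set handoff_concurrency on every connected node; 8 is just an example.
rp([{N, rpc:call(N, application, set_env, [riak_core, handoff_concurrency, 8])}
    || N <- [node() | nodes()]]).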
I've read somewhere here on the mailing list that storing blobs that
are more than 50KB isn't recommended.
Is that correct? If so, is it something specific to storage backend?
~Vlad
On Tue, May 29, 2012 at 3:51 PM, Alvaro Videla wrote:
> Like this perhaps: https://github.com/johnthethird/riak-fuse
On Tue, May 29, 2012 at 12:51 PM, Vlad Gorodetsky wrote:
> I've read somewhere here on the mailing list that storing blobs that
> are more than 50KB isn't recommended.
> Is that correct? If so, is it something specific to storage backend?
>
>
Riak can probably handle objects up to about 10 MB. Th
Guido -
The real fix is to enhance the client to support a Collection; I'll add an
issue for this on GitHub.
What you would need to do right now is write your own Converter (which would
really just be a modification of our JSONConverter if you're using JSON) that
does this for you.
If you lo
I will send a pull request. I fixed it: I enabled @RiakIndex for
collection fields AND methods (String, Integer, or a Collection of either of
those). It is working in our code, but I still need to test it more before
making it final.
I will share the details tomorrow; I already created a fork fr
Guido -
Thanks, looking forward to it.
Also, as an FYI, on Friday I fixed the bug that was requiring @JsonIgnore on
Riak-annotated fields without getters.
- Brian Roach
On May 29, 2012, at 11:52 AM, Guido Medina wrote:
> I will request a "pull request", I fixed it, I e
Also, the upcoming riak client version removed the embedded JSON package from
it and pulled in an old implementation from the main Maven repo. I think what
was meant was to use this version:
https://github.com/douglascrockford/JSON-java, which has a lot of performance
improvements but no Maven
Actually, it was on purpose. As sort of a "step 1" to getting rid of it, the
goal was just to get the code out of our repo and use maven to pull it in. As
you note and as far as I could find, the latest development on github is not
being published to maven central.
Long term I want to elimina
Hello devs,
Good folks in IRC suggested to ping the experts ;)
I cannot get basho_bench to run for more than a few hours with leveldb:
https://gist.github.com/22b8d49dd1d22553d85c . Has anyone been able to? The
same cluster is OK when I use bitcask for even longer periods.
Note: riak and the pb port are still respo
Hello All,
I have built a new 4-node cluster using riak version riak-1.1.2-1.el6.x86_64. I
was reading several of the posts and, in spite of doing the following, I'm still
getting the preflist... error.
1. New riak version with mapred_builtins.js
-rw-r--r-- 1 root root 2936 Apr 17 05:25
/usr/li
Our Riak server, which is running 1.0.2 at the moment with the bitcask backend
and search, is crashing often, and when restarted it will crash again
immediately due to a system_limit error.
2012-05-29 19:28:54.808 [error] <0.1001.0>@riak_kv_vnode:init:245 Failed to
start riak_kv_bitcask_backend Reason:
{{bad
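A hedged diagnostic sketch, assuming the system_limit comes from the Erlang
VM's own process limit or from running out of ports/file descriptors (only a
guess), run from the console of the affected node:

%% Current process usage vs. the VM's process limit.
{erlang:system_info(process_count), erlang:system_info(process_limit)}.
%% Open ports (files/sockets); on this Erlang generation the ceiling is
%% typically controlled by the ERL_MAX_PORTS environment variable.
length(erlang:ports()).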
Jacob,
I only glanced at this but have some comments inline.
On Tue, May 29, 2012 at 8:44 PM, Jacob Chapel wrote:
> Our Riak server which is running 1.0.2 at the moment using bitcask backend
> and search is crashing often and when restarted will crash again
> immediately due to system_limit erro
Brian,
Yes, I read about the hack somewhere in your documentation. My understanding
is that the link walking operation will cease to work via HTTP after links
to a node grow beyond a particular number. This happens because an HTTP
header is used to send link-related data and there are limits around