Hi Dan,
Docs are a big focus for us right now. We aim to keep everything up-to-date but
things slip through the cracks.
I just opened an issue to make sure this gets rectified:
https://github.com/basho/riak_wiki/issues/380
Feel free to add details to it if you think there's more that needs fixing.
Yeah, that is pretty awful. IMO, this should be a top-priority task. It
doesn't matter how good the product/code/architecture is if users can't
understand what to expect when using it, or, when they do, the
information is old news!
If I fully understood what the current situation is, I'd send over a
Thanks Kresten! I understand the challenges of fast development with aging
docs and trying to keep up with everything. This indeed is very different and
now makes a lot more sense and is back in line with our original
assumptions (which apparently were wrong initially back with pre-1.0 and
now are correct).
That comment is no longer correct, since Riak (as of 1.0, I believe) ignores
client IDs. See
https://github.com/basho/riak/blob/master/releasenotes/riak-1.0.org#getput-improvements
1. Siblings will only occur if you have allow_mult=true enabled on the bucket
(see the sketch below). The following applies only in that
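A minimal sketch of turning allow_mult on over the HTTP API, assuming a local
node on port 8098, a made-up bucket name, and Python's requests library as the
HTTP client:

    # Enable allow_mult so concurrent writes are kept as siblings rather
    # than silently resolved last-write-wins. Node address and bucket
    # name are placeholders for your own setup.
    import requests

    RIAK = "http://127.0.0.1:8098"
    bucket = "sessions"

    resp = requests.put(
        f"{RIAK}/buckets/{bucket}/props",
        json={"props": {"allow_mult": True}},
    )
    resp.raise_for_status()

    # Read the properties back to confirm the change took effect.
    props = requests.get(f"{RIAK}/buckets/{bucket}/props").json()["props"]
    print(props["allow_mult"])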
You could try running riak with the HanoiDB backend. It is not as fast as
LevelDB but uses less memory/open files, and otherwise has similar
characteristics.
https://github.com/basho-labs/riak_kv_hanoidb_backend
The nice things about Hanoi are that it has better worst-case response time, and
Heh - I found my own answer staring me in the face -
http://wiki.basho.com/Vector-Clocks.html.
*Concurrent writes*: If two writes occur simultaneously from clients with
different client IDs but the same vector clock value, Riak will not be
able to determine the correct object to store and the ob
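For what it's worth, the usual client-side cycle that deals with this is to
read the vector clock along with the object and hand it back on the write.
A rough sketch over the HTTP API (local node on 8098; the bucket and key
names are invented for the example):

    # Read/modify/write with the vector clock so Riak can tell the update
    # descends from what we read. Two writers that both send the same
    # stale vclock will produce siblings when allow_mult=true.
    import requests

    RIAK = "http://127.0.0.1:8098"
    url = f"{RIAK}/buckets/carts/keys/user-42"

    resp = requests.get(url)
    vclock = resp.headers.get("X-Riak-Vclock")  # opaque token, absent on 404

    headers = {"Content-Type": "application/json"}
    if vclock:
        headers["X-Riak-Vclock"] = vclock
    requests.put(url, data=b'{"items": ["book"]}', headers=headers)

    # If siblings do exist, a plain GET answers 300 Multiple Choices and
    # lists the sibling vtags; pick or merge one and write it back with
    # the fresh vclock to resolve the conflict.
    resp = requests.get(url)
    if resp.status_code == 300:
        print("siblings:\n" + resp.text)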
Hello guys, I've put this issue on hold and now I'm back on it. I tried to
increase memory for the riak user in limits.conf but am still getting the same
problem. I start my riak node and begin storing some data when at a certain
point in time (process "/usr/lib/riak/erts-5.9.1/bin/beam.smp" with 500 MB
approximately
Thanks for the replies - this is very helpful. Our persistence
abstraction layer already sports a robust process pool, as we support
other persistence solutions (although Riak is our main gun). I just
needed to understand the relationship of the protobuffer client process
to the Riak cluster.
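A rough illustration of the kind of pool being discussed: a fixed-size set of
protocol-buffers connections (each one a single TCP connection to some node,
typically on port 8087) checked out per request. The factory and client API
below are hypothetical placeholders, not any particular driver:

    # Concept sketch of pooling PB client connections; make_pb_client is
    # a stand-in for whatever constructor your Riak driver provides.
    import queue
    from contextlib import contextmanager

    class PBClientPool:
        def __init__(self, make_pb_client, size=10):
            self._pool = queue.Queue()
            for _ in range(size):
                self._pool.put(make_pb_client())

        @contextmanager
        def client(self):
            c = self._pool.get()      # block until a connection is free
            try:
                yield c
            finally:
                self._pool.put(c)     # hand it back for reuse

    # Usage (hypothetical driver API):
    # pool = PBClientPool(lambda: make_pb_client("10.0.0.1", 8087), size=20)
    # with pool.client() as c:
    #     c.put("bucket", "key", b"value")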
You *could* use Riak for all of that, but I personally find retrieving objects
linked to other objects remarkably painful in Riak, even with secondary indexes.
Riak is better suited to fast-growing data, with the relational
data kept in Postgres or MySQL.
Example of my setup:
MySQL is
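To illustrate the kind of split being described (not the poster's actual
setup): the queryable, relational metadata stays in SQL while the fast-growing
payloads go to Riak. sqlite3 stands in for MySQL/Postgres here, and all names
are invented:

    # Hypothetical split: metadata in SQL, bulky fast-growing content in Riak.
    import sqlite3, uuid, requests

    RIAK = "http://127.0.0.1:8098"
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE articles (id TEXT PRIMARY KEY, feed_id TEXT, title TEXT)")

    def store_article(feed_id, title, body):
        key = uuid.uuid4().hex
        # Large, append-only content goes to Riak under its own key...
        requests.put(
            f"{RIAK}/buckets/article_bodies/keys/{key}",
            data=body.encode("utf-8"),
            headers={"Content-Type": "text/plain"},
        )
        # ...while the relational, queryable part stays in SQL.
        db.execute("INSERT INTO articles VALUES (?, ?, ?)", (key, feed_id, title))
        db.commit()
        return key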
Hello,
I am developing a web feed reader, with Riak as the database. While I
have read a few examples about designing schemas in Riak, I have come up
with an idea about this but I don't feel very confident.
It's quite simple: there are Users, Feeds, and Articles. A user logs in,
sees the feeds he is
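One common way to lay out that kind of schema is a bucket per type, with the
parent object holding a list of child keys. The sketch below is only an
illustration of that idea, with invented bucket and field names, not the
poster's design:

    # Illustrative key layout: users, feeds, and articles each get a
    # bucket; parents store the keys of their children.
    import json, requests

    RIAK = "http://127.0.0.1:8098"

    def put_json(bucket, key, obj):
        requests.put(
            f"{RIAK}/buckets/{bucket}/keys/{key}",
            data=json.dumps(obj),
            headers={"Content-Type": "application/json"},
        )

    # users/<username> -> profile plus the feeds they subscribe to
    put_json("users", "alice", {"subscriptions": ["hn", "lwn"]})
    # feeds/<feed-id> -> feed metadata plus its recent article keys
    put_json("feeds", "hn", {"url": "https://news.ycombinator.com/rss",
                             "articles": ["hn-001"]})
    # articles/<article-id> -> the article itself
    put_json("articles", "hn-001", {"title": "Example post", "body": "..."})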