I don't know if this is a client (Java 1.1.1 client) or server issue.
I recently upgraded to 1.4 and I am now seeing timeout issues when my app
tries to read data, either via MapReduce or by listing all keys for a bucket.
Currently my server has no data (this is my test server and I wiped e
I had a similar problem. I mitigated it by appending the current
timestamp (in nanoseconds) to the keys I'm using in my tests.
This way I don't have to worry about waiting for Riak to reap
tombstones after 3 seconds, and I don't have to use
delete_mode=immediate. I still do a list keys + dele
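A minimal sketch of that key-naming pattern with the Ruby riak-client gem (the bucket name, key prefix, and stored data here are made-up placeholders, not my actual test code):

require 'riak'

client = Riak::Client.new(:nodes => [{ :host => '127.0.0.1', :pb_port => 8087 }])
bucket = client.bucket('test_data')   # placeholder test bucket

# Suffix every key with a nanosecond timestamp so each test run writes fresh
# keys and never collides with tombstones left behind by a previous run.
now = Time.now
key = "record_#{now.to_i * 1_000_000_000 + now.nsec}"

obj = bucket.new(key)
obj.content_type = 'application/json'
obj.data = { 'created_at' => now.to_f }
obj.store

# Cleanup at the end of the suite can still be a plain list keys + delete.
bucket.keys.each { |k| bucket.delete(k) }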
Hello,
I've been playing around with Riak for a new app I'm building. I'm currently
experimenting with my own mini-cluster alongside the test suite for the app.
The Riak cluster has four nodes running across a laptop and a desktop
(so two nodes per physical machine). I'm also runnin
There's currently no way to do this with a pure Riak secondary index
solution.
You could, however, rig something up by creating your own indexes in a
separate bucket. In effect, you end up creating your own inverted index.
Since bucket + key combinations must be unique, you can create an
us
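A rough sketch of that hand-rolled inverted-index approach, using the Ruby riak-client gem; the bucket names and the check-then-write helper are illustrative assumptions, not an official pattern. A unique email index would get its own index bucket in exactly the same way:

require 'riak'

client = Riak::Client.new(:nodes => [{ :host => '127.0.0.1', :pb_port => 8087 }])

users         = client.bucket('users')          # primary objects
usernames_idx = client.bucket('users_by_name')  # hand-rolled inverted index

def register_user(users, usernames_idx, username, attrs)
  # bucket + key uniqueness is what stands in for the "unique index":
  # one index object per username. Note that this check-then-write is
  # not atomic, so two concurrent registrations can still race.
  raise ArgumentError, "username already taken" if usernames_idx.exists?(username)

  user = users.new                     # no key given; Riak assigns one on store
  user.content_type = 'application/json'
  user.data = attrs.merge('username' => username)
  user.store

  entry = usernames_idx.new(username)
  entry.content_type = 'application/json'
  entry.data = { 'user_key' => user.key }
  entry.store

  user
end

register_user(users, usernames_idx, 'sandy', 'email' => 'sandy@example.com')

The caveat is that the existence check and the subsequent stores are separate requests, so strict uniqueness under concurrent registrations still needs coordination above Riak.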
Hello,
Is it possible to create a unique secondary index?
E.g., the "user" table would need a unique index on:
* username
* email address
Thanks a lot
Sandy
Hi everyone,
first time here. Thanks in advance.
I am experiencing issues with MapReduce: it seems to time out once a certain
data volume threshold is reached. There is only one reducer, and here is the
MapReduce initiation script:
#!/usr/bin/env ruby
[…]
@client = Riak::Client.new(
  :nodes =>
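    # (The real node list is cut off in the snippet above; these are
    #  placeholder addresses for illustration only.)
    [
      { :host => '10.0.0.1', :pb_port => 8087 },
      { :host => '10.0.0.2', :pb_port => 8087 }
    ]
)

# A sketch of how the rest of such a script might continue, assuming the
# Ruby client's MapReduce#timeout option is the knob for raising the
# per-job timeout (in milliseconds); the bucket and phase functions are
# placeholders, not the original script:
mr = Riak::MapReduce.new(@client)
mr.add('events')                                  # placeholder input bucket
mr.map('function(v) { return [1]; }', :keep => false)
mr.reduce('Riak.reduceSum', :keep => true)        # single reduce phase
mr.timeout(300_000)                               # 5 minutes instead of the default
puts mr.run.inspect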