I have a cron job that prunes old records using a secondary index range
search on created_at (milliseconds since epoch). The code looks like this:
    Riak::SecondaryIndex.new(bucket, "created_at_bin", start..finish,
                             { max_results: per_page })

It then iterates through the returned keys and calls bucket.delete on each one.
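One detail worth checking (an observation on my part, not something stated above): a _bin secondary index compares terms lexicographically as strings, so millisecond-since-epoch values must be zero-padded to a fixed width for a start..finish range to behave like a numeric range. A minimal plain-Ruby sketch of the idea:

```ruby
# Zero-pad millisecond timestamps so string (lexicographic) order
# agrees with numeric order -- needed for range queries on a _bin index.
# 13 digits is enough for epoch milliseconds well past the year 2200.
def pad_ms(ms)
  format("%013d", ms)
end

# Unpadded, "999" sorts AFTER "1400000000000" as a string;
# padded, the string order matches the numeric order.
pad_ms(999)               # => "0000000000999"
pad_ms(1_400_000_000_000) # => "1400000000000"
```

Alternatively, a created_at_int index avoids the padding question entirely, since _int indexes compare numerically.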
Henning,
This happens when the process servicing a client connection receives a
"late message": a reply to some previous request it made to the rest of
the Riak cluster that was still in flight when the server process
decided to move on. Such messages are undesirable, but basically
benign. I believe in
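To make the "late message" scenario concrete, here is an illustrative sketch (plain Ruby with threads, not Riak or Erlang code): a requester waits briefly for a reply, times out and moves on, and the slow worker's reply lands in the mailbox afterwards, where it can simply be discarded.

```ruby
require "timeout"

mailbox = Queue.new

# Simulated slow backend: the reply arrives after the requester gave up.
worker = Thread.new do
  sleep 0.2
  mailbox << :late_reply
end

# The requester waits only briefly, then decides to move on.
reply =
  begin
    Timeout.timeout(0.05) { mailbox.pop }
  rescue Timeout::Error
    :moved_on
  end

worker.join

# The late message is still sitting in the mailbox; it is harmless
# as long as it is drained or ignored rather than misinterpreted.
leftover = mailbox.empty? ? nil : mailbox.pop
```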
We have just started building up experience with Riak, so I apologise
if this is a naive question.
We are using three very small nodes for Riak development (2 GB RAM,
1 CPU each), running stock Riak 1.4.8 on Debian Wheezy. The clients
are Java applications using the 1.4.4 Java client library and