I think writes to Riak will always be faster than other operations. That's because writes are append-only, whereas deletes, for example, must write a tombstone into all of the replicas.
I'm basically paraphrasing this message, since I don't understand 100% how this works:
htt
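If it helps, here's a toy sketch of the idea as I understand it (my own illustration, not actual Bitcask code): both a put and a delete are just sequential appends to a log, with a delete appending a tombstone record rather than modifying anything in place.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Toy append-only keystore: every operation, including delete, is a
// sequential append. Nothing here is real Bitcask code.
class AppendOnlyLog {
    private final FileOutputStream out;

    AppendOnlyLog(String path) throws IOException {
        // append mode: the disk only ever sees sequential writes
        this.out = new FileOutputStream(path, true);
    }

    void put(String key, String value) throws IOException {
        append(key + "=" + value);
    }

    // A delete does not rewrite or remove the old entry; it appends a
    // tombstone marker, and compaction reclaims the space later.
    void delete(String key) throws IOException {
        append(key + "=<tombstone>");
    }

    private void append(String record) throws IOException {
        out.write((record + "\n").getBytes(StandardCharsets.UTF_8));
    }
}
```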
hi dmitry,
comments below:
On Thu, Apr 7, 2011 at 12:30 AM, Dmitry Demeshchuk wrote:
> What I haven't heard about bitcask yet is any production success
> stories. Which storage does Wikia use, for example? Or Vibrant Media?
> So, I look for stories of at least 2-3 months experience of using
> b
hi jeffrey,
before you run riak, run `ulimit -n 1024` (with a number 1024 or
greater) to bump up your max open files limit.
for the other errors, I suspect that the user you're attempting to run
riak as doesn't have the rights to execute those binaries. can you
double-check the permissions on th
I don't use haproxy (we use hardware load balancers), but if I did, I
would want to handle it at that layer instead of in the riak client.
However, if you're using the Java client, you can implement a custom
HttpClient retry handler to do this:
http://hc.apache.org/httpcomponents-client-ga/tutorial
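As a rough, untested sketch of what that might look like with HttpClient 4.x (the handler interface is the standard API that tutorial covers; the retry policy itself is made up, and how you hand the client to the Riak Java client depends on your version):

```java
import java.io.IOException;
import org.apache.http.client.HttpRequestRetryHandler;
import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.protocol.HttpContext;

public class RetryingClient {
    public static DefaultHttpClient build() {
        DefaultHttpClient client = new DefaultHttpClient();
        client.setHttpRequestRetryHandler(new HttpRequestRetryHandler() {
            public boolean retryRequest(IOException exception,
                                        int executionCount,
                                        HttpContext context) {
                // Retry transient I/O failures up to 3 times; a real
                // handler should also check the exception type and that
                // the request is safe to replay (idempotent).
                return executionCount <= 3;
            }
        });
        return client;
    }
}
```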
Hi Gui,
I recently pushed 70 million records of size 1K each into a 5-node
Riak cluster (which was replicating to another 5-node cluster) at
around 1000 writes/second using basho_bench and the REST interface. I
probably could have pushed it further, but I wanted to confirm that it
could maintain
If you find a way to generate keys in this way, I'd also suggest
providing the "If-None-Match: *" HTTP header on your PUTs, in order
to detect duplicate keys.
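Over HTTP that could look something like this (an untested sketch; the host, bucket, and key are placeholders). Riak should answer 412 Precondition Failed when an object already exists under the key:

```java
import org.apache.http.HttpResponse;
import org.apache.http.client.methods.HttpPut;
import org.apache.http.entity.StringEntity;
import org.apache.http.impl.client.DefaultHttpClient;

public class ConditionalPut {
    public static void main(String[] args) throws Exception {
        // URL, bucket, and key are made up for illustration
        HttpPut put = new HttpPut("http://localhost:8098/riak/mybucket/generated-key");
        // Only succeed if no object exists under this key yet
        put.setHeader("If-None-Match", "*");
        put.setHeader("Content-Type", "text/plain");
        put.setEntity(new StringEntity("some value"));

        HttpResponse resp = new DefaultHttpClient().execute(put);
        if (resp.getStatusLine().getStatusCode() == 412) {
            // Precondition Failed: the generated key was a duplicate
            System.out.println("duplicate key detected");
        }
    }
}
```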
I'm not sure if there is something similar in the protobuffs interface.
-matt
On Tue, Mar 22, 2011 at 9:41 AM, Alexan