I benchmarked Riak's performance using basho_bench over the last few days, but
the performance is unsatisfying.
I have a 2-node cluster, each node running on the same VM (8 CPUs, 32 GB
RAM). Riak's storage_backend is memory. The total ops could reach 1
with the put command. Can I do something to make
Hello,
I'm using Riak KV in a 2-node cluster.
I inserted hundreds of key/value pairs and then deleted all the keys in a bucket.
After that, I can still get some keys back if I list the keys in the
bucket.
Why do those keys remain? How do I delete keys reliably?
If I increase the number of nodes to 5, I can delete
Hi all, we are still (for a while longer) using Riak 1.4 and the matching
Java client. The client(s) connect to one node in the cluster (since that's
all it can do in this client version). The cluster itself has 4 nodes
(sorry, we can't use 5 in this scenario). There are 2 separate clients.
We've
Hi Yang,
Could you say a little more about what your requirements are, particularly
around reliability? For example, what is your n_val, if you are using a 2-node
cluster?
It would also help to know how many worker processes you have in your
basho_bench config (the concurrent setting), as well
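For reference, the knobs I mean sit in the basho_bench config file roughly
like this (a sketch only; the hosts, counts, and sizes below are placeholder
values, not yours):

    %% basho_bench config sketch -- illustrative values only
    {mode, max}.                           % drive ops as fast as possible
    {duration, 10}.                        % minutes
    {concurrent, 16}.                      % number of worker processes
    {driver, basho_bench_driver_riakc_pb}.
    {riakc_pb_ips, [{127,0,0,1}]}.         % list both cluster nodes in practice
    {riakc_pb_replies, 1}.                 % R/W quorum used for each op
    {key_generator, {int_to_bin_bigendian, {uniform_int, 100000}}}.
    {value_generator, {fixed_bin, 1000}}.  % 1 KB values
    {operations, [{get, 1}, {put, 1}]}.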
Hi Vanessa,
Riak is definitely meant to run behind a load balancer. (Or, in the worst
case, to be load-balanced on the client side; that is, all clients connect
to all 4 nodes.)
When you say "we did try putting all 4 Riak nodes behind a load-balancer
and pointing the clients at it, but it didn't
Hi Dmitri, thanks for the quick reply.
It was actually our sysadmin who tried the load balancer approach and had
no success, late last evening. However, I haven't discussed the gory
details with him yet. The failure he saw was at the application level
(i.e., a failure to read a key), but I don't know
Hello,
There are two things going on here: the W quorum value of the write and
delete operations, and possibly the delete_mode setting.
Let's walk through the scenario.
You're writing to a 2-node cluster, with two copies of each object
(n_val=2) and a write quorum of 1 (W=1).
So that's possibili
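For reference, these knobs live in app.config; a sketch with example values
(the delay shown is Riak's default):

    %% app.config sketch -- delete_mode and quorum defaults (example values)
    {riak_kv, [
        %% delete_mode controls how long tombstones linger before reaping:
        %% immediate | {delay, Milliseconds} | keep ({delay, 3000} is the default)
        {delete_mode, {delay, 3000}}
    ]},
    {riak_core, [
        {default_bucket_props, [
            {n_val, 2},   % two copies of each object
            {w, 1}        % write quorum of 1, as in the scenario above
        ]}
    ]}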
Yeah, definitely find out what the sysadmin's experience was, with the load
balancer. It could have just been a wrong configuration or something.
And yes, that's the documentation page I recommend -
http://docs.basho.com/riak/latest/ops/advanced/configs/load-balancing-proxy/
Just set up HAProxy, a
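For example, a bare-bones haproxy.cfg stanza for Riak's protocol buffers
port could look like this (the node addresses are placeholders):

    # haproxy.cfg sketch -- TCP load balancing across the Riak nodes
    listen riak_pb
        bind *:8087
        mode tcp
        balance roundrobin
        option tcpka
        server riak1 10.0.0.1:8087 check
        server riak2 10.0.0.2:8087 check
        server riak3 10.0.0.3:8087 check
        server riak4 10.0.0.4:8087 check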
Hi,
This is more of a question about developing Riak than about using it,
but I thought it worth asking anyway.
You have the "Counter" convergent datatype which allows increment and
decrement of arbitrary integer values.
Would it be possible to implement an "Accumulator" datatype which would
be a
Hi List,
would you say that storing billions of very small (JSON) files is a good
use case for Riak KV or CS?
Here's what I would do:
* create daily buckets (i.e. 2015-10-07)
* up to 130 million inserts per day
* about 150,000 read-only accesses/day
* no updates to existing keys/files
* delete bu
Hi David,
If you don't use any LevelDB-specific features, you could switch to
Bitcask and use its expiry_secs option to handle deletes. That way you
don't have to worry about deleting data at all. Note that older Bitcask
versions (prior to Riak 2.0) had issues with deleting data which could make
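As an illustration, the relevant app.config sections might look like this
(the data_root path and the one-day TTL are example values):

    %% app.config sketch -- Bitcask backend with automatic key expiry
    {riak_kv, [
        {storage_backend, riak_kv_bitcask_backend}
    ]},
    {bitcask, [
        {data_root, "/var/lib/riak/bitcask"},
        {expiry_secs, 86400}   % treat objects older than one day as deleted
    ]}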
Conceptually I think it should work - the same PN counter design could
handle floats as well as integers. You'd have to make sure the P and N
values only grew by handling the sign - if you want to add a negative float
you would do N += abs(V). There are also concerns around error bounds and
all th
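To sketch the idea in Erlang (illustrative only, not Riak's actual counter
implementation):

    -module(float_acc).
    -export([new/0, add/3, value/1, merge/2]).

    %% Sketch: per-actor P/N maps; adding a value only ever grows one entry.
    new() -> {#{}, #{}}.

    add(Actor, V, {P, N}) when V >= 0 ->
        {maps:update_with(Actor, fun(X) -> X + V end, V, P), N};
    add(Actor, V, {P, N}) ->
        {P, maps:update_with(Actor, fun(X) -> X + abs(V) end, abs(V), N)}.

    value({P, N}) ->
        lists:sum(maps:values(P)) - lists:sum(maps:values(N)).

    %% Merge takes the per-actor max, so both maps grow monotonically.
    merge({P1, N1}, {P2, N2}) ->
        Max = fun(_K, X, Y) -> max(X, Y) end,
        {maps:merge_with(Max, P1, P2), maps:merge_with(Max, N1, N2)}.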
Hi David,
1) Storing billions of small files is definitely a good use case for Riak
KV. (Since they're small, there's no reason to use CS (now re-branded as
S2)).
2) As far as deleting an entire bucket, that part is tougher.
(Incidentally, if you were thinking of using Riak CS because it has a
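In the meantime, the usual (expensive) workaround is listing the bucket's
keys and deleting them one by one; a sketch with the Erlang client, assuming
your daily-bucket naming:

    %% erl shell sketch -- empty one daily bucket via key listing.
    %% list_keys/2 walks the whole keyspace; avoid on a busy production cluster.
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087).
    {ok, Keys} = riakc_pb_socket:list_keys(Pid, <<"2015-10-07">>).
    lists:foreach(fun(K) ->
        riakc_pb_socket:delete(Pid, <<"2015-10-07">>, K)
    end, Keys).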
On second thought, ignore the Search recommendation. Search + Expiry
doesn't work very well (when objects expire from Riak, their search index
entries persist, but are now orphaned).
Thank you so much Daniel and Dmitri!
I will benchmark 2i against bitcask expiry and get back to you here in a
couple of days.
Best,
David
Hi Dmitri, well... we solved our problem to our satisfaction, but it turned
out to be something unexpected.
The culprits were two properties mentioned in a blog post on "configuring
Riak's oft-subtle behavioral characteristics":
http://basho.com/posts/technical/riaks-config-behaviors-part-4/
notfound_ok
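In case it helps anyone else, here's a sketch of where those live (example
values; that post discusses notfound_ok alongside basic_quorum):

    %% app.config sketch -- read-path tunables (example values)
    {riak_core, [
        {default_bucket_props, [
            {notfound_ok, false},   % a single "not found" reply is not authoritative
            {basic_quorum, false}   % wait for R replies before failing a read
        ]}
    ]}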
Glad you sorted it out!
(I do want to encourage you to bump your R setting to at least 2, though.
Run some tests -- I think you'll find that the difference in speed will not
be noticeable, but you do get a lot more data resilience with 2.)
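If you want to try it without changing bucket defaults, R can also be passed
per request; a sketch with the Erlang client (connection details assumed):

    %% erl shell sketch -- per-request read quorum of 2
    {ok, Pid} = riakc_pb_socket:start_link("127.0.0.1", 8087).
    {ok, Obj} = riakc_pb_socket:get(Pid, <<"bucket">>, <<"key">>, [{r, 2}]).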
Hi Dmitri, what would be the benefit of r=2, exactly? It isn't necessary to
trigger read-repair, is it? If it's important I'd rather try it sooner
rather than later...
Regards,
Vanessa
I'd also recommend setting "allow_mult" to true to prevent
resurrection caused by sloppy quorums, handoffs, or network
partitions. Also, to make sure your data is eventually consistent, R+W
should be > N. Your case is 1+1, which is not > 2.
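Concretely, bucket properties along these lines would satisfy that (a sketch
with example values):

    %% app.config sketch -- R + W > N so reads overlap the latest write
    {riak_core, [
        {default_bucket_props, [
            {allow_mult, true},   % keep siblings instead of dropping concurrent writes
            {n_val, 2},
            {r, 2},               % R + W = 3 > N = 2
            {w, 1}
        ]}
    ]}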