Hi Robert,
You have leveldb set to use 90% of available RAM, and the JVM to use
up to a 22GB heap, but your machines have only 30GB of RAM in total. At
some point during your load cycle, you are guaranteed to start
exhausting RAM, just as you're seeing. Reduce leveldb to 50%
(15GB), give the JVM
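For reference, a sketch of where those knobs live, assuming Riak 2.x riak.conf syntax. The 50% value is the fix suggested above; the JVM line applies only if the 22GB heap belongs to Riak Search's Solr JVM, which is an assumption, since the thread doesn't identify the JVM:

```ini
## riak.conf (Riak 2.x; values illustrative)
## Cap leveldb at 50% of RAM (~15GB on a 30GB box)
leveldb.maximum_memory.percent = 50

## If the JVM in question is Riak Search's Solr JVM, its heap is set here
## (assumption -- the thread doesn't say which JVM is involved):
search.solr.jvm_options = -d64 -Xms4g -Xmx8g
```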
Hi Cosmin,
Unfortunately, the exception can't provide more information due to the
lack of data provided by Riak. We do hope to improve Riak's error
reporting in the future.
--
Luke Bakken
Engineer
lbak...@basho.com
On Tue, Jun 2, 2015 at 5:05 AM, Cosmin Marginean wrote:
> I’m simulating a failu
That makes sense. Thanks Luke
On Wednesday, 3 June 2015 at 14:16, Luke Bakken wrote:
> Hi Cosmin,
>
> Unfortunately, the exception can't provide more information due to the
> lack of data provided by Riak. We do hope to improve Riak's error
> reporting in the future.
> --
> Luke Bakken
> Engineer
Hi list,
We’re looking for the best way to handle large scale expiration of
no-longer-useful data stored in Riak. We asked a while back, and the
recommendation was to store the data in time-segmented buckets (bucket per day
or per month), query on the current buckets, and use the streaming list
You could map your keys to a given bucket, and that bucket to a given
backend using multi_backend. There is some cost to having lots of backends
(memory overhead, file descriptors, etc.). When you want to do a mass drop, you
could bring the node down, delete that backend's data, and bring it back up.
Caveats: AAE, MDC
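A sketch of the multi_backend mapping described above, in advanced.config form; the backend names and data_root paths here are made up for illustration:

```erlang
%% advanced.config (illustrative names and paths)
{riak_kv, [
    {storage_backend, riak_kv_multi_backend},
    {multi_backend_default, <<"default">>},
    {multi_backend, [
        %% One backend per time segment; dropping a month's data is then
        %% "stop node, delete that data_root, start node".
        {<<"default">>, riak_kv_eleveldb_backend,
            [{data_root, "/var/lib/riak/leveldb_default"}]},
        {<<"2015-06">>, riak_kv_eleveldb_backend,
            [{data_root, "/var/lib/riak/leveldb_2015_06"}]}
    ]}
]}
```

Each bucket is then pointed at its segment by setting the `backend` bucket property (e.g. `{backend, <<"2015-06">>}`) so keys for that period land in the deletable backend.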
We are actively investigating better options for deleting large numbers of
keys. As Sargun mentioned, deleting the data dir for an entire backend via an
operationalized rolling restart is probably the best approach right now for
removing large numbers of keys.
But if your key space can fit i
Sadly, this is a production cluster already using leveldb as the backend. With
that constraint in mind, and rebuilding the cluster not really being an option
to enable multi-backends or bitcask, what would our best approach be?
Thanks!
—Peter
> On Jun 3, 2015, at 12:09 PM, Alexander Sicular
Another idea for a large-scale one-time removal of data, as well as an
opportunity for a fresh start, would be to:
1. set up multi-data center replication between 2 clusters
2. implement a recv/2 hook on the sink which refuses data from the buckets
/ keys you would like to ignore / delete
3. trigg
Can you recycle the keys rather than deleting them? We mark them as available
for deletion and keep a key recycle pool from which we pull new keys.
On Wednesday, June 3, 2015 12:37 PM, Peter Herndon
wrote:
Hi list,
We’re looking for the best way to handle large scale expiration of
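The recycle-pool idea above can be sketched in plain Python; this is not Riak API, just an illustration of the pattern (names are made up): retired keys go into a pool, and new writes draw from the pool first, overwriting stale values instead of issuing deletes.

```python
import itertools

class KeyRecyclePool:
    """Reuse retired keys instead of deleting them from the store."""

    def __init__(self):
        self._available = set()

    def retire(self, key):
        """Mark a key's data as no longer useful; the key may be reused."""
        self._available.add(key)

    def acquire(self, make_new_key):
        """Return a retired key if one exists, else mint a fresh one."""
        if self._available:
            return self._available.pop()
        return make_new_key()

# Demonstration: the second acquire reuses the retired key.
counter = itertools.count()
pool = KeyRecyclePool()
k1 = pool.acquire(lambda: f"key-{next(counter)}")  # fresh key
pool.retire(k1)
k2 = pool.acquire(lambda: f"key-{next(counter)}")  # same key, recycled
```

The write path then stores the new value under `k2`, so the old record is overwritten rather than deleted.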
Hi Jason,
Given the unique event you had here, I'm curious to hear whether you saw
further restarts. This kind of spontaneous behavior is the opposite of
what most users report from Riak, which is why I want to follow up.
Thanks,
Matt
*Matt Brender | Developer Advocacy Lead*
Basho Technologies
That’s not really an option, I think. Our buckets are named “--MM”
and the keys are user login names. Re-using the date-based buckets wouldn’t
make much sense in our case.
> On Jun 3, 2015, at 2:48 PM, Igor Birman wrote:
>
> Can you recycle the keys rather than deleting them? We mark them
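To make the constraint above concrete, here is a tiny sketch of the addressing scheme as described, assuming the garbled bucket pattern means a year-month name like "2015-06" (the function name and login are made up):

```python
from datetime import date

def bucket_and_key(day: date, login: str) -> tuple:
    """Date-segmented bucket (one per month), user login name as the key."""
    return day.strftime("%Y-%m"), login

print(bucket_and_key(date(2015, 6, 3), "pherndon"))  # -> ('2015-06', 'pherndon')
```

Because the key is a login name rather than an opaque identifier, a recycled key would collide with a real user, which is why reuse doesn't fit here.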
Interesting thought. It might work for us, it might not, I’ll have to check
with our CTO to see whether the expense makes sense under our circumstances.
Thanks!
—Peter
> On Jun 3, 2015, at 2:21 PM, Drew Kerrigan wrote:
>
> Another idea for a large-scale one-time removal of data, as well as an
Hey Toby,
Which documentation pointed you to the wrong URL? And did the dash
address the issue? We can and should update that documentation.
On a related note, our packaging is up for discussion. A few of us
started discussing it here [1]. I'd love for you to weigh in.
[1] https://github.com/bas
For what it's worth, it looks good on the packagecloud directions as
of the most recent update [1]:
> curl -s https://packagecloud.io/install/repositories/basho/riak-cs/script.deb.sh | sudo bash
> sudo apt-get install riak-cs=2.0.1-1
[1]
https://packagecloud.io/basho/riak-cs/packages/ubunt
Hi Toby,
The current version of the Riak CS system is Riak CS 2.0.x, which is
tested with Riak 2.0.x. Sorry for the confusion, but the document you
pointed to is for that combination. You can follow the configuration
steps using advanced.config in the doc.
We are now in development of Riak CS 2.1 and prefix_mul
Hi,
I think this is a bug?
Riak CS 2.0.1 recommends Stanchion 2.0.0 to be installed.
However if you follow the instructions to add the Riak CS repo, you get
Riak CS 2.0.1, and Stanchion 1.5.0.
Toby
___
riak-users mailing list
riak-users@lists.basho.com
Hi,
I've been happily using haproxy in front of Riak and Riak CS 1.x in
production for quite a while.
I've been trying to bring up a new cluster based on riak/cs 2.0.x recently,
as you've probably noticed from the flurry of emails to this list :)
I'm discovering that if I have haproxy sitting bet
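For context, a minimal haproxy fragment of the kind typically used in front of Riak's protocol buffers port; the addresses and ports are assumptions for illustration, not taken from this thread:

```ini
# haproxy.cfg excerpt (illustrative; node addresses are made up)
listen riak_pb
    bind 0.0.0.0:8087
    mode tcp
    balance leastconn
    option tcp-check
    server riak1 10.0.0.1:8087 check
    server riak2 10.0.0.2:8087 check
```

TCP mode with a connection-based balance is the usual choice for the PB interface, since it is not HTTP.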