You could map your keys to a given bucket, and that bucket to a given
backend, using multi_backend. There is some cost to having lots of
backends (memory overhead, file descriptors, etc.).
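For example, a minimal multi_backend stanza (a sketch only; the backend
names and data_root paths below are placeholders, assuming Bitcask, in
advanced.config / app.config):

    {riak_kv, [
        %% Route each bucket to a named backend; buckets without an
        %% explicit backend property fall through to the default.
        {storage_backend, riak_kv_multi_backend},
        {multi_backend_default, <<"bitcask_default">>},
        {multi_backend, [
            {<<"bitcask_default">>, riak_kv_bitcask_backend,
                [{data_root, "/var/lib/riak/bitcask"}]},
            %% One backend per time segment, e.g. per month.
            {<<"bitcask_2015_06">>, riak_kv_bitcask_backend,
                [{data_root, "/var/lib/riak/bitcask_2015_06"}]}
        ]}
    ]}

You would then point each time-segmented bucket at its backend via the
bucket's backend property (the bucket name logs_2015_06 here is likewise
hypothetical):

    curl -XPUT http://127.0.0.1:8098/buckets/logs_2015_06/props \
         -H 'Content-Type: application/json' \
         -d '{"props":{"backend":"bitcask_2015_06"}}'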
When you want to do a mass drop, you can take the node down, delete that
backend's data, and bring the node back up. Caveat: neither AAE, MDC, nor
mutable data plays well with this scenario.
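The drop itself would then be roughly the following, run on each node in
turn (a sketch, assuming the Bitcask data_root layout above; paths are
placeholders):

    riak stop
    # delete the aged-out backend's data directory
    rm -rf /var/lib/riak/bitcask_2015_06
    riak start

Once every node has been cycled, you would also remove that backend's
entry from multi_backend so new writes can't land in it.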

On Wed, Jun 3, 2015 at 10:43 AM, Peter Herndon <tphern...@gmail.com> wrote:

> Hi list,
>
> We’re looking for the best way to handle large-scale expiration of
> no-longer-useful data stored in Riak. We asked a while back, and the
> recommendation was to store the data in time-segmented buckets (bucket per
> day or per month), query on the current buckets, and use the streaming list
> keys API to handle slowly deleting the buckets that have aged out.
>
> Is that still the best approach for doing this kind of task? Or is there a
> better approach?
>
> Thanks!
>
> —Peter Herndon
> Sr. Application Engineer
> @Bitly
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
