Hi David,

If you don't use any LevelDB-specific features, you could switch to Bitcask and use its expiry_secs option to handle deletes. That way you don't have to worry about deleting data at all. Note that older Bitcask versions (prior to Riak 2.0) had issues with deletion that could make deleted data reappear.
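For reference, here is a sketch of how Bitcask expiry might be configured; the exact values are illustrative, and you should check the docs for your release (the riak.conf form applies to Riak 2.0+, the app.config form to older releases):

```
# riak.conf (Riak 2.0+): entries are dropped 7 days after they are written
storage_backend = bitcask
bitcask.expiry = 7d

%% app.config (pre-2.0 releases): expiry given in seconds
{bitcask, [
    {expiry_secs, 604800}  %% 7 days
]}
```

With expiry enabled, deletion happens lazily during Bitcask merges rather than through explicit delete requests.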
When it comes to LevelDB and deletes, you should be aware that mass deletion may trigger LevelDB compaction, which can put strain on your cluster. I don't have enough experience with LevelDB to give you any details, though.

//Daniel

On Wed, Oct 7, 2015 at 3:43 PM, David Heidt <david.he...@msales.com> wrote:
>
> Hi List,
>
> would you say that storing billions of very small (JSON) files is a good
> use case for Riak KV or CS?
>
> Here's what I would do:
>
> * create daily buckets (i.e. 2015-10-07)
> * up to 130 million inserts per day
> * about 150,000 read-only accesses per day
> * no updates on existing keys/files
> * delete buckets (including keys/files) older than x days
>
>
> I already have a working Riak KV/LevelDB cluster (inserts and lookups are
> going smoothly), but when it comes to mass deletion of keys I have found
> no way to do this.
>
>
> Best,
>
> David
>
> _______________________________________________
> riak-users mailing list
> riak-users@lists.basho.com
> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
>
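One practical note on the daily-bucket scheme David describes: names like 2015-10-07 make it trivial to compute which buckets have aged out using plain date arithmetic, independent of any backend support. A minimal sketch in Python (the function name and retention window are my own illustration, not part of any Riak API):

```python
from datetime import date, timedelta

def expired_buckets(today, retention_days, oldest_days=30):
    """Return daily bucket names (YYYY-MM-DD) older than the retention window.

    Scans back from `today` up to `oldest_days` days and lists every
    bucket name that falls outside the last `retention_days` days.
    """
    return [
        (today - timedelta(days=d)).isoformat()
        for d in range(retention_days + 1, oldest_days + 1)
    ]

# Buckets older than 7 days, scanning back 10 days from 2015-10-07:
print(expired_buckets(date(2015, 10, 7), 7, 10))
# → ['2015-09-29', '2015-09-28', '2015-09-27']
```

The resulting names could then be fed to whatever per-bucket deletion mechanism you settle on (e.g. streaming the keys of each expired bucket and deleting them one by one).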
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com