I have an application that pushes a large number of small updates (usually 
below 1KB).  Instead of storing and reading massive numbers of keys, the 
updates are aggregated into values of roughly 1MB each.  The problem is that 
as these values approach the 1MB mark, throughput predictably drops, from 
both a disk and a network perspective.
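
For context, the current pattern looks roughly like the sketch below (Python 
with requests against Riak's HTTP interface; the host, port, bucket name, and 
key scheme are all placeholders):

    import requests

    RIAK = "http://127.0.0.1:8098"   # assumed default HTTP port
    BUCKET = "aggregates"            # hypothetical bucket name

    def append_update(agg_key, update_bytes):
        # Read the current ~1MB value, append the small update, and write
        # it back.  As the value grows, every cycle ships the whole object
        # across disk and network, which is where throughput falls off.
        url = f"{RIAK}/buckets/{BUCKET}/keys/{agg_key}"
        r = requests.get(url)
        current = r.content if r.status_code == 200 else b""
        headers = {"Content-Type": "application/octet-stream"}
        if "X-Riak-Vclock" in r.headers:
            headers["X-Riak-Vclock"] = r.headers["X-Riak-Vclock"]
        requests.put(url, data=current + update_bytes, headers=headers)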

So, two questions.  First, is there any downside to using the memory backend 
as a temporary data store?  Obviously, if every node holding a given vnode's 
replicas goes down at the same time, there will be data loss, but that isn't 
really any worse than using an external buffer.  Is there anything else I 
should be aware of?
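
For reference, what I have in mind is something like the app.config excerpt 
below, using riak_kv_multi_backend so that only a buffer bucket lives in RAM 
while everything else stays on Bitcask (the backend names and the max_memory 
sizing are placeholders):

    {riak_kv, [
        {storage_backend, riak_kv_multi_backend},
        {multi_backend_default, <<"disk">>},
        {multi_backend, [
            %% Everything stays on Bitcask by default.
            {<<"disk">>, riak_kv_bitcask_backend, []},
            %% Buckets with the property {"backend":"buffer"} live in
            %% RAM; max_memory is MB per vnode, past which the memory
            %% backend evicts least-recently-used entries.
            {<<"buffer">>, riak_kv_memory_backend, [{max_memory, 128}]}
        ]}
    ]}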

Secondly, is there a downside to abusing siblings?  Instead of reading and 
rewriting these rather large values, are there any issues with writing 
several hundred siblings per key and resolving them via a scheduled task?
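
Concretely, the sibling approach would look something like this (again a 
sketch against the HTTP API; the bucket would need allow_mult=true, and the 
b"".join merge is a stand-in for whatever application-level merge applies):

    import requests
    from email.parser import BytesParser

    RIAK = "http://127.0.0.1:8098"
    BUCKET = "updates"   # hypothetical; needs {"props":{"allow_mult":true}}

    def push(key, payload):
        # Writing without an X-Riak-Vclock header deliberately creates a
        # sibling, instead of read-modify-write on a large value.
        requests.put(f"{RIAK}/buckets/{BUCKET}/keys/{key}",
                     data=payload,
                     headers={"Content-Type": "application/octet-stream"})

    def resolve(key):
        # Scheduled task: fetch all siblings in one multipart response,
        # merge them, and write back with the returned vclock to collapse
        # the siblings into a single value.
        url = f"{RIAK}/buckets/{BUCKET}/keys/{key}"
        r = requests.get(url, headers={"Accept": "multipart/mixed"})
        if r.status_code != 300:
            return  # zero or one value; nothing to resolve
        raw = (b"Content-Type: " + r.headers["Content-Type"].encode()
               + b"\r\n\r\n" + r.content)
        msg = BytesParser().parsebytes(raw)
        parts = [p.get_payload(decode=True) for p in msg.get_payload()]
        merged = b"".join(parts)  # stand-in; merge is application-specific
        requests.put(url, data=merged,
                     headers={"Content-Type": "application/octet-stream",
                              "X-Riak-Vclock": r.headers["X-Riak-Vclock"]})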
                                          