Depends on which backend you are running, no? If it's leveldb, then this 
list-keys operation can be pretty cheap.

It’s a coverage query either way, but with leveldb at least the fold will seek 
to the start of the bucket and iterate over only the keys in that bucket.
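For a bucket that small, something like the following is usually all you need. 
This is just an untested sketch over the HTTP API, assuming the node’s HTTP 
interface is on the default port 8098, the bucket lives in the default bucket 
type, and "small_bucket" is a placeholder name:

    # Minimal sketch: fetch all keys in one bucket via Riak's HTTP API.
    # Assumes localhost:8098, default bucket type, placeholder bucket name.
    import json
    import urllib.request

    url = "http://localhost:8098/buckets/small_bucket/keys?keys=true"
    with urllib.request.urlopen(url, timeout=60) as resp:
        keys = json.load(resp).get("keys", [])

    print(len(keys), "keys")
    for key in keys:
        print(key)

With only a few thousand keys, pulling the whole list in one response like this 
is fine; for bigger buckets you would want keys=stream instead so the result 
comes back in chunks rather than one buffered list.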

Cheers

Russell

On 8 Dec 2016, at 21:19, John Daily <jda...@basho.com> wrote:

> The size of the bucket has no real impact on the cost of a list keys 
> operation because each key on the cluster must be examined to determine 
> whether it resides in the relevant bucket.
> 
> -John
> 
>> On Dec 8, 2016, at 4:17 PM, Arun Rajagopalan <arun.v.rajagopa...@gmail.com> 
>> wrote:
>> 
>> Hello Riak Users
>> 
>> I have a use case where I would really like to list all keys of a bucket 
>> despite all the warnings about performance. The number of keys is relatively 
>> small - a few thousand at the very most; usually it's no more than 100.
>> 
>> I also have other buckets in the same cluster that hold millions of keys and 
>> tens of terabytes of data.
>> 
>> Question: Will listing all keys on the small bucket adversely impact 
>> performance of the other larger buckets?
>> 
>> Thanks
>> Arun
> 
> 


_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
