(Changing subject to reflect the problem better and reposting ).

Any idea , based on the configuration inline as to what explains the
inconsistency number of keys read after a bulk write ( say 1M of payload
1000 bytes ).

Any appropriate write flush setting , that is missed on the client / server
side ?


--
  Karthik.


On Fri, Jan 6, 2012 at 5:54 PM, Karthik K <oss....@gmail.com> wrote:

> Further,
>  ulimit -n   is 10K on the box.
>
>
> #  tail -100 /var//log/riak/erlang.log.1
>
> ....
> 17:26:26.960 [info] alarm_handler: {set,{system_memory_high_watermark,[]}}
> /usr/lib/riak/lib/os_mon-2.2.6/priv/bin/memsup: Erlang has closed.
> Erlang has closed
> 17:26:34.443 [info] alarm_handler: {clear,system_memory_high_watermark}
>
>
> So - is there a commit / flush setting that is missed when it comes to
> high volume writes ?
>
>
> On Fri, Jan 6, 2012 at 5:37 PM, Karthik K <oss....@gmail.com> wrote:
>
>> I am using Riak with LevelDB as the storage engine.
>>
>> app.config:
>>
>>     {storage_backend, riak_kv_eleveldb_backend},
>>
>>
>>  {eleveldb, [
>>              {data_root, "/var/lib/riak/leveldb"},
>>              {write_buffer_size, 4194304}, %% 4MB in bytes
>>             {max_open_files, 50}, %% Maximum number of files open at once
>> per partition
>>             {block_size, 65536}, %% 4K blocks
>>             {cache_size, 33554432}, %% 32 MB default cache size
>> per-partition
>>             {verify_checksums, true} %% make sure data is what we
>> expected it to be
>>             ]},
>>
>>
>>
>>
>> I want to insert a million keys into the store ( into a given bucket ) .
>>
>> pseudo-code:
>>             riakClient = RiakFactory.pbcClient();
>>             myBucket =
>> riakClient.createBucket("myBucket").nVal(1).execute();
>>             for (int i = 1; i <= 1000000; ++i) {
>>                 final String key = String.valueOf(i);
>>                 myBucket.store(key, new
>> String(payload)).returnBody(false);
>>             }
>>
>>
>> after this operation, when I do:
>>
>>    int count = 0;
>>    for (String key : myBucket.keys() ) {
>>          ++count;
>>    }
>>    return count;
>>
>> This returns a total of 14K keys, while I was expecting close to 1
>> million or so.
>>
>> I am using riak-java-client (pbc).
>>
>> Which setting / missing client code can explain the discrepancy ?
>>  Thanks.
>>
>>
>>
>
_______________________________________________
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com

Reply via email to