Hello,
So I tried populating data on 1.2.
The way I store is: for each new date, I create a bucket A-<date> and
store the main data there as follows:
(K1, V1), (K2, V2), ...
Then, in another bucket B, I add links for each of the above keys.
So, K1 contains links for (A-<date1>, K1), (A-<date2>, K1), ...,
one entry per date.
As time passes, ...
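To make the layout concrete, the writes look roughly like this over
Riak's HTTP interface (a sketch only: bucket and date names are
placeholders, and vector-clock/sibling handling is omitted):

    import json
    import requests

    RIAK = "http://localhost:8098"  # assumed default Riak HTTP endpoint

    def store_with_link(date, key, value):
        """Store (key, value) in the per-date bucket A-<date>, then
        append a link to it on the matching key in bucket B."""
        data_bucket = "A-%s" % date

        # 1. Main data goes into the per-date bucket: (K1, V1) in A-<date>.
        requests.put("%s/riak/%s/%s" % (RIAK, data_bucket, key),
                     data=json.dumps(value),
                     headers={"Content-Type": "application/json"})

        # 2. The object at (B, key) carries one link per date. Riak
        #    exposes links through the HTTP Link header, so fetch the
        #    current set and append the new (A-<date>, key) pair.
        index_url = "%s/riak/B/%s" % (RIAK, key)
        resp = requests.get(index_url)
        existing = resp.headers.get("Link", "") if resp.status_code == 200 else ""

        new_link = '</riak/%s/%s>; riaktag="data"' % (data_bucket, key)
        links = ", ".join(l for l in [existing, new_link] if l)
        requests.put(index_url, data="",
                     headers={"Content-Type": "text/plain", "Link": links})

    store_with_link("2012-07-31", "K1", {"field": "V1"})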
Unfortunately, no.
I am still using 1.1.
However, once I push my current implementation out, I will look into 1.2.
Will update you guys then.
--
Yousuf
http://fauzism.com
On Tue, Jul 31, 2012 at 12:25 PM, Matthew Tovbin wrote:
> Yousuf,
>
> Thanks for the update! Did you try to reproduce with 1.2.X?
Yousuf,
Thanks for the update! Did you try to reproduce with 1.2.X?
-Matthew
On Sun, Jul 1, 2012 at 1:08 PM, Mik Quinlan wrote:
> Hi, do the LevelDB buffer size settings write_buffer_size_min and
> write_buffer_size_max make a difference to point 7?
Hi, do the LevelDB buffer size settings write_buffer_size_min
and write_buffer_size_max make a difference to point 7?
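For reference, I mean the settings in the eleveldb section of
app.config; something like this (values purely illustrative):

    %% app.config, eleveldb section
    {eleveldb, [
        {data_root, "/var/lib/riak/leveldb"},
        %% Each vnode gets a write buffer sized between these bounds,
        %% which staggers flushes/compactions across vnodes.
        {write_buffer_size_min, 31457280},  %% 30 MB
        {write_buffer_size_max, 62914560}   %% 60 MB
    ]}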
I had a discussion with Tryn Mirell on Riak IRC. Here are a few things
we discussed:
1. LevelDB (up to 1.1.x) suffers from compaction stalls.
2. A benchmark illustrating a compaction stall:
http://cl.ly/2U3F1h3N2U3L461l000H
3. Significant work is being done in the 1.2.x branch, including (i)
bloom filter support for ...
Hello,
Record size: ~600 bytes, indexed on 3 fields.
For a new bucket, I am getting around 1-1.5K writes/second. However, once
the bucket gets large (15 million records in my case), the write speed
drops by a factor of 5-6.
Is this expected behavior, or am I doing something wrong?
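For reference, each write carries the three index entries as
x-riak-index-* headers over HTTP; roughly like this (field names below
are just placeholders for my actual fields):

    import json
    import requests

    RIAK = "http://localhost:8098"  # assumed default Riak HTTP endpoint

    record = {"payload": "x" * 580}  # stand-in for a ~600-byte record

    # Secondary indexes ride along as headers: _bin for string indexes,
    # _int for integer indexes (requires the eLevelDB backend).
    headers = {
        "Content-Type": "application/json",
        "x-riak-index-field1_bin": "some-value",
        "x-riak-index-field2_bin": "2012-07-31",
        "x-riak-index-field3_int": "42",
    }

    requests.put("%s/riak/mybucket/K1" % RIAK,
                 data=json.dumps(record), headers=headers)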
--
Yousuf