read-repair and map-reduce

2012-11-04 Thread Igor Karymov
Hi all. Do I understand correctly that reading via MapReduce does not trigger
the read-repair mechanism?


Re: read-repair and map-reduce

2012-11-04 Thread Igor Karymov
And what about link walking? Is it the same situation, given that link walking
is just a special sort of MapReduce?


Add the riak_search_kv_hook precommit hook into app.config

2012-11-04 Thread thefosk
Hi,

I'm not an Erlang developer, but I'm having a hard time adding the
riak_search_kv_hook precommit hook directly to app.config (to make it
global across every bucket). I'm typing:

{precommit, [{riak_search_kv_hook}]}

But I'm getting the following error (most probably due to wrong syntax):

{error,badarg,
[{erlang,iolist_to_binary,
 [{invalid_hook_def,{riak_search_kv_hook}}],
 []},
 {wrq,append_to_response_body,2,[{file,"src/wrq.erl"},{line,204}]},
 {riak_kv_wm_object,handle_common_error,3,
 [{file,"src/riak_kv_wm_object.erl"},{line,998}]},
 {webmachine_resource,resource_call,3,
 [{file,"src/webmachine_resource.erl"},{line,169}]},
 {webmachine_resource,do,3,
 [{file,"src/webmachine_resource.erl"},{line,128}]},
 {webmachine_decision_core,resource_call,1,
 [{file,"src/webmachine_decision_core.erl"},{line,48}]},
 {webmachine_decision_core,accept_helper,0,
 [{file,"src/webmachine_decision_core.erl"},{line,583}]},
 {webmachine_decision_core,decision,1,
 [{file,"src/webmachine_decision_core.erl"},{line,447}]}]}}

What's the correct syntax for setting it?
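
For reference, the error above ({invalid_hook_def, {riak_search_kv_hook}})
is complaining that a hook must be a struct naming a module and a function,
not a bare one-element tuple.  A minimal sketch of the global form, assuming
the hook function is riak_search_kv_hook:precommit and that
default_bucket_props under riak_core is the right place to apply it to
every bucket:

{riak_core, [
    {default_bucket_props, [
        %% each hook is a {struct, ...} naming its module and function
        {precommit, [{struct, [{<<"mod">>, <<"riak_search_kv_hook">>},
                               {<<"fun">>, <<"precommit">>}]}]}
    ]}
]}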






Re: avg write io wait time regression in 1.2.1

2012-11-04 Thread Dave Brady
Great!

Thanks, Matthew!

--
Dave Brady

- Original Message -
From: Matthew Von-Maszewski 
To: Dave Brady 
Cc: riak-users@lists.basho.com
Sent: Fri, 2 Nov 2012 20:20:13 +0000 (GMT+00:00)
Subject: Re: avg write io wait time regression in 1.2.1

Dave,

One of the other developers is looking into whether the compaction counters can
appear in riak-admin.  I should know by Tuesday if this is possible for the 1.3
release.  1.3's code freeze is Friday 11/9.  A leveldb counters interface
should be in 1.3 either way; it allows external programs to monitor compactions
(if the riak-admin support does not make the cut).

Matthew

On Nov 2, 2012, at 11:18 AM, Dave Brady wrote:

> I would like to cast my vote for having compactions exposed by "riak-admin".
> 
> We can already observe hinted handoff transfers.  It would be very beneficial 
> to watch compactions in real-time, too.
> 
> Log scraping carries too many potential headaches.
> 
> From: "Matthew Von-Maszewski" 
> To: "Dietrich Featherston" 
> Cc: riak-users@lists.basho.com
> Sent: Friday, November 2, 2012 2:19:40 PM
> Subject: Re: avg write io wait time regression in 1.2.1
> 
> Dietrich,
> 
> I can make two guesses about the increased disk writes.  But I am also willing 
> to review your actual LOG files to isolate the root cause.  If you could run the 
> following and post the resulting file from one server, I will review it over 
> the weekend or early next week:
> 
> sort /var/lib/riak/leveldb/*/LOG >LOG.all
> 
> The file will compress well.  And there is no need to stop the server; just 
> gather the LOG data live.
> 
> Guess 1:  your data is in a transition phase.  1.1 used 2 Megabyte files 
> exclusively.  1.2 is resizing the files to much larger sizes during a 
> compaction.  You could be seeing a larger number of files than usual 
> participating in each compaction as the file sizes change.  While this is 
> possible, I have doubts … hence this is a guess.
> 
> Guess 2:  I increased the various leveldb file sizes to reduce the number of 
> opens and closes, both for writes and random reads.  This helped latencies in 
> both the compactions and random reads.  Any compaction in 1.2 is likely to 
> reread and write a larger total number of bytes.  While this is possible, I 
> again have doubts … the number of read operations should also go up if this 
> guess is correct, and your read operations have not increased.  This guess might 
> still be valid if the read operations were satisfied by the Linux memory data 
> cache; I do not know whether those would be counted.
> 
> 
> Matthew
> 
> 
> On Nov 1, 2012, at 10:01 PM, Dietrich Featherston wrote:
> 
> Will check on that.
> 
> Can you think of anything that would explain the 5x increase in disk writes 
> we are seeing with the same workload?
> 
> 
> On Thu, Nov 1, 2012 at 6:03 PM, Matthew Von-Maszewski  
> wrote:
> Look for any activity in the LOG.  Level-0 "creations" are fast and not 
> typically relevant.  You would be most interested in LOG lines containing 
> "Compacting" (start) and "Compacted" (end).  The time in between is when the 
> write throttle applies.  The idea is that these compaction events can pile up, 
> one after another, with multiple overlapping.  It is during these heavy periods 
> that the throttle saves the user experience.
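> 
> One way to pull those events out of the combined file, assuming GNU grep and 
> the LOG.all produced by the sort command above:
> 
> grep -E 'Compacting|Compacted' LOG.all
> 
> Pairing each "Compacting" line with its "Compacted" line by timestamp shows 
> how long each compaction ran and how many overlapped.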
> 
> Matthew
> 
> 
> On Nov 1, 2012, at 8:54 PM, Dietrich Featherston wrote:
> 
> Thanks. The amortized stalls may very well describe what we are seeing. If I 
> combine the leveldb logs from all partitions on one of the upgraded nodes, what 
> should I look for in terms of compaction activity to verify this?
> 
> 
> On Thu, Nov 1, 2012 at 5:48 PM, Matthew Von-Maszewski  
> wrote:
> Dietrich,
> 
> I can see your concern with the write IOS statistic.  Let me comment on the 
> easy question first:  block_size.
> 
> The block_size parameter in 1.1 was not getting passed to leveldb from the 
> Erlang layer.  You were using a 4096 byte block parameter no matter what you 
> typed in app.config.  The block_size is used by leveldb as a threshold.  
> Once you accumulate enough data above that threshold, the current block is 
> written to disk and a new one is started.  If you have 10k data values, you get 
> one data item per block and its size is ~10k.  If you have 1k data values, 
> you get about four per block and the block is about 4k.
> 
> We recommend 4k blocks to help read performance.  The entire block has to run 
> through decompression and potentially a CRC calculation when it comes off the 
> disk.  That CPU time really kills any disk performance gains from having larger 
> blocks.  Ok, that might change in 1.3 as we enable hardware CRC … but only if 
> you have "verify_checksums, true" in app.config.
> 
> 
> Back to performance:  I have not seen the change your graph details when 
> testing with SAS drives under moderate load.  I am only today starting 
> qualification tests with SSD drives.
> 
> But my 1.2 and 1.3 tests focus on drive / Riak saturation.  1.1 has the nasty 
> tendency to stall (intentionally) when we saturate