Request stats for the 15-minute window around the most recent spike:
 get  (826)
 save  (341)
 listByIndex  (1161)
 mapReduce  (621)  // input is a list of IDs
 SOLR  (4294)

6 Solr requests took longer than 9 s (all returned 0 rows)
4 Solr requests took 4-5 s (all returned 0 rows)
11 listByIndex requests took 4-5 s (all returned 0 rows)
All other requests took less than 300 ms.
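
To cross-check these timings on the Riak side during a spike, the per-node
FSM latency stats can be sampled; a minimal sketch, assuming riak-admin is
available on each node (exact stat names can vary by Riak version):

    # Rolling 1-minute get/put FSM latencies in microseconds; run on each
    # node while a spike is in progress.
    riak-admin status | egrep 'node_(get|put)_fsm_time_(median|95|100)'

    # If Solr/Yokozuna query latency stats are exposed in this build, grep those too:
    riak-admin status | egrep 'search_query_latency'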


Sometimes higher load does not produce such spikes.
Some graphs from maintenance tasks:
1. http://i.imgur.com/xAE6B06.png
    Three simple tasks: the first two read all keys, decide to do
nothing and move on (so only reads happen); the third task resaves all
data in the bucket.
    Even though the rate stays pretty good, some peaks happen.

2. A more complex task: http://i.imgur.com/7nwHb3Q.png. It does
heavier computation and updates a typed bucket (map), but shows no
peaks up to 9 s (a way to correlate such peaks with disk flush
activity is sketched below).
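
To see whether these peaks line up with disk flush activity (per the
dirty-page suggestion quoted below), something like this could run alongside
a maintenance task; iostat comes from the sysstat package, and the log file
name is just an example:

    # Extended disk stats once per second; bursts of w/s and %util that line
    # up with the latency peaks would point at synchronous flush storms.
    iostat -x 1 > iostat_during_task.log &

    # Dirty/writeback page counters sampled alongside:
    watch -n 1 'grep -E "Dirty|Writeback" /proc/meminfo'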



sysctl -a | fgrep vm.dirty_:

vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 10
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 20
vm.dirty_writeback_centisecs = 500
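
If synchronous write-out of accumulated dirty pages turns out to be the
cause, a commonly suggested mitigation is to cap dirty memory in absolute
bytes so writeback starts earlier and in smaller batches. A sketch only; the
byte values are illustrative, not a recommendation for this cluster:

    # Start background writeback at 64 MB of dirty pages and block writers at
    # 256 MB, instead of the default percentage-based thresholds.
    sudo sysctl -w vm.dirty_background_bytes=67108864
    sudo sysctl -w vm.dirty_bytes=268435456

    # To persist across reboots, the same keys would go in /etc/sysctl.conf:
    #   vm.dirty_background_bytes = 67108864
    #   vm.dirty_bytes = 268435456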

On Tue, Dec 9, 2014 at 5:46 PM, Luke Bakken <lbak...@basho.com> wrote:
> Hi Alexander,
>
> Can you comment on the read vs. write load of this cluster?
>
> Could you please run the following command and reply with the output?
>
> sysctl -a | fgrep vm.dirty_
>
> We've seen cases where dirty pages get written in a synchronous manner
> all at once, causing latency spikes due to I/O blocking.
> --
> Luke Bakken
> Engineer / CSE
> lbak...@basho.com
>
>
> On Tue, Dec 9, 2014 at 4:58 AM, Alexander Popov <mogada...@gmail.com> wrote:
>> I have a Riak 2.0.1 cluster with 5 nodes (EC2 m3.large) with elnm in front.
>> Sometimes I get spikes of up to 10 seconds.
>>
>> I can't say there is heavy load at that time: at most 200 requests
>> per second across all 5 nodes.
>>
>> The most expensive queries are
>> * list by secondary index (usually returns 0 to 100 records)
>> * Solr queries (max 10 records)
>>
>> Save operations also slow down sometimes, but not as much (up to 1 sec).
>>
>> The slowdown is not tied to specific requests; the same request runs fast later.
>>
>> Is there any way to profile or log this to determine why it happens?
>>
>> _______________________________________________
>> riak-users mailing list
>> riak-users@lists.basho.com
>> http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
