Failed to calculate stat

2013-07-04 Thread fenix . serega
Hi

What does this mean?

On a newly installed node, after it joins the cluster, the log shows:


2013-07-04 11:24:58.733 [warning] <0.4686.0>@riak_core_stat_q:log_error:123
Failed to calculate stat {riak_kv,vnode,backend,leveldb,read_block_error}
with error:badarg
[the same "Failed to calculate stat" warning repeats roughly every 5 seconds]
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com


Re: Failed to calculate stat

2013-07-04 Thread fenix . serega
Hi, thanks.

1.3.1


erlydtl_version : <<"0.7.0">>
riak_control_version : <<"1.3.0">>
cluster_info_version : <<"1.2.3">>
riak_search_version : <<"1.3.0">>
merge_index_version : <<"1.3.0">>
riak_kv_version : <<"1.3.1">>
riak_api_version : <<"1.3.1">>
riak_pipe_version : <<"1.3.1">>
riak_core_version : <<"1.3.1">>
bitcask_version : <<"1.6.1">>
basho_stats_version : <<"1.0.3">>
webmachine_version : <<"1.9.3">>
mochiweb_version : <<"1.5.1p3">>
inets_version : <<"5.9.2">>
erlang_js_version : <<"1.2.2">>
runtime_tools_version : <<"1.8.9">>
os_mon_version : <<"2.2.10">>
riak_sysmon_version : <<"1.1.3">>
ssl_version : <<"5.1.2">>
public_key_version : <<"0.17">>
crypto_version : <<"2.2">>
sasl_version : <<"2.2.1">>
lager_version : <<"1.2.2">>
syntax_tools_version : <<"1.6.9">>
compiler_version : <<"4.8.2">>
stdlib_version : <<"1.18.3">>
kernel_version : <<"2.15.3">>...





2013/7/4 Russell Brown 

> Hi,
> That warning means that riak failed to calculate the leveldb read block
> error count stat.
>
> This is caused by a bug fixed in 1.3.2. The stat code picks a random vnode
> from 1 to num_partitions on the node and asks it for the read block error
> stat. If your node has 1 or fewer partitions this error occurs.
>
> It is an edge case, but one that people seem to hit from time to time. As
> I said, it is fixed in 1.3.2 and 1.4, as far as I know. What version of
> Riak are you running when you see this?
>
> Cheers
>
> Russell
>
> On 4 Jul 2013, at 05:31, fenix.ser...@gmail.com wrote:
>
> > Hi
> >
> > What does this mean?
> >
> > On a newly installed node, after it joins the cluster, the log shows:
> >
> > [repeated "Failed to calculate stat ... error:badarg" warnings trimmed;
> > see the original message above]
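Russell's explanation above can be sketched in a few lines (a hedged illustration only: Python stands in for the Erlang stat code, and the function name is invented). Erlang's random helpers raise badarg when asked for a random integer in an empty range, which is what happens when the stat code picks a random vnode from 1 to num_partitions on a node that owns no partitions yet; Python's `random.randrange` fails the same way with `ValueError`:

```python
import random

def read_block_error_stat(num_partitions):
    """Pick a random local vnode (1..num_partitions) and ask it for the
    leveldb read_block_error stat -- a sketch of the pre-1.3.2 code path."""
    # Raises when num_partitions < 1, mirroring Erlang's error:badarg.
    idx = random.randrange(1, num_partitions + 1)
    return ("vnode", idx)

# A node that owns at least one partition works fine:
print(read_block_error_stat(8)[0])  # -> vnode

# A freshly joined node that owns no partitions hits the edge case:
try:
    read_block_error_stat(0)
except ValueError:
    print("badarg")  # -> badarg
```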

Can't handoff 1 partition, errors

2013-07-04 Thread fenix . serega
Hi

Cluster: 5 nodes, 1.3.1

Can't hand off 1 hinted partition:
17126972312471518572699431633393941636592959488

root@de5:/mnt/operational/riak# /opt/riak/bin/riak-admin transfers
'riak@de5' waiting to handoff 1 partitions
'riak@de2' waiting to handoff 1 partitions
'riak@de1' waiting to handoff 1 partitions

Active Transfers:

transfer type: hinted_handoff
vnode type: riak_kv_vnode
partition: 17126972312471518572699431633393941636592959488
started: 2013-07-04 13:12:31 [952.22 ms ago]
last update: no updates seen

 unknown
r...@de1.nntp.ge ===> r...@de4.nntp.ge
 unknown



Errors:

2013-07-04 15:14:32.252 [info]
<0.2761.4>@riak_core_handoff_sender:start_fold:130 Starting hinted_handoff
transfer of riak_kv_vnode from 'riak@de1'
17126972312471518572699431633393941636592959488 to 'riak@de4'
17126972312471518572699431633393941636592959488

2013-07-04 15:15:32.452 [error]
<0.2761.4>@riak_core_handoff_sender:start_fold:219 hinted_handoff transfer
of riak_kv_vnode from 'riak@de1'
17126972312471518572699431633393941636592959488 to 'riak@de4'
17126972312471518572699431633393941636592959488 failed because of closed

2013-07-04 15:15:32.499 [error]
<0.160.0>@riak_core_handoff_manager:handle_info:282 An outbound handoff of
partition riak_kv_vnode 17126972312471518572699431633393941636592959488 was
terminated for reason: {shutdown,{error,closed}}


2013-07-04 15:10:51.578 [info]
<0.10780.1>@riak_core_handoff_receiver:process_message:99 Receiving handoff
data for partition
riak_kv_vnode:17126972312471518572699431633393941636592959488
2013-07-04 15:11:25.389 [info]
<0.11061.1>@riak_core_handoff_receiver:process_message:99 Receiving handoff
data for partition
riak_kv_vnode:17126972312471518572699431633393941636592959488
2013-07-04 15:11:51.581 [error]
<0.10780.1>@riak_core_handoff_receiver:handle_info:80 Handoff receiver for
partition 17126972312471518572699431633393941636592959488 exited abnormally
after processing 0 objects:
{timeout,{gen_fsm,sync_send_all_state_event,[<0.1291.0>,{handoff_data,<<141,87,121,56,...>>}]}}
[binary handoff payload trimmed]

Fallback node

2013-07-19 Thread fenix . serega
Riak 1.3.1
5 nodes

The cluster is healthy. There are 39 stale handoffs on nodes 1, 2, 3 and 5.
On the 4th node, all KV vnodes are in fallback mode.

Could you please clarify what this means? Why is the 4th node in fallback
mode?

riak@de3:/opt/riak/etc$ ../bin/riak-admin ring-status
== Claimant
===
Claimant:  'riak@de3'
Status: up
Ring Ready: true

== Ownership Handoff
==
No pending changes.

== Unreachable Nodes
==
All nodes are up and reachable




riak@de3:/opt/riak/etc$ ../bin/riak-admin member-status
= Membership
==
Status     Ring     Pending    Node
---
valid      19.9%    --         'riak@de1'
valid      19.9%    --         'riak@de2'
valid      19.9%    --         'riak@de3'
valid      20.3%    --         'riak@de4'
valid      19.9%    --         'riak@de5'
---
Valid:5 / Leaving:0 / Exiting:0 / Joining:0 / Down:0





riak@de3:/opt/riak/etc$ ../bin/riak-admin transfers
'riak@de5' waiting to handoff 39 partitions
'riak@de3' waiting to handoff 39 partitions
'riak@de2' waiting to handoff 39 partitions
'riak@de1' waiting to handoff 39 partitions

Active Transfers:


Riak - mochiweb - http log

2013-07-24 Thread fenix . serega
Hi all,

Is there any way to enable mochiweb HTTP access logs in Riak? This would be
for HTTP request monitoring purposes.


Thanks,
Sergey


Missing primary partitions

2013-10-22 Thread fenix . serega
Hi all

What should be done in the case of lost primary partitions?

6 node cluster, leveldb, 1.3.2

Nodes 5 and 6 are always waiting to hand off 46 partitions:

'riak@de6' waiting to handoff 46 partitions
'riak@de5' waiting to handoff 46 partitions

Active Transfers:

transfer type: hinted_handoff
vnode type: riak_kv_vnode
partition: 919147514102638163401536164325474867830488825856
started: 2013-10-22 08:18:35 [-131940361.00 us ago]
last update: no updates seen
objects transferred: unknown

 unknown
riak@de5 ===> riak@de2
 unknown

transfer type: hinted_handoff
vnode type: riak_kv_vnode
partition: 667951920186389224335277833702363723827125420032
started: 2013-10-22 08:18:40 [-136954679.00 us ago]
last update: no updates seen
objects transferred: unknown

 unknown
riak@de6 ===> riak@de2
 unknown

...


How can these handoffs and errors be fixed or disabled?

2013-10-21 23:59:56.894 [error]
<0.9317.693>@riak_core_handoff_sender:start_fold:226 hinted_handoff
transfer of riak_kv_vnode from 'riak@de5'
987655403352524237692333890859050634376860663808 to 'riak@de2'
987655403352524237692333890859050634376860663808 failed because of
error:{badmatch,{error,timeout}}
[{riak_core_handoff_sender,start_fold,5,[{file,"src/riak_core_handoff_sender.erl"},{line,101}]}]


Thanks,
Sergey


Re: Missing primary partitions

2013-10-24 Thread fenix . serega
Hi Mark,

Thank you very much; you are right. There were wrong host/IP entries in
/etc/hosts. As a result, connections to these hosts timed out.

Best regards,
Sergey


2013/10/23 Mark Phillips 

> Hi Sergey,
>
> This looks like the initial tcp connection is timing out when riak@de5
> and riak@de6 first try to talk to the handoff ip/port for
> riak@de2 (which would be configured in riak@de2's app.config).
>
> You may have already gotten to the bottom of why that's happening, but
> the first thing to try would be to set the cluster handoff_concurrency
> limit to "0" and then back to the default to interrupt and restart any
> in-progress transfers and then watch the network traffic on the
> handoff ports. You can do this with "riak-admin transfer-limit" [1].
> If you don't specify a "node" (as shown in the docs) it will set it
> for the whole cluster (which is what you want to do).
>
> Hope that helps. Keep us posted.
>
> Mark
>
> [1]
> http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/#transfer-limit
>
>
>
>
> On Tue, Oct 22, 2013 at 2:27 AM,   wrote:
> > [original message quoted in full; trimmed]
>
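Mark's suggestion above boils down to a pair of commands (a sketch, not verified against this cluster; the default transfer limit is 2 in Riak 1.x, but check your app.config before restoring it):

```shell
# Pause all in-flight handoffs: omitting a node name applies the new
# limit cluster-wide.
riak-admin transfer-limit 0

# Restore the default limit (2) so the interrupted transfers restart.
riak-admin transfer-limit 2

# Then watch the transfers (and the traffic on the handoff ports).
riak-admin transfers
```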


AAE Already held by process

2014-06-24 Thread fenix . serega
Riak cluster: 7 nodes, 1.3.2

Sometimes a riak node crashes and stops responding entirely.

On one node:

2014-06-24 14:29:56 =ERROR REPORT
Error in process <0.8303.2> on node 'riak@de3' with exit value:
{badarg,[{erlang,binary_to_term,[<<19047
bytes>>],[]},{hashtree,get_disk_bucket,3,[{file,"src/hashtree.erl"},{line,597}]},{hashtree,'-update_levels/3-fun-1-',3,[{file,"src/hashtree.erl"},{line,539}]},{lists,foldl,3,[{file,"lists.erl"},{line,1197}]},{hashtree...

2014-06-24 14:29:56 =CRASH REPORT
  crasher:
initial call: riak_kv_index_hashtree:init/1
pid: <0.8936.2>
registered_name: []
exception exit: {{{badmatch,{error,{db_open,"IO error: lock
./data/anti_entropy/1358739803456073806767488242915919369836374786048/LOCK:
already held by
process"}}},[{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,499}]},{hashtree,new,2,[{file,"src/hashtree.erl"},{line,215}]},{riak_kv_index_hashtree,do_new_tree,2,[{file,"src/riak_kv_index_hashtree.erl"},{line,426}]},{lists,foldl,3,[{file,"lists.erl"},{line,1197}]},{riak_kv_index_hashtree,init_trees,2,[{file,"src/riak_kv_index_hashtree.erl"},{line,368}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{line,225}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
ancestors: [<0.1665.0>,riak_core_vnode_sup,riak_core_sup,<0.129.0>]
messages: []
links: []
dictionary: []
trap_exit: false
status: running
heap_size: 987
stack_size: 24
reductions: 604
  neighbours:
2014-06-24 14:30:50 =ERROR REPORT
Error in process <0.9292.2> on node 'riak@de3' with exit value:
{badarg,[{erlang,binary_to_term,[<<19399
bytes>>],[]},{hashtree,get_disk_bucket,3,[{file,"src/hashtree.erl"},{line,597}]},{hashtree,'-update_levels/3-fun-1-',3,[{file,"src/hashtree.erl"},{line,539}]},{lists,foldl,3,[{file,"lists.erl"},{line,1197}]},{hashtree...

2014-06-24 14:30:50 =CRASH REPORT
  crasher:
initial call: riak_kv_index_hashtree:init/1
pid: <0.10819.2>
registered_name: []
exception exit: {{{badmatch,{error,{db_open,"IO error: lock
./data/anti_entropy/536645132457440915277915524513010171279912730624/LOCK:
already held by
process"}}},[{hashtree,new_segment_store,2,[{file,"src/hashtree.erl"},{line,499}]},{hashtree,new,2,[{file,"src/hashtree.erl"},{line,215}]},{riak_kv_index_hashtree,do_new_tree,2,[{file,"src/riak_kv_index_hashtree.erl"},{line,426}]},{lists,foldl,3,[{file,"lists.erl"},{line,1197}]},{riak_kv_index_hashtree,init_trees,2,[{file,"src/riak_kv_index_hashtree.erl"},{line,368}]},{riak_kv_index_hashtree,init,1,[{file,"src/riak_kv_index_hashtree.erl"},{line,225}]},{gen_server,init_it,6,[{file,"gen_server.erl"},{line,304}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]},[{gen_server,init_it,6,[{file,"gen_server.erl"},{line,328}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,227}]}]}
ancestors: [<0.1140.0>,riak_core_vnode_sup,riak_core_sup,<0.129.0>]
messages: []
links: []
dictionary: []
trap_exit: false
status: running
heap_size: 987
stack_size: 24
reductions: 601
  neighbours:


What does this mean?

It always seems to be the same partitions:

536645132457440915277915524513010171279912730624
924856504873462002925769308203272848376019812352

Thanks.
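A commonly reported remedy for a stale LevelDB LOCK under anti_entropy (an assumption based on the crash report's paths; AAE trees are derived data, but verify your data directory before deleting anything) is to stop the node, remove the AAE trees for the affected partitions, and let AAE rebuild them on restart:

```shell
# Stop the node before touching on-disk data.
riak stop

# Remove the AAE trees for the partitions that keep crashing; they will
# be rebuilt from the keystore after restart. The path matches the
# ./data/anti_entropy layout shown in the crash report above.
rm -rf ./data/anti_entropy/536645132457440915277915524513010171279912730624
rm -rf ./data/anti_entropy/924856504873462002925769308203272848376019812352

riak start
```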


Riak

2017-08-14 Thread fenix . serega
https://hub.docker.com/r/nisaacson/riak-2.0/~/dockerfile/