> "carpe diem quam minimum credula postero"
>
For data accessed through a single path, I use the same trick: pickle the
object, compress it with bz2, and insert the result.
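In rough Python terms, the trick looks like the sketch below; the pycassa
client and the keyspace/column family/column names are stand-ins for
illustration, not my actual schema:

import bz2
import pickle

import pycassa  # illustrative client choice; any Cassandra client works the same way

pool = pycassa.ConnectionPool('my_keyspace', ['127.0.0.1:9160'])
cf = pycassa.ColumnFamily(pool, 'blobs')

def store(key, obj):
    # pickle the object, compress it with bz2, write it as a single column value
    blob = bz2.compress(pickle.dumps(obj, pickle.HIGHEST_PROTOCOL))
    cf.insert(key, {'data': blob})

def load(key):
    # read the column back, decompress and unpickle
    return pickle.loads(bz2.decompress(cf.get(key)['data']))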
--
Alexis Lê-Quôc | Datadog, Inc. | @alq
Alain,
Can you post your mdadm --detail /dev/md0 output here, as well as your
iostat -x -d output, when that happens? A bad ephemeral drive on EC2 is not
unheard of.
Alexis | @alq | http://datadog.com
P.S. Also, disk utilization is not a reliable metric; iostat's await and
svctm are more useful, imho.
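If it helps, here is a rough sketch of how one could pull those two columns
out of iostat programmatically; the column layout varies across sysstat
versions (newer releases drop svctm entirely), so this parses the header line
instead of hard-coding positions:

import subprocess

def disk_latency():
    # two 1-second reports; the first is the since-boot average, the second
    # reflects the current interval, so we keep only the last one
    out = subprocess.check_output(['iostat', '-x', '-d', '1', '2']).decode()
    header, stats = None, {}
    for line in out.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0].startswith('Device'):
            header = fields
            stats = {}  # discard the previous report
        elif header and len(fields) == len(header):
            row = dict(zip(header, fields))
            stats[fields[0]] = {k: row[k] for k in ('await', 'svctm') if k in row}
    return stats

for dev, latency in disk_latency().items():
    print(dev, latency)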
5117307932921825928971026432
1.2.3.193    Up    Normal    53.73 GB    50.00%    127605887595351923798765477786913079296
1.2.3.252    Up    Normal    43.11 GB    12.52%    148904621249875869977532879268261763219
--
Alexis Lê-Quôc
Could this be caused by old hinted handoffs for 2.3.4.193 that were processed
at that time, causing the rest of the nodes to think that 2.3.4.193 is
still present (albeit down)?
Should cleanup be run periodically? I run repair every few days (my
GC grace period is 10 days).
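The constraint I'm trying to respect is that repair has to complete within
the GC grace period, otherwise tombstones can be collected before they have
propagated and deleted data can resurface. A trivial sketch, with the 3-day
interval as an illustrative stand-in for "every few days":

# repair must finish within the GC grace period, or deletes can resurface
gc_grace_seconds = 10 * 24 * 3600        # my setting: 10 days
repair_interval_seconds = 3 * 24 * 3600  # illustrative "every few days"

assert repair_interval_seconds < gc_grace_seconds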
--
Alexis Lê-Quôc
This is hard to understand, given that nodetool ring on
Node 1 yields:
...
Node2 Up Normal 52.04 GB 51.03% token1
and the same command on Node 2 yields:
...
Node1 Up Normal 50.89 GB 23.97% token2
Any light shed on both issues is appreciated.
--
Alexis Lê-Quôc (@datadoghq)
> nodetool -host 192.168.0.5 ring
> Address          Status  State   Load       Owns     Token
>                                                      127605887595351923798765477786913079296
> 192.168.0.253    Up      Normal  171.17 MB  25.00%   0
> 192.168.0.4      ?       Normal  212.11 MB  54.39%   92535295865117307932921825928971026432
> 192.168.0.
MessagingService.instance().sendReply(response, id, msg.getFrom());
    }
}
Before I dig deeper into the code, has anybody dealt with this before?
Thanks,
--
Alexis Lê-Quôc