We still don't fully understand why this kernel bug didn't affect *all* our
nodes (in the end we had three nodes with that kernel; only two of them
exhibited this issue), but there we go.
Thanks everyone for your help
Cheers,
Griff
On 14 January 2016 at 15:14, James Griffin
wrote:
On 14 January 2016 at 15:08, Kai Wang wrote:
> James,
>
> I may be missing something. You mentioned your cluster had RF=3, so why
> does "nodetool status" show each node owning 1/3 of the data, especially
> after a full repair?
>
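As a point of reference, the "Owns" figure changes depending on whether a
keyspace is supplied to nodetool: without one it reports raw token-range
ownership (roughly 1/3 per node on a three-node ring), whereas with a
keyspace it reports effective ownership, which should be close to 100% per
node at RF=3. The keyspace name below is only a placeholder:

    # Raw token ownership -- this is what shows ~1/3 per node
    nodetool status

    # Effective ownership for one keyspace, taking RF into account
    nodetool status my_keyspace
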
> On Thu, Jan 14, 2016 at 9:56 AM, James Griffin wrote:
On 14 January 2016 at 14:22, Kai Wang wrote:
> James,
>
> Can you post the result of "nodetool netstats" on the bad node?
>
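For anyone following along, "nodetool netstats" shows the node's streaming
activity (e.g. repair or bootstrap streams in progress) along with read
repair statistics and pending message counts, which is why it's a useful
first check here:

    # Run on the suspect node; look for stuck or long-running streams
    nodetool netstats
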
> On Thu, Jan 14, 2016 at 9:09 AM, James Griffin wrote:
> Are you getting promotion failures or concurrent mode failures?
>
> If you are on CMS, you need to fine-tune your heap options to address full
> GC.
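
The CMS knobs being referred to normally live in cassandra-env.sh; the
sizes below are purely illustrative and need to be matched to the node's
RAM and workload rather than copied:

    # Illustrative heap sizing (cassandra-env.sh); values are examples only
    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="2G"

    # CMS flags of the kind shipped in the stock cassandra-env.sh
    JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
    JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
    JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
    JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
    JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
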
>
>
>
> Thanks
> Anuj
>
> Sent from Yahoo Mail on Android
> <https://overview.mail.yahoo.com/mobile/?.src=Android>
>
> On Thu, 14 Jan, 2016,
> ... on node 3. Maybe you can try investigating logs to see what's
> happening.
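
A quick way to check the logs for GC trouble; the path below is the
Debian/RPM package default and may differ on other installs:

    # Cassandra logs long GC pauses via GCInspector in system.log
    grep -i "GCInspector" /var/log/cassandra/system.log

    # If JVM GC logging is enabled, these phrases mark the two failure
    # modes asked about above
    grep -E "promotion failed|concurrent mode failure" /var/log/cassandra/gc.log
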
>
> Others on the mailing list could also share their views on the situation.
>
> Thanks
> Anuj
>
>
>
> Sent from Yahoo Mail on Android
> <https://overview.mail.yahoo.com/mobile/?.src=Android>
>
> [truncated nodetool status output]
>
>
>
> Thanks
> Anuj
>
> Sent from Yahoo Mail on Android
> <https://overview.mail.yahoo.com/mobile/?.src=Android>
>
> On Wed, 13 Jan, 2016 at 10:34 pm, James Griffin
> wrote:
Hi all,
We’ve spent a few days running things but are in the same position. To add
some more flavour:
- We have a 3-node ring, replication factor = 3. We’ve been running in
this configuration for a few years without any real issues
- Nodes 2 & 3 are much newer than node 1. These two no