On Tue, Dec 23, 2014 at 12:29 AM, Jiri Horky wrote:
Hi,
just a follow up. We've seen this behavior multiple times now. It seems
that the receiving node loses connectivity to the cluster and thus
thinks that it is the sole online node, whereas the rest of the cluster
thinks that it is the only offline node, really just after the streaming
is over. …
> and would it really hurt anything to add something like "can't
> handle load" to the exception message?
Feel free to add a ticket with your experience.
The event you triggered is a safety valve to stop the server failing.
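For illustration only, a client that hits that safety valve can back off and retry
rather than hammering the coordinator. A minimal sketch, assuming a DataStax Java
driver (2.x/3.x style API); the contact point, keyspace "my_ks" and table "events"
are made-up placeholders:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.exceptions.OverloadedException;

    public class BackoffOnOverload {
        public static void main(String[] args) throws InterruptedException {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks");   // hypothetical keyspace
            String insert = "INSERT INTO events (id, payload) VALUES (?, ?)"; // hypothetical table
            long backoffMs = 100;
            for (int attempt = 0; attempt < 5; attempt++) {
                try {
                    session.execute(insert, java.util.UUID.randomUUID(), "payload");
                    break;                                 // write accepted
                } catch (OverloadedException e) {
                    // Coordinator says it cannot handle the load; wait, then retry.
                    Thread.sleep(backoffMs);
                    backoffMs *= 2;
                }
            }
            cluster.close();
        }
    }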
> - My total replication factor is 4 over two DCs -- I suppose you mean …
Thanks!
A node writing to its log because it cannot handle load is much different from
a node logging "just because". Still, the amount of logging is excessive;
would it really hurt anything to add something like "can't handle load" to the
exception message?
On the subject of RF:3 -- …
> Replication is configured as DC1:2,DC2:2 (i.e. every node holds the entire
> data).
I really recommend using RF 3.
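Purely as an illustration of that recommendation, RF 3 in each data center can be
expressed with NetworkTopologyStrategy; the keyspace name "my_ks" is invented
here, and DC1/DC2 are the data center names from this thread:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public class CreateKeyspaceRf3 {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect();
            // RF 3 in each DC; "my_ks" is a placeholder keyspace name.
            session.execute("CREATE KEYSPACE my_ks WITH replication = "
                + "{'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3}");
            cluster.close();
        }
    }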
The error is the coordinator node protecting itself.
Basically it cannot handle the volume of local writes + the writes for HH
(hinted handoff). The number of in-flight hints is greater than the maximum it
will allow.
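The shape of that safety valve (not Cassandra's actual code; the class, field
names and the cap below are invented for illustration) is roughly: count hints
in flight, and refuse a write with a "can't handle load" style error once the
count passes a cap, instead of queueing hints without bound:

    import java.util.concurrent.atomic.AtomicInteger;

    // Illustration only, not Cassandra internals.
    class HintSafetyValve {
        static final int MAX_HINTS_IN_FLIGHT = 1024;     // arbitrary cap for the sketch
        private final AtomicInteger hintsInFlight = new AtomicInteger();

        void writeWithHint(Runnable localWrite, Runnable storeHint) {
            if (hintsInFlight.get() >= MAX_HINTS_IN_FLIGHT) {
                // Safety valve: shed load instead of building an unbounded hint backlog.
                throw new RuntimeException("Can't handle load: too many in-flight hints");
            }
            hintsInFlight.incrementAndGet();
            try {
                localWrite.run();   // the coordinator's own local write
                storeHint.run();    // store a hint for the down replica, to deliver later
            } finally {
                hintsInFlight.decrementAndGet();   // hint stored (or failed); no longer in flight
            }
        }
    }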
On Tue, Jan 22, 2013 at 2:57 PM, Sergey Olefir wrote:
Do you have a suggestion as to what could be a better fit for counters?
Something that can also replicate across DCs and survive link breakdown
between nodes (across DCs)? (And no, I don't need 100.00% precision
(although it would be nice obviously), I just need to be "pretty close" for
the values.)
On Tue, Jan 22, 2013 at 5:03 AM, Sergey Olefir wrote:
> I am load-testing counter increments at the rate of about 10k per second.
Do you need highly performant counters that count accurately, without
meaningful chance of over-count? If so, Cassandra's counters are
probably not ideal.
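The over-count risk comes from increments not being idempotent: if an increment
times out, the client cannot tell whether it was applied, so a retry may apply it
twice. A minimal sketch of that failure mode, assuming the DataStax Java driver;
keyspace "my_ks" and counter table "page_hits" are invented names:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.exceptions.WriteTimeoutException;

    public class CounterRetryOvercount {
        public static void main(String[] args) {
            Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
            Session session = cluster.connect("my_ks");
            String inc = "UPDATE page_hits SET hits = hits + 1 WHERE page = ?";
            try {
                session.execute(inc, "home");
            } catch (WriteTimeoutException e) {
                // The first increment may or may not have been applied; retrying can
                // count the same hit twice, which is why exact counts are hard here.
                session.execute(inc, "home");
            }
            cluster.close();
        }
    }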
ring view". Can it be that this stored ring view was out
of sync with the actual (gossip) situation?
Thanks!
Rene
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Wednesday, 1 February 2012 21:03
To: user@cassandra.apache.org
Subject: Re: Node down
Without knowing too much more information I would try this…
* Restart each node in turn, and watch the logs to see what each says about the
other.
* If that restart did not fix it, try using the -Dcassandra.load_ring_state=false
JVM option when starting the node. That will tell it to ignore its saved ring
state.
Coordination in a distributed system is difficult. I don't think we
can fix HH's existing edge cases without introducing other, more
complicated edge cases.
So weekly-or-so repair will remain a common maintenance task for the
foreseeable future.
On Wed, Jul 14, 2010 at 4:17 PM, B. Todd Burruss wrote:
thx, but disappointing :)
is this just something we have to live with and periodically "repair"
the nodes? or is there future work to tighten up the window?
thx
On Wed, 2010-07-14 at 12:13 -0700, Jonathan Ellis wrote:
On Wed, Jul 14, 2010 at 1:43 PM, B. Todd Burruss wrote:
> there is a window of time from when a node goes down and when the rest
> of the cluster actually realizes that it is down.
>
> what happens to writes during this time frame? does hinted handoff
> record these writes and then "handoff" when the node comes back up?
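As a conceptual sketch of the mechanism being asked about (invented names, not
Cassandra's implementation, which also bounds and expires hints): the coordinator
stores a hint for a replica it could not reach and replays it once that replica is
reachable again.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // Illustration only.
    class HintedHandoffSketch {
        private final Map<String, List<String>> hintsByReplica = new HashMap<>();

        // Called when a write for 'replica' cannot be delivered right now.
        void storeHint(String replica, String mutation) {
            hintsByReplica.computeIfAbsent(replica, r -> new ArrayList<>()).add(mutation);
        }

        // Called when the replica is seen as up again: replay and clear its hints.
        void replay(String replica, Consumer<String> deliver) {
            List<String> pending = hintsByReplica.remove(replica);
            if (pending != null) {
                pending.forEach(deliver);
            }
        }
    }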