1) I am using RackUnawarePartitioner.
2) All nodes have been rebuilt since we installed the system, though we
didn't run cleanup.
But Node1's data found on Node3 is new data. I checked the Cassandra
source code and can't figure it out yet. Here is what may be happening:
NodeW writes data to Node1; the FailureDetector may mark Node1 as live,
but the write may still fail. What will Cassandra do next after a failed
write?
Because the Consistency Level is ONE, will NodeW write the data to Node2?
If it does, will Node2 then place the data on Node3?
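Here is a tiny Python sketch of what I assume the coordinator does at
Consistency Level ONE; write_cl_one and the lambdas are made up, and this
is only my reading, not the actual Cassandra write path:

    # Sketch (not the real write path): at ConsistencyLevel.ONE the
    # coordinator sends the mutation to every natural replica of the key,
    # acks the client after the first success, and only records a hint for
    # a replica the FailureDetector already considers down.

    REPLICAS = ["Node1", "Node2"]   # natural replicas for this key, RF=2

    def write_cl_one(mutation, alive, send):
        acked, hints = 0, []
        for replica in REPLICAS:
            if alive(replica):
                if send(replica, mutation):   # may still fail after FD said 'up'
                    acked += 1
            else:
                hints.append((replica, mutation))   # hinted handoff target
        return acked >= 1, hints

    # Node1 looks alive but the write to it fails; Node2 succeeds.
    ok, hints = write_cl_one("row",
                             alive=lambda n: True,
                             send=lambda n, m: n != "Node1")
    print(ok, hints)   # True []  -- acked via Node2, no hint recorded

If this model is right, the write is acknowledged as soon as Node2
succeeds, and nothing in the normal path puts the row on Node3.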
Thanks,
Zhong Li
On Aug 23, 2010, at 12:03 AM, Jonathan Ellis wrote:
possibilities include
1) you're using something other than RackUnawarePartitioner, which is
the only one that behaves the way you describe
2) you've moved nodes around w/o running cleanup afterwards (rough
sketch below)
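Rough sketch of 2) in Python -- owned_keys and the ranges are invented
for illustration; the point is just that rows outside a node's current
ranges stay in its SSTables until cleanup rewrites them:

    # Why stale data lingers after ring changes without cleanup (sketch only).
    def owned_keys(stored_rows, replicated_ranges):
        """Keys the node should still hold under its current ranges."""
        return {k: v for k, v in stored_rows.items()
                if any(lo < k <= hi for lo, hi in replicated_ranges)}

    # Node3 stored keys 50/150/250 before a move; afterwards it only
    # replicates the range (200, 300].
    stored = {50: "old row", 150: "old row", 250: "current row"}
    print(owned_keys(stored, [(200, 300)]))   # {250: 'current row'}
    # Until cleanup runs, keys 50 and 150 remain on Node3's disk even
    # though Node3 no longer owns them.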
On Sun, Aug 22, 2010 at 10:09 PM, Zhong Li <z...@voxeo.com> wrote:
Today I checked all nodes' data and logs; very few nodes reported
connections going up/down. I found some data on each node that I don't
understand.
The ReplicationFactor is 2 and the write Consistency Level is ONE. For
example, the ring looks like
Node1(Token1)->Node2(Token2)->Node3(Token3)->.......
Node1 has Token1, so all data with keys belonging to Token1 should be on
Node1 and Node2, but why can I find some Node1/Node2 data on Node3 as
well? I dumped the data on Node3 to my local machine, read it, and found
some of Node1/Node2's data on Node3, and that data should have been
deleted.
Why does Node3 have Node1/Node2's data?
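To make the placement I am assuming explicit, here is a small Python
sketch; replicas_for is my own name and this is only my understanding of
RF=2 placement, not the real Cassandra code:

    # Expected replica sets on the ring above with RF=2 (sketch only).
    RING = ["Node1", "Node2", "Node3"]   # ordered by token

    def replicas_for(token_index, rf=2):
        """Token owner plus the next rf-1 nodes clockwise on the ring."""
        return [RING[(token_index + i) % len(RING)] for i in range(rf)]

    for i, token in enumerate(["Token1", "Token2", "Token3"]):
        print(token, "->", replicas_for(i))
    # Token1 -> ['Node1', 'Node2']
    # Token2 -> ['Node2', 'Node3']
    # Token3 -> ['Node3', 'Node1']
    # So Node3 should only hold Token2/Token3 data; Token1 rows on Node3
    # would have to come from somewhere else.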
Thanks.
On Aug 18, 2010, at 10:44 AM, Jonathan Ellis wrote:
HH would handle it if it were a FD false positive, but if a node
actually does go down then it can miss writes before HH kicks in.
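i.e., roughly (illustrative Python only; handle_write is an invented
name, and the assumption is that a hint is only recorded once the
coordinator already believes the target is down):

    def handle_write(replica_marked_up, deliver):
        # Sketch of the window, not real Cassandra internals.
        if replica_marked_up:
            return "delivered" if deliver() else "lost (no hint written)"
        return "hint stored, replayed when the replica returns"

    # FD false positive: the node is actually fine, delivery works.
    print(handle_write(True, lambda: True))    # delivered
    # Node died but FD hasn't noticed yet: the write is simply missed.
    print(handle_write(True, lambda: False))   # lost (no hint written)
    # Node known to be down: HH covers it.
    print(handle_write(False, lambda: False))  # hint stored, replayed when the replica returns

It's that middle case that repair (or read repair) has to clean up.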
On Wed, Aug 18, 2010 at 9:30 AM, Raj N <raj.cassan...@gmail.com>
wrote:
Guys,
Correct me if I am wrong. The whole problem is because a node missed an
update when it was down. Shouldn't HintedHandoff take care of this case?
Thanks
-Raj
-----Original Message-----
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Wednesday, August 18, 2010 9:22 AM
To: user@cassandra.apache.org
Subject: Re: data deleted came back after 9 days.
Actually, tombstones are read repaired too -- as long as they are not
expired. But nodetool repair is much less error-prone than relying on RR
and your memory of what deletes you issued.
Either way, you'd need to increase GCGraceSeconds first to make the
tombstones un-expired.
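A minimal sketch of the expiry rule in question -- tombstone_expired is
an invented name, and the assumption is that a tombstone stops being
propagated (and may be collected) once GCGraceSeconds have elapsed since
the delete, with 10 days (864000 seconds) as the default:

    import time

    GC_GRACE_SECONDS = 10 * 24 * 3600   # default of 10 days

    def tombstone_expired(deleted_at, now=None, gc_grace=GC_GRACE_SECONDS):
        """True once the tombstone is no longer protected by GCGraceSeconds."""
        now = time.time() if now is None else now
        return now - deleted_at > gc_grace

    deleted_at = 0
    print(tombstone_expired(deleted_at, now=9 * 24 * 3600))    # False
    print(tombstone_expired(deleted_at, now=11 * 24 * 3600))   # True
    # Once expired, a replica that missed the delete can bring the row
    # back -- hence raising GCGraceSeconds before relying on RR or repair.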
On Wed, Aug 18, 2010 at 12:43 AM, Benjamin Black <b...@b3k.us> wrote:
On Tue, Aug 17, 2010 at 7:49 PM, Zhong Li <z...@voxeo.com> wrote:
That data was inserted on one node, then deleted from a remote node in
less than 2 seconds. So it is very possible some node lost the tombstone
when the connection was lost.
My question: can a ConsistencyLevel.ALL read retrieve the lost tombstone
instead of a repair?
No. Read repair does not replay operations. You must run nodetool
repair.
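Roughly why, assuming last-write-wins reconciliation over whatever the
replicas actually return (reconcile and the column layout below are
invented for illustration):

    def reconcile(versions):
        """versions: (timestamp, value) pairs; value None means tombstone."""
        return max(versions, key=lambda v: v[0])   # highest timestamp wins

    # At least one replica saw the delete: the tombstone wins and read
    # repair pushes it to the others.
    print(reconcile([(100, "live"), (200, None)]))    # (200, None)

    # No replica ever received the delete: there is no tombstone to win,
    # even at ConsistencyLevel.ALL, so the old value comes back.
    print(reconcile([(100, "live"), (100, "live")]))  # (100, 'live')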
b
--
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com