The whole problem is because a node missed an update when it was
down. Shouldn't HintedHandoff take care of this case?

Thanks
-Raj

-----Original Message-----
From: Jonathan Ellis [mailto:jbel...@gmail.com]
Sent: Wednesday, August 18, 2010 9:22 AM
To: user@cassandra.apache.org
Subject: Re: data deleted came back after 9 days.

Actually, tombstones are read repaired too -- as long as they are not
expired. But nodetool repair is much less error-prone than relying on
RR and your memory of what deletes you issued.
Actually, tombstones are read repaired too -- as long as they are not
expired. But nodetool repair is much less error-prone than relying on
RR and your memory of what deletes you issued.

Either way, you'd first need to increase GCGraceSeconds to make the
tombstones un-expired.
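
To make the expiry condition above concrete, here is a minimal Java
sketch of the rule Jonathan describes: a tombstone can only be
propagated by read repair while it is younger than GCGraceSeconds. It
is an illustration of the rule, not Cassandra's internal code; the
class and method names are made up.

    // Illustrative only: a simplified version of the tombstone-expiry
    // rule, not Cassandra's actual implementation.
    public class TombstoneExpiry {
        // Default GCGraceSeconds in 0.6: 864000 seconds = 10 days.
        static final long GC_GRACE_SECONDS = 864000L;

        // A tombstone can still be read repaired only while un-expired.
        static boolean isExpired(long deletedAtMillis, long nowMillis) {
            long ageSeconds = (nowMillis - deletedAtMillis) / 1000L;
            return ageSeconds > GC_GRACE_SECONDS;
        }

        public static void main(String[] args) {
            long now = System.currentTimeMillis();
            long nineDaysAgo = now - 9L * 86400L * 1000L;  // like this thread
            long elevenDaysAgo = now - 11L * 86400L * 1000L;
            System.out.println(isExpired(nineDaysAgo, now));   // false: repairable
            System.out.println(isExpired(elevenDaysAgo, now)); // true: may be GC'd
        }
    }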
Best practice is to schedule repair more often than GCGraceSeconds,
say weekly, rather than doing it manually when you notice the failure
detector (FD) mark someone dead.

On Tue, Aug 17, 2010 at 3:11 PM, Ned Wolpert wrote:
> (gurus, please check my logic here... I'm trying to validate my
> understanding of this situation.)
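
As one sketch of such a schedule, the Java snippet below shells out to
"nodetool repair" once a week, comfortably inside the default
GCGraceSeconds of 864000 s (10 days). The host and the assumption that
nodetool is on the PATH are illustrative; in practice a cron job is the
more usual vehicle.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Sketch only: run repair weekly, i.e. more often than GCGraceSeconds.
    public class WeeklyRepair {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler =
                    Executors.newSingleThreadScheduledExecutor();
            Runnable repair = new Runnable() {
                public void run() {
                    try {
                        // nodetool ships with Cassandra; host/path are
                        // assumptions for this sketch.
                        Process p = new ProcessBuilder(
                                "nodetool", "-h", "localhost", "repair").start();
                        p.waitFor();
                    } catch (Exception e) {
                        e.printStackTrace(); // a missed repair must be retried
                    }
                }
            };
            // Every 7 days: strictly more often than the 10-day GC grace.
            scheduler.scheduleAtFixedRate(repair, 0, 7, TimeUnit.DAYS);
        }
    }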
Corrected, thanks. (Better would be to edit the wiki yourself, of course. :)

On Tue, Aug 17, 2010 at 2:58 PM, Jeremy Dunck wrote:
> On Tue, Aug 17, 2010 at 2:49 PM, Jonathan Ellis wrote:
>> It doesn't have to be disconnected more than GC grace seconds to cause
>> what you are seeing, it just has to be disconnected at all (thus
>> missing delete commands).
Those data were inserted on one node, then deleted on a remote node
less than 2 seconds later. So it is very possible some node lost the
tombstone when the connection was lost.

My question: can a ConsistencyLevel.ALL read retrieve the lost
tombstone, instead of running repair?
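
For concreteness, a read at ConsistencyLevel.ALL with the 0.6 Thrift
API might look like the sketch below. The keyspace, column family,
super column, and key names are placeholders, not values from this
thread; and, as Jonathan notes above, such a read can only propagate a
tombstone that has not yet expired.

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.ColumnPath;
    import org.apache.cassandra.thrift.ConsistencyLevel;
    import org.apache.cassandra.thrift.NotFoundException;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    // Sketch against the Cassandra 0.6 Thrift API. "Keyspace1",
    // "Super1", "sc", "col", and "key1" are made-up names.
    public class ReadAtAll {
        public static void main(String[] args) throws Exception {
            TTransport transport = new TSocket("localhost", 9160);
            Cassandra.Client client =
                    new Cassandra.Client(new TBinaryProtocol(transport));
            transport.open();
            try {
                ColumnPath path = new ColumnPath("Super1"); // a super CF
                path.setSuper_column("sc".getBytes("UTF-8"));
                path.setColumn("col".getBytes("UTF-8"));
                // CL.ALL consults every replica; a mismatch triggers read
                // repair, which can push an un-expired tombstone to the
                // replica that missed the delete.
                client.get("Keyspace1", "key1", path, ConsistencyLevel.ALL);
                System.out.println("column still live on some replica");
            } catch (NotFoundException deleted) {
                System.out.println("delete is visible at ALL");
            } finally {
                transport.close();
            }
        }
    }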
(gurus, please check my logic here... I'm trying to validate my
understanding of this situation.)

Isn't the issue that while a server was disconnected, a delete could
have occurred, and thus the disconnected server never got the
'tombstone'? (http://wiki.apache.org/cassandra/DistributedDeletes)
On Tue, Aug 17, 2010 at 2:49 PM, Jonathan Ellis wrote:
> It doesn't have to be disconnected more than GC grace seconds to cause
> what you are seeing, it just has to be disconnected at all (thus
> missing delete commands).
>
> Thus you need to be running repair more often than gcgrace, or
> confident that read repair will handle it for you (which clearly is
> not the case here).
It doesn't have to be disconnected more than GC grace seconds to cause
what you are seeing, it just has to be disconnected at all (thus
missing delete commands).

Thus you need to be running repair more often than gcgrace, or
confident that read repair will handle it for you (which clearly is
not the case here).
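
To see why a brief disconnect is enough, the toy model below
reconciles two replicas by highest timestamp, the way a column is
resolved against a tombstone (a simplification for illustration, not
Cassandra code). While the tombstone exists, reconciliation re-deletes
the stale copy; once the tombstone is collected after GCGraceSeconds,
the stale copy is the only version left and the delete is undone.

    import java.util.HashMap;
    import java.util.Map;

    // Toy model of distributed deletes: values and tombstones are both
    // timestamped writes, and reconciliation keeps the newest one.
    public class ResurrectionDemo {
        static final class Cell {
            final long timestamp;
            final boolean tombstone;
            Cell(long timestamp, boolean tombstone) {
                this.timestamp = timestamp;
                this.tombstone = tombstone;
            }
        }

        // Keep whichever cell has the higher timestamp.
        static Cell reconcile(Cell a, Cell b) {
            if (a == null) return b;
            if (b == null) return a;
            return a.timestamp >= b.timestamp ? a : b;
        }

        public static void main(String[] args) {
            Map<String, Cell> nodeA = new HashMap<String, Cell>();
            Map<String, Cell> nodeB = new HashMap<String, Cell>();

            // t=1: the write reaches both replicas.
            nodeA.put("key1", new Cell(1, false));
            nodeB.put("key1", new Cell(1, false));

            // t=2: the delete arrives while node B is disconnected,
            // so only node A records the tombstone.
            nodeA.put("key1", new Cell(2, true));

            // Repair before GC grace: the tombstone is newest, so the
            // delete is preserved.
            System.out.println("before GC grace: deleted = "
                    + reconcile(nodeA.get("key1"), nodeB.get("key1")).tombstone);

            // After GCGraceSeconds, node A garbage-collects the tombstone.
            nodeA.remove("key1");

            // Node B's stale value is now the only copy, so repair copies
            // it back everywhere: the data "comes back".
            System.out.println("after GC grace: deleted = "
                    + reconcile(nodeA.get("key1"), nodeB.get("key1")).tombstone);
        }
    }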
864000. It is the default, 10 days.

I checked all the system.logs; all nodes are connected. Although not
all the time -- but they reconnected after a few minutes. No node was
disconnected for more than GC grace seconds.

Best,

On Aug 17, 2010, at 11:53 AM, Peter Schuller wrote:
>> We have 10 nodes across 5 datacenters. Today I found a strange thing. On
>> one node, a few pieces of deleted data came back after 8-9 days.
>>
>> The data were saved on one node and retrieved/deleted on another node in a
>> remote datacenter. The CF is a super column.
>>
>> What is possibly causing this?

What is your GCGraceSeconds set to?
Cassandra version is 0.6.3
On Aug 17, 2010, at 11:39 AM, Zhong Li wrote:
> Hi All,
>
> We have a strange issue here.
>
> We have 10 nodes across 5 datacenters. Today I found a strange thing.
> On one node, a few pieces of deleted data came back after 8-9 days.
>
> The data were saved on one node and retrieved/deleted on another node
> in a remote datacenter. The CF is a super column.
Hi All,

We have a strange issue here.

We have 10 nodes across 5 datacenters. Today I found a strange thing.
On one node, a few pieces of deleted data came back after 8-9 days.

The data were saved on one node and retrieved/deleted on another node
in a remote datacenter. The CF is a super column.

What is possibly causing this?