Hi Michael,

Thanks for your reply.
I don't think this issue is related to CASSANDRA-12765
<https://issues.apache.org/jira/browse/CASSANDRA-12765>, as in my case the
sstable which has the tombstone does not have maxLocalDeletionTime ==
Integer.MAX_VALUE. I am able to reproduce this issue on 2.1.17 as well.

I am attaching the steps to reproduce on 2.1.17 (with a minor change from the
previous steps to make sure one request must go to the node which has the old
mutation). I have also attached the trace of the range read query.

Should I raise a JIRA for this?

On Wed, Jun 12, 2019 at 9:00 AM Michael Shuler <mich...@pbandjelly.org>
wrote:

> (dropped dev@ x-post; user@ was correct)
>
> Possibly #12765, fixed in 2.1.17. Wouldn't hurt to update to latest 2.1.21.
>
> https://issues.apache.org/jira/browse/CASSANDRA-12765
> https://github.com/apache/cassandra/blob/cassandra-2.1/CHANGES.txt#L1-L36
>
> Michael
>
> On 6/11/19 9:58 PM, Laxmikant Upadhyay wrote:
> > Does a range query ignore purgeable tombstones (which have crossed the
> > grace period) in some cases?
> >
> > On Tue, Jun 11, 2019, 2:56 PM Laxmikant Upadhyay
> > <laxmikant....@gmail.com <mailto:laxmikant....@gmail.com>> wrote:
> >
> >     In a 3-node Cassandra 2.1.16 cluster, one node has an old mutation
> >     and two nodes have an evictable tombstone (past the gc grace period)
> >     produced by a TTL. A range read query at LOCAL_QUORUM returns the
> >     old mutation as the result, whereas the expected result is empty.
> >     Running the same query a second time returns no data, as expected.
> >     Why this strange behaviour?
> >
> >
> >     *Steps to Reproduce :*
> >     Create a cassandra-2.1.16  3 node cluster. Disable hinted handoff
> >     for each node.
> >
> >     #ccm node1 nodetool ring
> >     Datacenter: datacenter1
> >     ==========
> >     Address    Rack   Status State   Load       Owns     Token
> >                                                          3074457345618258602
> >     127.0.0.1  rack1  Up     Normal  175.12 KB  100.00%  -9223372036854775808
> >     127.0.0.2  rack1  Up     Normal  177.87 KB  100.00%  -3074457345618258603
> >     127.0.0.3  rack1  Up     Normal  175.13 KB  100.00%  3074457345618258602
> >
> >     #Connect to cqlsh and set CONSISTENCY LOCAL_QUORUM;
> >
> >     cqlsh> CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = {
> >     'class' : 'NetworkTopologyStrategy', 'datacenter1' : 3 };
> >     cqlsh> CREATE TABLE test.table1 (key text, col text, val
> >     text,PRIMARY KEY ((key), col));
> >     cqlsh> ALTER TABLE test.table1  with GC_GRACE_SECONDS = 120;
> >
> >     cqlsh> INSERT INTO test.table1  (key, col, val) VALUES ('key2',
> >     'abc','xyz');
> >
> >     #ccm flush
> >
> >     #ccm node3 stop
> >
> >     cqlsh> INSERT INTO test.table1  (key, col, val) VALUES ('key2',
> >     'abc','xyz') USING TTL 60;
> >
> >     #ccm flush
> >
> >     #wait for 3 min so that the tombstone crosses its gc grace period.
> >
> >     #ccm node3 start
> >
> >     cqlsh> select * from test.table1 where token (key) >
> >     3074457345618258602 and token (key) < 9223372036854775807 ;
> >
> >       key  | col | val
> >     ------+-----+-----
> >       key2 | abc | xyz
> >
> >     (1 rows)
> >
> >     #ccm flush
> >     -> Here read repair triggers and the old mutation moves to one of
> >     the nodes where the tombstone is present (not both nodes)
> >
> >
> >     cqlsh> select * from test.table1 where token (key) >
> >     3074457345618258602 and token (key) < 9223372036854775807 ;
> >
> >       key | col | val
> >     -----+-----+-----
> >
> >     (0 rows)
> >
> >
> >     --
> >
> >     regards,
> >     Laxmikant Upadhyay
> >
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>

-- 

regards,
Laxmikant Upadhyay
# Create a ccm cluster on version 2.1.17

  # ccm create test1 -v 2.1.17
  
# Populate cluster with 3 nodes.

  # ccm populate -n 3
  
# Disable hinted handoff on all nodes of the cluster 
# Start all the nodes of the cluster

  #ccm start

# Run nodetool ring on any of the nodes to find the token ranges.

  # ccm node1 nodetool ring
  
Datacenter: datacenter1
==========
Address    Rack   Status State   Load      Owns    Token
                                                   3074457345618258602
127.0.0.1  rack1  Up     Normal  46.6 KB   66.67%  -9223372036854775808
127.0.0.2  rack1  Up     Normal  91.86 KB  66.67%  -3074457345618258603
127.0.0.3  rack1  Up     Normal  46.65 KB  66.67%  3074457345618258602

# Connect to cqlsh and set CONSISTENCY LOCAL_QUORUM;

  # ccm node1 cqlsh
  cqlsh> CONSISTENCY LOCAL_QUORUM;
  
# Create keyspace with replication factor 3
   
   cqlsh> CREATE KEYSPACE IF NOT EXISTS test WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 3 };

# Create a table and alter gc_grace_seconds to 120 seconds.

   cqlsh> CREATE TABLE test.table1 (key text, col text, val text, PRIMARY KEY ((key), col));
   cqlsh> ALTER TABLE test.table1  with GC_GRACE_SECONDS = 120;
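
For reference, gc_grace_seconds = 120 means a tombstone may be dropped once its deletion time is more than 120 seconds in the past. A minimal sketch of that rule (is_purgeable is a hypothetical helper, not Cassandra's actual code):

```python
import time

GC_GRACE_SECONDS = 120  # matches the ALTER TABLE above

def is_purgeable(local_deletion_time: int, now: int = None) -> bool:
    """A tombstone may be dropped during compaction once
    gc_grace_seconds have elapsed since the deletion."""
    now = int(time.time()) if now is None else now
    return local_deletion_time + GC_GRACE_SECONDS < now

# A tombstone created 181 seconds ago is purgeable...
assert is_purgeable(local_deletion_time=1000, now=1181)
# ...one created 60 seconds ago is not.
assert not is_purgeable(local_deletion_time=1000, now=1060)
```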
   
# Insert data into the table without a TTL first

   cqlsh> INSERT INTO test.table1  (key, col, val) VALUES ('key2', 'abc','xyz');
   
# Flush the data

   #ccm flush   

# Stop one of the nodes of the cluster

   # ccm node3 stop
   
# Insert data with the same partition key again, this time with a TTL of 60 
seconds.

   cqlsh> INSERT INTO test.table1 (key, col, val) VALUES ('key2', 'abc','xyz') USING TTL 60;
   
# Flush the data

   #ccm flush

# Wait for 3 minutes so that the TTL'd data becomes a tombstone and crosses 
gc_grace_seconds, making it evictable.
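
Why 3 minutes: the row expires after its 60-second TTL, and the resulting tombstone must then outlive gc_grace_seconds = 120 before it is purgeable, so 180 seconds total from the INSERT. A quick check of that arithmetic:

```python
TTL_SECONDS = 60
GC_GRACE_SECONDS = 120

# Seconds after the INSERT at which the tombstone becomes purgeable.
purgeable_after = TTL_SECONDS + GC_GRACE_SECONDS
print(purgeable_after)  # 180 -> the 3-minute wait above
```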
# Start the stopped node

   # ccm node3 start

# Stop one of the nodes (node1 or node2) other than the one stopped earlier 
(node3), to make sure the range query request must go to node3, which has the 
old mutation.

   # ccm node2 stop
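
Stopping node2 works because LOCAL_QUORUM on RF=3 needs two replicas; with node2 down, the coordinator has no choice but to include node3, the replica holding only the old mutation. A sketch of the quorum arithmetic:

```python
REPLICATION_FACTOR = 3

def quorum(rf: int) -> int:
    # Cassandra's quorum size: floor(rf / 2) + 1
    return rf // 2 + 1

alive = ["node1", "node3"]   # node2 was stopped above
needed = quorum(REPLICATION_FACTOR)
print(needed)                # 2
assert len(alive) >= needed  # the read can still proceed...
assert "node3" in alive      # ...and must touch node3
```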

# Select with range query.    

   cqlsh> select * from test.table1 where token (key) > 3074457345618258602 and token (key) < 9223372036854775807 ;

Here the result set contains a row; however, the expected result is an empty 
response.
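
One way to picture the suspected behaviour: if a replica drops its purgeable tombstone before answering the range read, the coordinator's merge sees only the old live cell and returns it. A toy model of that reconciliation (illustrative only, not Cassandra's actual read path):

```python
# (value, write timestamp, is_tombstone) per replica for key2/abc.
old_cell = ("xyz", 100, False)  # node3: the pre-TTL write
tombstone = (None, 200, True)   # node1/node2: the expired TTL write

def replica_answer(cell, tombstone_purgeable):
    """A replica that treats a purgeable tombstone as already
    gone answers as if the cell never existed."""
    if cell[2] and tombstone_purgeable:
        return None
    return cell

def reconcile(answers):
    """Coordinator keeps the answer with the highest timestamp."""
    answers = [a for a in answers if a is not None]
    return max(answers, key=lambda c: c[1]) if answers else None

# Quorum read hits node3 (old cell) and node1 (purged tombstone):
result = reconcile([replica_answer(old_cell, False),
                    replica_answer(tombstone, True)])
print(result)  # ('xyz', 100, False) -> the deleted row comes back
```

Had the tombstone still been present in node1's answer, its higher timestamp (200 > 100) would have shadowed the old cell and the result would have been empty.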

# Flush the data
  
   # ccm flush
      
    

cqlsh> select * from test.table1 where token (key) > 3074457345618258602 and token (key) < 9223372036854775807 ;

 key  | col | val
------+-----+-----
 key2 | abc | xyz

(1 rows)

Tracing session: d790d030-8d9b-11e9-bdf4-8d011a7653f5

 activity | timestamp | source | source_elapsed
----------+-----------+--------+----------------
 Execute CQL3 query | 2019-06-13 10:56:48.756000 | 127.0.0.1 | 0
 PAGED_RANGE message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] | 2019-06-13 10:56:48.760000 | 127.0.0.3 | 30
 Parsing select * from test.table1 where token (key) > 3074457345618258602 and token (key) < 9223372036854775807 ; [SharedPool-Worker-1] | 2019-06-13 10:56:48.765000 | 127.0.0.1 | 77
 Executing seq scan across 1 sstables for (max(3074457345618258602), min(9223372036854775807)] [SharedPool-Worker-2] | 2019-06-13 10:56:48.766000 | 127.0.0.3 | 5297
 Read 1 live and 0 tombstone cells [SharedPool-Worker-2] | 2019-06-13 10:56:48.768000 | 127.0.0.3 | 5812
 Preparing statement [SharedPool-Worker-1] | 2019-06-13 10:56:48.768000 | 127.0.0.1 | 235
 Scanned 1 rows and matched 1 [SharedPool-Worker-2] | 2019-06-13 10:56:48.768000 | 127.0.0.3 | 5894
 Computing ranges to query [SharedPool-Worker-1] | 2019-06-13 10:56:48.768000 | 127.0.0.1 | 541
 Enqueuing response to /127.0.0.1 [SharedPool-Worker-2] | 2019-06-13 10:56:48.769000 | 127.0.0.3 | 5923
 Submitting range requests on 1 ranges with a concurrency of 1 (307.2 rows per range expected) [SharedPool-Worker-1] | 2019-06-13 10:56:48.769000 | 127.0.0.1 | 658
 Sending REQUEST_RESPONSE message to /127.0.0.1 [MessagingService-Outgoing-/127.0.0.1] | 2019-06-13 10:56:48.769000 | 127.0.0.3 | 6911
 Enqueuing request to /127.0.0.3 [SharedPool-Worker-1] | 2019-06-13 10:56:48.770000 | 127.0.0.1 | 777
 Enqueuing request to /127.0.0.1 [SharedPool-Worker-1] | 2019-06-13 10:56:48.773000 | 127.0.0.1 | 812
 Submitted 1 concurrent range requests covering 1 ranges [SharedPool-Worker-1] | 2019-06-13 10:56:48.774000 | 127.0.0.1 | 883
 Sending PAGED_RANGE message to /127.0.0.3 [MessagingService-Outgoing-/127.0.0.3] | 2019-06-13 10:56:48.788000 | 127.0.0.1 | 3085
 Sending PAGED_RANGE message to /127.0.0.1 [MessagingService-Outgoing-/127.0.0.1] | 2019-06-13 10:56:48.791000 | 127.0.0.1 | 6932
 PAGED_RANGE message received from /127.0.0.1 [MessagingService-Incoming-/127.0.0.1] | 2019-06-13 10:56:48.793000 | 127.0.0.1 | 7163
 Executing seq scan across 2 sstables for (max(3074457345618258602), min(9223372036854775807)] [SharedPool-Worker-3] | 2019-06-13 10:56:48.794000 | 127.0.0.1 | 7528
 Read 0 live and 0 tombstone cells [SharedPool-Worker-3] |