I did some tests on this issue, and it turns out the problem is caused by the local
timestamp.
In our traffic, the update and delete happen very fast, within 1 second,
even within 100 ms.
And at that time, the NTP service does not seem to work well; the offset is
sometimes even larger than 1 second.
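The effect described above can be illustrated with a minimal sketch (this is not Cassandra's actual code, just a simulation of timestamp-based last-write-wins reconciliation): each mutation carries a timestamp from the issuing node's clock, and the mutation with the highest timestamp wins, so a delete stamped by a node whose clock lags can lose to an earlier update.

```python
# Sketch of last-write-wins reconciliation under clock skew.
# A cell is modeled as a (timestamp, value) pair; value None stands
# for a tombstone (delete marker).

def reconcile(a, b):
    """Return the winning (timestamp, value) pair; highest timestamp wins."""
    return a if a[0] >= b[0] else b

# Node A's clock reads 1000 (microseconds, say) when the update happens.
update = (1000, "v1")
# The delete happens ~100 ms later in real time, but the node that stamps
# it has a clock roughly 1 second behind, so the tombstone gets timestamp 100.
delete = (100, None)

winner = reconcile(update, delete)
print(winner)  # (1000, 'v1') -- the "deleted" value is still readable
```

With well-synchronized clocks the tombstone would carry the larger timestamp and the delete would win as expected.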
Then the
Jason,
Are you able to document the steps to reproduce this on a clean install?
If so, do you have time to create an issue on
https://issues.apache.org/jira/browse/CASSANDRA
Thanks
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 2
For the create/update/deleteColumn/deleteRow test case, at Quorum
consistency level, with 6 nodes and replication factor 3, with one thread I can
reproduce this in roughly 1 out of 100 rounds.
And if I run the test client with 20 threads, the ratio is higher.
And the test group will be executed by one
Is this Cassandra 1.1.1?
How often do you observe this? How many columns are in the row? Can
you reproduce when querying by column name, or only when "slicing" the
row?
On Thu, Jun 28, 2012 at 7:24 AM, Jason Tang wrote:
> Hi
>
> First I delete one column, then I delete one row. Then try to
Hi
First I delete one column, then I delete one row. Then I try to read all
columns from the same row; all operations are from the same client app.
The consistency level is read/write quorum.
Checking the Cassandra log, the local node doesn't perform the delete
operation but sends the mutation to other
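Since all operations come from the same client app, one possible mitigation (a sketch under the assumption that the client can attach its own timestamps to mutations, rather than letting each server node stamp them from its local clock) is to generate strictly increasing timestamps on the client side, so a delete issued after an update can never sort before it:

```python
# Sketch of a client-side monotonic timestamp generator. The name
# MonotonicTimestamps is hypothetical, not part of any Cassandra client.
import threading
import time

class MonotonicTimestamps:
    def __init__(self):
        self._lock = threading.Lock()
        self._last = 0

    def next(self):
        """Return microseconds since the epoch, strictly increasing per call,
        even if the system clock steps backwards between calls."""
        with self._lock:
            now = int(time.time() * 1_000_000)
            self._last = max(now, self._last + 1)
            return self._last

ts = MonotonicTimestamps()
update_ts, delete_ts = ts.next(), ts.next()
assert delete_ts > update_ts  # the later delete always wins reconciliation
```

This only helps when the racing mutations come from one client; mutations from several clients would still depend on those clients' clocks being in sync.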