Per Aleksey Yeschenko's comment on that ticket, it does seem like a
timestamp granularity issue, but it should work properly if it is within
the same session. gocql by default uses 2 connections and 128 streams per
connection. If you set it to 1 connection with 1 stream this problem goes
away. I su
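For reference, the single-connection setup described above can be pinned down in gocql roughly like this. This is a sketch, not code from the thread: the keyspace name is hypothetical, and per-connection stream limits were only tunable in some gocql versions.

```go
package main

import "github.com/gocql/gocql"

func main() {
	// Sketch: force gocql down to a single connection so all requests
	// share one socket. (Per-connection stream limits were only
	// directly configurable in older gocql releases.)
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Keyspace = "test" // hypothetical keyspace
	cluster.NumConns = 1      // gocql's default is 2
	session, err := cluster.CreateSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
}
```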
The more I think about it, the more this feels like a column timestamp
issue. If two inserts have the same timestamp then the values are compared
lexically to decide which one to keep (which I think explains the
"99"/"100" "999"/"1000" mystery).
We can verify this by also selecting out the WRITETIME.
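One way to run that check, assuming a hypothetical table `ks.tbl` with a text column `val`: if both versions of the row carry the same write timestamp (microseconds), the tie-break above is the likely explanation.

```cql
-- Read the value together with its write timestamp.
SELECT val, WRITETIME(val) FROM ks.tbl WHERE id = 1;
```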
Yeah, I thought that was suspicious too; it's mysterious and fairly
consistent. (By the way, I had error checking but removed it for email
brevity; thanks for verifying :) )
On Mon, Mar 2, 2015 at 4:13 PM, Peter Sanford wrote:
> Hmm. I was able to reproduce the behavior with your go program on
Hmm. I was able to reproduce the behavior with your go program on my dev
machine (C* 2.0.12). I was hoping it was going to just be an unchecked
error from the .Exec() or .Scan(), but that is not the case for me.
The fact that the issue seems to happen on loop iterations 10, 100 and 1000
is pretty suspicious.
Done: https://issues.apache.org/jira/browse/CASSANDRA-8892
On Mon, Mar 2, 2015 at 3:26 PM, Robert Coli wrote:
> On Mon, Mar 2, 2015 at 11:44 AM, Dan Kinder wrote:
>
>> I had been having the same problem as in that older post:
>> http://mail-archives.apache.org/mod_mbox/cassandra-user/201411.mb
On Mon, Mar 2, 2015 at 11:44 AM, Dan Kinder wrote:
> I had been having the same problem as in that older post:
> http://mail-archives.apache.org/mod_mbox/cassandra-user/201411.mbox/%3CCAORswtz+W4Eg2CoYdnEcYYxp9dARWsotaCkyvS5M7+Uo6HT1=a...@mail.gmail.com%3E
>
As I said on that thread:
"It soun
Hey all,
I had been having the same problem as in that older post:
http://mail-archives.apache.org/mod_mbox/cassandra-user/201411.mbox/%3CCAORswtz+W4Eg2CoYdnEcYYxp9dARWsotaCkyvS5M7+Uo6HT1=a...@mail.gmail.com%3E
To summarize it, on my local box with just one Cassandra node I can update
and then s
For cqlengine we do quite a bit of write-then-read to ensure data was
written correctly, across 1.2, 2.0, and 2.1. For what it's worth,
I've never seen this issue come up. On a single node, Cassandra only
acks the write after it's been written into the memtable. So, you'd
expect to see the most recent value.
On Thu, Nov 6, 2014 at 6:14 AM, Brian Tarbox wrote:
> We write values to our keyspaces and then immediately read the values back
> (in our Cucumber tests). About 20% of the time we get the old value. If
> we wait 1 second and redo the query (within the same Java method) we get
> the new value.
Thanks. Right now it's just for testing, but in general we can't guard
against the situation where one user writes and then another reads.
It would be one thing if the read just returned old data, but we're seeing
it return wrong data, i.e. data that doesn't correspond to any particular
version of the data.
If this is just for doing tests to make sure you get back the data you
expect, I would recommend looking into some sort of eventually construct in
your testing. We use Specs2 as our testing framework, and our
write-then-read tests look something like this:
someDAO.write(someObject)
eventually {
s
We're doing development on a single node cluster (and yes of course we're
not really deploying that way), and we're getting inconsistent behavior on
reads after writes.
We write values to our keyspaces and then immediately read the values back
(in our Cucumber tests). About 20% of the time we get the old value.