Hi Alain,
Thanks for your reply.
We understood that there is a chance that this would be left unresolved, since
we are really way behind the official Cassandra releases.
Here is what we have further found for the OOM issue, which seems to be related to
the number of gossip messages accumulated on a live node
Hi,
If I have inconsistent data on nodes:
Scenario:
I have 2 DCs, each with 3 nodes,
and I have inconsistent data on them.
nodes 1,2,3,4,5 have:
P1,100,A,val1,w1
P1,100,B,val2,w2
node 6 has:
P1,100,A,val1,w1
P1,100,B,val2,w2
P1,200,C,val3,w3
P1,200,D,val4,w4
The table has columns col1, col2, col3, col4, col5.
Primary
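One way to see what a read that reaches node 6 would return is to model each replica as a mapping from (partition key, clustering columns) to the remaining columns and take the union, roughly the way a coordinator merges replica responses. This is a toy Python sketch; all names and structures here are illustrative, not Cassandra's internals:

```python
# Toy model of a coordinator merging divergent replica responses.
# Row key: (col1, col2, col3); value: (col4, col5). Illustrative only.

def merge_replicas(*replicas):
    """Union the rows seen across replicas: a read that contacts
    enough replicas sees the superset of their rows."""
    merged = {}
    for replica in replicas:
        merged.update(replica)
    return merged

nodes_1_to_5 = {
    ("P1", 100, "A"): ("val1", "w1"),
    ("P1", 100, "B"): ("val2", "w2"),
}
node_6 = {
    ("P1", 100, "A"): ("val1", "w1"),
    ("P1", 100, "B"): ("val2", "w2"),
    ("P1", 200, "C"): ("val3", "w3"),
    ("P1", 200, "D"): ("val4", "w4"),
}

merged = merge_replicas(nodes_1_to_5, node_6)
print(sorted(merged))
```

So a read whose consistency level forces the coordinator to consult node 6 would surface all four rows, while one served entirely by nodes 1-5 would see only two.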
Hello,
As far as I know, lightweight transactions only apply to a single
partition, so in your case it will only execute on the nodes responsible
for that partition. As a consequence, those nodes will all be in the
same state when the transaction ends (if it applies).
Please refer to this
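To illustrate the converge-together property, here is a deliberately simplified compare-and-set model in Python. It ignores Paxos entirely and only shows the behavior described above: the condition is evaluated once, and all replicas of the one partition end in the same state. Every name here is hypothetical:

```python
# Toy compare-and-set on a single partition's replicas.
# Not Paxos; just the "check once, then all replicas converge" shape.

def lwt_update(replicas, key, expected, new_value):
    """Apply new_value on every replica of the partition only if the
    current value matches `expected`. Returns whether it applied."""
    current = replicas[0].get(key)    # replicas agree on the decided value
    if current != expected:
        return False                  # [applied] = False
    for r in replicas:
        r[key] = new_value            # every replica ends in the same state
    return True

replicas = [{"k": 1}, {"k": 1}, {"k": 1}]
assert lwt_update(replicas, "k", 1, 2) is True
assert all(r["k"] == 2 for r in replicas)
assert lwt_update(replicas, "k", 1, 3) is False  # condition no longer holds
```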
Hi,
We are using DSC 3.0.3 on a total of *6 nodes*, *2 DCs, 3 nodes each, RF=3*,
so every node has complete data. Now we are facing a situation on a table
with 1 partition key, 2 clustering columns and 4 normal columns.
Out of the 6, 5 nodes have a single value for the partition key and 2 clustering keys
for
On Tue, May 10, 2016 at 6:41 AM, Atul Saroha
wrote:
> I have a concern over using a secondary index on a field with low cardinality.
> Let's say I have a few billion rows and each row can be classified into 1000
> categories. Let's say we have a 50-node cluster.
>
> Now we want to fetch data for a single category
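The back-of-the-envelope arithmetic behind that concern, with the numbers taken from the question (purely illustrative):

```python
# Rough sizing for a low-cardinality secondary index.
# "A few billion rows" taken as 2 billion for concreteness.

rows = 2_000_000_000
categories = 1_000
nodes = 50

rows_per_category = rows // categories      # rows behind one index value
rows_per_node = rows_per_category // nodes  # of those, per node (even spread)

print(rows_per_category)   # 2000000
print(rows_per_node)       # 40000
```

Since secondary indexes in Cassandra are local to each node, a query for one category value cannot be routed to a single replica the way a partition-key lookup can; it has to fan out across the cluster, and each node still returns a large slice.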
Hi, I missed out on some info
node 1,2,3 are in DC1
node 4,5,6 are in DC2
and RF is 3
so all data is on all nodes
@Carlos: There was only one query. And yes, all nodes have the same data
except in col5 only:
node 6 has
P1,100,A,val1,w1
P1,100,B,val2,w2
P1,200,C,val3,w_x
P1,200,D,val4,w4
node 1,2,3,4,5 have
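When replicas disagree on a single cell like col5 above, Cassandra resolves the conflict per cell by write timestamp (last write wins). A minimal Python sketch of that rule, with made-up timestamps:

```python
# Toy last-write-wins reconciliation for one cell.
# Each candidate is (value, write_timestamp); timestamps are invented.

def reconcile(cells):
    """Pick the value carrying the highest write timestamp, the way
    divergent copies of the same cell are resolved on read/repair."""
    return max(cells, key=lambda c: c[1])[0]

# col5 of row (P1, 200, C) as seen on two replicas:
assert reconcile([("w3", 10), ("w_x", 20)]) == "w_x"
assert reconcile([("w_x", 20), ("w3", 10)]) == "w_x"  # order doesn't matter
```

So after a repair, whichever of w3 / w_x was written with the newer timestamp is what every replica converges to.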
Hi there,
Is there any detailed document about the internal storage format for
Cassandra schema? My guess is that Cassandra is using an internal format.
If that's true, I am wondering if we've considered using the Thrift or Avro
format, which might ease the integration between Cassandra and Hadoop.
T
I'd like to get feedback/opinions on a possible workaround for a timestamp +
data consistency edge-case issue.
Context for this question:
When using client timestamps (default timestamp), on C* versions that support it (v3
protocol), on occasion a record update is lost when executing updates in rapid
succession
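A toy model of that edge case, assuming last-write-wins on the cell's timestamp (the timestamp values are made up):

```python
# Demonstration: with last-write-wins, an update issued *later* in
# wall-clock terms but stamped with an *earlier* client timestamp loses.

def apply_write(cell, value, ts):
    """Keep whichever write has the higher timestamp (ties not modeled)."""
    if cell is None or ts > cell[1]:
        return (value, ts)
    return cell

cell = None
cell = apply_write(cell, "v1", ts=1000)  # first update
# Second update arrives later, but that client's clock is behind,
# so it carries a smaller timestamp:
cell = apply_write(cell, "v2", ts=999)
assert cell == ("v1", 1000)              # the newer update "v2" is lost
```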
To clarify - the currentRecordTs would be saved in a field on the record being
persisted.
From: Jen Smith
To: "user@cassandra.apache.org"
Sent: Thursday, May 12, 2016 10:32 AM
Subject: client time stamp - force to be continuously increasing?
Hi,
Among the ideas worth exploring, please note that the DataStax Java driver
for Cassandra now includes a modified version of its monotonic timestamp
generators that will indeed strive to provide rigorously increasing
timestamps, even in the event of system clock skew (in which case, they
would
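In spirit, such a generator never hands out a timestamp lower than or equal to the previous one, falling back to "last + 1" when the clock stalls or jumps backwards. A simplified, single-threaded Python sketch of that idea (not the DataStax implementation):

```python
import time

class MonotonicTimestamps:
    """Strictly increasing microsecond timestamps, even if the
    underlying clock stalls or goes backwards. Illustrative only."""

    def __init__(self, clock_micros=lambda: int(time.time() * 1_000_000)):
        self.clock_micros = clock_micros
        self.last = 0

    def next(self):
        now = self.clock_micros()
        # If the clock regressed or repeated, keep counting up from the
        # last value handed out instead of reusing or going backwards.
        self.last = max(now, self.last + 1)
        return self.last

fake_clock = iter([100, 99, 98, 200])  # a clock that skews backwards
gen = MonotonicTimestamps(lambda: next(fake_clock))
assert [gen.next() for _ in range(4)] == [100, 101, 102, 200]
```

Note that monotonicity here is per generator instance: it does not by itself order writes coming from a pool of independent clients, each with its own clock.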
Thank you - I think this driver solution may address a portion of the problem?
Since this solution is from the driver, is it correct to assume that although
this could potentially fix the issue within a single (client) session, it could
not fix it for a pool of clients, where client A sent the