Likely in the next few weeks.
On Mon., 23 Jul. 2018, 01:17 Abdul Patel, wrote:
> Any idea when 3.11.3 is coming in?
>
> On Tuesday, June 19, 2018, kurt greaves wrote:
>
>> At this point I'd wait for 3.11.3. If you can't, you can get away with
>> backporting a few repair fixes or just doing sub range repairs on 3.11.2
Due to how Spring Data binding works, you have to write queries
explicitly with "...FROM keyspace.table ..." in either the
template-method classes (CqlTemplate, etc.) or via @Query annotations
to avoid the 'use keyspace' overhead. For example, a Repository
implementation for a User class could look like the sketch below.
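A minimal sketch of what that can look like, assuming Spring Data for
Apache Cassandra 2.x; the keyspace "my_ks", the users table, and the User
fields are placeholders, not from the original message:

    import scala.annotation.meta.field
    import org.springframework.data.cassandra.core.cql.CqlTemplate
    import org.springframework.data.cassandra.core.mapping.{PrimaryKey, Table}
    import org.springframework.data.cassandra.repository.{CassandraRepository, Query}

    // Placeholder entity; the real User class was cut off above.
    @Table("users")
    case class User(@(PrimaryKey @field) id: String, name: String)

    trait UserRepository extends CassandraRepository[User, String] {
      // Fully qualified "keyspace.table" means no 'use keyspace' round trip.
      @Query("SELECT * FROM my_ks.users WHERE id = ?0")
      def findUser(id: String): User
    }

    // Same idea in the template-method style:
    def findUserName(template: CqlTemplate, id: String): String =
      template.queryForObject("SELECT name FROM my_ks.users WHERE id = ?",
        classOf[String], id)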
Hi Folks:
I am working on a project to save a Spark dataframe to Cassandra and am
getting an exception saying the row size is not valid (see below). I tried
to trace the code in the connector, and it appears that the row size (3
below) is different from the column count (which turns out to be 1). I
Hi Folks:
Just checking if anyone has any pointers for the Cassandra Spark connector
issue I've mentioned: an IllegalArgumentException when executing save after
repartitioning by Cassandra replica:

    val customersRdd = customers.rdd.repartitionByCassandraReplica("test", "customers")
    customersRdd.saveToCassandra("test", "customers")
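In case it helps with reproduction: one workaround people use is mapping
the DataFrame to a typed RDD before repartitioning, since
repartitionByCassandraReplica needs a row writer for the element type.
A hedged sketch, assuming a SparkSession named spark and a table
test.customers(id int PRIMARY KEY, name text, email text), neither of
which is from the original message:

    import com.datastax.spark.connector._
    import spark.implicits._  // Dataset encoders; assumes SparkSession "spark"

    // Assumed schema; the real customers table was not shown in the thread.
    case class Customer(id: Int, name: String, email: String)

    val typedRdd = customers.as[Customer].rdd  // RDD[Customer], not RDD[Row]
      .repartitionByCassandraReplica("test", "customers")
    typedRdd.saveToCassandra("test", "customers")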
Hi Shalom,
Thanks very much for the response!
We are only using batches on one Cassandra partition, to improve
performance. Batches are NEVER used in this app across Cassandra partitions.
And if you look at the trace
messages I showed, there is only one statement per batch anyway.
In fact, what I
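For anyone skimming the thread, a minimal sketch of the pattern described
above, i.e. an unlogged batch whose statements all share one partition key
(written against the DataStax Java driver 3.x; the test.events table and
its columns are made up):

    import com.datastax.driver.core.{BatchStatement, Cluster}

    val cluster = Cluster.builder().addContactPoint("127.0.0.1").build()
    val session = cluster.connect()
    val insert  = session.prepare(
      "INSERT INTO test.events (pk, seq, payload) VALUES (?, ?, ?)")

    // Every statement targets partition key "user-1", so the whole batch
    // lands on one replica set instead of fanning out across partitions.
    val batch = new BatchStatement(BatchStatement.Type.UNLOGGED)
    batch.add(insert.bind("user-1", Int.box(1), "first"))
    batch.add(insert.bind("user-1", Int.box(2), "second"))
    session.execute(batch)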
Any idea when 3.11.3 is coming in?
On Tuesday, June 19, 2018, kurt greaves wrote:
> At this point I'd wait for 3.11.3. If you can't, you can get away with
> backporting a few repair fixes or just doing sub range repairs on 3.11.2
>
> On Wed., 20 Jun. 2018, 01:10 Abdul Patel, wrote:
>
>> Hi All,
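(For reference, the sub range repairs mentioned above are driven with
nodetool's -st/-et token-range flags; the tokens and keyspace below are
placeholders:)

    nodetool repair -st -9223372036854775808 -et -4611686018427387904 my_keyspace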
Thanks Jeff. At the time of the crash it said:

    .../linux-4.4.0/mm/pgtable-generic.c:33: bad pmd

So I just ran this on all of my nodes:

    echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
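To double-check that it took effect on a node, the active value is shown
in brackets:

    cat /sys/kernel/mm/transparent_hugepage/defrag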
I'm using RF=2 and write consistency = ONE. Is there a counter in Cassandra
JMX that reports the number of writes acknowledged by only one node (instead
of both replicas)? Although I don't require all replicas to acknowledge the
write, I consider that the normal state of the cluster.
Hi Gareth,
If you're using batches for multiple partitions, this may be the root cause
you've been looking for.
https://inoio.de/blog/2016/01/13/cassandra-to-batch-or-not-to-batch/
If batches are used optimally and only one node is misbehaving, check
whether NTP on that node is properly synced.
Hope this helps.
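For example, ntpq -p lists the node's NTP peers with their offsets; a large
offset or no reachable peer is a quick sign the clock has drifted:

    ntpq -p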