Thanks for the link, I hadn't seen that before.
It's unfortunate that they don't explain what they mean by "closest replica".
The nodes in the remote DC should not be regarded as "closest". Also, it's not
clear what the black arrows mean… the coordinator sends the read to all three
replicas, bu
Dmesg will often print a message saying that the kernel's OOM killer had to
kill a process when the server was short of memory, so you will have to dump
the output to a file and check.
If a process is killed to reclaim memory for the system, the kernel will dump
a list of all processes along with the actual process that was kil
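As a sketch of what to look for (the log lines below are simulated, and on some systems you may need `journalctl -k` instead of `dmesg`):

```shell
# On a live server you would run something like:
#   dmesg | grep -iE 'out of memory|killed process' > oom.log
# Here the same filter runs over a simulated dmesg excerpt so the
# example is self-contained; -c counts matching lines, -i ignores case.
printf '%s\n' \
  'Out of memory: Kill process 1234 (java) score 905 or sacrifice child' \
  'Killed process 1234 (java) total-vm:8388608kB, anon-rss:4194304kB' \
  'eth0: link becomes ready' \
| grep -ciE 'out of memory|killed process'
# prints 2  (two OOM-related lines matched)
```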
Hi,
I have several non-primitive columns in my Cassandra tables.
Some of them are user-defined types (UDTs).
While querying them through the DataStax driver, I want to convert such UDTs
into JSON values.
More specifically, I want to get JSON string for the value object below:
Row row = itr
Hi Folks,
The Apache Gora team is pleased to announce the immediate availability of
Apache Gora 0.7.
The Apache Gora open source framework provides an in-memory data model and
persistence for big data. Gora supports persisting to column stores, key-value
stores, document stores and RDBMSs, and an
Hello Cassandra-Users and Cassandra-dev,
One of the handy features of sstablemetadata in Cassandra 2.1.15 was that it
displayed the ancestor information of an SSTable. Here is a sample output of
the sstablemetadata tool with the ancestors information in C* 2.1.15:
[centos@chen-datos test
That information was removed, because it was really meant to be used for a
handful of internal tasks, most of which were no longer used. The remaining
use was cleaning up compaction leftovers, and the compaction leftover code
was rewritten in 3.0 / CASSANDRA-7066 (note, though, that it's somewhat
i
Thanks, Jeff. Did all the internal tasks and the compaction tasks move to a
timestamp-based approach?
Regards,
Rajath
Rajath Subramanyam
On Thu, Mar 23, 2017 at 2:12 PM, Jeff Jirsa wrote:
> That information was removed, because it was really meant to be used for a
> h
The ancestors were used primarily to clean up leftovers in the case that
cassandra was killed right as compaction finished, where the
source/origin/ancestors were still on the disk at the same time as the
compaction result.
It's not timestamp-based, though - that compaction process has moved to
us
Assuming an even distribution of data in your cluster, and an even
distribution across those keys by your readers, you would not need to
increase RF with cluster size to increase read performance. If you have 3
nodes with RF=3, and do 3 million reads, with good distribution, each node
has served 1 million reads.
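The arithmetic can be sketched like this (cluster sizes beyond 3 are hypothetical; this assumes reads at consistency level ONE, so each read is served by a single replica):

```shell
# With an even distribution, per-node read load depends on the node
# count, not on RF: RF only controls how many copies of the data exist,
# not how many nodes serve each CL=ONE read.
reads=3000000
for nodes in 3 6 12; do
  echo "$nodes nodes: $((reads / nodes)) reads per node"
done
# prints:
#   3 nodes: 1000000 reads per node
#   6 nodes: 500000 reads per node
#   12 nodes: 250000 reads per node
```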
Thanks Jayesh,
I found the fix for the same.
I made the below changes:
In /etc/sysctl.conf:
vm.max_map_count = 1048575
In a file under /etc/security/limits.d/:
root - memlock unlimited
root - nofile 10
root - nproc 32768
root - as unlimited
Thanks & Regards,
Abhishek Kumar Ma
On 24/03/2017 01:00, Eric Stevens wrote:
> Assuming an even distribution of data in your cluster, and an even
> distribution across those keys by your readers, you would not need to
> increase RF with cluster size to increase read performance. If you have
> 3 nodes with RF=3, and do 3 million reads, wit