How much memory do you have? Recently people have been seeing really great
performance using G1GC with heaps > 8GB and offheap memtable objects.
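For reference, a rough sketch of what that combination looks like in cassandra-env.sh; the heap size and pause target below are only illustrative, and the stock CMS / -Xmn settings would need to be commented out first:

    # with MAX_HEAP_SIZE set to something comfortably over 8G
    JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
    JVM_OPTS="$JVM_OPTS -XX:MaxGCPauseMillis=500"
    JVM_OPTS="$JVM_OPTS -XX:+ParallelRefProcEnabled"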
On Thu, Jun 18, 2015 at 1:31 AM Jason Wee wrote:
Okay, IIRC memtables have been moved off heap; I googled and got this:
http://www.datastax.com/dev/blog/off-heap-memtables-in-Cassandra-2-1
Apparently there are still some references on heap.
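For the archives, the relevant cassandra.yaml setting in 2.1 looks like the sketch below; the space values are just examples, only the allocation type is the point here:

    memtable_allocation_type: offheap_objects   # alternatives: heap_buffers, offheap_buffers
    # memtable_heap_space_in_mb: 2048
    # memtable_offheap_space_in_mb: 2048

Even with offheap_objects some bookkeeping stays on heap, which matches what the blog post describes.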
On Thu, Jun 18, 2015 at 1:11 PM, Marcus Eriksson wrote:
It is probably this: https://issues.apache.org/jira/browse/CASSANDRA-9549
On Wed, Jun 17, 2015 at 7:37 PM, Michał Łowicki wrote:
It looks like memtable heap size is growing rapidly on some nodes (
https://www.dropbox.com/s/3brloiy3fqang1r/Screenshot%202015-06-17%2019.21.49.png?dl=0).
The drops are the points where nodes were restarted.
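To see which tables are driving that, something like this per node should break down memtable memory (field names as they appear in 2.1's cfstats output, if I remember right; keyspace/table are placeholders):

    nodetool cfstats my_keyspace.my_table
    #   Memtable cell count
    #   Memtable data size
    #   Memtable off heap memory used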
On Wed, Jun 17, 2015 at 6:53 PM, Michał Łowicki wrote:
Hi,
Two datacenters with 6 nodes (2.1.6) each. In each DC, garbage collection is
triggered at the same time on every node (see [1] for total GC duration per 5
seconds). RF is set to 3. Any ideas?
[1]
https://www.dropbox.com/s/bsbyew1jxbe3dgo/Screenshot%202015-06-17%2018.49.48.png?dl=0
--
BR,
Michał
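One thing that might help correlate those synchronized spikes is full GC logging on a couple of nodes; these are standard HotSpot flags for cassandra-env.sh, and the log path is just an example:

    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"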
Regarding the DataStax repair service, I saw the same error over here.
Here is the DataStax answer, FWIW:
"The repair service timeout message is telling you that the service has not
received a response from the nodetool repair process running on Cassandra
within the configured (default) 3600 second [...]"
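When that timeout shows up, it can be worth running the equivalent repair by hand on the node in question to see how long it really takes; the keyspace name below is a placeholder:

    nodetool repair -pr my_keyspace    # primary-range repair on this node
    nodetool compactionstats           # any validation compactions still running?
    nodetool netstats                  # any streams still in flight?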
Do you do a ton of random updates and deletes? That would not be a good
workload for DTCS.
Where are all your tombstones coming from?
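A couple of ways to check, assuming 2.1-era tooling (the table name is taken from the thread below, the path is a placeholder):

    nodetool cfstats my_keyspace.event_data
    #   look at "Average tombstones per slice (last five minutes)"

    sstablemetadata /path/to/my_keyspace/event_data/*-Data.db
    #   look at "Estimated droppable tombstones"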
Hi
I have a Cassandra cluster of 6 nodes, with DateTiered compaction for the
tables/CFs.
For some reason, minor compaction never happens.
I have enabled debug logging and I don't see any compaction-related debug logs
like the following:
https://github.com/apache/cassandra/blob/cassandra-2.0/s
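In case it helps: on the 2.0 line that class of logging is controlled by conf/log4j-server.properties (2.1+ switched to logback.xml), and nodetool can at least show whether compactions are being scheduled at all. A sketch, assuming 2.0-style config and placeholder keyspace/table names:

    # conf/log4j-server.properties
    log4j.logger.org.apache.cassandra.db.compaction=DEBUG

    # then check for activity / pending work
    nodetool compactionstats
    nodetool cfstats my_keyspace.my_table    # watch SSTable count over time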
Hi, spark-sql estimated the input for a Cassandra table with 3 rows as 8 TB;
sometimes it's estimated as -167 B.
I run it on a laptop; I don't have 8 TB of space for the data.
We use DSE 4.7 with the bundled Spark and spark-sql-thriftserver.
Here is the stat for a dummy "select foo from bar", where bar has three rows and [...]
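If it helps to debug: as far as I know the connector takes those numbers from the system.size_estimates table (added around 2.1.5), so it may be worth checking what Cassandra itself reports. The keyspace name is a placeholder, 'bar' is the table from the example above:

    SELECT range_start, range_end, mean_partition_size, partitions_count
    FROM system.size_estimates
    WHERE keyspace_name = 'my_ks' AND table_name = 'bar';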
Hello Alex,
thanks for your answer! I'll try posting there as well then!
Best,
Jonathan
On 06/16/2015 07:05 PM, Alex Popescu wrote:
Jonathan,
I'm pretty sure you'll have a better chance of getting this answered on the
Python driver mailing list:
https://groups.google.com/a/lists.datastax.com/for
Hi David, Edouard,
Depending on your data model on event_data, you might want to consider
upgrading to use DTCS (C* 2.0.11+).
Basically, if those tombstones are due to a constant TTL and this is
time-series data, it could be a real improvement.
See:
https://labs.spotify.com/2014/12/18/date-tiered-
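For anyone finding this later, a minimal sketch of that setup in CQL; the schema and the 7-day TTL are made up for illustration:

    CREATE TABLE event_data (
        id      uuid,
        ts      timestamp,
        payload blob,
        PRIMARY KEY (id, ts)
    ) WITH compaction = {'class': 'DateTieredCompactionStrategy'}
      AND default_time_to_live = 604800;   -- constant 7-day TTL on every write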