In our Twitter-like application, users have their own timelines with news from their subscriptions. To populate the timelines we use fan-out on write, but we are forced to trim them to keep free disk space under control.
We use the wide-rows pattern and trim the rows with "DELETE by primary key USING TIMESTAMP".
In our case a major compaction (using sstableresetlevel) would take 15 days for 15 nodes, plus the trimming time, so it turns into never-ending maintenance mode.
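(For illustration only, with made-up names: in CQL terms the layout and the per-entry trim look roughly like the following; the real schema may well be a Thrift-style wide row.)
# hypothetical timeline table: one partition per user, newest entries first
cqlsh <<'CQL'
CREATE KEYSPACE IF NOT EXISTS social
  WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': 1 };
CREATE TABLE IF NOT EXISTS social.home_timeline (
    user_id   bigint,
    tweet_id  bigint,
    author_id bigint,
    body      text,
    PRIMARY KEY (user_id, tweet_id)
) WITH CLUSTERING ORDER BY (tweet_id DESC)
  AND compaction = { 'class': 'LeveledCompactionStrategy' };
CQL
# trimming one entry: delete by the full primary key with an explicit timestamp;
# every such delete writes a tombstone that only compaction can purge later
cqlsh <<'CQL'
DELETE FROM social.home_timeline
USING TIMESTAMP 1400000000000000
WHERE user_id = 42 AND tweet_id = 123456789;
CQL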
No, it is in seconds
Hi all.
I have many GC freezes on my Cassandra cluster. I'm using G1 GC, and CMS gives similar freezes.
JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"
JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=1"
JVM_OPTS="$JVM_OPTS -XX:NewRatio=1"
JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=15"
JVM_OPTS="$JVM_OPTS -XX:-UseAdaptiveSi
JNA is installed
java -version
java version "1.7.0_45"
Java(TM) SE Runtime Environment (build 1.7.0_45-b18)
Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode)
cassandra 2.0.4, vnodes
LCS, LZ4 Compression
Some Cassandra config params:
iostat is clean
vm.max_map_count = 131072
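(The kernel setting above can be checked and raised on the fly with sysctl; 131072 is simply the value already in place here, not a recommendation.)
# check the current value
sysctl vm.max_map_count
# raise it for the running kernel; persist it via /etc/sysctl.conf separately
sudo sysctl -w vm.max_map_count=131072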
I set G1 because Cassandra started to misbehave (dropped messages) with the standard GC settings.
In my opinion, Cassandra has become more stable with G1 (it gets fewer timeouts now), but it's still not ideal. I just want Cassandra to work fine.
I think "Read 1001 live and 1518" is not too many tombstones and its normal
no triggers, no custom comparators
I have a data model that creates a lot of tombstones (users' home timelines with many inserts and deletes). How can I reduce the tombstone count in this case?
>Is this all from cassandra ?
yes, with multithread compaction (for c3.4 - 6 threads),
compaction_through
btw, cassandra cluster is more stable with turned off multithread compaction.
One node has more keys than the other nodes.
Normal node:
Keyspace: Social
Read Count: 65530294
Read Latency: 2.010432367020969 ms.
Write Count: 183948607
Write Latency: 0.04994240148825917 ms.
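(For anyone comparing nodes: the per-keyspace counters above are what nodetool cfstats prints; running it on each node and diffing the Social section is how the key-count skew shows up.)
# run on every node and compare the "Keyspace: Social" section
nodetool cfstats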
None of the advice has helped me reduce the GC load.
I tried these (two of the heap combinations are restated as a cassandra-env.sh sketch below):
MAX_HEAP_SIZE from the default (8 GB) to 16 GB, with HEAP_NEWSIZE from 400 MB to 9600 MB
key cache on/off
compaction memory size and other limits
15 c3.4xlarge nodes (adding 5 nodes to the 10-node cluster didn't help)
and many others.
Reads ~5000 o
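Two of those heap combinations, restated as the cassandra-env.sh overrides they correspond to (just the values from the list above, not a recommendation):
# conf/cassandra-env.sh: when overriding, the stock script expects both to be set
MAX_HEAP_SIZE="16G"
HEAP_NEWSIZE="9600M"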
Does Cassandra delete tombstones during plain LCS compaction, or should I use nodetool repair?
Thanks.
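(Background on that question: compaction, LCS included, can only drop a tombstone once it is older than the table's gc_grace_seconds, and running repair within that window is what makes lowering it safe. A rough way to inspect and change the setting, reusing the made-up table name from the earlier sketch:)
# show the current table options, including gc_grace_seconds
cqlsh <<'CQL'
DESCRIBE TABLE social.home_timeline;
CQL
# shorten the window (to one day here) so trim tombstones become purgeable sooner
cqlsh <<'CQL'
ALTER TABLE social.home_timeline WITH gc_grace_seconds = 86400;
CQL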
Thanks!
How could I find the leveled JSON manifest?
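(In 1.x the leveled manifest lives as a <table>.json file alongside the sstables; assuming the default data directory, something like this shows whether one exists:)
# look for per-table LCS manifests under the default data location
find /var/lib/cassandra/data -maxdepth 3 -name '*.json'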
Oh, it's for Cassandra 1.x, right? I use 2.0.7.
How can I reset the leveled manifest in this case?
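(From memory, so treat the tool name and flag below as assumptions to check against your own tools/bin: 2.0 keeps the level in each sstable's metadata rather than in a JSON manifest, and the offline level-reset tool is run with the node stopped.)
# stop the node first; keyspace/table names are the made-up ones from the sketches above
tools/bin/sstablelevelreset --really-reset social home_timeline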