Hi Robin,
We have the same headache and hope that
https://issues.apache.org/jira/browse/CASSANDRA-3974 will be the balm.
On 10 September 2012 13:47, Robin Verlangen wrote:
> Hi there,
>
> I'm working on a project that might want to set TTL to roughly 7 years.
> However it might occur that the T
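For reference, a TTL of that length is simply supplied in seconds with each write. A minimal CQL sketch, with a hypothetical table and 7 years approximated as 7 * 365 * 86400 = 220752000 seconds:

    -- Hypothetical schema; the only point here is the USING TTL clause.
    INSERT INTO events (event_id, payload)
    VALUES (42, 'example')
    USING TTL 220752000;   -- roughly 7 years, expressed in seconds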
I started digging in the logfiles, and here's the full log of a crash. We
have 4GB of heap, and whether you watch it in OpsCenter or through a JMX
console, used heap size is always below 3GB; it swings between 1.5 and 3GB
every 5 minutes or so. So why would it suddenly run out of heap space?
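One way to see what the heap was doing at the moment of the crash is verbose GC logging in cassandra-env.sh. A sketch, assuming the stock script and standard HotSpot flags of that era (the log path is illustrative):

    # Append to cassandra-env.sh so gc.log records every collection and the
    # tenuring/promotion behaviour leading up to the OutOfMemoryError.
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
    JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"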
Generally tuning the garbage collector is a waste of time. Just follow
someone else's recommendation and use that.
The problem with tuning is that workloads change, and then you have to tune
again and again. New garbage collectors come out, and you have to tune again
and again. Someone at your company r
It was a config issue at our end.
This can happen if you are trying to insert events into a CF that hasn't
been created. It can probably also happen if you have a cluster whose schema
is not in sync, though I'm not sure about that case.
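A minimal sketch of the first case, with hypothetical names: the CF has to exist (and, in a cluster, the schema change has to have reached every node) before the insert goes in.

    -- Create the column family first...
    CREATE TABLE events (
        event_id bigint PRIMARY KEY,
        payload  text
    );

    -- ...then insert into it; writing to an undefined CF is what raises the error.
    INSERT INTO events (event_id, payload) VALUES (1, 'hello');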
On Sat, Sep 15, 2012 at 2:16 PM, S.Sree Hari Nagarajan <athiyur.sree
> Generally tuning the garbage collector is a waste of time.
Sorry, that's BS. It can be absolutely critical, when done right, and
only "useless" when done wrong. There's a spectrum in between.
> Just follow
> someone else's recommendation and use that.
No, don't.
Most recommendations out there
Hi Sergey,
That's exactly what I mean. I really hope that this will get released soon!
Best regards,
Robin Verlangen
*Software engineer*
W http://www.robinverlangen.nl
E ro...@us2.nl
" We are confident that we are doing everything right in both cases (no
bugs), yet the results are baffling. Tests in smaller, single-node
environments results in consistent counts between the two methods, but we
don't have the same amount of data nor the same topology. "
Are you somehow using an
We have a cluster using leveled compaction and there are only a couple of
CFs. The cluster does not seem to be able to keep up with compaction.
When running "top", I always see one core that is 100% busy, which I think
is most likely the compaction thread.
I wanted to enable multithreaded_compaction, but
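For what it's worth, the 1.x-era compaction parallelism knobs live in cassandra.yaml; the values below are purely illustrative, not recommendations:

    # cassandra.yaml
    concurrent_compactors: 4               # how many separate compactions may run at once
    multithreaded_compaction: false        # when true, splits a single compaction across several cores
    compaction_throughput_mb_per_sec: 16   # overall compaction throttle in MB/s (0 = unthrottled)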
Generally the main knob for compaction performance is
compaction_throughput_mb_per_sec in cassandra.yaml. It defaults to 16. You can
use nodetool setcompactionthroughput to set it on a running server; the next
time the Cassandra server starts it will use what's in the yaml again. You
might try usin
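Concretely, something along these lines (the value is MB/s; 0 removes the throttle entirely):

    # On a running node; takes effect immediately but is lost on restart:
    nodetool -h localhost setcompactionthroughput 64

    # To make it persistent, set the same value in cassandra.yaml:
    #   compaction_throughput_mb_per_sec: 64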
We currently have setcompactionthroughput set to 0, and we are also running
on SSDs.
Even so, compaction falls behind.
What kind of compaction throughput should we be seeing with no throughput
limit? I am usually seeing around 2-3 MB/s in the system.log, which seems
super slow to me.
(sorry for sh
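Two quick checks that usually narrow down whether this is one CPU-bound compaction or a genuine backlog (both are stock tools; the interpretation is the guess):

    # Pending compaction tasks and progress of the ones currently running:
    nodetool -h localhost compactionstats

    # While one core sits at 100%, confirm the SSDs themselves are mostly idle:
    iostat -x 5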