or a G1 heap. Raise your heap or try CMS instead.
>
> 71% of your heap is collections – may be a weird data model quirk, but try
> CMS first and see if that behaves better.
>
> *From: *Mikhail Strebkov
> *Reply-To: *"user@cassandra.apache.org"
> *Date: *Wednesday, December 9, 2015 at 5:26 PM
>
Hi everyone,
While upgrading our 5-machine cluster from DSE version 4.7.1 (Cassandra
2.1.8) to DSE version 4.8.2 (Cassandra 2.1.11), one of the nodes can't
start: it fails with an OutOfMemoryError.
We're using HotSpot 64-Bit Server VM/1.8.0_45 and the G1 garbage collector
with an 8 GiB heap.
Average node size is
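A minimal sketch of what the switch back to CMS suggested above could look
like in cassandra-env.sh, assuming stock Cassandra 2.1 settings (remove any
G1 lines such as -XX:+UseG1GC first; the heap and new-gen sizes are
placeholders that depend on your hardware):

    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="800M"   # rough CMS rule of thumb: ~100 MB per core
    JVM_OPTS="$JVM_OPTS -XX:+UseParNewGC"
    JVM_OPTS="$JVM_OPTS -XX:+UseConcMarkSweepGC"
    JVM_OPTS="$JVM_OPTS -XX:+CMSParallelRemarkEnabled"
    JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"
    JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"
    JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
    JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
    # So the next OutOfMemoryError leaves a heap dump to analyze
    JVM_OPTS="$JVM_OPTS -XX:+HeapDumpOnOutOfMemoryError"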
We had the same issue with a huge number of SSTables on this version and on 2.1.3.
After updating to 2.1.8 the issue slowly faded out (it took a long time for
Cassandra to compact thousands of SSTables).
On Mon, Jul 27, 2015 at 4:05 AM, Peer, Oded wrote:
> It’s noticeable from the log file that you
Looks like it dies with OOM:
https://gist.github.com/kluyg/03785041e16333015c2c
On Tue, Jul 14, 2015 at 12:01 PM, Mikhail Strebkov
wrote:
> OpsCenter 5.1.3 and datastax-agent-5.1.3-standalone.jar
>
> On Tue, Jul 14, 2015 at 12:00 PM, Sebastian Estevez <
> sebastian.este...@data
ing to matching versions
> fixed the issue.
> On Jul 14, 2015 2:58 PM, "Mikhail Strebkov" wrote:
>
Hi everyone,
Recently I've noticed that most of the nodes have OpsCenter agents running
at 300% CPU. Each node has 4 cores, so agents are using 75% of total
available CPU.
We're running 5 nodes with open source Cassandra 2.1.8 on AWS using the
Community AMI. OpsCenter version is 5.1.3. We're using Ora
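If you want to see where that CPU goes before changing versions, here is a
sketch using standard Linux and JDK tooling (the pgrep pattern is an
assumption about how the agent process is named on your boxes):

    # Per-thread CPU usage of the agent's JVM
    top -H -p "$(pgrep -f datastax-agent)"

    # Dump all threads, then find the hot one by its hex id (nid=0x...)
    jstack "$(pgrep -f datastax-agent)" > agent-threads.txt
    printf '%x\n' 12345   # convert the busy TID reported by top to hex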
Hi Kevin,
Here is what we use, works for us in production:
https://gist.github.com/kluyg/46ae3dee9000a358edf9
To unit test it, you'll need to check that your custom retry policy returns
the RetryDecision you want for the inputs.
To verify that it works in production, you can wrap it in a
LoggingRetryPolicy, which logs every decision the wrapped policy makes.
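For reference, a minimal sketch of such a policy against the DataStax Java
driver 2.1 API; the behavior here (retry reads once, rethrow everything
else) is made up for illustration, and only the RetryPolicy interface and
the LoggingRetryPolicy wrapper come from the driver:

    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Statement;
    import com.datastax.driver.core.WriteType;
    import com.datastax.driver.core.policies.RetryPolicy;

    // Hypothetical policy: retry a timed-out read once at the same
    // consistency level; rethrow write timeouts and unavailable errors.
    public class OneReadRetryPolicy implements RetryPolicy {
        @Override
        public RetryDecision onReadTimeout(Statement stmt, ConsistencyLevel cl,
                int requiredResponses, int receivedResponses,
                boolean dataRetrieved, int nbRetry) {
            return nbRetry == 0 ? RetryDecision.retry(cl) : RetryDecision.rethrow();
        }

        @Override
        public RetryDecision onWriteTimeout(Statement stmt, ConsistencyLevel cl,
                WriteType writeType, int requiredAcks, int receivedAcks,
                int nbRetry) {
            return RetryDecision.rethrow();
        }

        @Override
        public RetryDecision onUnavailable(Statement stmt, ConsistencyLevel cl,
                int requiredReplica, int aliveReplica, int nbRetry) {
            return RetryDecision.rethrow();
        }
    }

A unit test can then call these methods directly and assert on the decision,
e.g. onReadTimeout(..., 0).getType() == RetryDecision.Type.RETRY, and in
production you plug it in with
Cluster.builder()...withRetryPolicy(new LoggingRetryPolicy(new OneReadRetryPolicy())).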
Hi Saladi,
Recently I faced a similar problem, I had a lot of CFs to fix, so I wrote
this: https://github.com/kluyg/cassandra-schema-fix
I think it can be useful to you.
Kind regards,
Mikhail
On Mon, Jul 13, 2015 at 11:51 AM, Saladi Naidu
wrote:
> Sebastian,
> Thank you so much for providing d
We have observed the same issue in our production Cassandra cluster (5 nodes in
one DC). We use Cassandra 2.1.3 (I joined the list too late to realize we
shouldn’t use 2.1.x yet) on Amazon machines (created from a community AMI).
In addition to count variations of 5 to 10%, we observe variati
It is open sourced but works only with C* 1.x as far as I know.
Mikhail
On Tuesday, January 27, 2015, Mohammed Guller
wrote:
> I believe Aegisthus is open sourced.
>
>
>
> Mohammed
>
>
>
> *From:* Jan [mailto:cne...@yahoo.com]
> *Sent:* Monday, January 26, 2015 11:20 AM
> *To:* user@cassand
I see, well that's what I expected, but it should still improve read
latency, since it will reduce the number of disk seeks per row request. Is
my assumption correct?
On Wed, Dec 31, 2014 at 11:51 AM, Robert Coli wrote:
> On Wed, Dec 31, 2014 at 11:35 AM, Mikhail Strebkov
> wrote:
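One way to check that assumption empirically (a sketch; the keyspace and
table names are placeholders): nodetool's per-table histograms report how
many SSTables each read touches, so you can compare the distribution before
and after compaction catches up:

    # Includes an "SSTables per read" histogram and latency percentiles
    nodetool cfhistograms my_keyspace my_table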
. Partition merge counts were
{1:50249, 2:5800, 3:836, 4:122, }
On Wed, Dec 31, 2014 at 10:11 AM, Robert Coli wrote:
> On Wed, Dec 31, 2014 at 12:01 AM, Mikhail Strebkov
> wrote:
>
>> I set compaction_throughput_mb_per_sec to 0 and restarted Cassandra.
>>
>
> You c
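As an aside, the same change can be made on a live node without a restart; a
small sketch (0 disables compaction throttling entirely):

    nodetool setcompactionthroughput 0   # takes effect immediately
    nodetool getcompactionthroughput     # confirm the new value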
n.
On Tue, Dec 30, 2014 at 4:30 PM, Robert Coli wrote:
> On Tue, Dec 30, 2014 at 3:12 PM, Mikhail Strebkov
> wrote:
>
Hi,
We have a table in our production Cassandra cluster that is spread across 11369
SSTables. The average SSTable count for the other tables is around 15, and
the read latency for them is much smaller.
I tried running a manual compaction (nodetool compact my_keyspace my_table)
but then the node started spending ~
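For anyone trying the same, a small sketch for keeping an eye on a major
compaction like this while it runs:

    # Active and pending compactions, with bytes processed so far
    nodetool compactionstats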