Hi,
I think it's a bit late for this reply, but anyway...
We hired support from http://thelastpickle.com/ and, thanks to them, we were
able to solve our issue as well.
What caused this behavior was a large query being executed by mistake in
our code.
It was needed to open t
Ok, in my case it was straightforward. It is just a warning, which however
says that batches with a large data size (above 5 KB) can sometimes lead to
node instability (why?). This limit seems to be hard-coded; I didn't find
any way to configure it externally. Anyway, removing the batch and giving up
atomicity
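For anyone wanting to check their own workload against that warning before
Cassandra logs it, a rough client-side estimate can be sketched in plain
Python. The 5 KB threshold and the way sizes are summed here are assumptions
based on the warning described above, not Cassandra's exact internal
accounting:

```python
# Rough client-side check against the ~5 KB batch size warning
# described above. The threshold value and the size accounting
# are assumptions, not Cassandra's exact calculation.

BATCH_WARN_THRESHOLD_BYTES = 5 * 1024  # assumed hard-coded limit

def batch_payload_size(rows):
    """Approximate the serialized size of a batch as the sum of its
    value sizes (keys and protocol overhead are ignored here)."""
    return sum(len(value) for _, value in rows)

def exceeds_warn_threshold(rows):
    return batch_payload_size(rows) > BATCH_WARN_THRESHOLD_BYTES

# Two 100 KB blobs, one per column family, as in the workload
# discussed later in this thread:
rows = [("cf1", b"x" * 100 * 1024), ("cf2", b"x" * 100 * 1024)]
print(batch_payload_size(rows))      # 204800 bytes, i.e. 200 KB
print(exceeds_warn_threshold(rows))  # True: far above the 5 KB threshold
```

A 200 KB batch exceeds the assumed threshold by a factor of 40, so every
such batch would trigger the warning.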
Logged batch.
On Fri, Jun 20, 2014 at 2:13 PM, DuyHai Doan wrote:
> I think some figures from "nodetool tpstats" and "nodetool
> compactionstats" may help to see things more clearly.
>
> And Pavel, when you said batch, did you mean a LOGGED batch or an UNLOGGED
> batch?
>
> On Fri, Jun 20, 2014 at 8:02
I think some figures from "nodetool tpstats" and "nodetool compactionstats"
may help to see things more clearly.
And Pavel, when you said batch, did you mean a LOGGED batch or an UNLOGGED
batch?
On Fri, Jun 20, 2014 at 8:02 PM, Marcelo Elias Del Valle <
marc...@s1mbi0se.com.br> wrote:
> If you have 32 Gb RAM
If you have 32 GB RAM, the heap is probably 8 GB.
200 writes of 100 KB per second would be 20 MB/s in the worst case,
supposing all writes of a replica go to a single node.
I really don't see any reason why it should be filling up the heap.
Anyone else?
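The back-of-the-envelope numbers above can be checked directly. A quick
sketch (the 8 GB heap figure is the estimate from above, not a measured
value):

```python
# Worst-case ingest rate from the figures above: 200 writes/s,
# 100 KB each, with all replica writes landing on a single node.
writes_per_second = 200
write_size_bytes = 100 * 1024

ingest_bytes_per_second = writes_per_second * write_size_bytes
print(ingest_bytes_per_second / 1024 / 1024)  # ~19.5 MB/s

# Against an assumed 8 GB heap, that rate alone would take
# minutes to fill the heap even with zero memtable flushing:
heap_bytes = 8 * 1024**3
seconds_to_fill = heap_bytes / ingest_bytes_per_second
print(round(seconds_to_fill))  # ~419 s, i.e. about 7 minutes
```

So raw write volume by itself should not exhaust the heap between flushes,
which is why something else (GC behavior, oversized queries) is suspected.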
But did you check the logs for the GCInspector?
I
Hi Marcelo,
No pending write tasks. I am writing a lot: about 100-200 writes, each up
to 100 KB, every 15 seconds.
It is running on a decent cluster of 5 identical nodes: quad-core i7, 32 GB
RAM and 480 GB SSD each.
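For comparison with the worst-case estimate elsewhere in this thread, the
average rate implied by the workload just described is much lower. A small
sketch (taking the upper bounds of the stated ranges):

```python
# Average ingest implied by the workload above:
# up to 200 writes of up to 100 KB each, every 15 seconds.
writes_per_interval = 200
write_size_bytes = 100 * 1024
interval_seconds = 15

avg_bytes_per_second = writes_per_interval * write_size_bytes / interval_seconds
print(round(avg_bytes_per_second / 1024 / 1024, 2))  # ~1.3 MB/s on average
```

An average of roughly 1.3 MB/s spread over 5 nodes with SSDs is modest,
which makes sustained heap pressure from write volume alone surprising.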
Regards,
Pavel
On Fri, Jun 20, 2014 at 12:31 PM, Marcelo Elias Del Valle <
marc...@s1mbi0
Pavel,
In my case, the heap was filling up faster than it was draining. I am still
looking for the cause, as I could drain really fast with SSDs.
However, in your case you could check (AFAIK) nodetool tpstats and see if
there are too many pending write tasks, for instance. Maybe you really a
The cluster is new, so no updates were done. Version 2.0.8.
It happened when I did many writes (no reads). Writes are done in small
batches of 2 inserts (writing to 2 column families). The values are big
blobs (up to 100Kb).
Any clues?
Pavel
On Thu, Jun 19, 2014 at 8:07 PM, Marcelo Elias Del Va
Pavel,
Out of curiosity, did it start to happen before some update? Which version
of Cassandra are you using?
Regards,
2014-06-19 16:10 GMT-03:00 Pavel Kogan :
> What a coincidence! Today it happened in my cluster of 7 nodes as well.
>
> Regards,
> Pavel
>
>
> On Wed, Jun 18, 2014 at 11:13 AM, Marce
I know now it's been caused by the heap filling up on some nodes. When it
fills up, the node goes down, GC runs more, then the node comes up again.
Looking for GCInspector in the log, I see GC takes more time to run each
time it runs, as shown below.
I have set the key cache to 100 MB and I was used to
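To make the "GC takes more time each run" pattern easy to spot, the
GCInspector lines can be pulled out of system.log with a few lines of
Python. The sample lines and the regex below are assumptions modeled on
typical Cassandra 2.0-era log output, not an exact copy of this cluster's
logs:

```python
import re

# Extract GC pause durations from GCInspector log lines, to check
# whether pauses grow over time as described above. Sample lines
# are assumptions modeled on Cassandra 2.0-era log formatting.
SAMPLE_LOG = """\
 INFO [ScheduledTasks:1] 2014-06-18 11:01:02,123 GCInspector.java (line 116) GC for ConcurrentMarkSweep: 1205 ms for 1 collections, 6013210112 used; max is 8506048512
 INFO [ScheduledTasks:1] 2014-06-18 11:03:17,456 GCInspector.java (line 116) GC for ConcurrentMarkSweep: 2731 ms for 2 collections, 7122101248 used; max is 8506048512
"""

GC_LINE = re.compile(r"GC for (\w+): (\d+) ms")

def gc_pauses(log_text):
    """Return (collector, pause_ms) tuples in log order."""
    return [(m.group(1), int(m.group(2))) for m in GC_LINE.finditer(log_text)]

pauses = gc_pauses(SAMPLE_LOG)
print(pauses)

# Simple check for the "takes more time each run" pattern:
durations = [ms for _, ms in pauses]
print(durations == sorted(durations))  # True when pauses are non-decreasing
```

Growing ConcurrentMarkSweep pauses with "used" creeping toward "max" is
consistent with the heap-filling behavior described in this thread.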
What a coincidence! Today it happened in my cluster of 7 nodes as well.
Regards,
Pavel
On Wed, Jun 18, 2014 at 11:13 AM, Marcelo Elias Del Valle <
marc...@s1mbi0se.com.br> wrote:
> I have a 10 node cluster with cassandra 2.0.8.
>
> I am getting these exceptions in the log when I run my code. What