On Mon, Sep 27, 2010 at 3:48 PM, Alaa Zubaidi wrote:
> RF=2
With RF=2, QUORUM and ALL are the same. Again, your logs show you are
attempting to insert about 180,000 columns/sec. The only way that is
possible with your hardware is if you are using CL.ZERO. The
available information does not add up.
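[note: the quorum arithmetic for RF=2, for reference:

  quorum = floor(RF / 2) + 1 = floor(2 / 2) + 1 = 2 = RF

so QUORUM and ALL both require acknowledgement from both replicas.]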
RF=2
Each process is processing 75 "rows".
So, do you think that the cause of my problems is the high rate of
inserts I am doing (coupled with the reads)?
Taking into consideration that the first errors were heap overflow, and
after I disabled swapping it was stack overflow?
I will try another
Does that mean you are doing 600 rows/sec per process or 600/sec total
across all processes?
On Mon, Sep 27, 2010 at 3:14 PM, Alaa Zubaidi wrote:
> It's actually split across 8 different processes that are doing the insertion.
>
> Thanks
>
> On 9/27/2010 2:03 PM, Peter Schuller wrote:
>>
>> [note: i put user@ back on CC but I'm not quoting the source code]
What is your RF?
On Mon, Sep 27, 2010 at 3:13 PM, Alaa Zubaidi wrote:
> Sorry, 3 means QUORUM.
>
>
> On 9/27/2010 2:55 PM, Benjamin Black wrote:
>>
>> On Mon, Sep 27, 2010 at 2:51 PM, Benjamin Black wrote:
>>>
>>> On Mon, Sep 27, 2010 at 12:59 PM, Alaa Zubaidi
>>> wrote:
Thanks for th
I can test the single node on Windows now.
On 9/27/2010 2:02 PM, Jonathan Ellis wrote:
How reproducible is this stack overflow?
If you can reproduce it at will, then I would like to see if you can
also reproduce it against
(a) a single node Windows machine
(b) a single node Linux machine
On
It's actually split across 8 different processes that are doing the insertion.
Thanks
On 9/27/2010 2:03 PM, Peter Schuller wrote:
[note: i put user@ back on CC but I'm not quoting the source code]
Here is the code I am using (this is only for testing Cassandra; it is not
going to be used in production). I am new to Java, but I tested this and it
seems to work fine when running for short amounts of time:
Sorry, 3 means QUORUM.
On 9/27/2010 2:55 PM, Benjamin Black wrote:
On Mon, Sep 27, 2010 at 2:51 PM, Benjamin Black wrote:
On Mon, Sep 27, 2010 at 12:59 PM, Alaa Zubaidi wrote:
Thanks for the help.
We have 2 drives using basic configurations, commitlog on one drive and data
on another.
And yes, the CL for writes is 3; however, the CL for reads is 1.
On Mon, Sep 27, 2010 at 2:51 PM, Benjamin Black wrote:
> On Mon, Sep 27, 2010 at 12:59 PM, Alaa Zubaidi wrote:
>> Thanks for the help.
>> We have 2 drives using basic configurations, commitlog on one drive and data
>> on another.
>> And yes, the CL for writes is 3; however, the CL for reads is 1.
On Mon, Sep 27, 2010 at 12:59 PM, Alaa Zubaidi wrote:
> Thanks for the help.
> We have 2 drives using basic configurations, commitlog on one drive and data
> on another.
> And yes, the CL for writes is 3; however, the CL for reads is 1.
>
It is simply not possible that you are inserting at CL.ALL
[note: i put user@ back on CC but I'm not quoting the source code]
> Here is the code I am using (this is only for testing Cassandra; it is not
> going to be used in production). I am new to Java, but I tested this and it
> seems to work fine when running for short amounts of time:
If you mean to a
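[note: the original test code is not quoted above. Purely as a hypothetical
reconstruction of the kind of loop the thread describes (75 "rows" per
process, 60 columns per row, TTL 10800, QUORUM writes), here is a minimal
sketch against the 0.7 Thrift API; the keyspace and column family names are
invented:

import java.nio.ByteBuffer;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class InsertLoad {
    public static void main(String[] args) throws Exception {
        TFramedTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();
        client.set_keyspace("SensorData");                 // invented keyspace name

        ColumnParent parent = new ColumnParent("Sensors"); // invented column family name
        for (int batch = 0; batch < 1000; batch++) {
            long now = System.currentTimeMillis();
            for (int row = 0; row < 75; row++) {           // 75 "rows" per process, per the thread
                ByteBuffer key = ByteBuffer.wrap(("row-" + now + "-" + row).getBytes("UTF-8"));
                for (int col = 0; col < 60; col++) {       // 60 columns per row, per the thread
                    Column c = new Column();
                    c.setName(ByteBuffer.wrap(("Sensor" + col).getBytes("UTF-8")));
                    c.setValue(ByteBuffer.wrap(Double.toString(Math.random() * 1000).getBytes("UTF-8")));
                    c.setTimestamp(now * 1000);            // microsecond timestamps
                    c.setTtl(10800);                       // 3-hour TTL, as in the thread
                    client.insert(key, parent, c, ConsistencyLevel.QUORUM);
                }
            }
        }
        transport.close();
    }
}

Eight processes each running a loop like this would account for the 600
rows/sec figure discussed elsewhere in the thread.]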
How reproducible is this stack overflow?
If you can reproduce it at will, then I would like to see if you can
also reproduce it against
(a) a single node Windows machine
(b) a single node Linux machine
On Fri, Sep 24, 2010 at 3:03 PM, Alaa Zubaidi wrote:
> Nothing is working, after disabling swap
> You are saying I am doing 36000 inserts per second when I am inserting 600
> rows. I thought that every row goes into one node, so the work is done per
> row, not per column. So my assumption is NOT true, and the work is done at
> the column level? So if I reduce the number of columns I will get a "sub
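[note: the column-level arithmetic behind the 36000 figure, reading the
thread's numbers as per-second rates:

  8 processes x 75 rows = 600 rows/sec
  600 rows/sec x 60 columns/row = 36000 column inserts/sec

each column is a separate insert as far as the write path is concerned.]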
Thanks for the help.
We have 2 drives using basic configurations, commitlog on one drive and
data on another.
And yes, the CL for writes is 3; however, the CL for reads is 1.
You are saying I am doing 36000 inserts per second when I am inserting
600 rows. I thought that every _row_ goes into
> It is odd that you are able to do 36000/sec _at all_ unless you are
> using CL.ZERO, which would quickly lead to OOM.
The problem with the hypothesis as far as I can tell is that the
hotspot error log's heap information does not indicate that he's close
to maxing out his heap. And I don't believe
Looking further, I would expect your 36000 writes/sec to trigger a
memtable flush every 8-9 seconds (which is already crazy), but you are
actually flushing them every ~1.7 seconds, leading me to believe you
are writing a _lot_ faster than you think you are.
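[note: the flush arithmetic, assuming the memtable operations threshold
implied by the 8-9 second figure (roughly 300,000 columns):

  expected: 300,000 columns / 36000 columns/sec ~= 8.3 sec between flushes
  observed: a flush every ~1.7 sec, i.e. 300,000 / 1.7 ~= 176,000 columns/sec

which lines up with the ~180,000 columns/sec estimate earlier in the thread.]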
INFO [ROW-MUTATION-STAGE:21] 2010-09-24
The log posted shows _10_ pending in MPF stage, and the errors show
repeated failures trying to flush memtables at all:
INFO [GC inspection] 2010-09-24 13:16:11,281 GCInspector.java (line
156) MEMTABLE-POST-FLUSHER 110
You are also flushing _really_ small memtables to disk (l
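[note: the per-stage counters quoted above can be watched live with nodetool,
e.g.:

  nodetool -h localhost tpstats

which prints active/pending/completed counts for each stage, including
MEMTABLE-POST-FLUSHER.]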
> My rows consist of only 60 columns, and these 60 columns look like this:
> ColumnName: Sensor59 -- Value: 434.2647915698039 -- TTL: 10800
The hotspot error log indicates the OOM is actually the result of a
*stack* overflow rather than a heap overflow. While the first OOM in
system.log indicates
My rows consist of only *60* columns, and these 60 columns look like this:
ColumnName: Sensor59 -- Value: 434.2647915698039 -- TTL: 10800
On 9/24/2010 3:42 PM, Jonathan Ellis wrote:
Looks like you're OOMing trying to compact a very large row. Solution:
smaller rows, or a larger heap.
On Fri, S
Looks like you're OOMing trying to compact a very large row. Solution:
smaller rows, or a larger heap.
On Fri, Sep 24, 2010 at 3:03 PM, Alaa Zubaidi wrote:
> Nothing is working. After disabling swap entirely, the heap is not
> exhausted, but Cassandra crashed with an out of memory error.
> I even slow
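[note: where those knobs live, assuming the 0.7 file layout (exact defaults
vary by version):

  # conf/cassandra-env.sh
  MAX_HEAP_SIZE="4G"               # a larger heap, per the advice above
  JVM_OPTS="$JVM_OPTS -Xss256k"    # per-thread stack size; raising it is what
                                   # matters for the *stack* overflow discussed
                                   # elsewhere in the thread
]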
Disabling swap entirely is usually the easiest fix, yes.
On Mon, Sep 20, 2010 at 8:10 PM, Alaa Zubaidi wrote:
> Thanks Peter,
> I decreased the heap size; it did not help, but it delayed the problem.
> I noticed that it's swapping, so do you think I should set Windows not
> to swap?
> I decreased the heap size; it did not help, but it delayed the problem.
> I noticed that it's swapping, so do you think I should set Windows not
> to swap?
I'm not sure what's best done on Windows. For Linux/Unix there is
some discussion on:
https://issues.apache.org/jira/brows
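[note: on the Linux/Unix side, the approach discussed in that ticket family is
to mlock the JVM's memory via JNA so the OS cannot swap it out. A minimal
standalone sketch of the idea (not Cassandra's actual code, and not applicable
to Windows):

import com.sun.jna.Native;

public class MemoryLock {
    private static final int MCL_CURRENT = 1; // lock pages currently mapped into the process

    // Direct-mapped libc call; fails on platforms without mlockall.
    public static native int mlockall(int flags);

    static {
        Native.register("c"); // bind the native method above to libc
    }

    public static void main(String[] args) {
        int rc = mlockall(MCL_CURRENT);
        System.out.println(rc == 0
            ? "mlockall succeeded: the heap cannot be swapped out"
            : "mlockall failed (insufficient privileges or unsupported OS)");
    }
}
]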
Thanks Peter,
I decreased the heap size; it did not help, but it delayed the problem.
I noticed that it's swapping, so do you think I should set Windows not
to swap?
Do you think it's related to this issue?
https://issues.apache.org/jira/browse/CASSANDRA-1014
Thanks,
Alaa
On 9/18
Thread pools are part of the architecture; take a look at the SEDA paper
referenced at the bottom of this page:
http://wiki.apache.org/cassandra/ArchitectureInternals
The number of threads in the pool is used to govern the resources available
to that part of the processing pipeline.
Aaron
On 19 Sep,
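[note: purely illustrative of the SEDA idea described above, not Cassandra's
actual implementation: a stage is a queue drained by a bounded pool of worker
threads, and the thread count is the resource cap for that part of the
pipeline:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class Stage {
    private final ThreadPoolExecutor pool;

    public Stage(int threads) {
        // A fixed number of workers draining a queue: the essence of a SEDA
        // stage. Work beyond the workers' capacity queues up as "pending".
        pool = new ThreadPoolExecutor(threads, threads, 60, TimeUnit.SECONDS,
                                      new LinkedBlockingQueue<Runnable>());
        // Letting idle core threads time out addresses the "threads while
        // idle" question raised below: without this, a fixed pool keeps all
        // its threads alive even when the server is doing nothing.
        pool.allowCoreThreadTimeOut(true);
    }

    public void submit(Runnable task) {
        pool.submit(task);
    }

    public int pending() {
        return pool.getQueue().size(); // conceptually, "pending" in tpstats
    }
}
]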
Hi Peter
I actually checked after 15-20 minutes of observing the monitor and logs;
when everything calmed down it was still showing this many processes.
Shouldn't it be better to reduce the number of threads once the server is
idle or almost idle? As I am not a Java guy, the only thing that I can
think of is that ma
> I would also like to add something here, and correct me if I am wrong: I
> downloaded 0.7 beta and ran it. Just by chance I checked 'top' to see how
> the new version is doing, and there were 64 processes running, though
> Cassandra was on a single node with default configuration options (ran it as
>
Hi
I would also like to add something here, and correct me if I am wrong: I
downloaded 0.7 beta and ran it. Just by chance I checked 'top' to see how
the new version is doing, and there were 64 processes running, though
Cassandra was on a single node with default configuration options (ran it
as is, as
> I see a spike in heap memory usage on Node 2 where it goes from around 1G to
> 6GB (max) in less than an hour, and then goes out of memory.
> There are some errors in the log file that are reported by other people, but
> I don't think that these errors are the reason, because it used to happen
> e