I have 2 seed nodes in my cluster, with a replication factor of 2. I am using
Cassandra 0.6.2.

It keeps running out of memory, so I am wondering whether there is a memory
leak.
This is what is in the log:

        at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        ... 2 more
ERROR [GC inspection] 2010-07-22 18:41:57,157 CassandraDaemon.java (line 78)
Fatal exception in thread Thread[GC inspection,5,main]
java.lang.OutOfMemoryError: Java heap space
        at java.util.AbstractList.iterator(AbstractList.java:273)
        at
org.apache.cassandra.service.GCInspector.logIntervalGCStats(GCInspector.java:82)
        at
org.apache.cassandra.service.GCInspector.access$000(GCInspector.java:38)
        at
org.apache.cassandra.service.GCInspector$1.run(GCInspector.java:74)
        at java.util.TimerThread.mainLoop(Timer.java:512)
        at java.util.TimerThread.run(Timer.java:462)
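For what it's worth, on 0.6.x the JVM heap is set in bin/cassandra.in.sh, and raising -Xmx there is the usual first step when the GC inspector dies with heap exhaustion like the trace above. A sketch, assuming the stock script layout (the exact default values may differ in your copy):

```shell
# bin/cassandra.in.sh (Cassandra 0.6.x) -- the heap is set via JVM_OPTS.
# The values below are illustrative assumptions; check your own script.
# Raising -Xmx only helps if the box has the physical RAM to back it.
JVM_OPTS=" \
        -ea \
        -Xms1G \
        -Xmx4G \
        -XX:+UseParNewGC \
        -XX:+UseConcMarkSweepGC"
```

If the node still runs out of heap after that, the next things to look at are memtable thresholds and row cache sizes in storage-conf.xml.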


On Mon, Jul 26, 2010 at 2:14 AM, Aaron Morton <aa...@thelastpickle.com>wrote:

> You may need to provide some more information. What's the cluster
> configuration, what version is it, what's in the logs, etc.?
>
> Aaron
>
>
> On 24 Jul 2010, at 03:40 AM, Michelan Arendse <miche...@hermanus.cc>
> wrote:
>
> Hi
>
> I have recently started working with Cassandra, as I need to build a
> distributed Lucene index, and found that Lucandra was the best fit for
> this. Since then I have configured everything and it's working OK.
>
> Now the problem comes in when I need to write this Lucene index to
> Cassandra, or convert it so that Cassandra can read it. The test index is
> 32 GB, and I find that Cassandra times out a lot.
>
> Is this more load than Cassandra can take? Any help would be greatly
> appreciated.
>
> Kind Regards,
>
>
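On the timeouts when loading the 32 GB index: bulk-writing everything in one go tends to overwhelm a small cluster, so the usual workaround is to send the data in small batches and retry each batch with backoff when the server times out. A generic sketch (the chunk size, retry count, and `write_chunk` callable are all illustrative assumptions, not part of any Cassandra or Lucandra API):

```python
import time

def write_in_chunks(items, write_chunk, chunk_size=500, retries=3, backoff=1.0):
    """Send items in small batches, retrying each batch with exponential
    backoff when the store times out. write_chunk is whatever client call
    performs the actual write (e.g. a Thrift batch insert)."""
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        for attempt in range(retries):
            try:
                write_chunk(chunk)
                break  # this batch made it; move on to the next
            except TimeoutError:
                if attempt == retries - 1:
                    raise  # give up after the last attempt
                time.sleep(backoff * (2 ** attempt))  # back off, then retry
```

Smaller batches also keep per-request memtable pressure down on the server side, which may help with the OutOfMemoryError reported earlier in the thread.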
