On Wed, Jul 27, 2011 at 7:40 AM, lebron james wrote:
Why, when I give the JVM a 1 GB heap and try to launch Cassandra with a 37 GB
database, does Cassandra start loading the database but, once memory usage is full,
fall over with an OutOfMemory exception? Can't Cassandra work with low memory,
or does it critically need more RAM as the database grows?
I zipped the latest version of Cassandra together with the yaml file and the 37 GB
database. Can anybody download it and run a major compaction to check what is wrong
and why Cassandra falls with an OutOfMemory exception? Thanks!
Here is the link to download:
http://213.186.117.181/apache-cassandra-0.8.2-bin.zip
I have only one CF with a single UTF8 column and no indexes. The column always
holds 1 byte of data and the keys are 16-byte strings.
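For context, a CF like that would look roughly like the following when created from
the cassandra-cli prompt on 0.8. This is only a sketch: the keyspace and CF names are
invented, and the cache values shown are the defaults as I remember them for that
release, since lowering them is one of the first things to try when the heap is tight.

    create keyspace TestKS
        with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
        and strategy_options = [{replication_factor:1}];
    use TestKS;
    create column family TestCF
        with comparator = UTF8Type
        and key_validation_class = UTF8Type
        and default_validation_class = UTF8Type
        and keys_cached = 200000
        and rows_cached = 0;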
Have you tried some of the ideas about reducing the memory pressure?
How many CFs + secondary indexes do you have?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com
On 26 Jul 2011, at 17:10, lebron james wrote:
I have only 4GB on the server, so I gave the JVM 3 GB of heap, but this doesn't help;
Cassandra still falls over when I launch a major compaction on the 37 GB database.
How much memory you need depends on a few things such as how many CF's you
have, what your data is like, and what the usage patterns are like. There is no
exact formula.
Generally…
* I would say 4GB of JVM heap is a good start
* key and row caches are set when the CF is created, see "help create column family"
  in cassandra-cli (a sketch of lowering them follows below)
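To illustrate that second point, shrinking the caches on an existing CF in 0.8 looks
roughly like this from the cassandra-cli prompt; the keyspace/CF names are placeholders
and the numbers are only examples, not a recommendation for this particular data set:

    use TestKS;
    update column family TestCF
        with keys_cached = 10000
        and rows_cached = 0;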
> There are many things you can do to lower caches, optimize memtables, and
> tune JVMs.
Please tell me what things I can do to lower caches, optimize memtables, and
tune the JVM?
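For concreteness, these are roughly the knobs that sentence refers to on a 0.8
install; the option names are from memory of that release and the numbers are
purely illustrative:

* conf/cassandra.yaml has "emergency" valves that flush the largest memtables
  and shrink caches when the heap fills up:
      flush_largest_memtables_at: 0.75
      reduce_cache_sizes_at: 0.85
      reduce_cache_capacity_to: 0.6
* memtable thresholds are per-CF in 0.8 and can be lowered from the cassandra-cli
  prompt (keyspace/CF names are placeholders):
      use TestKS;
      update column family TestCF
          with memtable_throughput = 64
          and memtable_operations = 0.3;
* the JVM heap itself is set in conf/cassandra-env.sh, which is covered further
  down in the thread.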
From experience with similar-sized data sets, 1.5GB may be too little.
Recently I bumped our Java heap limit from 3GB to 4GB to get past an OOM doing
a major compaction.
Check "nodetool -h localhost info" while the compaction is running for a simple
view into the memory state.
If you can, al
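For reference, a rough sketch of those two suggestions on a standard 0.8 tarball
layout; the heap values are examples and only make sense if the box actually has
the RAM to spare:

    # conf/cassandra-env.sh: raise the heap, then restart the node.
    MAX_HEAP_SIZE="4G"
    HEAP_NEWSIZE="800M"

    # Poll the heap numbers while the major compaction runs.
    watch -n 10 'nodetool -h localhost info'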
On Sunday, July 24, 2011, lebron james wrote:
Hi, please help me with my problem. For better performance I turned off
compaction and ran massive inserts; after the database reached 37GB I stopped the
massive inserts and started compaction with "NodeTool compaction Keyspace CFamily".
After half an hour of work Cassandra falls with an OutOfMemory exception.
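For anyone trying to reproduce this, the workflow described maps roughly onto the
following commands on 0.8. The keyspace/CF names are the ones quoted in the mail,
and disabling minor compaction by zeroing the thresholds is my assumption about
how "turn off compaction" was done:

    # Assumed way the automatic (minor) compaction was disabled for the bulk load.
    nodetool -h localhost setcompactionthreshold Keyspace CFamily 0 0

    # ... massive inserts until the data set reaches ~37GB ...

    # The major compaction that ends in the OutOfMemory error.
    nodetool -h localhost compact Keyspace CFamily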