Sorry, I don't understand. What is the right syntax to set memtable_throughput?
>
Hi!
I need to set memtable_throughput for Cassandra.
I tried to do this with cassandra-cli by running this command:
"update column family columnfamily2 memtable_troughput=155;"
but I get the error
"missing EOF at memtable_troughput"
Help! How can I set the memtable_throughput attribute?
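For what it's worth, the 0.8-era cassandra-cli expects a "with" clause before per-CF attributes, and the attribute is spelled memtable_throughput (note the "h"). A sketch of the command, reusing the column family name from the message and assuming you have already run "use <your keyspace>;" in the CLI:

```
update column family columnfamily2 with memtable_throughput = 155;
```

Without the "with" keyword the CLI parser stops right where the attribute name starts, which matches the "missing EOF" error quoted above.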
Why, when I give the JVM a 1 GB heap and try to launch Cassandra with a 37 GB
database, does Cassandra start loading the database but then crash with an
OutOfMemory exception once memory usage is full? Can't Cassandra work with low
memory, or does it critically need more RAM as the database grows?
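For context, Cassandra's heap needs scale mostly with memtables, caches, and per-row compaction state rather than with raw data size, but 1 GB is tight for a 37 GB dataset. In 0.8 the heap is normally set in conf/cassandra-env.sh (on Windows, via the JVM -Xms/-Xmx options in bin/cassandra.bat); a minimal sketch, with illustrative values rather than a recommendation:

```
# conf/cassandra-env.sh (0.8.x) -- values here are illustrative only
MAX_HEAP_SIZE="2G"
HEAP_NEWSIZE="400M"
```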
I zipped the latest version of Cassandra with the yaml file and the 37 GB
database. Can anybody download it and run a major compaction to check what is
wrong, and why Cassandra crashes with an OutOfMemory exception? Thanks!
Here is the link to download:
http://213.186.117.181/apache-cassandra-0.8.2-bin.zip
I have only one CF, with one UTF8 column and no indexes. Each column always
holds 1 byte of data, and the keys are 16-byte strings.
I have only 4 GB on the server, so I gave the JVM 3 GB of heap, but this didn't
help; Cassandra still crashes when I launch a major compaction on the 37 GB
database.
>There are many things you can do to lower caches, optimize memtables, and
tune JVMs.
Please tell me what things I can do to lower caches, optimize memtables, and
tune JVMs?
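As a sketch of the knobs involved in the 0.8-era cassandra.yaml (the exact names and defaults should be checked against your own conf file; the values below are only illustrative):

```
# cassandra.yaml fragment (0.8.x) -- illustrative values
flush_largest_memtables_at: 0.75      # emergency-flush the biggest memtable at 75% heap
reduce_cache_sizes_at: 0.85           # shrink key/row caches under heap pressure
reduce_cache_capacity_to: 0.6         # ...down to 60% of their configured size
in_memory_compaction_limit_in_mb: 32  # wider rows are compacted on disk, not in heap
```

Per-CF caches can also be lowered from cassandra-cli (the keys_cached and rows_cached attributes), and the heap itself is set in cassandra-env.sh, not in the yaml.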
>From experience with similar-sized data sets, 1.5 GB may be too little.
Recently I bumped our Java heap limit from 3GB to 4GB t
Hi, please help me with my problem. For better performance I turned off
compaction and ran massive inserts. After the database reached 37 GB I stopped
the massive inserts and started compaction with "NodeTool compaction Keyspace
CFamily". After half an hour of work Cassandra crashed with an "Out of memory"
error. I gave 150
It happened again. I turned off compaction by setting the max and min
compaction thresholds to zero and ran 5 threads of inserts. After the base
reached 27 GB, Cassandra crashed with the same error. OS: Windows Server 2008
Datacenter; the JVM has a 1.5 GB heap; Cassandra version 0.8.1; all parameters
in the conf file are default.
ERROR [pool-2-thread-3] 2011-07-22 10:34:59,102 Cassandra.java (line 3294)
Internal error processing insert
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut
down
at
org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecu
Why does Cassandra crash when I start compaction with nodetool on a 35+ GB
database? All parameters are default.
ERROR [pool-2-thread-1] 2011-07-21 15:25:36,622 Cassandra.java (line 3294)
Internal error processing insert
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut
down
Hi! Please tell me how I can manage the compaction process: turn it off and
start it manually when I need to. How can I improve the performance of the
compaction process? Thanks!
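A sketch of one way to do this in 0.8, where automatic minor compaction is disabled per column family from cassandra-cli and a major compaction is then triggered by hand with nodetool (the names Keyspace and CFamily are placeholders for your own):

```
# in cassandra-cli (0.8.x): turn off automatic minor compaction for one CF
#   update column family CFamily with min_compaction_threshold = 0 and max_compaction_threshold = 0;

# then, from a shell, run a major compaction when you choose:
nodetool -h localhost compact Keyspace CFamily
```

Restoring the thresholds to their defaults later (commonly min 4, max 32) re-enables automatic compaction.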
Please help me solve one problem. I have a server with 4 GB RAM and 2x 4-core
CPUs. When I start doing massive writes to Cassandra, everything works fine,
but after a couple of hours at 10K inserts per second the database grows to
25+ GB and performance drops to 500 inserts per second. I found out this is
because of compacting
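If compaction is indeed what is stealing the write throughput, 0.8 can throttle it in cassandra.yaml; a sketch, with an illustrative value (check your own conf file for the actual default):

```
# cassandra.yaml (0.8.x) -- illustrative throttle value
compaction_throughput_mb_per_sec: 16   # lower = gentler on writes; 0 disables throttling
```

Lowering the throttle trades slower compaction for steadier insert rates, so compaction may fall further behind under sustained 10K/s writes.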