It is a little more involved than just changing the heap size. Every
cluster is different, so there isn't much of a set formula. Some areas to
look into, though:
**Caveat: we're still running on the 1.2 branch, and 2.0 has some
differences in what is on- versus off-heap memory usage, but the basics
still apply.
i am just reading/writing 4k+/-1k of data to a single column in a single
column family. i do some writes of fresh data and some read/write of
existing data. i will end up in the 100 million row range, maintaining
about 2 million rows of "hot data". so i have small rows, but _lots_
of them.
Can you give some details about the use case that you are using cassandra
for? I am actually looking to store data in almost the same manner,
except with more variance in the data size, 1k to 5k, and about 20
million rows.
I have been benchmarking cassandra 0.5 versus 0.6, and v0.6 has significant
speedups.
i only anticipate about 2,000,000 hot rows, each with about 4k of data.
however, we will have a LOT of rows that just aren't used. right now,
the data is just one column with a blob of text in it. but i have new
data coming in constantly, so not sure how this affects the cache, etc.
i'm ske
The cache is a "second-chance FIFO" from this library:
http://code.google.com/p/concurrentlinkedhashmap/source/browse/trunk/src/java/com/reardencommerce/kernel/collections/shared/evictable/ConcurrentLinkedHashMap.java
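For the curious, here is a minimal, single-threaded sketch of the
second-chance FIFO idea -- illustrative only, and far simpler than the
lock-free ConcurrentLinkedHashMap linked above. Each entry carries a
reference bit that is set on access; eviction pops the head of the FIFO
queue, and a referenced entry gets recycled to the tail once (its
"second chance") before it can actually be evicted:

    import java.util.ArrayDeque;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of a second-chance FIFO cache (not the real implementation).
    class SecondChanceFifoCache<K, V> {
        private static final class Entry<K, V> {
            final K key;
            V value;
            boolean referenced;                    // set on access
            Entry(K key, V value) { this.key = key; this.value = value; }
        }

        private final int capacity;                // must be > 0
        private final Map<K, Entry<K, V>> index = new HashMap<K, Entry<K, V>>();
        private final ArrayDeque<Entry<K, V>> fifo = new ArrayDeque<Entry<K, V>>();

        SecondChanceFifoCache(int capacity) { this.capacity = capacity; }

        V get(K key) {
            Entry<K, V> e = index.get(key);
            if (e == null) return null;
            e.referenced = true;                   // eviction will spare it once
            return e.value;
        }

        void put(K key, V value) {
            Entry<K, V> e = index.get(key);
            if (e != null) { e.value = value; e.referenced = true; return; }
            if (index.size() >= capacity) evictOne();
            e = new Entry<K, V>(key, value);
            index.put(key, e);
            fifo.addLast(e);
        }

        private void evictOne() {
            while (true) {
                Entry<K, V> head = fifo.pollFirst();
                if (head.referenced) {             // second chance:
                    head.referenced = false;
                    fifo.addLast(head);            // recycle to the tail
                } else {
                    index.remove(head.key);        // evict for real
                    return;
                }
            }
        }
    }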
That sounds like an awful lot of churn given the size of the queue and
the number of ...
Right.
As a rule of thumb you should only use one of {key cache, row cache}
at a time on a given CF.
On Tue, Mar 16, 2010 at 3:17 PM, B. Todd Burruss wrote:
> i think i better make sure i understand how the row/key cache works. i
> currently have both set to 10%. so if cassandra needs to read ...
it could be the "really big row during compaction" limitation, then, too
(at this point compaction deserializes each row entirely in memory, so a
single very large row can blow up the heap).
On Tue, Mar 16, 2010 at 3:04 PM, B. Todd Burruss wrote:
> the row/key cache is set to 10%, but memory usage stays good until an
> anticompaction, hinted handoff, etc starts. (of course maybe i simply don't
> pay attention to memory until something bad happens)
i think i better make sure i understand how the row/key cache works. i
currently have both set to 10%. so if cassandra needs to read data from
an sstable that has 100 million rows, it will cache 10,000,000 rows of
data from that sstable? so if my row is ~4k, then we're looking at
~40gb used.
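The arithmetic behind that estimate, as a runnable back-of-the-envelope
check (numbers taken straight from the question above; note this ignores
per-entry JVM overhead, which only makes the picture worse):

    public class CacheFootprint {
        public static void main(String[] args) {
            long totalRows  = 100000000L;   // rows in the sstable
            double fraction = 0.10;         // row cache set to 10%
            long rowBytes   = 4 * 1024L;    // ~4k of data per row

            long cachedRows = (long) (totalRows * fraction);
            long cacheBytes = cachedRows * rowBytes;
            System.out.printf("%,d cached rows -> ~%d GB%n",
                    cachedRows, cacheBytes / 1000000000L);
            // prints: 10,000,000 cached rows -> ~40 GB
        }
    }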
the row/key cache is set to 10%, but memory usage stays good until an
anticompaction, hinted handoff, etc starts. (of course maybe i simply
don't pay attention to memory until something bad happens)
doesn't sound like anyone else is having trouble, so i'll keep reviewing my
settings for cache, k...
it's almost certainly GC storming due to memory pressure. (matching
the thread dump against the threads using CPU in top will confirm this.)
reducing your cache sizes might be the best option since you already
have a 44GB heap.
On Tue, Mar 16, 2010 at 12:17 PM, B. Todd Burruss wrote:
> thx, i'll try that next time ...
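A small helper for that top/jstack cross-check (my sketch, not anything
in Cassandra): on Linux, 'top -H' lists busy threads by decimal LWP id,
while the jstack / kill -3 dump labels each thread with nid=0x<hex>.
Converting the ids lets you grep the dump for the hot threads; if a busy
nid turns out to be one of the VM's "GC task thread" entries, that
confirms the GC storming theory:

    // Usage: java TidToNid 12345 67890
    public class TidToNid {
        public static void main(String[] args) {
            for (String arg : args) {
                long tid = Long.parseLong(arg.trim());  // LWP id from 'top -H'
                System.out.printf("%d -> nid=0x%x%n", tid, tid);
            }
        }
    }

e.g. 12345 becomes nid=0x3039, which you can then search for in the dump.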
thx, i'll try that next time, already restarted node .. but i will say
the exact thing happened on another node as well.
Jonathan Ellis wrote:
> You can still get a thread list w/ jstack, though.
> On Tue, Mar 16, 2010 at 11:46 AM, Gary Dusbabek wrote:
>> On Tue, Mar 16, 2010 at 11:39, B. Todd Burruss wrote: ...
You can still get a thread list w/ jstack, though.
On Tue, Mar 16, 2010 at 11:46 AM, Gary Dusbabek wrote:
> On Tue, Mar 16, 2010 at 11:39, B. Todd Burruss wrote:
>> any other ideas on how to troubleshoot? i have tried kill -3 in
>> the past but don't know where cassandra writes the console out ...
On Tue, Mar 16, 2010 at 11:39, B. Todd Burruss wrote:
> any other ideas on how to troubleshoot? i have tried kill -3 in
> the past but don't know where cassandra writes the console out. i'll look
> at scripts.
>
I have a sneaking suspicion that unless you're running with '-f',
the thread dump is going to wherever the startup script redirected stdout.
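If the dump's destination stays elusive, one in-process fallback (a
sketch, assuming you can run code inside the JVM, e.g. via JMX or a
small admin hook) is to pull the same thread list through the standard
ThreadMXBean; for a separate process, plain 'jstack <pid>' remains the
simplest route:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;

    public class ThreadDump {
        public static void main(String[] args) {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            // true, true: include locked monitors and synchronizers
            for (ThreadInfo t : mx.dumpAllThreads(true, true)) {
                System.out.println("\"" + t.getThreadName() + "\" "
                        + t.getThreadState());
                for (StackTraceElement frame : t.getStackTrace()) {
                    System.out.println("    at " + frame);
                }
                System.out.println();
            }
        }
    }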