> Otherwise, force a heap dump after a full GC and take a look to see
> what's referencing all the memory.
>
> On Fri, May 6, 2011 at 12:25 PM, Serediuk, Adam
> wrote:
>> We're troubleshooting a memory usage problem during batch reads. We've spent
>> the last few days profiling and trying different GC settings.
We're troubleshooting a memory usage problem during batch reads. We've spent
the last few days profiling and trying different GC settings. The symptoms are
that after a certain amount of time during reads, one or more nodes in the
cluster will exhibit extreme memory pressure followed by a gc storm.
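(For reference, the "force a heap dump after a full GC" suggestion above can be
done with jmap; the live option makes the JVM perform a full GC before writing
the dump, so only reachable objects end up in the file. A rough sketch, assuming
a HotSpot JDK and that <pid> is the Cassandra process:

    # live => full GC first, dump only reachable objects
    jmap -dump:live,format=b,file=/tmp/cassandra-heap.hprof <pid>

The resulting .hprof can then be loaded into Eclipse MAT or jhat to see what is
holding the memory.)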
Having a well-known node configuration that is trivial (one step) to create is
your best maintenance bet. We are using 4-disk nodes in the following
configuration:
disk1: boot_raid1 os_raid1 cassandra_commit_log
disk2: boot_raid1 os_raid1 cassandra_data_dir_raid0
disk3: cassandra_data_dir_raid0
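With a layout like that, the corresponding cassandra.yaml entries just point at
the two mount points. A sketch with made-up paths (assuming the raid0 data
array is mounted at /cassandra/data and the commit log volume at
/cassandra/commitlog):

    data_file_directories:
        - /cassandra/data
    commitlog_directory: /cassandra/commitlog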
> ... more heavily loaded than the others, and are correctly pushing queries
> to other replicas.
>
> On Tue, May 3, 2011 at 12:47 PM, Serediuk, Adam
> wrote:
>> I just ran a test and we do not see that behavior with dynamic snitch
>> disabled. All nodes appear to be
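(For anyone wanting to repeat that test: in 0.7 the dynamic snitch is toggled in
cassandra.yaml and takes effect after a restart; if I'm remembering the option
name correctly, it is simply

    dynamic_snitch: false

on each node.)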
> ... at 12:31 PM, Serediuk, Adam
> wrote:
>> We appear to have encountered an issue with cassandra 0.7.5 after upgrading
>> from 0.7.2. While doing a batch read using a get_range_slice against the
>> ranges an individual node is master for, we are able to reproduce
>> consistently
We appear to have encountered an issue with cassandra 0.7.5 after upgrading
from 0.7.2. While doing a batch read using a get_range_slice against the ranges
an individual node is master for, we are able to reproduce consistently that the
last two nodes in the ring, regardless of the ring size (we
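For context, a batch read like the one described might look roughly like the
sketch below against the 0.7 Thrift API: a get_range_slices call scoped to the
token range a single node owns. Host, keyspace, column family and token values
are placeholders, and the import paths assume the stock Python bindings
generated from cassandra.thrift:

    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol
    from cassandra import Cassandra
    from cassandra.ttypes import (ColumnParent, SlicePredicate, SliceRange,
                                  KeyRange, ConsistencyLevel)

    # connect to one node (9160 = default Thrift port, framed transport in 0.7)
    socket = TSocket.TSocket('node1.example.com', 9160)
    transport = TTransport.TFramedTransport(socket)
    client = Cassandra.Client(TBinaryProtocol.TBinaryProtocol(transport))
    transport.open()
    client.set_keyspace('MyKeyspace')

    parent = ColumnParent(column_family='MyColumnFamily')
    predicate = SlicePredicate(slice_range=SliceRange(start='', finish='',
                                                      reversed=False, count=1000))
    # placeholder token range; in practice, the range the target node is master for
    key_range = KeyRange(start_token='0', end_token='1234567890', count=500)

    for key_slice in client.get_range_slices(parent, predicate, key_range,
                                             ConsistencyLevel.ONE):
        pass  # process key_slice.key / key_slice.columns

    transport.close()

A real batch job would normally page through the range in chunks by advancing
start_token between calls, but the call shape is the same.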