On 04/05/2011 03:04 PM, Chris Burroughs wrote:
> I have gc logs if anyone is interested.
This is from a node with standard io, jna enabled, but limits were not
set for mlockall to succeed. One can see -/+ buffers/cache free
shrinking and the C* pid's RSS growing.
Includes several days of gc logs.
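For anyone trying to reproduce these measurements, a rough sketch of the GC
logging flags and checks involved (log path and pid are placeholders, not
the exact settings used on this node):

  # JVM flags to capture a verbose GC log
  -Xloggc:/var/log/cassandra/gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

  # mlockall only succeeds if the locked-memory limit allows it ("unlimited")
  ulimit -l

  # page cache overall vs. the Cassandra process RSS
  free -m
  ps -o rss,vsz -p <cassandra-pid>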
On 04/05/2011 04:38 PM, Peter Schuller wrote:
>> - Different collectors: -XX:+UseParallelGC -XX:+UseParallelOldGC
>
> Unless you also removed the -XX:+UseConcMarkSweepGC I *think* it takes
> precedence, so that the above options would have no effect. I didn't
> test. In either case, did you definitely confirm CMS was no longer being used?
> - Different collectors: -XX:+UseParallelGC -XX:+UseParallelOldGC
Unless you also removed the -XX:+UseConcMarkSweepGC I *think* it takes
precedence, so that the above options would have no effect. I didn't
test. In either case, did you definitely confirm CMS was no longer
being used? (Should be p
This is a minor followup to this thread which includes required context:
http://www.mail-archive.com/user@cassandra.apache.org/msg09279.html
I haven't solved the problem, but since negative results can also be
useful I thought I would share them. Things I tried unsuccessfully (on
individual node
The test was inconclusive because we decommissioned that cluster before
it had been running long enough to exhibit the problem.
-ryan
On Wed, Mar 16, 2011 at 7:27 PM, Zhu Han wrote:
>
>
> On Thu, Feb 3, 2011 at 1:49 AM, Ryan King wrote:
>>
>> On Wed, Feb 2, 2011 at 6:22 AM, Chris Burroughs
>> wrote
On Thu, Mar 17, 2011 at 10:27 AM, Zhu Han wrote:
>
>
> On Thu, Feb 3, 2011 at 1:49 AM, Ryan King wrote:
>
>> On Wed, Feb 2, 2011 at 6:22 AM, Chris Burroughs
>> wrote:
>> > On 01/28/2011 09:19 PM, Chris Burroughs wrote:
>> >> Thanks Oleg and Zhu. I swear that wasn't a new hotspot version when I
On Thu, Feb 3, 2011 at 1:49 AM, Ryan King wrote:
> On Wed, Feb 2, 2011 at 6:22 AM, Chris Burroughs
> wrote:
> > On 01/28/2011 09:19 PM, Chris Burroughs wrote:
> >> Thanks Oleg and Zhu. I swear that wasn't a new hotspot version when I
> >> checked, but that's obviously not the case. I'll update
On Wed, Feb 2, 2011 at 10:29 AM, Chris Burroughs
wrote:
> On 02/02/2011 12:49 PM, Ryan King wrote:
>> We're seeing a similar problem with one of our clusters (but over a
>> longer time scale). It's possible that it's not a leak, but just
>> fragmentation. Unless you've told it otherwise, the jvm uses glibc's
On 02/02/2011 12:49 PM, Ryan King wrote:
> We're seeing a similar problem with one of our clusters (but over a
> longer time scale). It's possible that it's not a leak, but just
> fragmentation. Unless you've told it otherwise, the jvm uses glibc's
> malloc implementation for off-heap allocations. We
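One hedged way to poke at the fragmentation theory (pid is a placeholder,
and MALLOC_ARENA_MAX only exists on glibc builds with per-thread arenas):

  # large anonymous mappings outside the Java heap; malloc arenas show up here
  pmap -x <cassandra-pid> | sort -n -k2 | tail -20

  # experiment: cap the number of malloc arenas before starting Cassandra
  export MALLOC_ARENA_MAX=4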
On Wed, Feb 2, 2011 at 6:22 AM, Chris Burroughs
wrote:
> On 01/28/2011 09:19 PM, Chris Burroughs wrote:
>> Thanks Oleg and Zhu. I swear that wasn't a new hotspot version when I
>> checked, but that's obviously not the case. I'll update one node to the
>> latest as soon as I can and report back.
On 01/28/2011 09:19 PM, Chris Burroughs wrote:
> Thanks Oleg and Zhu. I swear that wasn't a new hotspot version when I
> checked, but that's obviously not the case. I'll update one node to the
> latest as soon as I can and report back.
RSS over 48 hours with java 6 update 23:
http://img716.ima
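For reference, a crude way to collect this kind of RSS-over-time data for
graphing (pid and log file are placeholders):

  # sample the Cassandra RSS (in KB) once a minute
  while true; do echo "$(date +%s) $(ps -o rss= -p <cassandra-pid>)"; sleep 60; done >> rss.log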
On 01/28/2011 04:12 AM, Zhu Han wrote:
> Chris,
>
> Somebody else and I have the same problem as you, and reported it here:
> http://www.apacheserver.net/Very-high-memory-utilization-not-caused-by-mmap-on-sstables-at1082970.htm
>
> [NB: It is not solved although the title says so. Some responses from me
> in the thread are not accurate.]
On 01/28/2011 12:42 PM, sridhar basam wrote:
> What about your permgen usage? Do you track that? Use something like "jstat
> -gc -t 5s 100" to track it. Or turn up verbose GC on your command
> line options to see what is happening.
>
http://img59.imageshack.us/img59/1056/permgen.png
This is ove
What about your permgen usage? Do you track that? Use something like "jstat
-gc -t 5s 100" to track it. Or turn up verbose GC on your command
line options to see what is happening.
Sridhar
On Fri, Jan 28, 2011 at 11:38 AM, Chris Burroughs wrote:
> On 01/28/2011 11:29 AM, Jake Luciani wrote:
>
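Spelled out with a pid (which the snippet above leaves out), that jstat
invocation would look roughly like this; on a Java 6 VM the PC/PU columns
are permgen capacity and usage in KB:

  # one GC/permgen sample every 5 seconds, 100 samples, with a timestamp column
  jstat -gc -t <cassandra-pid> 5s 100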
On 01/28/2011 11:29 AM, Jake Luciani wrote:
> Are you using a row cache? If so, what is it set to? In general it should
> not be a percentage.
>
row_cache_size == row_cache_capacity before the start of RSS data
collection. According to jconsole, the heap size is not growing larger than
the
Are you using a row cache? If so, what is it set to? In general it should
not be a percentage.
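For context, in 0.6 the row cache is set per column family with the
RowsCached attribute in storage-conf.xml, which (as far as I recall) accepts
either an absolute row count or a percentage. A quick check of what a node
is actually using (config path is a guess):

  # a percentage value here is what Jake is warning about
  grep -i RowsCached /etc/cassandra/storage-conf.xml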
On Thu, Jan 27, 2011 at 12:23 PM, Chris Burroughs wrote:
> We have a 6 node Cassandra 0.6.8 cluster running on boxes with 4 GB of
> RAM. Over the course of several weeks cached memory slowly decreases
On 01/28/2011 10:51 AM, sridhar basam wrote:
> On Thu, Jan 27, 2011 at 12:23 PM, Chris Burroughs wrote:
>
>> java -version
>> java version "1.6.0_20"
>> Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
>> Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01, mixed mode)
>>
>> cmd line arg (paths edited):
On Thu, Jan 27, 2011 at 12:23 PM, Chris Burroughs wrote:
> java -version
> java version "1.6.0_20"
> Java(TM) SE Runtime Environment (build 1.6.0_20-b02)
> Java HotSpot(TM) 64-Bit Server VM (build 16.3-b01, mixed mode)
>
> cmd line arg (paths edited):
> /usr/java/jdk1.6.0_20/bin/java -Xms1500M -c
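For anyone comparing setups, a hedged way to dump the full command line and
the VM flags of a running node (pid is a placeholder):

  # full command line of the running process
  tr '\0' ' ' < /proc/<cassandra-pid>/cmdline; echo

  # VM flags as seen by the running JVM
  jinfo -flags <cassandra-pid>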
Chris,
Somebody else and I have the same problem as you, and reported it here:
http://www.apacheserver.net/Very-high-memory-utilization-not-caused-by-mmap-on-sstables-at1082970.htm
[NB: It is not solved although the title says so. Some responses from me in
the thread are not accurate.]
IMHO, you
On Fri, Jan 28, 2011 at 4:15 PM, Oleg Anastasyev wrote:
> >
> > http://img24.imageshack.us/img24/1754/cassandrarss.png
> >
> This looks like Cassandra leaking memory inside the Java heap.
> I remember there were some leak issues with Java versions < 1.6.0_21;
> correct me if I'm wrong. Try to upgrade
We have a 6 node Cassandra 0.6.8 cluster running on boxes with 4 GB of
RAM. Over the course of several weeks cached memory slowly decreases
until Cassandra is restarted or something bad happens (i.e. the OOM killer).
Performance obviously suffers as cached memory is no longer available.
Here is a graph:
http://img24.imageshack.us/img24/1754/cassandrarss.png
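A small sketch, assuming a stock Linux box, of how to confirm the OOM killer
fired and watch the cached-memory trend:

  # kernel log lines left behind by the OOM killer
  dmesg | grep -i 'killed process'

  # memory stats (in MB) every 60 seconds; watch the "cache" column shrink
  vmstat -S M 60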