Re: ParNew (promotion failed)

2011-04-01 Thread ruslan usifov
Also, after all these messages I see the following in stdout.log:
[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor3]
[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor2]
[Unloading class sun.reflect.GeneratedSerializationConstructorAccessor1]
[Unloading class sun.r

Re: ParNew (promotion failed)

2011-03-28 Thread Peter Schuller
> But he's talking about "promotion failed" which is about heap fragmentation, not "concurrent mode failure" which would indicate CMS too late. So increasing young generation size + tenuring threshold is probably the way to go (especially in a read-heavy workload; increasing tenuring will
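As a rough illustration of the tuning suggested above, assuming a 0.7-style cassandra-env.sh, the young generation size and tenuring threshold could be adjusted along these lines (the sizes and the survivor-ratio setting are illustrative assumptions, not recommendations from this thread):

    # cassandra-env.sh -- illustrative values, tune against your own GC logs
    MAX_HEAP_SIZE="3G"
    HEAP_NEWSIZE="800M"                                # larger young gen (passed to the JVM as -Xmn)
    JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"           # keep survivor spaces reasonably sized
    JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=8"    # let objects age longer before promotion

The intent is to let short-lived garbage die in the young generation instead of being promoted into a fragmented old generation.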

Re: ParNew (promotion failed)

2011-03-27 Thread Jonathan Ellis
On Sat, Mar 26, 2011 at 5:21 PM, Peter Schuller wrote:
>> So to resolve this I must tune the young generation (HEAP_NEWSIZE / -Xmn), and tune the in_memory_compaction_limit_in_mb config parameter?
>
> More likely adjust the initial occupancy trigger and/or the heap size. Probably just the latter. This

Re: ParNew (promotion failed)

2011-03-26 Thread Peter Schuller
>> So to resolve this I must tune the young generation (HEAP_NEWSIZE / -Xmn), and tune the in_memory_compaction_limit_in_mb config parameter?
>
> More likely adjust the initial occupancy trigger and/or the heap size. Probably just the latter. This is assuming you're on 0.7 with mostly default JVM opti

Re: ParNew (promotion failed)

2011-03-26 Thread Peter Schuller
> So to resolve this I must tune the young generation (HEAP_NEWSIZE / -Xmn), and tune the in_memory_compaction_limit_in_mb config parameter?

More likely adjust the initial occupancy trigger and/or the heap size. Probably just the latter. This is assuming you're on 0.7 with mostly default JVM options. See
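The "initial occupancy trigger" referred to here is the CMS initiating-occupancy setting. A minimal sketch of the two knobs, assuming a 0.7-style cassandra-env.sh with the stock CMS options (the concrete numbers are illustrative assumptions):

    # cassandra-env.sh -- illustrative values only
    MAX_HEAP_SIZE="4G"                                           # "adjust the heap size"
    JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=65"   # start CMS earlier (the script ships 75)
    JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"

Lowering the fraction makes CMS start its concurrent collections sooner, leaving more headroom before the old generation fills up or fragments.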

Re: ParNew (promotion failed)

2011-03-26 Thread ruslan usifov
2011/3/23 ruslan usifov
> Hello
>
> Sometimes I see the following message in the GC log:
>
> 2011-03-23T14:40:56.049+0300: 14897.104: [GC 14897.104: [ParNew (promotion failed)
> Desired survivor size 41943040 bytes, new threshold 2 (max 2)
> - age 1: 5573024 bytes, 5573024 total
> - age 2: 5
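For reference, a tenuring-distribution dump like the one above comes from HotSpot GC logging flags along these lines (a sketch; cassandra-env.sh of that era carries similar options, commented out):

    # enable verbose GC logging with per-age survivor statistics
    JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
    JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
    JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"

The "Desired survivor size" and per-age lines show how much data survives each young collection, which is what the tenuring-threshold advice elsewhere in this thread is based on.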

Re: ParNew (promotion failed)

2011-03-24 Thread ruslan usifov
2011/3/24 Erik Onnen
> It's been about 7 months now but at the time G1 would regularly segfault for me under load on Linux x64. I'd advise extra precautions in testing and make sure you test with representative load.

Which Java version do you use?

Re: ParNew (promotion failed)

2011-03-23 Thread Erik Onnen
It's been about 7 months now but at the time G1 would regularly segfault for me under load on Linux x64. I'd advise extra precautions in testing and make sure you test with representative load.

Re: ParNew (promotion failed)

2011-03-23 Thread Narendra Sharma
I haven't used G1. I remember someone shared their experience with G1 in detail. The bottom line is that you need to test it for your deployment and, based on the results, conclude whether it will work for you. I believe G1 will do well for a small heap. -Naren On Wed, Mar 23, 2011 at 1:47 PM, ruslan usifo
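If you do decide to trial G1 as discussed in these messages, on the Java 6 builds current at the time the switch in cassandra-env.sh looks roughly like the sketch below (an assumption-laden illustration: the stock ParNew/CMS options would need to be removed, and G1 was still marked experimental on Java 6):

    # replace the stock ParNew/CMS flags with G1 -- test under representative load first
    JVM_OPTS="$JVM_OPTS -XX:+UnlockExperimentalVMOptions"
    JVM_OPTS="$JVM_OPTS -XX:+UseG1GC"

As Erik and Narendra note, this was not a safe default at the time; it is only worth trying with careful load testing.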

Re: ParNew (promotion failed)

2011-03-23 Thread ruslan usifov
2011/3/23 Narendra Sharma
> I understand that. The overhead could be as high as 10x of the memtable data size. So the overall overhead for the 16 CFs collectively in your case could be 300*10 = 3G.

And what about G1 GC? It should prevent memory fragmentation, but a post earlier in this thread said that i

Re: ParNew (promotion failed)

2011-03-23 Thread Narendra Sharma
I understand that. The overhead could be as high as 10x of the memtable data size. So the overall overhead for the 16 CFs collectively in your case could be 300*10 = 3G. Thanks, Naren
On Wed, Mar 23, 2011 at 11:18 AM, ruslan usifov wrote:
> 2011/3/23 Narendra Sharma
>> I think it is due to fragment

Re: ParNew (promotion failed)

2011-03-23 Thread ruslan usifov
2011/3/23 Narendra Sharma
> I think it is due to fragmentation in the old gen, because of which objects from the survivor area cannot be moved into the old gen. 300MB of memtable data looks high for a 3G heap. I have learned that the in-memory overhead of a memtable can be as high as 10x its data size. So eithe

Re: ParNew (promotion failed)

2011-03-23 Thread Narendra Sharma
I think it is due to fragmentation in the old gen, because of which objects from the survivor area cannot be moved into the old gen. 300MB of memtable data looks high for a 3G heap. I have learned that the in-memory overhead of a memtable can be as high as 10x its data size. So either increase the heap or reduce the me
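As a back-of-envelope check on the numbers in this exchange (the 10x factor is the rough rule of thumb quoted above, not a measured value):

    # ~300 MB of memtable data across the 16 column families
    # estimated in-memory footprint: 300 MB * 10 = ~3000 MB = ~3 GB
    # i.e. roughly the entire 3 GB heap, leaving little headroom and
    # plenty of opportunity for old-gen fragmentation

Which is why the advice here is to either grow the heap or shrink the memtable thresholds.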