> I was thinking of decreasing concurrent_compactors and
> in_memory_compaction_limit to go easy on GC
I've used that technique to reduce gc pressure during compactions before.
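For reference, that change lives in cassandra.yaml and looks something like
this (the numbers below are illustrative, not a recommendation):

    # cassandra.yaml -- illustrative values, tune for your workload
    concurrent_compactors: 2               # e.g. down from 8
    in_memory_compaction_limit_in_mb: 32   # e.g. down from 64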
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 6/07/2012, at
Our young gen size = 800 MB, SurvivorRatio = 8, eden size = 640 MB. All
objects/bytes generated during compaction are garbage, right?
During compaction, with in_memory_compaction_limit=64MB and
concurrent_compactors=8, there is a lot of pressure on ParNew sweeps.
I was thinking of decreasing concurrent_compactors and
in_memory_compaction_limit to go easy on GC.
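For context: with SurvivorRatio=8 the young gen is split 8:1:1 between eden
and the two survivor spaces, so eden = 800 MB * 8/10 = 640 MB, which matches
the numbers quoted above. As JVM flags in cassandra-env.sh that would be
roughly:

    # illustrative flags matching the sizes quoted above
    JVM_OPTS="$JVM_OPTS -Xmn800M"             # total young gen
    JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"  # eden = 8/10 of young = 640M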
@ravi, you can increase the young gen size, keep a higher tenuring
threshold, or increase the survivor ratio.
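Those knobs go in cassandra-env.sh; a sketch with illustrative values only
(note SurvivorRatio is the eden-to-survivor ratio, so each survivor space is
young/(SurvivorRatio+2)):

    # cassandra-env.sh -- illustrative values, tune for your heap
    JVM_OPTS="$JVM_OPTS -Xmn1200M"                   # larger young gen
    JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=4"  # age objects longer in survivor spaces
    JVM_OPTS="$JVM_OPTS -XX:SurvivorRatio=8"         # eden:survivor sizing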
On Fri, Jul 6, 2012 at 4:03 AM, aaron morton wrote:
> Ideally we would like to collect maximum garbage from ParNew itself, during
> compactions. What are the steps to take towards achieving this?
I'm not sure what you are asking.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 5/07/2012,
We have modified maxTenuringThreshold from 1 to 5. Maybe it is causing
problems. We will change it back to 1 and see how the system behaves.
concurrent_compactors=8. We will reduce this, as our system won't be able to
handle that many compactions at the same time anyway. Think it will ease GC
too.
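Those two changes, as a sketch (values illustrative):

    # cassandra-env.sh: revert the tenuring threshold
    JVM_OPTS="$JVM_OPTS -XX:MaxTenuringThreshold=1"

    # cassandra.yaml: fewer parallel compactions, e.g.
    concurrent_compactors: 2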
It *may* have been compaction from the repair, but it's not a big CF.
I would look at the logs to see how much data was transferred to the node. Was
there a compaction going on while the GC storm was happening? Do you have a
lot of secondary indexes?
If you think it correlated to compaction
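To check those points, something like this works (log path and line format
vary by version and packaging):

    # GC pauses logged by Cassandra's GCInspector
    grep GCInspector /var/log/cassandra/system.log

    # compactions currently running on the node
    nodetool compactionstats

    # data streamed to/from the node, e.g. during repair
    nodetool netstats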