[
https://issues.apache.org/jira/browse/CASSANDRA-16471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
David Capwell updated CASSANDRA-16471:
--------------------------------------
Fix Version/s: (was: 4.0.x)
> org.apache.cassandra.io.util.DataOutputBuffer#scratchBuffer is around 50% of
> all memory allocations
> ---------------------------------------------------------------------------------------------------
>
> Key: CASSANDRA-16471
> URL: https://issues.apache.org/jira/browse/CASSANDRA-16471
> Project: Cassandra
> Issue Type: Improvement
> Components: Local/Caching
> Reporter: David Capwell
> Assignee: David Capwell
> Priority: Normal
> Fix For: 4.x
>
> Attachments: Screen Shot 2021-02-25 at 3.34.28 PM.png, Screen Shot
> 2021-02-25 at 4.14.19 PM.png
>
>
> While running workflows to compare 3.0 with trunk, we found that allocations
> and GC are significantly higher for a write-mostly workload (22% read, 3%
> delete, 75% write); below is what we saw for a 2h run.
> Allocations
> 3.0: 1.64TB
> 4.0: 2.99TB
> GC Events
> 3.0: 7.39k events
> 4.0: 13.93k events
> When looking at the allocation output, we saw the following for memory allocations:
> !https://issues.apache.org/jira/secure/attachment/13021238/Screen%20Shot%202021-02-25%20at%203.34.28%20PM.png!
> Here we see that org.apache.cassandra.io.util.DataOutputBuffer#expandToFit
> accounts for around 52% of the memory allocations. Looking at this logic, I see
> that the allocations are on-heap and the buffer is constantly thrown away (as a
> means to allow GC to clean it up).
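> As a rough illustration of the direction such a fix can take (a minimal sketch
> under assumed names and constants, not the actual patch or a Cassandra API):
> keep a per-thread scratch buffer and replace it only when it has to grow,
> capping the size that is retained, so the hot serialization path stops
> allocating and discarding a fresh on-heap buffer on every call.
> {code:java}
> import java.nio.ByteBuffer;
>
> // Hypothetical sketch, not the CASSANDRA-16471 patch: reuse a per-thread
> // scratch buffer instead of allocating a new on-heap buffer per call, and
> // only retain it for reuse while it stays under a size cap.
> public final class ScratchBufferSketch
> {
>     private static final int MAX_RETAINED_SIZE = 1 << 20; // hypothetical 1 MiB cap
>
>     private static final ThreadLocal<ByteBuffer> SCRATCH =
>         ThreadLocal.withInitial(() -> ByteBuffer.allocate(4096));
>
>     // Returns a cleared buffer with at least `size` bytes of capacity,
>     // handing back the cached thread-local buffer when it is already big enough.
>     public static ByteBuffer getScratch(int size)
>     {
>         ByteBuffer buffer = SCRATCH.get();
>         if (buffer.capacity() < size)
>         {
>             // Grow to the next power of two at or above the requested size;
>             // keep the new buffer for reuse only if it is not too large.
>             buffer = ByteBuffer.allocate(Integer.highestOneBit(size - 1) << 1);
>             if (buffer.capacity() <= MAX_RETAINED_SIZE)
>                 SCRATCH.set(buffer);
>         }
>         buffer.clear();
>         return buffer;
>     }
>
>     public static void main(String[] args)
>     {
>         ByteBuffer a = getScratch(1024);
>         ByteBuffer b = getScratch(512);
>         // Both requests fit in the cached buffer, so no new allocation occurred.
>         System.out.println("reused same buffer: " + (a == b));
>     }
> }
> {code}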
> With the patch, allocations/GC are the following:
> Allocations
> 3.0: 1.64TB
> 4.0 w/ patch: 1.77TB
> 4.0: 2.99TB
> GC Events
> 3.0: 7.39k events
> 4.0 w/ patch: 8k events
> 4.0: 13.93k events
> With the patch, this accounts for only 0.8% of allocations:
> !https://issues.apache.org/jira/secure/attachment/13021239/Screen%20Shot%202021-02-25%20at%204.14.19%20PM.png!