[
https://issues.apache.org/jira/browse/KAFKA-3300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jiangjie Qin updated KAFKA-3300:
--------------------------------
Affects Version/s: 0.9.0.1
> Calculate the initial size allocation of offset index files and reduce the
> memory footprint for memory mapped files.
> --------------------------------------------------------------------------------------------------------------------
>
> Key: KAFKA-3300
> URL: https://issues.apache.org/jira/browse/KAFKA-3300
> Project: Kafka
> Issue Type: Improvement
> Affects Versions: 0.9.0.1
> Reporter: Jiangjie Qin
> Assignee: Jiangjie Qin
> Fix For: 0.10.0.0
>
>
> Currently the initial/max size of the offset index file is configured by
> {{log.index.max.bytes}}. This is the size of the offset index file for the
> active log segment until the segment rolls.
> Theoretically, we can calculate the upper bound of offset index size using
> the following formula:
> {noformat}
> log.segment.bytes / index.interval.bytes * 8
> {noformat}
> With the default settings, the bytes needed for an offset index are 1GB / 4K *
> 8 = 2MB, while the default log.index.max.bytes is 10MB.
> This means we are over-allocating at least 8MB on disk and mapping it to
> memory.
> We can probably do the following:
> 1. When creating a new offset index, calculate its size using the formula
> above.
> 2. If the result in (1) is greater than log.index.max.bytes, allocate
> log.index.max.bytes instead.
> This should significantly reduce memory usage on brokers that host a large
> number of partitions.
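The proposed sizing logic can be sketched as follows. This is a minimal illustration of the formula and cap described above, not Kafka's actual implementation; the class and method names are hypothetical.

```java
// Sketch of the proposed offset index sizing (illustrative names, not Kafka's API).
public class OffsetIndexSizing {

    // Each offset index entry is 8 bytes: a 4-byte relative offset
    // plus a 4-byte physical file position.
    static final int ENTRY_SIZE_BYTES = 8;

    /**
     * Upper bound on the offset index size for one segment: one entry per
     * index.interval.bytes of log data, capped at log.index.max.bytes.
     */
    static long initialIndexSize(long logSegmentBytes,
                                 int indexIntervalBytes,
                                 long logIndexMaxBytes) {
        long upperBound = logSegmentBytes / indexIntervalBytes * ENTRY_SIZE_BYTES;
        return Math.min(upperBound, logIndexMaxBytes);
    }

    public static void main(String[] args) {
        // Defaults: 1GB segment, 4K index interval, 10MB max index size.
        long size = initialIndexSize(1L << 30, 4096, 10L * 1024 * 1024);
        System.out.println(size); // 2097152 bytes = 2MB, well under the 10MB cap
    }
}
```

With the default settings the cap never applies (2MB < 10MB), so the per-segment mapped region shrinks by roughly 8MB; the cap only matters for unusually large segments or small index intervals.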
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)