Evgeniy Efimov created KAFKA-7305:
-------------------------------------
Summary: Offsets should not expire at the record-entry level
Key: KAFKA-7305
URL: https://issues.apache.org/jira/browse/KAFKA-7305
Project: Kafka
Issue Type: Bug
Components: clients
Affects Versions: 1.1.1
Reporter: Evgeniy Efimov
Hello!
I'm using Kafka 1.1.1 and have configured the __consumer_offsets topic to keep
log entries forever. I have a consumer that starts from time to time to read
some topic, and offsets for its consumer group are already stored. When an
offset expires according to the ExpirationTime set in its record inside the
__consumer_offsets topic, the consumer is no longer able to continue processing
the next time it starts: all subsequent calls to the _OffsetFetch_ API return
-1. This is unobvious, since an entry for the consumer group still exists in
the __consumer_offsets topic. The expected behavior in this situation is to
reload the offset back into the broker-level cache and return it to the
client. The current workaround is to increase the _offsets.retention.minutes_
parameter.
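For reference, the workaround is applied in the broker configuration (the value of 10080 minutes, i.e. one week, is just an illustrative choice):

```
# server.properties (broker)
# Keep committed offsets for 7 days before they expire
# (the default in 1.1.x is 1440 minutes, i.e. 24 hours)
offsets.retention.minutes=10080
```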
As a solution for such cases I suggest:
* removing the expiration time from offset records entirely and using the
topic's retention time as the parameter controlling offset expiration;
* changing the caching algorithm on the broker: the new algorithm should be
limited by a memory-consumption parameter defined in the server configuration
file, with the option of a _no limit_ value;
* while memory consumption is below the configured value, offsets are kept in
the cache and everything works the same as it does now;
* when memory consumption exceeds the configured value, the broker should
evict entries from the cache according to a cache replacement policy,
preferably LRU;
* if a client issues an _OffsetFetch_ request for an offset that is not in the
cache, the broker reads the __consumer_offsets topic, loads that offset into
the cache, and returns it to the client.
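The cache behavior proposed above can be sketched as a size-bounded LRU map. This is only an illustration of the eviction/reload idea, not broker code: the class name, the string key, and the _loadFromLog_ helper are all hypothetical, and the limit here is an entry count rather than the memory-consumption parameter suggested above.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a bounded offset cache with LRU eviction and reload-on-miss.
// All names are illustrative, not actual Kafka broker internals.
public class OffsetLruCache {
    private final Map<String, Long> cache;

    public OffsetLruCache(final int maxEntries) {
        // accessOrder=true gives LRU iteration order; removeEldestEntry
        // evicts the least recently used entry once the limit is exceeded.
        this.cache = new LinkedHashMap<String, Long>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public void put(String groupTopicPartition, long offset) {
        cache.put(groupTopicPartition, offset);
    }

    public Long fetch(String groupTopicPartition) {
        Long offset = cache.get(groupTopicPartition);
        if (offset == null) {
            // Cache miss: instead of returning -1 to the client, re-read
            // the __consumer_offsets topic and re-populate the cache.
            offset = loadFromLog(groupTopicPartition);
            if (offset != null) {
                cache.put(groupTopicPartition, offset);
            }
        }
        return offset;
    }

    // Hypothetical placeholder for scanning the __consumer_offsets log;
    // returns null here because the scan is not implemented in this sketch.
    private Long loadFromLog(String groupTopicPartition) {
        return null;
    }
}
```

With this scheme an offset never "expires" out of reach: eviction only frees memory, and the log (governed by the topic's own retention) remains the source of truth.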
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)