Hey,
I think that is a bug; it should be fixed by this task:
https://issues.apache.org/jira/browse/KAFKA-6030.
We hit the same thing in our Kafka cluster, so we checked out the 0.11.0.2
version and built it ourselves.
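
If you want to confirm you are hitting the same issue, the cleaner exposes a
JMX gauge you can poll; a quick check, assuming JMX is enabled on port 9999
(adjust the URL to your setup):

  # a value that only ever grows usually means the cleaner thread has died
  bin/kafka-run-class.sh kafka.tools.JmxTool \
    --object-name kafka.log:type=LogCleanerManager,name=time-since-last-run-ms \
    --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi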

Best,
Xin

On 25.10.17, 12:04, "Manikumar" <manikumar.re...@gmail.com> wrote:

    Any errors in the log cleaner logs?
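
    (For reference, the cleaner writes to its own file, log-cleaner.log, next
    to server.log; something like the following should surface a dead cleaner
    thread. The log directory is an assumption, adjust it for your install.)

    # an uncaught exception here kills the cleaner until the broker restarts
    grep -iE "error|exception" /opt/kafka/logs/log-cleaner.log | tail -n 20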
    
    On Wed, Oct 25, 2017 at 3:12 PM, Elmar Weber <i...@elmarweber.org> wrote:
    
    > Hello,
    >
    > I'm having trouble getting Kafka to compact a topic. It's over 300 GB and
    > has enough segments to warrant cleaning; it should only be about 40 GB
    > (there is a copy in a DB that is unique on the key). Below are the configs
    > we have (broker defaults) and the topic overrides.
    >
    >
    > Is there something I'm missing about which setting overrides which one, or
    > is something else configured wrongly?
    >
    > I set retention.ms and delete.retention.ms manually on the topic after
    > creation, and some segments had already been created by then.
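    >
    > (For context, overrides like these are set with kafka-configs.sh along the
    > following lines; "my-topic" is a placeholder for the real topic name:)
    >
    > # set the overrides on the existing topic (0.11 manages configs via ZK)
    > bin/kafka-configs.sh --zookeeper zookeeper:2181 --alter \
    >   --entity-type topics --entity-name my-topic \
    >   --add-config retention.ms=3600000,delete.retention.ms=3600000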
    >
    > Kafka version 0.11
    >
    > Server defaults for the topic (the settings in effect when the log was
    > created):
    >
    > compression.type -> producer
    > message.format.version -> 0.11.0-IV2
    > file.delete.delay.ms -> 60000
    > max.message.bytes -> 2097152
    > min.compaction.lag.ms -> 0
    > message.timestamp.type -> CreateTime
    > min.insync.replicas -> 1
    > segment.jitter.ms -> 0
    > preallocate -> false
    > min.cleanable.dirty.ratio -> 0.5
    > index.interval.bytes -> 4096
    > unclean.leader.election.enable -> false
    > retention.bytes -> -1
    > delete.retention.ms -> 86400000
    > cleanup.policy -> compact
    > flush.ms -> 9223372036854775807
    > segment.ms -> 604800000
    > segment.bytes -> 1073741824
    > retention.ms -> -1
    > message.timestamp.difference.max.ms -> 9223372036854775807
    > segment.index.bytes -> 10485760
    > flush.messages -> 9223372036854775807
    >
    > Topic overrides (applied after creation):
    >
    > retention.ms=3600000
    > delete.retention.ms=3600000
    > max.message.bytes=10485760
    > cleanup.policy=compact
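    >
    > (The effective overrides can be double-checked with kafka-configs.sh,
    > again with "my-topic" as a placeholder:)
    >
    > # --describe lists only the overrides currently set on the topic
    > bin/kafka-configs.sh --zookeeper zookeeper:2181 --describe \
    >   --entity-type topics --entity-name my-topic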
    >
    >
    >
    > The full server startup config:
    >
    > advertised.host.name = null
    > advertised.listeners = null
    > advertised.port = null
    > alter.config.policy.class.name = null
    > authorizer.class.name =
    > auto.create.topics.enable = false
    > auto.leader.rebalance.enable = true
    > background.threads = 10
    > broker.id = 1
    > broker.id.generation.enable = true
    > broker.rack = europe-west1-c
    > compression.type = producer
    > connections.max.idle.ms = 600000
    > controlled.shutdown.enable = true
    > controlled.shutdown.max.retries = 3
    > controlled.shutdown.retry.backoff.ms = 5000
    > controller.socket.timeout.ms = 30000
    > create.topic.policy.class.name = null
    > default.replication.factor = 1
    > delete.records.purgatory.purge.interval.requests = 1
    > delete.topic.enable = true
    > fetch.purgatory.purge.interval.requests = 1000
    > group.initial.rebalance.delay.ms = 0
    > group.max.session.timeout.ms = 300000
    > group.min.session.timeout.ms = 6000
    > host.name =
    > inter.broker.listener.name = null
    > inter.broker.protocol.version = 0.11.0-IV2
    > leader.imbalance.check.interval.seconds = 300
    > leader.imbalance.per.broker.percentage = 10
    > listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
    > listeners = null
    > log.cleaner.backoff.ms = 15000
    > log.cleaner.dedupe.buffer.size = 134217728
    > log.cleaner.delete.retention.ms = 86400000
    > log.cleaner.enable = true
    > log.cleaner.io.buffer.load.factor = 0.9
    > log.cleaner.io.buffer.size = 524288
    > log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
    > log.cleaner.min.cleanable.ratio = 0.5
    > log.cleaner.min.compaction.lag.ms = 0
    > log.cleaner.threads = 1
    > log.cleanup.policy = [delete]
    > log.dir = /tmp/kafka-logs
    > log.dirs = /var/lib/kafka/data/topics
    > log.flush.interval.messages = 9223372036854775807
    > log.flush.interval.ms = null
    > log.flush.offset.checkpoint.interval.ms = 60000
    > log.flush.scheduler.interval.ms = 9223372036854775807
    > log.flush.start.offset.checkpoint.interval.ms = 60000
    > log.index.interval.bytes = 4096
    > log.index.size.max.bytes = 10485760
    > log.message.format.version = 0.11.0-IV2
    > log.message.timestamp.difference.max.ms = 9223372036854775807
    > log.message.timestamp.type = CreateTime
    > log.preallocate = false
    > log.retention.bytes = -1
    > log.retention.check.interval.ms = 300000
    > log.retention.hours = -1
    > log.retention.minutes = null
    > log.retention.ms = null
    > log.roll.hours = 168
    > log.roll.jitter.hours = 0
    > log.roll.jitter.ms = null
    > log.roll.ms = null
    > log.segment.bytes = 1073741824
    > log.segment.delete.delay.ms = 60000
    > max.connections.per.ip = 2147483647
    > max.connections.per.ip.overrides =
    > message.max.bytes = 2097152
    > metric.reporters = []
    > metrics.num.samples = 2
    > metrics.recording.level = INFO
    > metrics.sample.window.ms = 30000
    > min.insync.replicas = 1
    > num.io.threads = 8
    > num.network.threads = 3
    > num.partitions = 1
    > num.recovery.threads.per.data.dir = 1
    > num.replica.fetchers = 1
    > offset.metadata.max.bytes = 4096
    > offsets.commit.required.acks = -1
    > offsets.commit.timeout.ms = 5000
    > offsets.load.buffer.size = 5242880
    > offsets.retention.check.interval.ms = 600000
    > offsets.retention.minutes = 1440
    > offsets.topic.compression.codec = 0
    > offsets.topic.num.partitions = 50
    > offsets.topic.replication.factor = 1
    > offsets.topic.segment.bytes = 104857600
    > port = 9092
    > principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
    > producer.purgatory.purge.interval.requests = 1000
    > queued.max.requests = 500
    > quota.consumer.default = 9223372036854775807
    > quota.producer.default = 9223372036854775807
    > quota.window.num = 11
    > quota.window.size.seconds = 1
    > replica.fetch.backoff.ms = 1000
    > replica.fetch.max.bytes = 10485760
    > replica.fetch.min.bytes = 1
    > replica.fetch.response.max.bytes = 10485760
    > replica.fetch.wait.max.ms = 500
    > replica.high.watermark.checkpoint.interval.ms = 5000
    > replica.lag.time.max.ms = 10000
    > replica.socket.receive.buffer.bytes = 65536
    > replica.socket.timeout.ms = 30000
    > replication.quota.window.num = 11
    > replication.quota.window.size.seconds = 1
    > request.timeout.ms = 30000
    > reserved.broker.max.id = 1000
    > security.inter.broker.protocol = PLAINTEXT
    > socket.receive.buffer.bytes = 102400
    > socket.request.max.bytes = 104857600
    > socket.send.buffer.bytes = 102400
    > transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
    > transaction.max.timeout.ms = 900000
    > transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
    > transaction.state.log.load.buffer.size = 5242880
    > transaction.state.log.min.isr = 1
    > transaction.state.log.num.partitions = 50
    > transaction.state.log.replication.factor = 1
    > transaction.state.log.segment.bytes = 104857600
    > transactional.id.expiration.ms = 604800000
    > unclean.leader.election.enable = false
    > zookeeper.connect = zookeeper:2181
    > zookeeper.connection.timeout.ms = 6000
    > zookeeper.session.timeout.ms = 6000
    > zookeeper.set.acl = false
    > zookeeper.sync.time.ms = 2000
    >