Hello,
Thank you for your reply.
I have attached a text file containing the contents of our
server.properties file.

I can also see that the .log files within the __consumer_offsets
topic are around 100 MB each. With many such log files accumulating,
the disk is getting maxed out.
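
For reference, this is roughly how I am checking the usage (a sketch,
assuming the redacted log.dirs path from the attached server.properties):

    # Per-partition disk usage of the internal offsets topic
    du -sh /xxxxxxx/store/kafka-logs/__consumer_offsets-*

    # Segment files inside one partition directory (partition 0 as an example)
    ls -lh /xxxxxxx/store/kafka-logs/__consumer_offsets-0/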

Your comments on this would be appreciated.

Regards,
Kaushik Nambiar


On Mon, Sep 24, 2018, 5:06 PM M. Manna <manme...@gmail.com> wrote:

> What are your settings for:
>
> 1) offsets.retention.check.interval.ms
> 2) offsets.retention.minutes (the default on 0.11.x is 1440 minutes, i.e.
> 24 hours; it was raised to 7 days in Kafka 2.0)
>
> Also, did this occur even after you restarted any individual brokers?
> Please share the server.properties "As is" for your case.
>
> Regards,
>
> On Mon, 24 Sep 2018 at 12:14, Kaushik Nambiar <kaushiknambia...@gmail.com>
> wrote:
>
> > Hello,
> > I am using Kafka version 0.11.x.
> > When I check the logs, I can see that the index segments for user-defined
> > topics are getting deleted.
> > But I cannot see the indices for the __consumer_offsets topic getting
> > deleted.
> > That's causing GBs of data to accumulate on our persistent disk.
> > Based on the server.properties file, we haven't set
> > offsets.retention.minutes.
> > So I am assuming the default value would be used, which is 24 hours.
> > But I can still find .index files in the __consumer_offsets topic which
> > are months old.
> > Could you tell me what the reason for this might be?
> > A workaround to fix this issue would also help.
> >
> > Any kind of help is greatly appreciated.
> >
> > Regards,
> > Kaushik Nambiar
> >
>
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
delete.topic.enable=true
transaction.state.log.replication.factor=2
transaction.state.log.min.isr=1
offsets.topic.replication.factor=2

############################# Socket Server Settings #############################



# The port the socket server listens on



# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=x.x.x.x

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured.  Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().


# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=65536

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/xxxxxxx/store/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
default.replication.factor=2

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=720

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
log.retention.bytes=251658240
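# Note: these time/size retention settings apply to topics with
# cleanup.policy=delete; the internal __consumer_offsets topic uses
# cleanup.policy=compact and is cleaned by the log cleaner instead.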

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
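# Note: the internal offsets topic has its own segment size,
# offsets.topic.segment.bytes (default 104857600, i.e. 100 MB), which would
# explain the ~100 MB .log segments observed under __consumer_offsets.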

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false
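# Note: log compaction is performed by the log cleaner, and the internal
# __consumer_offsets topic is a compacted topic. With the cleaner disabled,
# its old segments are never cleaned and accumulate on disk. A candidate fix
# for the growth described above (an assumption to verify for your
# deployment, not a confirmed resolution):
#log.cleaner.enable=true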

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.

zookeeper.connect=x.x.x.x:2181,x.x.x.x:2181,x.x.x.x:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=2000
listeners=PLAINTEXT://x.x.x.x:9092,SSL://x.x.x.x:9093
advertised.listeners=PLAINTEXT://x.x.x.x:9092,SSL://x.x.x.x:9093
ssl.keystore.location=xxxxxxxxxxxxxxxxxx
ssl.keystore.password=xxxxxxxxxxxxxxxxxxxxxxx
ssl.key.password=xxxxxxxxxxxxxxxxxxxxx
ssl.truststore.location=xxxxxxxx
ssl.truststore.password=xxxxxxxxxxxx
ssl.client.auth=required
security.inter.broker.protocol=SSL
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
auto.create.topics.enable=false
controlled.shutdown.enable=true
