Hello

I am following the Kafka docs, specifically the section at
https://kafka.apache.org/documentation/#dynamicbrokerconfigs

I am using Kafka 3.0.0 (Scala 2.13), running in KRaft mode.

I found out that bin/kafka-configs.sh works a bit differently there.

For example, according to the docs this should work:

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe


But it doesn't; under KRaft (which is what I have been working with) you need to
add --all at the end.
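For reference, this is the full command that returns output for me under KRaft (the only difference is the --all flag at the end):

bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe --all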

I am trying to get the cluster-wide default configs (command below, from the docs),
but it just exits silently with exit code 0 and no output:

> bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe


I could not get it to work.
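My guess (not confirmed) is that --describe without --all only lists configs that were set dynamically, so the empty output may simply mean I have not set any cluster-wide defaults yet. If that is right, setting one first and then describing should presumably print something, e.g. (log.cleaner.threads=2 is just the example from the doc):

bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.cleaner.threads=2
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe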

My server.properties file looks like this (sorry, it's kind of big):

Thank you for your help in any case

############################# Server Basics #############################

# The role of this server. Setting this puts us in KRaft mode
process.roles=broker,controller

# The node id associated with this instance's roles
node.id=1

# The connect string for the controller quorum
controller.quorum.voters=1@localhost:9093

controlled.shutdown.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
inter.broker.listener.name=PLAINTEXT

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://localhost:9092

# Listener, host name, and port for the controller to advertise to the brokers. If
# this server is a controller, this listener must be configured.
controller.listener.names=CONTROLLER

# Maps listener names to security protocols, the default is for them to be the same.
# See the config documentation for more details
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network
# and sending responses to the network
num.network.threads=16

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=16

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/var/kafka/logs/kraft-combined-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

###### LOG COMPACTION
# https://kafka.apache.org/documentation.html#design_compactionguarantees

# The minimum time a message will remain uncompacted in the log.
# Only applicable for logs that are being compacted.
# 5min
min.compaction.log.ms=300

log.cleanup.policy=compact,delete


# the minimum time a message will remain uncompacted in the log
# choose 30 min
log.cleaner.min.compaction.lag.ms=1800000

# if the log has dirty records (not necessarily higher than the log.cleaner.min.cleanable.ratio)
# and the age of the log is greater than "log.cleaner.max.compaction.lag.ms", then it is eligible for compaction
# 3 hours
log.cleaner.max.compaction.lag.ms=3600000


# 134'217'728
# 13'421'772
# make it 1G 1'073'741'824
log.cleaner.dedupe.buffer.size=2073741824
log.cleaner.io.buffer.size=2073741824

# how long to keep a tombstone
# removal of "delete markers" happens concurrently with "read"
# consumers have "delete.retention.ms" time to reach the end of the log starting from offset 0
# it is possible for consumers to miss delete markers if it takes longer than "delete.retention.ms"
# 1 day
delete.retention.ms=86400000
