Hi,

So yeah, you can set max.block.ms to 0, at the cost of possible data loss
and occasional noise on the console.
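For reference, a minimal log4j2.xml sketch of that approach (the appender
name, topic, and broker address are placeholders; Property elements are
passed through to the producer config):

    <Kafka name="Kafka" topic="logs" syncSend="false">
      <PatternLayout pattern="%date %level %logger %message%n"/>
      <Property name="bootstrap.servers">kafka-broker:9092</Property>
      <Property name="max.block.ms">0</Property>
    </Kafka>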

But the usual logging solution to this is to wrap the Kafka appender in an
async appender with an appropriately sized buffer.
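Something like the below, assuming the Kafka appender is named "Kafka" as in
the sketch above (the buffer size is illustrative, tune it to your log
volume):

    <Async name="AsyncKafka" bufferSize="4096" blocking="false">
      <AppenderRef ref="Kafka"/>
    </Async>

With blocking="false" the application thread never waits on a full buffer;
overflowing events go to the error appender, or are dropped if none is
configured. Point your loggers at AsyncKafka instead of Kafka.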

Cheers,

Liam Clarke

On Mon, 2 Nov. 2020, 11:27 pm DSA SA, <dsa.ka...@gmail.com> wrote:

> Hello everyone!
>
> We are using the Log4j Kafka Appender to send application LogEvents to a
> Kafka topic.
>
> We are using the syncSend=false producer attribute to asynchronously send
> LogEvents to Kafka.
> This setting works well for producers that were already connected but can no
> longer reach the cluster, for example due to network connectivity issues.
> In that case the Kafka producer's send() method won't wait for an answer from
> the brokers, and exceptions are logged and discarded.
>
> However, for applications that can't initially reach the Kafka cluster, the
> send() method will block for every LogEvent, as configured by the producer
> property max.block.ms.
>
> The following exception is thrown:
> org.apache.kafka.common.errors.TimeoutException: Topic logs not present in
> metadata after 1000 ms.
>
> This is also described in the Kafka producer's max.block.ms property
> documentation:
> "The configuration controls how long KafkaProducer.send() and
> KafkaProducer.partitionsFor() will block. These methods can be blocked
> either because the buffer is full or metadata unavailable."
>
> I wonder if the send() call will also block after metadata.max.age.ms has
> expired?
>
> Is there an out-of-the-box workaround so that the send() call will not block,
> without modifying the max.block.ms setting?
>
> Or is the only solution an independent Kafka consumer that checks
> connectivity to the cluster and enables/disables the Log4j Kafka appender to
> prevent blocking the application?
>
> Kind regards
>
