Hi,
I am working on enabling IPv6 support for my Kafka setup.
I have added an IPv6 listener (plaintext) and I see no issues in the Kafka server log.
Listener config in my server properties (real IPv4 marked with X and IPv6 marked
with Y):
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093,SASL_SSL:
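For reference, a minimal dual-stack listener sketch (host values here are placeholders, not the poster's real config); note that IPv6 literals in listener URIs must be wrapped in square brackets:

```properties
# Bind on all interfaces, IPv4 and IPv6; IPv6 literals need brackets
listeners=PLAINTEXT://[::]:9092,SSL://[::]:9093
# Advertise a resolvable name (or a bracketed IPv6 literal) to clients
advertised.listeners=PLAINTEXT://broker1.example.com:9092
```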
Hi,
I am using Kafka 3.2 (on Windows) and I want to send tombstone records to a topic.
Everything was OK, but I always see the last value for each key (I even see null
records present after the delete.retention.ms period).
Example
Key1 value1
Key2 value2
Key1 - null record - tombstone record
and so on
I am
-t mytopic -Z -K: (This one takes " as message)
$ echo abc: | kcat -b mybroker -t mytopic -Z -K: (This one takes one empty
space as input)
The right way to send a tombstone is:
$ echo abc:| kcat -b mybroker -t mytopic -Z -K:
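On the retention side, whether a tombstone actually disappears depends on the topic's compaction settings; in particular, the active log segment is never compacted, so a tombstone sitting in the active segment stays readable past delete.retention.ms. A sketch of the relevant per-topic settings (values are illustrative only):

```properties
cleanup.policy=compact
# how long a tombstone survives after the log is compacted
delete.retention.ms=60000
# roll segments sooner so the tombstone leaves the (never-compacted) active segment
segment.ms=60000
```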
Regards,
Nanda
____
From: Nanda Naga
Sent: W
I see this exception in our environment on multiple machines. Which config will
help resolve this error?
The exception I get is at
java.lang.OutOfMemoryError: Unable to allocate 369295624 bytes
at java.base/jdk.internal.misc.Unsafe.allocateMemory(Unsafe.java:632)
at java.base/java.nio.DirectBy
The error usually occurs when a client speaking TLS tries to connect to the
plaintext port of the Kafka broker.
Radu
On Wed, May 28, 2025 at 12:37 AM Nanda Naga
wrote:
> I see this exception in our environment in multiple machines
message is no longer clear text; its first 4 bytes can
resemble anything (sometimes a huge number), thus resulting in OOM.
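Concretely, Kafka's plaintext framing reads the first 4 bytes of each request as a big-endian int32 size. A TLS handshake starts with the bytes 0x16 (handshake record type) and 0x03 0x01 (legacy version); read as a size prefix, they become a multi-hundred-megabyte allocation. A small sketch (the fourth byte, the high byte of the TLS record length, is assumed to be 0x08 here, which happens to reproduce the exact figure from the stack trace above):

```java
import java.nio.ByteBuffer;

public class TlsAsSizePrefix {
    public static void main(String[] args) {
        // 0x16 = TLS handshake record type, 0x03 0x01 = legacy version,
        // 0x08 = high byte of the record length (assumed for illustration)
        byte[] firstFour = {0x16, 0x03, 0x01, 0x08};
        // ByteBuffer is big-endian by default, like Kafka's size prefix
        int requestSize = ByteBuffer.wrap(firstFour).getInt();
        System.out.println(requestSize); // prints 369295624
    }
}
```

So the broker dutifully tries to allocate ~369 MB for a "request" that is really a TLS ClientHello.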
Hope that clears it up for you.
Ömer Şiar Baysal
On Thu, May 29, 2025, 18:18 Nanda Naga
wrote:
> Thanks Radu for the response.
>
> Wondering why it is out of me
In the broker server properties and controller server properties, I have set up the
custom principal builder class name and the custom ACL authorizer (extends Standard-
Authorizer) class name properly.
Normal produce/consume on topics that have ACLs works fine using
the custom principal and cu
I figured out this issue - it is due to missing serialization/deserialization
logic for the custom principal.
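For anyone hitting the same thing: in KRaft mode, requests can be forwarded from brokers to the controller, so the principal has to cross process boundaries; a custom principal builder must therefore also implement `org.apache.kafka.common.security.auth.KafkaPrincipalSerde`. Below is a self-contained sketch of the round trip such a serde must provide (the `Principal` record and the "type:name" encoding are stand-ins for illustration, not Nanda's actual classes; a real implementation puts `serialize`/`deserialize` on the builder class itself):

```java
import java.nio.charset.StandardCharsets;

public class PrincipalSerdeSketch {
    // Stand-in for KafkaPrincipal (type + name); a custom principal must
    // round-trip every extra field it carries, or forwarded requests lose it.
    record Principal(String type, String name) {}

    static byte[] serialize(Principal p) {
        // Minimal "type:name" encoding, illustrative only
        return (p.type() + ":" + p.name()).getBytes(StandardCharsets.UTF_8);
    }

    static Principal deserialize(byte[] bytes) {
        String[] parts = new String(bytes, StandardCharsets.UTF_8).split(":", 2);
        return new Principal(parts[0], parts[1]);
    }

    public static void main(String[] args) {
        Principal original = new Principal("User", "alice");
        Principal roundTripped = deserialize(serialize(original));
        System.out.println(roundTripped.equals(original)); // prints true
    }
}
```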
Regards,
Nanda
-Original Message-
From: Nanda Naga
Sent: Friday, June 6, 2025 1:19 PM
To: users@kafka.apache.org
Subject: [EXTERNAL] Kraft mode - Authz errors while doing
I am using an external application to authorize requests before sending them to the
broker. Before KRaft, I initialized AclAuthorizer and just called configure() and
then called authorize() (the ZK watcher inside AclAuthorizer took care of loading
and updating ACLs).
In the case of KRaft, how do I use this feature?
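For context, the ZooKeeper-backed AclAuthorizer has no KRaft equivalent of the ZK watcher: in KRaft, ACLs live in the cluster metadata log and are served by StandardAuthorizer, configured on each broker and controller roughly like this (sketch):

```properties
authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer
```

An external application would typically read and manage ACLs through the Admin client (describeAcls/createAcls) rather than instantiating an authorizer on its own.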