Hi Team,
Any update on this?
Regards,
Pricks
From: Ann Pricks
Date: Friday, 3 July 2020 at 4:10 PM
To: "users@kafka.apache.org"
Subject: Consumer Groups Describe is not working
Hi Team,
Today, in our production cluster, we faced an issue with Kafka (old offsets were
getting pulled from spar
Hi,
All right, I am running Kafka on a Debian Linux operating system – no Docker
images are involved. I can rule out any problems with my producer
application – the following command fails too:
/home/kafka/kafka/bin/kafka-topics.sh --list --bootstrap-server 127.0.0.1:9092
Error wh
Hi Sebastian,
Something you can investigate here is which value has been set to the
configuration property `advertised.listeners`. Your client is trying to
establish a connection over the 9092 port using the `127.0.0.1`
interface. Check if this is a valid listener for the Kafka broker.
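To make the check above concrete, here is a sketch of the relevant `server.properties` entries (the addresses shown are illustrative, not taken from Sebastian's setup):

```properties
# server.properties (illustrative values)
# The interfaces/ports the broker binds to:
listeners=PLAINTEXT://0.0.0.0:9092
# The address clients are told to connect back to; it must be
# reachable from the client machine, or a connection attempt to
# 127.0.0.1:9092 will fail even though the broker is running:
advertised.listeners=PLAINTEXT://127.0.0.1:9092
```

If `advertised.listeners` points at an address the client cannot reach, the initial bootstrap may succeed but subsequent requests will fail.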
Thank
Ann,
You can try executing the `kafka-consumer-groups` CLI with TRACE logging enabled
to dig a little deeper into the problem. To do this, you need to:
1. Make a copy of your `$KAFKA_HOME/etc/kafka/tools-log4j.properties` file
2. Set `root.logger=TRACE,console`
3. Run `export
KAFKA_OPTS="-Dlog4
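Assuming a standard Kafka installation layout, the steps above might look like this as shell commands (the file paths and the consumer group name are illustrative):

```shell
# 1. Copy the tools log4j config so the original stays untouched
cp "$KAFKA_HOME/etc/kafka/tools-log4j.properties" /tmp/tools-log4j-trace.properties

# 2. Raise the root logger to TRACE in the copy
#    (edit /tmp/tools-log4j-trace.properties by hand, or:)
sed -i 's/^log4j.rootLogger=.*/log4j.rootLogger=TRACE, stderr/' \
  /tmp/tools-log4j-trace.properties

# 3. Point the CLI tools at the modified config
export KAFKA_OPTS="-Dlog4j.configuration=file:/tmp/tools-log4j-trace.properties"

# 4. Re-run the failing command; TRACE output goes to the console
"$KAFKA_HOME/bin/kafka-consumer-groups.sh" --bootstrap-server localhost:9092 \
  --describe --group my-group
```

These commands require a reachable broker, so treat them as a template rather than something to paste verbatim.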
What error are you getting? Just make sure the user has the appropriate
permissions. Please share the error you are getting.
On 7/8/20, 3:56 AM, "Ann Pricks" wrote:
Hi Team,
Any update on this?
Regards,
Pricks
From: Ann Pricks
Date: Friday, 3 July 2
Hi,
Here is what the config looks like:
# Socket Server Settings
#
listeners=PLAINTEXT://:9092
# advertised.listeners=193.135.9.23:9092
Connecting via localhost is just a test; I need to connect via IP address and
port only. Could
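One thing worth noting about the config above: if the commented-out `advertised.listeners` line is re-enabled as written, the broker will reject it, because each advertised listener entry must carry a listener-name prefix. A corrected sketch (reusing the IP from the config above):

```properties
listeners=PLAINTEXT://:9092
# Each advertised listener needs the listener name / protocol prefix:
advertised.listeners=PLAINTEXT://193.135.9.23:9092
```

With that in place, clients connecting to `193.135.9.23:9092` would be handed back the same reachable address during bootstrap.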
Hi there,
I'm getting a lot of warnings of this type:
WARN org.apache.kafka.streams.kstream.internals.KTableSource - Detected
out-of-order KTable update for entity-STATE-STORE-00 at offset
65806, partition 5.
It looks like the warning is generated each time a new record goes into the
sour
Hi Ricardo,
Thanks for your kind response.
As per your suggestion, I have enabled trace and attached the log file.
Kindly check and let me know in case of any other details required.
Regards,
AnnPricksEdmund
From: Ricardo Ferreira
Date: Wednesday, 8 July 2020 at 6:29 PM
To: "users@kafka.apac
Hi Ricardo,
Thanks for your kind response.
As per your suggestion, I have enabled trace and PFB the content of the log
file.
Log File Content:
[2020-07-08 18:48:08,963] INFO Registered kafka:type=kafka.Log4jController
MBean (kafka.utils.Log4jControllerRegistration$)
[2020-07-08 18:48:09,244]
Hi Ann,
It's common practice in many Spark Streaming apps to store offsets externally
to Kafka, especially when checkpointing is enabled.
Are you sure that the app is committing offsets to Kafka?
Kind regards,
Liam Clarke
On Thu, 9 Jul. 2020, 8:00 am Ann Pricks, wrote:
> Hi Ricardo,
>
> Thanks
Hello,
Up until now we have configured compression at the producer level. Since moving
to mm2, we are having some issues with producer-level compression and mm2,
and are planning to try out topic-level compression for this case.
Are there any inherent differences between the two methods other than how/wher
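For reference, topic-level compression can be configured with the `kafka-configs.sh` tool; the broker address, topic name, and codec below are illustrative:

```shell
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --alter --entity-type topics --entity-name my-topic \
  --add-config compression.type=lz4
```

With a topic-level `compression.type` set, the broker recompresses data to that codec regardless of what the producer sends, whereas producer-level compression keeps the broker out of the compression path.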
What is the issue with compression at the producer level in mm2? Can you
please explain?
Thanks,
Nitin
On Thu, Jul 9, 2020 at 10:37 AM Iftach Ben-Yosef
wrote:
> Hello,
>
> Up until now we have configured compression on producer level. Since moving
> to mm2, we are having some issues with producer
Hello Nitin, I have been unable to successfully set up producer-level
compression for my dedicated mm2 cluster. I have a separate mail
correspondence on that issue titled 'destination topics in mm2 larger than
source topic'. I will forward you this correspondence now.
I achieved only partial succes
Hello,
We noticed that setting ssl.endpoint.identification.algorithm to empty (on
both the client and broker side) gives a big performance improvement in terms
of throughput. As far as I understand, this is related to the SSL
connection doing a DNS lookup to check that the host matches the
certificat
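For context, this is how the setting looks when hostname verification is disabled on the client side (a sketch of a client properties file; note that an empty value removes protection against man-in-the-middle attacks, so it trades security for throughput):

```properties
# client.properties (sketch)
# Empty value disables hostname verification on the TLS handshake:
ssl.endpoint.identification.algorithm=
# The default performs HTTPS-style hostname verification:
# ssl.endpoint.identification.algorithm=HTTPS
```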