Re: Required guidelines for kafka upgrade

2019-05-03 Thread ASHOK MACHERLA
Dear Senthil, when I try to produce messages into the topic, this type of error comes continuously: ashok@Node-1:/opt/kafka-new$ sh bin/kafka-console-producer.sh --broker-list 192.168.175.128:9092 --producer.config producer-ssl.config --topic otp-email >[2019-05-03 22:37:
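The producer-ssl.config file referenced in the command above would typically look something like the sketch below. The paths and password are placeholders, not values from the thread; the empty endpoint identification algorithm is the fix discussed later in this thread.

```properties
# Client-side SSL settings for kafka-console-producer.sh (--producer.config)
security.protocol=SSL
ssl.truststore.location=/opt/kafka-new/config/client.truststore.jks   # placeholder path
ssl.truststore.password=changeit                                      # placeholder
# Leave empty to disable hostname verification when the broker certificate
# CN/SAN does not match the advertised hostname:
ssl.endpoint.identification.algorithm=
```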

Question regarding Kafka 2.0

2019-05-03 Thread Sourabh S P
Hi, previously I was using Kafka version 1.1.1, and we are currently planning to migrate to version 2.0.0. But I am facing issues, as there are a lot of classes that have been removed: kafka.api.TopicMetadata; kafka.client.ClientUtils; kafka.consumer.ConsumerConfig; kafka.consumer.SimpleConsum

Kafka Version 2.0

2019-05-03 Thread findsps93
Hi, previously I was using Kafka version 1.1.1, and we are currently planning to migrate to version 2.0.0. But I am facing issues, as there are a lot of classes that have been removed: kafka.api.TopicMetadata; kafka.client.ClientUtils; kafka.consumer.ConsumerConfig; kafka.consumer.SimpleConsumer
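The classes listed above (kafka.consumer.SimpleConsumer etc.) belong to the old Scala clients, which were removed in Kafka 2.0; their replacement is the Java client API under org.apache.kafka.clients. A minimal sketch of the replacement consumer, assuming the org.apache.kafka:kafka-clients:2.0.0 dependency on the classpath and an illustrative broker address and topic name:

```java
// Sketch only: broker address, group id, and topic are placeholders.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // try-with-resources closes the consumer cleanly
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.offset() + ": " + record.value());
            }
        }
    }
}
```

Cluster metadata lookups previously done via kafka.client.ClientUtils / kafka.api.TopicMetadata are covered by the AdminClient in the same package tree.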

RE: Seeing issue with log cleaner thread

2019-05-03 Thread Jigar Rathod
Did you enable log.cleaner.enable? " Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size." https://kafka.apache.
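The setting quoted above is broker-side; the compaction policy itself is per-topic. A sketch of both halves (topic name is illustrative):

```properties
# server.properties (broker-side). log.cleaner.enable defaults to true in
# current Kafka releases, but must not be set to false if any topic,
# including __consumer_offsets, uses compaction:
log.cleaner.enable=true

# Per-topic, set at creation time, e.g.:
#   bin/kafka-topics.sh --create --topic my-compact-topic \
#     --config cleanup.policy=compact ...
```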

Seeing issue with log cleaner thread

2019-05-03 Thread Ashwarya, Tanvi
I'm seeing a problem where I have some compacted topics on my Kafka cluster (3 nodes). When there is a fresh startup of Kafka and ZooKeeper, and I create compacted topics with a service running that writes to them, I see the log cleaner thread is not running. It has no logs in /var/log/kafka/log

Re: Required guidelines for kafka upgrade

2019-05-03 Thread ASHOK MACHERLA
Dear Senthil, could you please explain clearly what "consumer client properties" means? Where can I set that parameter? I checked within the Kafka cluster: I pushed some messages, and when I tried pulling from the same topic, it's not printing any messages. Please tell me, Senthil. How can we solve

Re: Required guidelines for kafka upgrade

2019-05-03 Thread SenthilKumar K
You have to set the same endpoint algorithm (empty) in consumer client properties. On Sat, May 4, 2019, 12:15 AM ASHOK MACHERLA wrote: > Dear Senthil > > as you suggested I follow, Kafka Cluster is fine ISR showing 0,1,2 > > but getting SSL error logs > > [2019-05-03 11:01:19,611] INFO [Socket
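The "consumer client properties" referred to above is the properties file passed to the console consumer via --consumer.config. A sketch with placeholder paths and password, mirroring the broker-side setting:

```properties
# consumer-ssl.config, passed as:
#   bin/kafka-console-consumer.sh --consumer.config consumer-ssl.config ...
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks   # placeholder
ssl.truststore.password=changeit                         # placeholder
# Same empty value as on the broker, so hostname verification is disabled
# on both sides:
ssl.endpoint.identification.algorithm=
```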

Re: Required guidelines for kafka upgrade

2019-05-03 Thread ASHOK MACHERLA
Dear Senthil, as you suggested, I followed the steps; the Kafka cluster is fine, with ISR showing 0,1,2, but I'm getting SSL error logs: [2019-05-03 11:01:19,611] INFO [SocketServer brokerId=0] Failed authentication with /192.168.175.128 (SSL handshake failed) (org.apache.kafka.common.network.Selec

Re: Required guidelines for kafka upgrade

2019-05-03 Thread ASHOK MACHERLA
Dear Senthil, first of all thanks for the help. After I set ssl.endpoint.identification.algorithm= (empty) and restarted, it's working fine. After that I changed the below parameters in all brokers: inter.broker.protocol.version=2.2.0 log.message.format.version=2.2.0 After that I restarted them one by one.
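For reference, the rolling-upgrade procedure in the Kafka upgrade notes pins these two properties in two passes; the snippet below is a sketch of the second pass (the documented value format is the two-part version, e.g. 2.2, and 0.10.1 here stands in for the cluster's previous version):

```properties
# Pass 1: before swapping binaries, pin both to the OLD version, e.g.
#   inter.broker.protocol.version=0.10.1
#   log.message.format.version=0.10.1
# Pass 2: once every broker runs the new binaries, bump both and perform
# another rolling restart:
inter.broker.protocol.version=2.2
log.message.format.version=2.2
```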

Re: Required guidelines for kafka upgrade

2019-05-03 Thread SenthilKumar K
Hi Ashok, from the logs it's clear that the problem is with the identification algorithm: at org.apache.kafka.common.network.SslTransportLayer.doHandshake(SslTransportLayer.java:340) ... 15 more Caused by: java.security.cert.CertificateException: Unknown identification algorithm: " " Set it empty and restart y

Re: Required guidelines for kafka upgrade

2019-05-03 Thread SenthilKumar K
Here is my server.properties. reserved.broker.max.id = 2147483647 log.retention.bytes = 68719476736 listeners = SSL://xx:9093 socket.receive.buffer.bytes = 102400 broker.id = xxx ssl.truststore.password = x auto.create.topics.enable = true ssl.enabled.protocols = TLSv1.2 zookeeper.connect

Re: Required guidelines for kafka upgrade

2019-05-03 Thread ASHOK MACHERLA
Dear Senthil, please find the below error: org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed Caused by: javax.net.ssl.SSLHandshakeException: General SSLEngine problem at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1521) at sun.security.ssl.SSLEngineImpl.checkTask

Re: Required guidelines for kafka upgrade

2019-05-03 Thread SenthilKumar K
Hi, if you see an SSL issue, try setting ssl.endpoint.identification.algorithm= and simply leave it empty, with no double quotes. It would be good if you could share the error message from the broker logs. --Senthil On Fri, May 3, 2019, 9:36 PM Harper Henn wrote: > What specific errors are you seeing in the server logs of

Re: ISR briefly shrinks then expands

2019-05-03 Thread Steven Taschuk
gc pauses (in milliseconds) for all brokers in the cluster for two minutes around one recent episode: min max count 7 851 6 849 6 953 <-- broker removed from isr 6 955 7 954 7 1052 6 1156 7 1154 6 1258 7 1452 10

Re: Required guidelines for kafka upgrade

2019-05-03 Thread Harper Henn
What specific errors are you seeing in the server logs of the broker you upgraded (can you copy/paste them)? On Fri, May 3, 2019 at 7:29 AM ASHOK MACHERLA wrote: > *Dear Senthil* > > As you suggested , I follow but I’m facing errors > > This is my old configurations which is Kafka (0.10.1) versi

Re: Required guidelines for kafka upgrade

2019-05-03 Thread ASHOK MACHERLA
Dear Senthil, as you suggested, I followed, but I'm facing errors. This is my old configuration, from the Kafka (0.10.1) version: broker.id=0 port=9092 delete.topic.enable=true message.max.bytes=10 listeners=SSL://192.168.175.

Restart process after adding control.plane.listener.name config in Kafka 2.2.0

2019-05-03 Thread Jonathan Santilli
Hello, hope you all are great. I would like to know the process for updating the Kafka broker configuration in order to use the new config *control.plane.listener.name* in Kafka version 2.2.0. The documentation says: *Name of listener used for communicatio
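The control.plane.listener.name documentation illustrates the setting with a configuration along these lines (a sketch; the listener names, hosts, and ports below are placeholders, and the named listener must also appear in listeners and listener.security.protocol.map):

```properties
# Dedicated listener for controller-to-broker traffic (Kafka 2.2.0+):
listeners=CONTROLLER://192.1.1.8:9094,INTERNAL://192.1.1.8:9092,EXTERNAL://10.1.1.5:9093
listener.security.protocol.map=CONTROLLER:SSL,INTERNAL:PLAINTEXT,EXTERNAL:SSL
control.plane.listener.name=CONTROLLER
```

Since the config is new in 2.2.0, adding it is itself a broker config change and takes effect through a rolling restart.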

Re: Using processor API via DSL

2019-05-03 Thread Alessandro Tagliapietra
Ok, so I'm not sure if I did this correctly. I've upgraded both the server (by replacing the JARs in the Confluent Docker image with those built from the Kafka source) and the client (by using the built JARs as local file dependencies). I've used this as the source: https://github.com/apache/kafka/archive/
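For anyone reproducing the build step described above, a rough sketch of building the JARs from a Kafka source checkout (assumes a JDK is installed; on some older branches you may need a local Gradle install to bootstrap the wrapper first):

```shell
git clone https://github.com/apache/kafka.git
cd kafka
./gradlew jar           # builds core and clients JARs under */build/libs
./gradlew releaseTarGz  # full release tarball under core/build/distributions
```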