Kafka Broker Znode TTL

2018-06-07 Thread harish lohar
Does the Kafka broker have a configuration to specify a TTL for broker
znodes such as /broker/ids/?

This would be useful when the ZooKeeper cluster goes down and the Kafka
broker re-registers: in that case the broker fails to start because
ZooKeeper still has the same broker id registered.
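
For reference: the registration path in ZooKeeper is /brokers/ids, and a
quick way to inspect it is the zookeeper-shell.sh tool that ships with
Kafka. A minimal sketch, assuming a ZooKeeper node at zk1:2181 and broker
id 0 (both placeholders):

    # List all currently registered broker ids
    bin/zookeeper-shell.sh zk1:2181 ls /brokers/ids
    # Show the registration payload (host, port, timestamp) for broker 0
    bin/zookeeper-shell.sh zk1:2181 get /brokers/ids/0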

Thanks
Harish


Re: Kafka Broker Znode TTL

2018-06-12 Thread harish lohar
Hi,

This issue happens when the ZooKeeper cluster is down; in that case the
znode is not removed.

We are running both ZooKeeper and Kafka on the same machines, so if a
machine failure leaves 50% or fewer of the ZooKeeper nodes up (that is,
the cluster loses quorum), there is currently no way to clear the
/broker/ids/ nodes.

Also, I am not sure whether the /broker/ids/ znode gets deleted as soon as
Kafka is killed.
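
Once quorum is restored, a possible manual workaround (a sketch only,
assuming ZooKeeper at zk1:2181 and broker id 0, both placeholders) is to
remove the stale registration before restarting the broker:

    # Delete the stale registration left over from the dead session
    bin/zookeeper-shell.sh zk1:2181 delete /brokers/ids/0

Normally this should not be needed: the id znode is ephemeral, so
ZooKeeper removes it by itself once the old session expires.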

Thanks
Harish

On Fri, Jun 8, 2018 at 10:51 PM 逐风者的祝福 <1878707...@qq.com> wrote:

> I think /broker/ids/ is registered as an ephemeral (temporary) znode, so
> when the broker goes down the znode will be removed.
>
>
>
>
> -- Original Message ------
> From: "harish lohar";
> Sent: Thursday, June 7, 2018, 11:41 PM
> To: "users";
> Subject: Kafka Broker Znode TTL
>
>
>
> Does the Kafka broker have a configuration to specify a TTL for broker
> znodes such as /broker/ids/?
>
> This would be useful when the ZooKeeper cluster goes down and the Kafka
> broker re-registers: in that case the broker fails to start because
> ZooKeeper still has the same broker id registered.
>
> Thanks
> Harish


What's the Kafka Broker Behavior When the ZooKeeper Cluster Is Down?

2018-06-27 Thread harish lohar
Hi All,

In a 3-node ZooKeeper cluster, if 2 nodes go down but the ZooKeeper node
the Kafka broker is connected to stays up, what is the expected behavior?

Is the Kafka broker expected to go down, or will only new operations fail?
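
A quick way to see what each ensemble member thinks of itself is
ZooKeeper's "stat" four-letter command; a sketch, assuming nodes zk1..zk3
on port 2181 (placeholders):

    # Ask each ZooKeeper node for its status; a surviving node that has
    # lost quorum replies "This ZooKeeper instance is not currently
    # serving requests".
    for h in zk1 zk2 zk3; do
      echo stat | nc "$h" 2181 | head -n 1
    done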

Thanks
Harish


Re: Very long consumer rebalances

2018-07-09 Thread harish lohar
Try reducing the timer below; its default of 300000 ms (5 minutes) matches
the delay you are seeing:
metadata.max.age.ms = 300000
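
For example, to have a consumer refresh metadata every 30 seconds instead
of the 5-minute default, a sketch (file name, broker address, and topic
are placeholders):

    # Add the override to a consumer properties file...
    echo "metadata.max.age.ms=30000" >> consumer.properties
    # ...and pass it to a consumer, e.g. the console consumer
    bin/kafka-console-consumer.sh --bootstrap-server broker1:9092 \
      --topic your-topic --consumer.config consumer.properties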


On Fri, Jul 6, 2018 at 5:55 AM Shantanu Deshmukh wrote:

> Hello everyone,
>
> We are running a 3-broker Kafka 0.10.0.1 cluster. We have a Java app which
> spawns many consumer threads consuming from different topics. For every
> topic we have specified a different consumer group. A lot of times I see
> that whenever this application is restarted, a CG on one or two topics
> takes more than 5 minutes to receive its partition assignment. Until then,
> consumers for that topic don't consume anything. If I go to a Kafka broker,
> run consumer-groups.sh, and describe that particular CG, I see that it is
> rebalancing. There is time-critical data stored in that topic and we cannot
> tolerate such long delays. What can be the reason for such long rebalances?
>
> Here's our consumer config
>
>
> auto.commit.interval.ms = 3000
> auto.offset.reset = latest
> bootstrap.servers = [x.x.x.x:9092, x.x.x.x:9092, x.x.x.x:9092]
> check.crcs = true
> client.id =
> connections.max.idle.ms = 540000
> enable.auto.commit = true
> exclude.internal.topics = true
> fetch.max.bytes = 52428800
> fetch.max.wait.ms = 500
> fetch.min.bytes = 1
> group.id = otp-notifications-consumer
> heartbeat.interval.ms = 3000
> interceptor.classes = null
> key.deserializer = class
> org.apache.kafka.common.serialization.StringDeserializer
> max.partition.fetch.bytes = 1048576
> max.poll.interval.ms = 300000
> max.poll.records = 50
> metadata.max.age.ms = 300000
> metric.reporters = []
> metrics.num.samples = 2
> metrics.sample.window.ms = 30000
> partition.assignment.strategy = [class
> org.apache.kafka.clients.consumer.RangeAssignor]
> receive.buffer.bytes = 65536
> reconnect.backoff.ms = 50
> request.timeout.ms = 305000
> retry.backoff.ms = 100
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> sasl.kerberos.min.time.before.relogin = 60000
> sasl.kerberos.service.name = null
> sasl.kerberos.ticket.renew.jitter = 0.05
> sasl.kerberos.ticket.renew.window.factor = 0.8
> sasl.mechanism = GSSAPI
> security.protocol = SSL
> send.buffer.bytes = 131072
> session.timeout.ms = 30000
> ssl.cipher.suites = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> ssl.endpoint.identification.algorithm = null
> ssl.key.password = null
> ssl.keymanager.algorithm = SunX509
> ssl.keystore.location = null
> ssl.keystore.password = null
> ssl.keystore.type = JKS
> ssl.protocol = TLS
> ssl.provider = null
> ssl.secure.random.implementation = null
> ssl.trustmanager.algorithm = PKIX
> ssl.truststore.location = /x/x/client.truststore.jks
> ssl.truststore.password = [hidden]
> ssl.truststore.type = JKS
> value.deserializer = class
> org.apache.kafka.common.serialization.StringDeserializer
>
> Please help.
>
> Thanks & Regards,
> Shantanu Deshmukh
>


JMX Port Conflict When Starting Both the Broker and Kafka Connect

2018-11-30 Thread harish lohar
Hi,

I am trying to use Kafka Connect with a MongoDB source connector, running
it via connect-distributed.sh.

When I start Kafka with KAFKA_JMX_OPTS set to

-javaagent:./jmx_prometheus_javaagent-0.6.jar=7071:./kafka-0-8-2.yml

the Kafka broker starts fine, but as soon as Kafka Connect is started it
fails with an "Address already in use" error, as below:

2018-11-30 10:08:18.142:WARN:oejuc.AbstractLifeCycle:FAILED
SelectChannelConnector@0.0.0.0:7071: java.net.BindException: Address
already in use
java.net.BindException: Address already in use

Please let me know if any configuration change exists to avoid this
conflict.
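
In case it helps: all the Kafka start scripts go through
kafka-run-class.sh, which picks up KAFKA_JMX_OPTS from the environment, so
the broker and Connect end up binding the same javaagent port. A sketch of
one way around it, giving each process its own port (7072 is an arbitrary
free port; agent and config paths are taken from the message above):

    # Broker exports the agent on 7071
    KAFKA_JMX_OPTS="-javaagent:./jmx_prometheus_javaagent-0.6.jar=7071:./kafka-0-8-2.yml" \
      bin/kafka-server-start.sh config/server.properties

    # Connect exports the agent on a different port, 7072
    KAFKA_JMX_OPTS="-javaagent:./jmx_prometheus_javaagent-0.6.jar=7072:./kafka-0-8-2.yml" \
      bin/connect-distributed.sh config/connect-distributed.properties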

Thanks
Harish