Deleting Kafka consumer offset topic .log files

2018-09-26 Thread Kaushik Nambiar
Hello, I am using SSL Kafka v0.11.xx on a Linux operating system. I can see in the log files that the topic segments are getting deleted regularly. My concern is that for the system topic, __consumer_offsets, the segments are not getting deleted. So that's contributing to a l
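A quick way to investigate (a sketch, not from the thread; the ZooKeeper address is a placeholder) is to describe the internal topic. Note that __consumer_offsets uses cleanup.policy=compact, so its segments are compacted by the log cleaner rather than deleted by retention:

```shell
# Sketch: inspect the internal offsets topic on a 0.11.x broker.
# __consumer_offsets is compacted, not size/time-deleted, by design,
# so "segments not deleted" may just mean the cleaner hasn't run (or has died).
bin/kafka-topics.sh --zookeeper localhost:2181 \
  --describe --topic __consumer_offsets
```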

unable to start kafka when zookeeper cluster is in working but unhealthy state

2018-09-26 Thread James Yu
Hi, I fail to start a Kafka broker when the corresponding ZooKeeper cluster is in a working but unhealthy state. The ZooKeeper cluster is made of 3 nodes: zookeeper-0, zookeeper-1, zookeeper-2. I put all 3 ZooKeeper nodes into Kafka's server.properties, specifically for the zookeeper.connect attribute a
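The setup described would look roughly like this in server.properties (a sketch; the port is the ZooKeeper default, and the hostnames are taken from the thread):

```properties
# All three ZooKeeper nodes listed so the broker can fail over between them
zookeeper.connect=zookeeper-0:2181,zookeeper-1:2181,zookeeper-2:2181
```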

RE: Terminate Streams application from within Transformer?

2018-09-26 Thread Tim Ward
That works, thanks. Tim Ward -Original Message- From: Bill Bejeck Sent: 21 September 2018 15:06 To: users@kafka.apache.org Subject: Re: Terminate Streams application from within Transformer? Hi Tim, I wouldn't recommend System.exit(), as it won't give streams a chance to go through a s
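The pattern Bill recommends instead of System.exit() can be sketched generically: the Transformer only signals a latch, and the main thread performs the close so Streams can shut down cleanly. This is a self-contained illustration of the pattern only; `Closable` is a stand-in for KafkaStreams, not the real API:

```java
import java.util.concurrent.CountDownLatch;

// Sketch: terminate a Streams app from inside a Transformer without
// System.exit(). The Transformer signals a latch; the main thread,
// which owns the KafkaStreams instance, calls close() so state stores
// are flushed and the shutdown is graceful.
public class GracefulShutdownSketch {
    interface Closable { void close(); }          // stand-in for KafkaStreams

    static final CountDownLatch SHUTDOWN = new CountDownLatch(1);

    // Called from transform() on the fatal condition: only signal, never exit
    static void onFatalRecord() {
        SHUTDOWN.countDown();
    }

    public static void main(String[] args) throws InterruptedException {
        Closable streams = () -> System.out.println("closed cleanly");
        onFatalRecord();                          // simulate the Transformer signalling
        SHUTDOWN.await();                         // main thread blocks until signalled
        streams.close();                          // graceful close, not System.exit()
    }
}
```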

Re: unable to start kafka when zookeeper cluster is in working but unhealthy state

2018-09-26 Thread Liam Clarke
Hi James, That's not an unresponsive node that's killing Kafka, that's a failure to resolve the address that's killing it - my personal expectation would be that even though zookeeper-2.zookeeper.etc may be down, its name should still resolve. Regards, Liam Clarke On Wed, Sep 26, 2018 at 10:0

Re: unable to start kafka when zookeeper cluster is in working but unhealthy state

2018-09-26 Thread Manikumar
You can try using Kafka 2.0 release. Original issue is handled in ZOOKEEPER-2184 and corresponding zookeeper version is used in Kafka 2.0. On Wed, Sep 26, 2018 at 3:58 PM Liam Clarke wrote: > Hi James, > > That's not an unresponsive node that's killing Kafka, that's a failure to > resolve the a

Re: unable to start kafka when zookeeper cluster is in working but unhealthy state

2018-09-26 Thread James Yu
@Liam, the hostname is removed from the DNS server because the node is no longer alive, so Kafka is unable to resolve the IP for zookeeper-2, thus a NullPointerException is thrown. @Manikumar, ZOOKEEPER-2184 is to re-resolve the IP for a new instance of zookeeper-2; however, zookeeper-2 stays down, hence no IP to be

Re: unable to start kafka when zookeeper cluster is in working but unhealthy state

2018-09-26 Thread Manikumar
Yes, in case of UnknownHostException, the ZooKeeper client will try to connect to the remaining hostnames given in the ZK connect string. On Wed, Sep 26, 2018 at 7:45 PM James Yu wrote: > @Liam, the hostname is removed from dns server due to the node is no longer > alive, so kafka is unable reolve IP for

manually trigger log compaction

2018-09-26 Thread Xu, Nan
Hi, Wondering if there is a way to manually trigger log compaction for a certain topic? Thanks, Nan

Have connector be paused from start

2018-09-26 Thread Rickard Cardell
Hi, Is there a way to have a Kafka Connect connector begin in the state 'PAUSED'? I.e., I would like to have the connector set to paused before it can process any data from Kafka. Some background: I have a use case where we will push data from Kafka into S3 using Kafka Connect. It also involves a one-ti
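Kafka Connect of this era has no "create as paused" option, but a close approximation is to pause the connector via the REST API immediately after creating it. A sketch, where the connector name and Connect worker address are placeholders:

```shell
# Pause the connector right after creation so it holds before processing data.
# "my-s3-sink" and localhost:8083 are example values, not from the thread.
curl -X PUT http://localhost:8083/connectors/my-s3-sink/pause

# Later, when the one-time backfill is done, resume it:
curl -X PUT http://localhost:8083/connectors/my-s3-sink/resume
```

There is still a small window between creation and the pause taking effect, so this is a workaround rather than a true "start paused".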

Re: manually trigger log compaction

2018-09-26 Thread M. Manna
This is possible using the kafka-configs script. You need 'topics' as the entity type and the --alter directive. The changes are made cluster-wide. Try the help documentation for kafka-configs. Regards, On Wed, 26 Sep 2018 at 16:16, Xu, Nan wrote: > Hi, > >Wondering is there a way to manually trigger
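There is no direct "compact now" command; the usual trick is to temporarily tighten the topic's cleaner-related configs so the log cleaner kicks in sooner. A sketch (topic name, ZooKeeper address, and values are placeholders, and the overrides should be reverted afterwards):

```shell
# Encourage the log cleaner to run on "my-topic" soon: allow cleaning at a
# very low dirty ratio and roll segments quickly (only rolled segments are
# eligible for compaction).
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name my-topic --alter \
  --add-config min.cleanable.dirty.ratio=0.01,segment.ms=60000
```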

Re: [ANNOUNCE] New committer: Colin McCabe

2018-09-26 Thread Srinivas Reddy
Congratulations Colin 👏 - Srinivas - Typed on tiny keys. pls ignore typos.{mobile app} On Tue 25 Sep, 2018, 16:39 Ismael Juma, wrote: > Hi all, > > The PMC for Apache Kafka has invited Colin McCabe as a committer and we are > pleased to announce that he has accepted! > > Colin has contributed

Re: [ANNOUNCE] New committer: Colin McCabe

2018-09-26 Thread Konstantine Karantasis
Well deserved! Congratulations Colin. -Konstantine On Wed, Sep 26, 2018 at 4:57 AM Srinivas Reddy wrote: > Congratulations Colin 👏 > > - > Srinivas > > - Typed on tiny keys. pls ignore typos.{mobile app} > > On Tue 25 Sep, 2018, 16:39 Ismael Juma, wrote: > > > Hi all, > > > > The PMC for Apach

Re: [ANNOUNCE] New committer: Colin McCabe

2018-09-26 Thread Martin Gainty
welcome and congratulations Colin Martin From: Konstantine Karantasis Sent: Wednesday, September 26, 2018 1:03 PM To: d...@kafka.apache.org Cc: users@kafka.apache.org Subject: Re: [ANNOUNCE] New committer: Colin McCabe Well deserved! Congratulations Colin. -Kons

Re: [ANNOUNCE] New committer: Colin McCabe

2018-09-26 Thread Yishun Guan
Congrats! -Yishun On Wed, Sep 26, 2018, 10:04 AM Konstantine Karantasis < konstant...@confluent.io> wrote: > Well deserved! Congratulations Colin. > > -Konstantine > > On Wed, Sep 26, 2018 at 4:57 AM Srinivas Reddy > > wrote: > > > Congratulations Colin 👏 > > > > - > > Srinivas > > > > - Typed o

When adding new broker to cluster, choking on bandwidth and the producer latencies are very high

2018-09-26 Thread Yam Kolli
Hi Team, We have a 9-node cluster with a data size of around 14TB. One broker was recreated due to a hardware problem. When we try to add this broker back, we see producer latencies of around 15 sec and the bandwidth is getting exhausted. Kafka Version: *0.10.0.1* Instance: Memory: 16gb Perc
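The usual remedy for replication traffic saturating the network is a replication throttle (KIP-73), but note that it was introduced in 0.10.1, so it would require upgrading from the 0.10.0.1 in the thread. A sketch, with placeholder addresses, file name, and throttle value:

```shell
# Cap inter-broker replication at ~50 MB/s while the recreated broker
# catches up; remove the throttle (--verify) once reassignment completes.
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json \
  --throttle 50000000 --execute
```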

Kafka use case

2018-09-26 Thread Shibi Ns
I have 2 systems: 1. System I - a web-based interface based on Oracle DB, with no REST API support. 2. System II - supports REST APIs and also has a web-based interface. When a record is created or updated in either system, I want to propagate the data to the other system. Ca

Re: Kafka consumer offset topic index not getting deleted

2018-09-26 Thread Satish Duggana
>>Offsets.retention.minutes (default is 7 days, not 24 hours). In 0.11.x, the default value was 24 hrs; it was changed to 7 days in 2.0 [1]. Kaushik mentions that they are using 0.11.xx. 1. http://kafka.apache.org/documentation/#upgrade_200_notable On Mon, Sep 24, 2018 at 8:42 PM, Kaushik Nambiar wrot
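The version-dependent default can be overridden in the broker configuration (a sketch; the value below is the 7-day default that 2.0 adopted, expressed in minutes):

```properties
# 0.11.x default is 1440 (24 hours); 2.0 raised the default to 10080 (7 days).
# Setting it explicitly makes offset retention independent of the broker version.
offsets.retention.minutes=10080
```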