Producer TimeoutException when accessing brokers via a domain name and L3 load balancer.

2017-11-28 Thread Madhukar Bharti
Hi,

We have a Kafka cluster with three brokers (0.10.0.1). We access this
cluster through a domain name, say common.test. An L3 load balancer has
been configured for this domain name, so that requests are passed to the
brokers in a round-robin way.

The producer has been implemented with the Java client (KafkaProducer),
with the configuration: retries=3, max.in.flight.requests.per.connection=1.

Under normal load (~1.5M messages/day) we don't see any exceptions. But
when the load increases (~2.5M messages/day) we randomly receive the
TimeoutException below, ~100 exceptions in total.


> java.util.concurrent.ExecutionException:
> org.apache.kafka.common.errors.TimeoutException: Batch containing 1
> record(s) expired due to timeout while requesting metadata from brokers for
> Test-7
> at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:65)
> at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:52)
> at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:25)
> at com.zoho.mqueue.producer.KafkaSender.sendToKafka(KafkaSender.java:89)
> at
>

If we point the producer configuration directly at the broker IPs, no
exception occurs. How can we solve this so that no messages are lost?
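
For reference, here is a minimal sketch of the kind of producer setup in
question; the broker host names, timeout values and topic are illustrative
assumptions only, not our actual configuration. The point is that
bootstrap.servers can list the brokers directly (they are only used for the
initial metadata fetch, after which the client talks to the partition
leaders), and that the request timeout can be given more headroom for load
spikes.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Listing the brokers directly; bootstrap servers are only contacted for
        // the initial metadata fetch, then the client talks to partition leaders.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("retries", "3");
        props.put("max.in.flight.requests.per.connection", "1");
        // Illustrative values: give requests and metadata fetches more headroom
        // when the load spikes.
        props.put("request.timeout.ms", "60000");
        props.put("max.block.ms", "60000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            RecordMetadata md =
                    producer.send(new ProducerRecord<>("Test", "key", "value")).get();
            System.out.println("Written to " + md.topic() + "-" + md.partition());
        }
    }
}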


Regards,
Madhukar


Multiple brokers - do they share the load?

2017-11-28 Thread Skip Montanaro
Apologies for the rather long set-up...

I've been using Kafka as a client for a few months now. The setup I've
been using has three brokers on separate servers, all listening to
port 9092. My consumers always connected to server1:9092. I've ignored
server2 and server3.

Now I'm starting to mess around a bit with setting up my own
itsy-bitsy cluster. Step one is a single instance at host1:9092. Next
step in the instructions (I'm following the recipe laid out in the
documentation) will be to add two more brokers at host1:9093 and
host1:9094.

My question: If every consumer connects to host1:9092 will the brokers
listening to the other ports starve for attention, or does the
connection process somehow redirect clients to the other brokers so
the three (or more) brokers get fairly equitable loads?

Thanks,

Skip Montanaro


Re: Multiple brokers - do they share the load?

2017-11-28 Thread Svante Karlsson
You are connecting to a single seed node; your Kafka library will then,
under the hood, connect to the partition leaders for each partition you
subscribe or post to.

The load is no different from what you would get if you gave all nodes as
the connect parameter. However, if your seed node crashes, then your client
cannot connect to the cluster.
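
For illustration, a minimal consumer sketch (the group id and topic name are
assumptions) that lists every broker as a bootstrap server, so any one of
them can answer the initial metadata request if another is down:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Any of these can serve the first metadata request; after that the
        // consumer talks to whichever brokers lead its assigned partitions.
        props.put("bootstrap.servers", "host1:9092,host1:9093,host1:9094");
        props.put("group.id", "demo-group");                      // assumption
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic")); // assumption
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000L);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}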

2017-11-28 15:06 GMT+01:00 Skip Montanaro :

> Apologies for the rather long set-up...
>
> I've been using Kafka as a client for a few months now. The setup I've
> been using has three brokers on separate servers, all listening to
> port 9092. My consumers always connected to server1:9092. I've ignored
> server2 and server3.
>
> Now I'm starting to mess around a bit with setting up my own
> itsy-bitsy cluster. Step one is a single instance at host1:9092. Next
> step in the instructions (I'm following the recipe laid out in the
> documentation) will be to add two more brokers at host1:9093 and
> host1:9094.
>
> My question: If every consumer connects to host1:9092 will the brokers
> listening to the other ports starve for attention, or does the
> connection process somehow redirect clients to the other brokers so
> the three (or more) brokers get fairly equitable loads?
>
> Thanks,
>
> Skip Montanaro
>


Re: GDPR appliance

2017-11-28 Thread Ben Stopford
You should also be able to manage this with a compacted topic. If you give
each message a unique key, you'd then be able to delete or overwrite
specific records. Kafka will delete them from disk when compaction runs. If
you need to partition for ordering purposes, you'd need to use a custom
partitioner that extracts a partition key from the unique key before it
does the hash.
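
For illustration, a rough sketch of such a partitioner, assuming keys of the
form "<orderingKey>:<uniqueSuffix>" (the key format and class name are made
up):

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Assumes keys like "user42:evt-981". Records sharing an orderingKey land on
// the same partition, while the full key stays unique so individual records
// can later be overwritten or tombstoned.
public class OrderingKeyPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        String orderingKey = key.toString().split(":", 2)[0];
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // Same murmur2 hashing helpers the default partitioner uses.
        return Utils.toPositive(Utils.murmur2(orderingKey.getBytes())) % numPartitions;
    }

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void close() { }
}

It would be wired in via the producer's partitioner.class setting, and a
delete then becomes sending a record with the same full key and a null value
(a tombstone) to the compacted topic; once the cleaner runs, the original
record, and eventually the tombstone itself, are removed from disk.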

B

On Sun, Nov 26, 2017 at 10:40 AM Wim Van Leuven <
wim.vanleu...@highestpoint.biz> wrote:

> Thanks, Lars, for the most interesting read!
>
>
>
> On Sun, 26 Nov 2017 at 00:38 Lars Albertsson  wrote:
>
> > Hi David,
> >
> > You might find this presentation useful:
> > https://www.slideshare.net/lallea/protecting-privacy-in-practice
> >
> > It explains privacy building blocks primarily in a batch processing
> > context, but most of the principles are applicable for stream
> > processing as well, e.g. splitting non-PII and PII data ("ejected
> > record" slide), encrypting PII data ("lost key" slide).
> >
> > Regards,
> >
> >
> >
> > Lars Albertsson
> > Data engineering consultant
> > www.mapflat.com
> > https://twitter.com/lalleal
> > +46 70 7687109
> > Calendar: http://www.mapflat.com/calendar
> >
> >
> > On Wed, Nov 22, 2017 at 7:46 PM, David Espinosa 
> wrote:
> > > Hi all,
> > > I would like to double-check with you how to apply some of the GDPR to
> > > my Kafka topics, in particular the "right to be forgotten", which forces
> > > us to delete some data contained in the messages. So not deleting the
> > > message, but editing it.
> > > For doing that, my intention is to replicate the topic and apply a
> > > transformation over it, using frameworks like Kafka Streams or Apache
> > > Storm.
> > >
> > > Has anybody had to solve this problem?
> > >
> > > Thanks in advance.
> >
>


Fwd: Fwd: suddenly kafka cluster shutdown

2017-11-28 Thread Jose Raul Perez Rodriguez


Hi Kafka users and devs,

I have a Kafka cluster on AWS EMR instances, using the ZooKeeper that
comes with EMR.

The problem is that after some days of running and working well, the
cluster shuts down; every node shuts down showing log messages like the
ones below.

This has happened 3 times in the last two months, with apparently no
reason because no error is shown, always with Kafka 0.11.


[2017-11-22 16:00:00,002] INFO [GroupCoordinator 1003]: Member
consumer-2-9e7145cf-9f7f-4056-b9aa-8cab564b6867 in group
telecomming-stream-0 has failed, removing it from the group
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:00:00,003] INFO [GroupCoordinator 1003]: Preparing to
rebalance group telecomming-stream-0 with old generation 7
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:00:00,003] INFO [GroupCoordinator 1003]: Group
telecomming-stream-0 with generation 8 is now empty
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:02:10,957] INFO [GroupCoordinator 1003]: Preparing to
rebalance group telecomming-stream-0 with old generation 8
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:02:10,958] INFO [GroupCoordinator 1003]: Stabilized group
telecomming-stream-0 generation 9 (__consumer_offsets-28)
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:02:11,037] INFO [GroupCoordinator 1003]: Assignment
received from leader for group telecomming-stream-0 for generation 9
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:02:32,409] INFO [GroupCoordinator 1003]: Preparing to
rebalance group telecomming-stream-0 with old generation 9
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:02:32,410] INFO [GroupCoordinator 1003]: Group
telecomming-stream-0 with generation 10 is now empty
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:04:53,252] INFO [Group Metadata Manager on Broker 1003]:
Group telecomming-stream-0 transitioned to Dead in generation 10
(kafka.coordinator.group.GroupMetadataManager)
[2017-11-22 16:04:53,252] INFO [Group Metadata Manager on Broker 1003]:
Removed 0 expired offsets in 0 milliseconds.
(kafka.coordinator.group.GroupMetadataManager)
[2017-11-22 16:12:47,441] INFO [GroupCoordinator 1003]: Preparing to
rebalance group telecomming-stream-0 with old generation 0
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:12:47,442] INFO [GroupCoordinator 1003]: Stabilized group
telecomming-stream-0 generation 1 (__consumer_offsets-28)
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:12:47,442] INFO [GroupCoordinator 1003]: Preparing to
rebalance group telecomming-stream-0 with old generation 1
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:12:47,446] INFO [GroupCoordinator 1003]: Stabilized group
telecomming-stream-0 generation 2 (__consumer_offsets-28)
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:12:47,529] INFO [GroupCoordinator 1003]: Assignment
received from leader for group telecomming-stream-0 for generation 2
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:14:30,009] INFO [GroupCoordinator 1003]: Member
consumer-1-9d40ffcc-0f75-44f2-b9db-b34dfd47fd6c in group
telecomming-stream-0 has failed, removing it from the group
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:14:30,009] INFO [GroupCoordinator 1003]: Preparing to
rebalance group telecomming-stream-0 with old generation 2
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:14:53,252] INFO [Group Metadata Manager on Broker 1003]:
Removed 0 expired offsets in 0 milliseconds.
(kafka.coordinator.group.GroupMetadataManager)
[2017-11-22 16:15:00,010] INFO [GroupCoordinator 1003]: Group
telecomming-stream-0 with generation 3 is now empty
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:15:30,002] INFO [GroupCoordinator 1003]: Member
consumer-2-653f3962-8259-47e9-a8dc-4b23d85f2c80 in group
telecomming-stream-0 has failed, removing it from the group
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:15:30,003] INFO [GroupCoordinator 1003]: Preparing to
rebalance group telecomming-stream-0 with old generation 3
(__consumer_offsets-28) (kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:15:30,003] INFO [GroupCoordinator 1003]: Stabilized group
telecomming-stream-0 generation 4 (__consumer_offsets-28)
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:15:30,004] INFO [GroupCoordinator 1003]: Assignment
received from leader for group telecomming-stream-0 for generation 4
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:16:00,005] INFO [GroupCoordinator 1003]: Member
consumer-2-5c6d78ef-1422-4123-b0fe-76de4745bb6e in group
telecomming-stream-0 has failed, removing it from the group
(kafka.coordinator.group.GroupCoordinator)
[2017-11-22 16:16:00,005] INFO [GroupCoordinator 1003]: Pr

RE: Multiple brokers - do they share the load?

2017-11-28 Thread Tauzell, Dave
If you create a partitioned topic with at least 3 partitions, then you will see 
your client connect to all of the brokers.  The client decides which partition 
a message should go to and then sends it directly to the broker that is the 
leader for that partition.  If you have replicated topics, then the brokers 
themselves will also be connected to one another in order to replicate 
messages.
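
For illustration, a minimal sketch of creating such a topic with the Java
AdminClient (available since 0.11; the broker address, topic name and counts
are assumptions):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "host1:9092");   // assumption

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions spread the leaders (and hence the produce/fetch load)
            // across the brokers; replication factor 3 keeps copies on all of them.
            NewTopic topic = new NewTopic("demo-topic", 3, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}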

-Dave

-Original Message-
From: Skip Montanaro [mailto:skip.montan...@gmail.com]
Sent: Tuesday, November 28, 2017 8:06 AM
To: users@kafka.apache.org
Subject: Multiple brokers - do they share the load?

Apologies for the rather long set-up...

I've been using Kafka as a client for a few months now. The setup I've been 
using has three brokers on separate servers, all listening to port 9092. My 
consumers always connected to server1:9092. I've ignored
server2 and server3.

Now I'm starting to mess around a bit with setting up my own itsy-bitsy 
cluster. Step one is a single instance at host1:9092. Next step in the 
instructions (I'm following the recipe laid out in the
documentation) will be to add two more brokers at host1:9093 and host1:9094.

My question: If every consumer connects to host1:9092 will the brokers 
listening to the other ports starve for attention, or does the connection 
process somehow redirect clients to the other brokers so the three (or more) 
brokers get fairly equitable loads?

Thanks,

Skip Montanaro



Re: Upgrading producers to 1.0

2017-11-28 Thread Brian Cottingham
On 11/27/17, 8:36 PM, "Matthias J. Sax"  wrote:

Not sure where exactly you copied this from. However, the second paragraph here
https://kafka.apache.org/documentation/#upgrade_10_2_0 explains:

> Starting with version 0.10.2, Java clients (producer and consumer) have 
acquired the ability to communicate with older brokers. Version 0.10.2 clients 
can talk to version 0.10.0 or newer brokers. However, if your brokers are older 
than 0.10.0, you must upgrade all the brokers in the Kafka cluster before 
upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer clients.

Ah. I see. You copied from the KIP.

The "Motivation" sections describes the state _before_ the change :)


I think we may be talking about different goals; if I understand correctly, 
you’re discussing upgrading clients without upgrading brokers. I want to do the 
opposite: upgrade brokers and leave the clients alone. It sounds like that 
should work fine, right?



Apache Kafka with NAND Flash Memory and small hardware

2017-11-28 Thread Haenel, Angelika
Hi there,

I would like to know if it is a bad idea to use Apache Kafka, ZooKeeper and 
NiFi on small hardware like a box PC with:

CPU: Celeron J1900 or Atom E3845
RAM: 4 GB
SSD: 32 GB

How often will Apache Kafka write to disk, given that the write cycles of an 
SSD are limited?

If there is no problem, what are the best settings?
Should the SSD be split into different partitions? I have read about using 
multiple discs for better performance, but we have only one SSD available.

Best regards

Angelika



Lots of warns about LogCleaningPaused during partition reassignment

2017-11-28 Thread BGCH

Hello!

We tried to migrate data from a 0.10.2.1 cluster to 0.11.0.2. First we 
spread topics across both clusters. There were lots of problems and restarts 
of some nodes in both clusters (we probably shouldn't have done that). All 
this ended up in a state where we had lots of exceptions from 2 nodes of the 
0.10 cluster:

java.lang.IllegalStateException: Compaction for partition topic_name-7 
cannot be aborted and paused since it is in LogCleaningPaused state.

and the whole reassignment process got stuck.

I looked through the source code of LogManager and found KAFKA-3123, which 
may be the cause. I restarted those 2 nodes and the reassignment proceeded, 
but now I have:

[ReplicaFetcherThread-0-1028], Error for partition [topic_name,33] to 
broker 1028: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
This server does not host this topic-partition.

My guess is that Kafka could not roll new log segments due to the 
LogCleaningPaused exception, and after the restart some segments were lost. 
Is this correct?

Is there any other possible cause of that LogCleaningPaused exception? 
And the main question is: how do we prevent all this?




Re: Upgrading producers to 1.0

2017-11-28 Thread Matthias J. Sax
Upgrading brokers without upgrading the clients was always supported :)

Since 0.10.2, it also works the other way round.


-Matthias

On 11/28/17 7:33 AM, Brian Cottingham wrote:
> On 11/27/17, 8:36 PM, "Matthias J. Sax"  wrote:
> 
> Not sure where exactly you copied this from. However, the second paragraph here
> https://kafka.apache.org/documentation/#upgrade_10_2_0 explains:
> 
> > Starting with version 0.10.2, Java clients (producer and consumer) have 
> acquired the ability to communicate with older brokers. Version 0.10.2 
> clients can talk to version 0.10.0 or newer brokers. However, if your brokers 
> are older than 0.10.0, you must upgrade all the brokers in the Kafka cluster 
> before upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer 
> clients.
> 
> Ah. I see. You copied from the KIP.
> 
> The "Motivation" sections describes the state _before_ the change :)
> 
> 
> I think we may be talking about different goals; if I understand correctly, 
> you’re discussing upgrading clients without upgrading brokers. I want to do 
> the opposite: upgrade brokers and leave the clients alone. It sounds like 
> that should work fine, right?
> 





Re: Upgrading producers to 1.0

2017-11-28 Thread Brian Cottingham
Excellent, thanks very much for the help!

On 11/28/17, 11:43 AM, "Matthias J. Sax"  wrote:

Upgrading brokers without upgrading the clients was always supported :)

Since 0.10.2, it also works the other way round.


-Matthias

On 11/28/17 7:33 AM, Brian Cottingham wrote:
> On 11/27/17, 8:36 PM, "Matthias J. Sax"  wrote:
> 
> Not sure where exactly you copied this from. However, the second paragraph here
> https://kafka.apache.org/documentation/#upgrade_10_2_0 explains:
> 
> > Starting with version 0.10.2, Java clients (producer and consumer) 
have acquired the ability to communicate with older brokers. Version 0.10.2 
clients can talk to version 0.10.0 or newer brokers. However, if your brokers 
are older than 0.10.0, you must upgrade all the brokers in the Kafka cluster 
before upgrading your clients. Version 0.10.2 brokers support 0.8.x and newer 
clients.
> 
> Ah. I see. You copied from the KIP.
> 
> The "Motivation" sections describes the state _before_ the change :)
> 
> 
> I think we may be talking about different goals; if I understand 
correctly, you’re discussing upgrading clients without upgrading brokers. I 
want to do the opposite: upgrade brokers and leave the clients alone. It sounds 
like that should work fine, right?
> 





Re: Exception in rebalancing | Kafka 1.0.0

2017-11-28 Thread Guozhang Wang
Sameer,

Thanks for reporting. It looks similar to a ticket we resolved some time
ago (https://issues.apache.org/jira/browse/KAFKA-5154); note that we added
a check there to avoid the NPE and instead throw a more meaningful
exception message, but the root cause may be the same.

If your Kafka Streams application is also on version 1.0.0, it may indicate
some corner cases are still not fixed. If that is the case, could you
create a JIRA and upload all the logs / stack traces you have so we can
investigate this issue?


Guozhang


On Mon, Nov 27, 2017 at 10:36 PM, Sameer Kumar 
wrote:

> hi all,
>
> Faced this exception yesterday; are there any possible reasons for it? At
> the same time, one of the machines in my Kafka Streams cluster was
> restarted, and hence the job ended there.
> Detailed exception trace is attached.
>
> I am using Kafka 1.0.0.
>
> 2017-11-28 00:07:38 ERROR Kafka010Base:46 - Exception caught in thread
> c-7-aq23-000647df-ff25-48de-b92f-02f43988353e-StreamThread-6
> java.lang.IllegalStateException: Record's partition does not belong to
> this partition-group.
> at org.apache.kafka.streams.processor.internals.
> PartitionGroup.numBuffered(PartitionGroup.java:156)
> at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(
> StreamTask.java:545)
> at org.apache.kafka.streams.processor.internals.StreamThread.
> addRecordsToTasks(StreamThread.java:920)
> at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(
> StreamThread.java:821)
> at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(
> StreamThread.java:774)
> at org.apache.kafka.streams.processor.internals.
> StreamThread.run(StreamThread.java:744)
>
> Regards,
> -Sameer.
>



-- 
-- Guozhang


Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-28 Thread Jason Gustafson
Sorry for being late to the party, but congratulations Onur!



On Wed, Nov 8, 2017 at 1:47 AM, Sandeep Nemuri  wrote:

> Congratulations Onur!!
>
> On Wed, Nov 8, 2017 at 9:19 AM, UMESH CHAUDHARY 
> wrote:
>
> > Congratulations Onur!
> >
> > On Tue, 7 Nov 2017 at 21:44 Jun Rao  wrote:
> >
> > > Affan,
> > >
> > > All known problems in the controller are described in the doc linked
> from
> > > https://issues.apache.org/jira/browse/KAFKA-5027.
> > >
> > > Thanks,
> > >
> > > Jun
> > >
> > > On Mon, Nov 6, 2017 at 11:00 PM, Affan Syed  wrote:
> > >
> > > > Congrats Onur,
> > > >
> > > > Can you also share the document where all known problems are listed;
> I
> > am
> > > > assuming these bugs are still valid for the current stable release.
> > > >
> > > > Affan
> > > >
> > > > - Affan
> > > >
> > > > On Mon, Nov 6, 2017 at 10:24 PM, Jun Rao  wrote:
> > > >
> > > > > Hi, everyone,
> > > > >
> > > > > The PMC of Apache Kafka is pleased to announce a new Kafka
> committer
> > > Onur
> > > > > Karaman.
> > > > >
> > > > > Onur's most significant work is the improvement of Kafka
> controller,
> > > > which
> > > > > is the brain of a Kafka cluster. Over time, we have accumulated
> > quite a
> > > > few
> > > > > correctness and performance issues in the controller. There have
> been
> > > > > attempts to fix controller issues in isolation, which would make
> the
> > > code
> > > > > base more complicated without a clear path of solving all problems.
> > > Onur
> > > > is
> > > > > the one who took a holistic approach, by first documenting all
> known
> > > > > issues, writing down a new design, coming up with a plan to deliver
> > the
> > > > > changes in phases and executing on it. At this point, Onur has
> > > completed
> > > > > the two most important phases: making the controller single
> threaded
> > > and
> > > > > changing the controller to use the async ZK api. The former fixed
> > > > multiple
> > > > > deadlocks and race conditions. The latter significantly improved
> the
> > > > > performance when there are many partitions. Experimental results
> show
> > > > that
> > > > > Onur's work reduced the controlled shutdown time by a factor of 100
> > > times
> > > > > and the controller failover time by a factor of 3 times.
> > > > >
> > > > > Congratulations, Onur!
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Jun (on behalf of the Apache Kafka PMC)
> > > > >
> > > >
> > >
> >
>
>
>
> --
> *  Regards*
> *  Sandeep Nemuri*
>


KAFKA-5413 and 0.10.x

2017-11-28 Thread Philippe Laflamme
Hi,

We've recently hit an issue that is marked as "resolved" in the 0.10
branch, but has never been released[1]. There is no known workaround for
the problem.

Upgrading our cluster to a 0.11 version is certainly an option, but it is a
risky one given that it could introduce more bugs (especially since it has a
new storage format and a whole slew of new features). We would much rather
upgrade to a 0.10.x release to obtain only the fix that we need.

I understand that the Kafka development team has moved on to 0.11 and now
1.0, but it's also safe to say that there are more users on past versions,
though I have no data to back this statement (maybe someone can?). Given
that the 0.10.x branch is broken and still actively used, it seems like a
0.10.2.2 release would be beneficial to the user community.

My question is whether or not users can help with making such releases
happen. Is there anything that the community can do to help cutting a
release like this one? If not, is there any documentation about how one
would proceed to cut an internal release, e.g.: can the existing release
tooling be leveraged? Otherwise, maybe someone here has gone through this
process and can share their experience?

Cheers,
Philippe
[1] https://issues.apache.org/jira/browse/KAFKA-5413


Managing broker rolls?

2017-11-28 Thread Matt Farmer
Hey all,

So, I'm curious to hear how others have solved this problem.

We've got quite a few brokers and rolling all of them to pick up new
configuration (which consists of triggering a clean shutdown, then
restarting the service and waiting for replication to catch up before
moving on) ultimately takes an entire day to do as a human. This is a
process I would like to automate.

Things that I have looked at include:

(1) Using a bot that can talk to the Kafka admin API - but there's
currently no Admin API call to trigger a clean shutdown of a broker (would
folks be interested in this?)

(2) Using a giant shell script that speaks admin API and can detect ISR
catch-up — but this either requires a developer's machine to stay connected
during the entire process (not a guarantee) or requires us to give some
shared resource SSH permissions across all our servers (not ideal)

What are others doing?
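
For concreteness, the ISR catch-up detection I have in mind looks roughly
like the sketch below; the broker address is a placeholder, and the actual
shutdown/restart of each broker is left to whatever tooling already exists:

import java.util.Collection;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class IsrCatchUpCheck {

    // Returns true once every partition of every topic has its full replica
    // set back in the ISR, i.e. it should be safe to roll the next broker.
    static boolean allReplicasInSync(AdminClient admin) throws Exception {
        Collection<String> topics = admin.listTopics().names().get();
        for (TopicDescription desc :
                admin.describeTopics(topics).all().get().values()) {
            for (TopicPartitionInfo p : desc.partitions()) {
                if (p.isr().size() < p.replicas().size()) {
                    return false;   // still under-replicated somewhere
                }
            }
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            while (!allReplicasInSync(admin)) {
                System.out.println("Waiting for ISRs to catch up...");
                Thread.sleep(10_000);
            }
            System.out.println("All ISRs caught up; safe to roll the next broker.");
        }
    }
}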

Would folks be interested in an AdminClient call that triggers a graceful
shutdown on a Broker? I could write up a KIP for this if so.

Cheers,
Matt


kafka compacted topic

2017-11-28 Thread Kane Kim
How does kafka log compaction work?
Does it compact all of the log files periodically against new changes?


Re: kafka compacted topic

2017-11-28 Thread Jakub Scholz
There is quite a nice section on this in the documentation -
http://kafka.apache.org/documentation/#compaction ... I think it should
answer your questions.
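
As a small illustration of how compaction is switched on (it is a per-topic
setting; the topic name, counts and broker address below are made up):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CompactedTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumption

        try (AdminClient admin = AdminClient.create(props)) {
            // "user-profiles", 3 partitions and replication factor 1 are examples.
            NewTopic topic = new NewTopic("user-profiles", 3, (short) 1)
                    .configs(Collections.singletonMap(
                            TopicConfig.CLEANUP_POLICY_CONFIG,
                            TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}

Once enabled, the log cleaner periodically rewrites older log segments so
that only the latest record for each key is retained (and keys whose latest
record is a null-value tombstone are eventually dropped); it does not
rewrite the whole log on every new message.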

On Wed, Nov 29, 2017 at 7:19 AM, Kane Kim  wrote:

> How does kafka log compaction work?
> Does it compact all of the log files periodically against new changes?
>