Merging clusters or changing zk root

2018-05-09 Thread Luke Steensen
Hello, I suspect the answer is no, but I'm curious if there is any way to either change a running cluster's zookeeper chroot or perform a "merge" of two clusters such that their individual workloads can be distributed across the combined set of brokers. Thanks! Luke

Re: are offsets per consumer or per consumer group?

2018-02-08 Thread Luke Steensen
… Xavier Noria wrote: > On Thu, Feb 8, 2018 at 4:27 PM, Luke Steensen < > luke.steen...@braintreepayments.com> wrote: > > Offsets are maintained per consumer group. When an individual consumer > > crashes, the consumer group coordinator will detect that failure and > > …

Re: are offsets per consumer or per consumer group?

2018-02-08 Thread Luke Steensen
Offsets are maintained per consumer group. When an individual consumer crashes, the consumer group coordinator will detect that failure and trigger a rebalance. This redistributes the partitions being consumed across the available consumer processes, using the most recently committed offset for each…
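The bookkeeping described above can be sketched as a small model. This is not the Kafka client API, just an illustration of what the group coordinator does: partition and consumer names are invented for the example, and the assignment strategy shown is a simple round-robin.

```python
# Illustrative model of a consumer-group rebalance. Offsets are committed
# per (group, partition), not per consumer, so any surviving consumer can
# resume a partition exactly where the failed one stopped.

def assign_round_robin(partitions, consumers):
    """Distribute partitions across consumers in round-robin order."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# Committed offsets keyed by partition, shared by the whole group.
committed = {"orders-0": 42, "orders-1": 17, "orders-2": 99}

before = assign_round_robin(committed, ["c1", "c2", "c3"])
after = assign_round_robin(committed, ["c1", "c2"])  # c3 crashed; rebalance

for consumer, parts in after.items():
    for p in parts:
        print(f"{consumer} resumes {p} at offset {committed[p]}")
```

After the rebalance, `c1` picks up `orders-2` from offset 99 even though it never consumed that partition before, because the committed offset belongs to the group rather than to any one consumer.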

Connect continuously calling commit on failed task

2017-11-16 Thread Luke Steensen
Hello, We're developing a Kafka Connect plugin and seeing some strange behavior around error handling. When an exception is thrown in the task's poll method, the task transitions into the failed state as expected. However, when I watch the logs, I still see errors being logged from the commit method…

Re: Producer connect timeouts

2016-12-19 Thread Luke Steensen
…the solution, but agreed that it's not > great. If possible, reducing the default TCP connection timeout isn't > unreasonable either -- the defaults are set for WAN connections (and > arguably set for WAN connections of long ago), so much more aggressive > timeouts are reasonable…

Producer connect timeouts

2016-12-16 Thread Luke Steensen
Hello, Is it correct that producers do not fail new connection establishment when it exceeds the request timeout? Running on AWS, we've encountered a problem where certain very low volume producers end up with metadata that's sufficiently stale that they attempt to establish a connection to a broker…
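For readers hitting the same symptom, these are the producer settings that bound how long a client will sit on stale metadata or a dead broker. The values are illustrative only, not recommendations, and as the thread notes, in the client versions discussed here TCP connection establishment itself was not governed by `request.timeout.ms`:

```properties
# Producer settings relevant to stale metadata / dead brokers.
request.timeout.ms=30000
metadata.max.age.ms=60000       # force metadata refresh even on low-volume producers
connections.max.idle.ms=540000  # close idle connections before they go stale
retries=5
```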

Re: Kafka for event sourcing architecture

2016-05-17 Thread Luke Steensen
It's harder in Kafka because the unit of replication is an entire partition, not a single key/value pair. Partitions are large and constantly growing, whereas key/value pairs are typically much smaller and don't change in size. There would theoretically be no difference if you had one partition per key…

Resetting consumer offsets externally

2016-03-23 Thread Luke Steensen
Hello, Are there any features planned that would enable an external process to reset the offsets of a given consumer group? I realize this goes counter to the design of the system, but it would be convenient if offsets could be reset as a simple admin command. The strategies we've investigated are…
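One workaround from this era was to have an external admin process commit the desired offsets under the consumer group's own id while the group is inactive. A minimal sketch of that strategy, modeling the offset store as a plain dict (the real store is Kafka's `__consumer_offsets` topic; group and topic names here are invented):

```python
# Model of "commit on behalf of the group": a reset is just another commit
# under the same group id, performed while no consumers are running.

offset_store = {}  # (group, topic, partition) -> committed offset

def commit(group, topic, partition, offset):
    offset_store[(group, topic, partition)] = offset

def fetch(group, topic, partition, default=0):
    return offset_store.get((group, topic, partition), default)

# Normal consumption commits as it goes...
commit("billing", "payments", 0, 5000)

# ...then an external admin "reset" simply commits an earlier offset.
commit("billing", "payments", 0, 1200)

print(fetch("billing", "payments", 0))  # prints 1200
```

Later Kafka releases added first-class support for this via `kafka-consumer-groups.sh --reset-offsets` (KIP-122, Kafka 0.11), which does the equivalent as an admin command.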

Re: Unexpected mirror maker behavior with new consumer

2016-02-25 Thread Luke Steensen
t; > in both cases for consistency. Would you mind opening a JIRA? > > > > Thanks, > > Jason > > > > On Wed, Feb 24, 2016 at 4:17 PM, Luke Steensen < > > luke.steen...@braintreepayments.com> wrote: > > > > > Hello, > > > > > >

Unexpected mirror maker behavior with new consumer

2016-02-24 Thread Luke Steensen
Hello, I've been experimenting with mirror maker and am a bit confused about some behavior I'm seeing. I have two simple single-broker clusters set up locally, source and target. The source cluster has a topic foo with a few messages in it. When I run mirror maker with --whitelist '*' (as noted in…
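The message is truncated, so this is an assumption about where it was heading, but a common gotcha with mirror maker's `--whitelist` flag is that it takes a regular expression, not a shell glob: a bare `*` is not a valid regex (there is nothing for it to repeat), while `.*` matches every topic name. Python's `re` module behaves the same way as Java's regex engine on this point:

```python
# '--whitelist' expects a regex, not a glob: '*' alone is rejected,
# while '.*' matches all topic names.
import re

try:
    re.compile("*")
    bare_star_ok = True
except re.error:        # "nothing to repeat"
    bare_star_ok = False

pattern = re.compile(".*")
topics = ["foo", "bar.internal", "events-2016"]
matched = [t for t in topics if pattern.fullmatch(t)]

print(bare_star_ok)  # False
print(matched)       # all three topics
```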

Re: Controlled shutdown not relinquishing leadership of all partitions

2016-01-25 Thread Luke Steensen
… Thanks, Luke On Thu, Jan 14, 2016 at 11:48 AM, Luke Steensen < luke.steen...@braintreepayments.com> wrote: > I don't have broker logs at the moment, but I'll work on getting some I > can share. We are running 0.9.0.0 for both the brokers and producer in this > case. I…

Re: Partitions and consumer assignment

2016-01-19 Thread Luke Steensen
Hi Jason, You might find this blog post useful: http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/ - Luke On Sat, Jan 16, 2016 at 1:30 PM, Jason Williams wrote: > Hi Franco, > > Thank you for the info! It is my reading of the docs that adding > partitions…

Re: Possible WAN Replication Setup

2016-01-15 Thread Luke Steensen
Not an expert, but that sounds like a very reasonable use case for Kafka. The log.retention.* configs on the edge brokers should cover your TTL needs. On Thu, Jan 14, 2016 at 3:37 PM, Jason J. W. Williams < jasonjwwilli...@gmail.com> wrote: > Hello, > > We historically have been a RabbitMQ environment…
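For concreteness, a hedged example of the broker-side retention settings mentioned above. The values are illustrative, not recommendations: this keeps messages for 24 hours, checks for expired log segments every 5 minutes, and caps each partition's size as a backstop.

```properties
# TTL-style retention on edge brokers (illustrative values).
log.retention.hours=24
log.retention.check.interval.ms=300000
log.retention.bytes=1073741824
```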

Re: Controlled shutdown not relinquishing leadership of all partitions

2016-01-14 Thread Luke Steensen
…leadership > actually *was* transferred but the broker that answered the metadata > request did not get the news. > > In 0.8.2 we had some bugs regarding how membership info is distributed to > all nodes. This was resolved in 0.9.0.0, so perhaps an upgrade will help. > …

Re: Partition rebalancing after broker removal

2016-01-14 Thread Luke Steensen
No worries, glad to have the functionality! Thanks for your help. Luke On Thu, Jan 14, 2016 at 10:58 AM, Gwen Shapira wrote: > Yep. That tool is not our best documented :( > > On Thu, Jan 14, 2016 at 11:49 AM, Luke Steensen < > luke.steen...@braintreepayments.com> wrote:…

Re: Partition rebalancing after broker removal

2016-01-14 Thread Luke Steensen
…command, but you could use > the reassignment tool to change the preferred leader (and nothing else) and > then trigger preferred leader election. > > Gwen > > On Thu, Jan 14, 2016 at 11:30 AM, Luke Steensen < > luke.steen...@braintreepayments.com> wrote: > > > Hi Gwen, …

Re: Partition rebalancing after broker removal

2016-01-14 Thread Luke Steensen
…cluster with id = "n", and the replica-map shows that > broker "n" has certain partitions (because we never assigned them away), > the new broker will immediately become follower for these partitions and > start replicating the missing data. > This makes automatic…

Re: Partition rebalancing after broker removal

2016-01-14 Thread Luke Steensen
Hello, For #3, I assume this relies on controlled shutdown to transfer leadership gracefully? Or is there some way to use partition reassignment to set the preferred leader of each partition? I ask because we've run into some problems relying on controlled shutdown and having a separate verifiable…

Re: Controlled shutdown not relinquishing leadership of all partitions

2016-01-13 Thread Luke Steensen
…and a > few > > things make a few assumptions about how zookeeper is used. But as a tool > > for provisioning, expanding and failure recovery it is working fine so > far. > > > > *knocks on wood* > > > > On Tue, Jan 12, 2016 at 4:08 PM, Luke Steensen < > > …

Re: Controlled shutdown not relinquishing leadership of all partitions

2016-01-12 Thread Luke Steensen
https://github.com/yahoo/kafka-manager > > On Tue, Jan 12, 2016 at 2:50 PM, Luke Steensen < > luke.steen...@braintreepayments.com> wrote: > > > Hello, > > > > We've run into a bit of a head-scratcher with a new kafka deployment and > > I'm curious if anyone…

Controlled shutdown not relinquishing leadership of all partitions

2016-01-12 Thread Luke Steensen
Hello, We've run into a bit of a head-scratcher with a new kafka deployment and I'm curious if anyone has any ideas. A little bit of background: this deployment uses "immutable infrastructure" on AWS, so instead of configuring the host in-place, we stop the broker, tear down the instance, and replace…

Re: request.timeout.ms not working as expected

2015-11-10 Thread Luke Steensen
…someone more familiar with the code can comment, but that statement does appear to be preventing the correct behavior. Thanks, Luke On Tue, Nov 10, 2015 at 2:15 PM, Luke Steensen < luke.steen...@braintreepayments.com> wrote: > Hello, > > We've been testing recent version…

request.timeout.ms not working as expected

2015-11-10 Thread Luke Steensen
…sure how to safely decommission a broker without potentially leaving a producer with a permanently stuck request. Thanks, Luke Steensen