Hello,
I suspect the answer is no, but I'm curious if there is any way to either
change a running cluster's zookeeper chroot or perform a "merge" of two
clusters such that their individual workloads can be distributed across the
combined set of brokers.
Thanks!
Luke
…, Xavier Noria wrote:
> On Thu, Feb 8, 2018 at 4:27 PM, Luke Steensen <
> luke.steen...@braintreepayments.com> wrote:
>
> > Offsets are maintained per consumer group. When an individual consumer
> > crashes, the consumer group coordinator will detect that failure and
> > …
Offsets are maintained per consumer group. When an individual consumer
crashes, the consumer group coordinator will detect that failure and
trigger a rebalance. This redistributes the partitions being consumed
across the available consumer processes, using the most recently committed
offset for each partition…
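For anyone who finds this thread later, a minimal sketch of that consumer-group flow with the Java client; the topic and group names here are made up, and commits are manual just to make the resume-from-committed-offset behavior explicit:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class GroupConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "example-group");   // offsets are tracked per group.id
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("example-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d%n", record.partition(), record.offset());
                }
                // If this process crashes before the commit below, the rebalance hands
                // its partitions to the surviving members, which resume from the last
                // committed offsets (re-reading anything uncommitted).
                consumer.commitSync();
            }
        }
    }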
Hello,
We're developing a Kafka Connect plugin and seeing some strange behavior
around error handling. When an exception is thrown in the task's poll
method, the task transitions into the failed state as expected. However,
when I watch the logs, I still see errors being logged from the commit
method…
> …solution, but agreed that it's not
> great. If possible, reducing the default TCP connection timeout isn't
> unreasonable either -- the defaults are set for WAN connections (and
> arguably set for WAN connections of long ago), so much more aggressive
> timeouts are reasonable…
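For what it's worth, on Linux the knob in question appears to be the SYN retry count: the default of 6 retries works out to roughly two minutes before connect() gives up, so something like the following (the value is purely illustrative, not a recommendation) caps it at about 15 seconds:

    # Fewer SYN retransmits => connect() fails faster on unreachable hosts.
    # Default is 6 (~127s worst case); 3 gives roughly 15s.
    sysctl -w net.ipv4.tcp_syn_retries=3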
Hello,
Is it correct that producers do not time out new connection attempts, even
when connection establishment exceeds the request timeout?
Running on AWS, we've encountered a problem where certain very low volume
producers end up with metadata that's sufficiently stale that they attempt
to establish a connection to a broker…
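For reference, the producer settings involved look roughly like this; the broker addresses and values are placeholders, and the comments describe our observation rather than documented behavior:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;

    public class ProducerTimeoutSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092,broker2:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            // Bounds how long the producer waits on an in-flight request --
            // but, as far as we can tell, not the initial TCP connect.
            props.put("request.timeout.ms", "30000");
            // Refresh metadata at least this often even with no traffic, so a
            // low-volume producer stops dialing brokers that have been replaced.
            props.put("metadata.max.age.ms", "60000");
            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.close();
        }
    }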
It's harder in Kafka because the unit of replication is an entire
partition, not a single key/value pair. Partitions are large and constantly
growing, where key/value pairs are typically much smaller and don't change
in size. There would theoretically be no difference if you had one
partition per key…
Hello,
Are there any features planned that would enable an external process to
reset the offsets of a given consumer group? I realize this goes counter to
the design of the system, but it would be convenient if offsets could be
reset as a simple admin command.
The strategies we've investigated are…
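The closest workaround we've sketched so far (lightly tested at best; the topic, partition, and group names are made up) is an external process that commits an offset on the group's behalf while every real member of the group is shut down:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    public class OffsetResetSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "group-to-reset");   // must match the target group
            props.put("enable.auto.commit", "false");
            props.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
            TopicPartition tp = new TopicPartition("example-topic", 0);
            // assign() rather than subscribe(): skip group coordination entirely;
            // we only want to overwrite the committed offset for this group.
            consumer.assign(Collections.singletonList(tp));
            consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(0L)));
            consumer.close();
        }
    }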
> > …in both cases for consistency. Would you mind opening a JIRA?
> >
> > Thanks,
> > Jason
> >
> > On Wed, Feb 24, 2016 at 4:17 PM, Luke Steensen <
> > luke.steen...@braintreepayments.com> wrote:
> >
> > > Hello,
> > >
> > >
Hello,
I've been experimenting with mirror maker and am a bit confused about some
behavior I'm seeing.
I have two simple single broker clusters setup locally, source and target.
The source cluster has a topic foo with a few messages in it.
When I run mirror maker with --whitelist '*' (as noted in …
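For reference, the invocation is roughly the following (the property file names are placeholders). One thing worth flagging: --whitelist appears to be interpreted as a Java regex, so '.*' is the pattern that actually matches every topic; a bare '*' is not a valid regex on its own:

    kafka-mirror-maker.sh \
      --consumer.config source-cluster.properties \
      --producer.config target-cluster.properties \
      --whitelist '.*'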
Thanks,
Luke
On Thu, Jan 14, 2016 at 11:48 AM, Luke Steensen <
luke.steen...@braintreepayments.com> wrote:
> I don't have broker logs at the moment, but I'll work on getting some I
> can share. We are running 0.9.0.0 for both the brokers and producer in this
> case. I…
Hi Jason,
You might find this blog post useful:
http://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/
- Luke
On Sat, Jan 16, 2016 at 1:30 PM, Jason Williams wrote:
> Hi Franco,
>
> Thank you for the info! It is my reading of the docs that adding
> partitions…
Not an expert, but that sounds like a very reasonable use case for Kafka.
The log.retention.* configs on the edge brokers should cover your TTL needs.
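Concretely, something along these lines in each edge broker's server.properties (the numbers are purely illustrative):

    # Delete log segments older than 48 hours...
    log.retention.hours=48
    # ...or once a partition's log exceeds ~1 GiB, whichever limit is hit first.
    log.retention.bytes=1073741824
    # How often the broker checks whether any segments are eligible for deletion.
    log.retention.check.interval.ms=300000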
On Thu, Jan 14, 2016 at 3:37 PM, Jason J. W. Williams <
jasonjwwilli...@gmail.com> wrote:
> Hello,
>
> We historically have been a RabbitMQ environment…
> …leadership
> actually *was* transferred but the broker that answered the metadata
> request did not get the news .
>
> In 0.8.2 we had some bugs regarding how membership info is distributed to
> all nodes. This was resolved in 0.9.0.0, so perhaps an upgrade will help.
>
No worries, glad to have the functionality! Thanks for your help.
Luke
On Thu, Jan 14, 2016 at 10:58 AM, Gwen Shapira wrote:
> Yep. That tool is not our best documented :(
>
> On Thu, Jan 14, 2016 at 11:49 AM, Luke Steensen <
> luke.steen...@braintreepayments.com> wrote:
> …command, but you could use
> the reassignment tool to change the preferred leader (and nothing else) and
> then trigger preferred leader election.
>
> Gwen
>
> On Thu, Jan 14, 2016 at 11:30 AM, Luke Steensen <
> luke.steen...@braintreepayments.com> wrote:
>
> > Hi Gwen,
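To make Gwen's suggestion above concrete, a sketch with made-up topic, partition, and broker ids: the reassignment keeps the same replica set but reorders it so the first replica (the preferred leader) is the broker you want, then the election tool moves leadership there:

    # reassign.json -- same three replicas, reordered so broker 2 is preferred
    {"version":1,"partitions":[{"topic":"example-topic","partition":0,"replicas":[2,1,3]}]}

    kafka-reassign-partitions.sh --zookeeper zk1:2181 \
      --reassignment-json-file reassign.json --execute

    kafka-preferred-replica-election.sh --zookeeper zk1:2181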
> …cluster with id = "n", and the replica-map shows that
> broker "n" has certain partitions (because we never assigned them away),
> the new broker will immediately become follower for these partitions and
> start replicating the missing data.
> This makes automatic…
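In other words, the replacement instance just reuses the dead broker's id in server.properties (the id here is illustrative), and replication of its partitions resumes on its own:

    # server.properties on the freshly provisioned replacement
    broker.id=3   # same id as the instance that was torn down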
Hello,
For #3, I assume this relies on controlled shutdown to transfer leadership
gracefully? Or is there some way to use partition reassignment to set the
preferred leader of each partition? I ask because we've run into some
problems relying on controlled shutdown and having a separate verifiable
> > …and a few
> > things make a few assumptions about how zookeeper is used. But as a tool
> > for provisioning, expanding and failure recovery it is working fine so
> far.
> >
> > *knocks on wood*
> >
> > On Tue, Jan 12, 2016 at 4:08 PM, Luke Steensen <
> > luke.steen...@braintreepayments.com> wrote:
…https://github.com/yahoo/kafka-manager
>
> On Tue, Jan 12, 2016 at 2:50 PM, Luke Steensen <
> luke.steen...@braintreepayments.com> wrote:
>
> > Hello,
> >
> > We've run into a bit of a head-scratcher with a new kafka deployment and
> > I'm curious if anyone has any ideas.
Hello,
We've run into a bit of a head-scratcher with a new kafka deployment and
I'm curious if anyone has any ideas.
A little bit of background: this deployment uses "immutable infrastructure"
on AWS, so instead of configuring the host in-place, we stop the broker,
tear down the instance, and replace…
…someone more familiar with the code can comment, but that
statement does appear to be preventing the correct behavior.
Thanks,
Luke
On Tue, Nov 10, 2015 at 2:15 PM, Luke Steensen <
luke.steen...@braintreepayments.com> wrote:
> Hello,
>
> We've been testing recent versions…
…sure how to safely decommission a broker without
potentially leaving a producer with a permanently stuck request.
Thanks,
Luke Steensen