In case it helps anyone else, we open-sourced our Nagios health check for
monitoring consumer group health using Burrow:
https://github.com/williamsjj/kafka_health
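The plugin just polls Burrow's HTTP consumer group status endpoint and maps
the result onto Nagios exit codes. A stripped-down sketch of the idea (not
the actual plugin code; the Burrow address, cluster, and group names are
placeholders, and the endpoint shape is Burrow's v2 API):

    #!/usr/bin/env python
    # Minimal sketch of a Nagios check against Burrow's consumer group
    # status endpoint. Host, cluster, and group are placeholders; see
    # the repo above for the real plugin.
    import json
    import sys
    import urllib.request

    BURROW = "http://localhost:8000"            # assumed Burrow address
    CLUSTER, GROUP = "local", "my-consumers"    # placeholder names

    # Map Burrow's group status onto Nagios exit codes
    # (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN).
    CODES = {"OK": 0, "WARN": 1, "ERR": 2, "STOP": 2, "STALL": 2}
    LABELS = ["OK", "WARNING", "CRITICAL", "UNKNOWN"]

    url = "%s/v2/kafka/%s/consumer/%s/status" % (BURROW, CLUSTER, GROUP)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = json.loads(resp.read().decode("utf-8"))
    except Exception as exc:
        print("UNKNOWN: cannot reach Burrow: %s" % exc)
        sys.exit(3)

    status = body.get("status", {}).get("status", "NOTFOUND")
    code = CODES.get(status, 3)
    print("%s: consumer group %s is %s" % (LABELS[code], GROUP, status))
    sys.exit(code)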
-J
Hi Dave,
Dory sounds very exciting. Without persistence, it's less useful for clients
connected over a WAN, since if the WAN goes wonky you could build up quite
a queue until it comes back.
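To make the failure mode concrete: an in-memory-only client has to block,
drop, or grow without bound while the link is down. A toy sketch of the
bounded-buffer version of that tradeoff (my illustration, nothing
Dory-specific):

    import collections

    class BoundedSendBuffer:
        """Toy in-memory producer buffer. Once the WAN is down and the
        buffer fills, you either drop messages or block the caller --
        the tradeoff a disk-backed queue avoids."""

        def __init__(self, max_messages):
            self.queue = collections.deque()
            self.max_messages = max_messages
            self.dropped = 0

        def enqueue(self, msg):
            if len(self.queue) >= self.max_messages:
                self.dropped += 1   # data loss under a prolonged outage
                return False
            self.queue.append(msg)
            return True

        def drain(self, send):
            # Call when the link recovers; `send` is the real producer.
            while self.queue:
                send(self.queue.popleft())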
-J
On Mon, Jun 13, 2016 at 3:00 AM, Dave Peterson wrote:
> Hi Arya,
>
> In the case of a kernel panic or o
> any ZK-related operations. The
> tradeoff between 3 and 5 ZK nodes is fault tolerance (better with 5) vs
> write performance (better with 3).
>
> -Ewen
>
> On Tue, Jan 26, 2016 at 12:09 PM, Jason J. W. Williams <
> jasonjwwilli...@gmail.com> wrote:
>
> > Hi Guys,
Hi Guys,
In general, is it a good idea to run a ZK node on each Kafka broker? In
other words, as you add broker nodes, you are also adding ZK nodes 1:1. Or
should the ZK cluster be kept a smaller fixed size (like 3)?
Thank you in advance.
-J
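For what it's worth, the quorum arithmetic behind that tradeoff is just
majority math (standard ZooKeeper behavior; a quick sketch):

    # ZooKeeper needs a strict majority of the ensemble to ack every
    # write and to stay available at all:
    for n in (3, 5, 7):
        quorum = n // 2 + 1
        print("ensemble=%d quorum=%d tolerated failures=%d"
              % (n, quorum, n - quorum))
    # ensemble=3 -> survives 1 failure; ensemble=5 -> survives 2, but
    # every write now waits on a 3-node quorum instead of a 2-node one,
    # which is the write-performance cost mentioned above.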
There does appear to be a recurring issue with the node logging closed
connections from the other 2 nodes:
https://gist.github.com/williamsjj/0481600566f10e5593c4
The other nodes, though, show no errors.
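In case it's useful, here's roughly how I've been poking at each node with
ZooKeeper's built-in four-letter admin commands ("ruok" should come back
"imok"; the hostnames are placeholders):

    import socket

    def four_letter(host, port, cmd):
        # ZK answers four-letter admin commands like "ruok" and "stat"
        # on its client port, then closes the connection.
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(cmd)
            return sock.recv(65536).decode()

    for host in ("zk1", "zk2", "zk3"):      # placeholder hostnames
        print(host, four_letter(host, 2181, b"ruok"))   # expect "imok"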
-J
On Mon, Jan 25, 2016 at 2:35 PM, Jason J. W. Williams <
jasonjwwilli...@gmail.com>
> Can you confirm that the ISR on that node never increased?
>
Yeah I waited for a couple hours and no change.
Thank you for your help so far.
-J
> On Jan 24, 2016 12:33 PM, "Jason J. W. Williams" <
> jasonjwwilli...@gmail.com>
> wrote:
>
Hi Guys,
I've set up a test cluster using 3 Vagrant nodes, each running a Kafka
broker and a ZooKeeper instance. The ZooKeeper instances are all linked
properly, and the brokers appear to be registering properly.
After creating a new topic with 128 partitions and a replication factor of
3 (https://gist.github.co
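For reference, creating and inspecting a topic like that with the stock
tooling looks like this (0.9-era CLI; the ZK host and topic name are
placeholders):

    bin/kafka-topics.sh --create --zookeeper zk1:2181 \
        --topic test --partitions 128 --replication-factor 3
    bin/kafka-topics.sh --describe --zookeeper zk1:2181 --topic test

The Isr column in the --describe output per partition is what the ISR
discussion above refers to.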
> >>>> So assuming you've already paired your partition count to the consumer
> >>>> count... if you experience a spike in messages and want to spin up more
> >>>> consumers to add temporary processing capacity, what's the suggested
> >>>> way to ha
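(Worth spelling out the constraint behind that question: each partition has
exactly one owner within a consumer group, so group members beyond the
partition count just sit idle. A quick sketch:)

    # Group members beyond the partition count get no assignment:
    def active_consumers(partitions, consumers):
        return min(partitions, consumers)

    print(active_consumers(8, 4))    # 4 -> each consumer owns 2 partitions
    print(active_consumers(8, 12))   # 8 -> 4 consumers sit idle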
e may vary though.
> But you’re certainly not alone with wanting to do something like this.
> There is a buffering producer on the roadmap, although it may end up being
> a slightly different thing.
>
> B
>
>
> > On 16 Jan 2016, at 00:12, Jason J. W. Williams <
> jason
Hi,
I'm trying to make sure I understand this statement in the docs:
"Each broker partition is consumed by a single consumer within a given
consumer group. The consumer must establish its ownership of a given
partition before any consumption can begin."
If I have:
* a topic with 1 partition
* s
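In concrete terms, the rule means a one-partition topic feeds at most one
live consumer per group. A quick illustration (kafka-python client; the
topic, group, and broker address are placeholders):

    from kafka import KafkaConsumer

    # Run this script twice with the same group_id: only one instance
    # gets the single partition; the other is idle until a rebalance.
    consumer = KafkaConsumer(
        "one-partition-topic",
        group_id="my-group",
        bootstrap_servers="localhost:9092",
    )
    for record in consumer:
        print(record.partition, record.offset, record.value)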
The log.retention.* configs on the edge brokers should cover your TTL needs.
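Concretely, that means the standard broker retention settings in
server.properties, e.g. (placeholder values; the time and size limits can
be combined, and whichever trips first wins):

    # server.properties on each edge broker
    log.retention.hours=1                   # time-based TTL for log segments
    log.retention.bytes=1073741824          # optional per-partition size cap
    log.retention.check.interval.ms=300000  # how often retention is checked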
> On Thu, Jan 14, 2016 at 3:37 PM, Jason J. W. Williams <
> jasonjwwilli...@gmail.com> wrote:
>
Hello,
We historically have been a RabbitMQ environment, but we're looking at
using Kafka for a new project and I'm wondering if the following
topology/setup would work well in Kafka (for RMQ we'd use federation):
* Multiple remote datacenters, each consisting of a single server running an
HTTP ap