Both the broker and the consumer have logic to handle ZK session expirations,
so they should recover automatically. The issue is that if there is a real
failure in the broker/consumer while the VPN is down, the failure may not be
detected.
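
If you want to reduce churn from the shorter blips, one option is to give the
high-level consumer a bit more ZK slack. A rough sketch against the 0.8
high-level consumer API follows; the class name, hosts, group id, and timeout
values are only illustrative, not recommendations:

    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    // Hypothetical example class; names and values are illustrative only.
    public class VpnTolerantConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181");
            props.put("group.id", "example-group");
            // Give the ZK session more slack than a short VPN blip so it
            // survives instead of expiring and forcing a full rebalance.
            props.put("zookeeper.session.timeout.ms", "30000");
            props.put("zookeeper.connection.timeout.ms", "30000");
            // If the session does expire anyway, allow more/longer rebalance
            // retries while the consumer re-registers in ZK.
            props.put("rebalance.backoff.ms", "10000");
            props.put("rebalance.max.retries", "8");

            ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            // ... create message streams and consume as usual ...
        }
    }

The trade-off is the one above: a session timeout long enough to ride out a
multi-minute outage also means a real broker/consumer failure takes that much
longer to detect.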

Thanks,

Jun


On Wed, Nov 27, 2013 at 8:02 AM, Philip O'Toole <phi...@loggly.com> wrote:

> Actually, we are using custom producers, which recover fully after any
> disconnect-and-reconnect from Kafka. It is the High-Level consumers and
> Kafka itself that concern me.
>
> I have seen good behaviour from the system in these conditions before, but
> wanted to confirm.
>
> Philip
>
>
>
>
> On Wed, Nov 27, 2013 at 7:46 AM, Jun Rao <jun...@gmail.com> wrote:
>
> > I assume that you are using a ZK-based producer. If brokers don't change
> > in the window where the VPN connection is down, this may not matter. When
> > the VPN connection is back, the ZK session will expire and the producer
> > will establish a new ZK session and new connections to the brokers.
> >
> > Thanks,
> >
> > Jun
> >
> >
> > On Tue, Nov 26, 2013 at 9:33 PM, Philip O'Toole <phi...@loggly.com>
> wrote:
> >
> > > I want to use a ZK cluster for my Kafka cluster, which is only available
> > > over a cross-country VPN tunnel. The VPN tunnel is prone to resets, every
> > > other day or so, perhaps down for a couple of minutes at a time.
> > >
> > > Is this a concern? Any setting changes I should make to mitigate any
> > > potential issues? Will my simple producers be affected at all while
> > > writing to a Kafka broker that experiences this?
> > >
> > > Thanks,
> > >
> > > Philip
> >
>
