Hi Tyler.  This KIP is written such that the server (broker) specifies a
session lifetime to the client, and the client then re-authenticates at a
time consistent with that lifetime, using whatever credentials it has when
the re-authentication kicks off.  You could specify a low maximum session
lifetime on the broker, but then you have to make sure the client refreshes
its credentials at that rate (there are refresh-related configs for
Kerberos to help you do that, such as
sasl.kerberos.ticket.renew.window.factor).  Whether this would solve your
problem I don't know.  It certainly won't let you react at the moment the
scenario occurs, but if your session lifetime and credential refresh window
are short enough you would end up reacting relatively soon thereafter --
though again, my knowledge of Kerberos, and of what exactly is going on in
your situation, is limited/practically zero.  I'm in the process of getting
the PR into shape, and hopefully it will be ready in the next week or so --
you could of course try it out at that time and see.
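To make that concrete, here is a rough sketch of the two settings involved.
Treat the values as illustrative assumptions, not recommendations; the
broker-side property name is the one this KIP proposes, so it only exists
once the KIP lands:

```properties
# Broker side (server.properties) -- force connected clients to
# re-authenticate periodically.  The 1-hour value here is just an example;
# pick something shorter than your Kerberos ticket lifetime.
connections.max.reauth.ms=3600000

# Client side (producer/consumer config) -- start Kerberos ticket renewal
# once 80% of the ticket's lifetime has elapsed, so a fresh ticket should
# be on hand before the broker forces re-authentication.
sasl.kerberos.ticket.renew.window.factor=0.80
```

The idea is that the re-authentication interval and the credential refresh
window line up, so each re-authentication attempt uses reasonably fresh
credentials.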

Ron

On Wed, Sep 19, 2018 at 1:26 PM Tyler Monahan <tjmonah...@gmail.com> wrote:

> Hello,
>
> I have a rather odd issue that I think this KIP might fix but wanted to
> check. I have a kafka setup using SASL/Kerberos and when a broker dies in
> aws a new one is created using the same name/id. The Kerberos credentials
> however are different on the new instances and existing
> brokers/consumers/producers continue to try using the stored credentials
> for the old instance on the new instance which fails until everything is
> restarted to clear out stored credentials. My understanding is this KIP
> would make it so Re-Authenticate will clear out bad stored credentials. I
> am not sure if the re-authentication process would kick off when something
> fails with bad credential errors though.
>
> Tyler Monahan
>

Reply via email to