Thanks Ismael,

Don't have permissions, my username is dbahir.

On Fri, Jun 3, 2016 at 4:49 AM, Ismael Juma <ism...@juma.me.uk> wrote:

> There are instructions here:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>
> Let me know your user id in the wiki if you don't have the required
> permissions to create pages.
>
> Ismael
>
> On Fri, Jun 3, 2016 at 3:33 AM, Danny Bahir <dannyba...@gmail.com> wrote:
>
> > Yes, I'm in.
> >
> > Sent from my iPhone
> >
> > > On Jun 2, 2016, at 8:32 AM, Ismael Juma <ism...@juma.me.uk> wrote:
> > >
> > > Hi Danny,
> > >
> > > A KIP has not been drafted for that yet. Would you be interested in
> > > working on it?
> > >
> > > Ismael
> > >
> > >> On Thu, Jun 2, 2016 at 1:15 PM, Danny Bahir <dannyba...@gmail.com>
> > wrote:
> > >>
> > >> Thanks Ben.
> > >>
> > >> The comments on the Jira mention a pluggable component that will manage
> > >> the bootstrap list from a discovery service.
> > >>
> > >> That's exactly what I need.
> > >>
> > >> Was a KIP drafted for this enhancement?
> > >>
> > >> -Danny
> > >>
> > >>> On Jun 1, 2016, at 7:05 AM, Ben Stopford <b...@confluent.io> wrote:
> > >>>
> > >>> Hey Danny
> > >>>
> > >>> Currently the bootstrap servers are only used when the client
> > >>> initialises (there’s a bit of discussion around the issue in the jira
> > >>> below if you’re interested). To implement failover you’d need to catch
> > >>> a timeout exception in your client code, consult your service discovery
> > >>> mechanism, and reinitialise the client.
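> > >>>
> > >>> As a rough illustration (not an existing API), a wrapper along these
> > >>> lines could do it; the lookupBootstrapServers() helper is hypothetical
> > >>> and stands in for whatever discovery mechanism you use:
> > >>>
> > >>> import java.util.Properties;
> > >>> import java.util.concurrent.ExecutionException;
> > >>> import org.apache.kafka.clients.producer.KafkaProducer;
> > >>> import org.apache.kafka.clients.producer.ProducerConfig;
> > >>> import org.apache.kafka.clients.producer.ProducerRecord;
> > >>> import org.apache.kafka.common.errors.TimeoutException;
> > >>> import org.apache.kafka.common.serialization.StringSerializer;
> > >>>
> > >>> public class FailoverProducer {
> > >>>     private KafkaProducer<String, String> producer;
> > >>>
> > >>>     public FailoverProducer(String bootstrapServers) {
> > >>>         producer = newProducer(bootstrapServers);
> > >>>     }
> > >>>
> > >>>     private static KafkaProducer<String, String> newProducer(String servers) {
> > >>>         Properties props = new Properties();
> > >>>         props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
> > >>>         props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
> > >>>         props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
> > >>>         return new KafkaProducer<>(props);
> > >>>     }
> > >>>
> > >>>     public void send(String topic, String key, String value) throws InterruptedException {
> > >>>         try {
> > >>>             producer.send(new ProducerRecord<>(topic, key, value)).get();
> > >>>         } catch (ExecutionException e) {
> > >>>             if (e.getCause() instanceof TimeoutException) {
> > >>>                 // Cluster looks unreachable: close the old producer and
> > >>>                 // rebuild it with a fresh bootstrap list from discovery.
> > >>>                 producer.close();
> > >>>                 producer = newProducer(lookupBootstrapServers());
> > >>>             }
> > >>>         }
> > >>>     }
> > >>>
> > >>>     // Hypothetical helper: ask your service discovery (ZK, Consul, etc.)
> > >>>     // for the bootstrap servers of a currently available cluster.
> > >>>     private static String lookupBootstrapServers() {
> > >>>         return "kafka-dc2-1:9092,kafka-dc2-2:9092"; // placeholder
> > >>>     }
> > >>> }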
> > >>>
> > >>> KAFKA-3068 <https://issues.apache.org/jira/browse/KAFKA-3068>
> > >>>
> > >>> B
> > >>>
> > >>>> On 31 May 2016, at 22:09, Danny Bahir <dannyba...@gmail.com> wrote:
> > >>>>
> > >>>> Hello,
> > >>>>
> > >>>> I'm working on a multi-data-center Kafka installation in which all
> > >>>> clusters have the same topics, and the producers will be able to
> > >>>> connect to any of the clusters. I would like the ability to dynamically
> > >>>> control the set of clusters a producer can connect to, which will allow
> > >>>> us to gracefully take a cluster offline for maintenance.
> > >>>> The current design is to have one ZK cluster that spans all data
> > >>>> centers and holds info about which services are available in which
> > >>>> cluster.
> > >>>>
> > >>>> In the case of Kafka it will house the info needed to populate
> > >>>> bootstrap.servers. A wrapper will be placed around the Kafka producer
> > >>>> and will watch this ZK value. When the value changes, the producer
> > >>>> instance with the old value will be shut down and a new producer with
> > >>>> the new bootstrap.servers info will replace it.
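> > >>>>
> > >>>> Roughly, the wrapper I have in mind would look something like this
> > >>>> sketch (the ZK path is illustrative and error handling is minimal):
> > >>>>
> > >>>> import java.util.Properties;
> > >>>> import org.apache.kafka.clients.producer.KafkaProducer;
> > >>>> import org.apache.kafka.clients.producer.ProducerConfig;
> > >>>> import org.apache.kafka.common.serialization.StringSerializer;
> > >>>> import org.apache.zookeeper.WatchedEvent;
> > >>>> import org.apache.zookeeper.Watcher;
> > >>>> import org.apache.zookeeper.ZooKeeper;
> > >>>>
> > >>>> public class DiscoveryAwareProducer implements Watcher {
> > >>>>     private static final String PATH = "/service-discovery/kafka/bootstrap"; // illustrative
> > >>>>     private final ZooKeeper zk;
> > >>>>     private volatile KafkaProducer<String, String> producer;
> > >>>>
> > >>>>     public DiscoveryAwareProducer(String zkConnect) throws Exception {
> > >>>>         zk = new ZooKeeper(zkConnect, 30000, this);
> > >>>>         producer = newProducer(readBootstrapServers());
> > >>>>     }
> > >>>>
> > >>>>     // Read the current bootstrap list and re-register the watch.
> > >>>>     private String readBootstrapServers() throws Exception {
> > >>>>         return new String(zk.getData(PATH, this, null), "UTF-8");
> > >>>>     }
> > >>>>
> > >>>>     private static KafkaProducer<String, String> newProducer(String servers) {
> > >>>>         Properties props = new Properties();
> > >>>>         props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
> > >>>>         props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
> > >>>>         props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
> > >>>>         return new KafkaProducer<>(props);
> > >>>>     }
> > >>>>
> > >>>>     @Override
> > >>>>     public void process(WatchedEvent event) {
> > >>>>         if (event.getType() == Event.EventType.NodeDataChanged) {
> > >>>>             try {
> > >>>>                 KafkaProducer<String, String> old = producer;
> > >>>>                 producer = newProducer(readBootstrapServers()); // new list
> > >>>>                 old.close();                                    // shut down the old instance
> > >>>>             } catch (Exception e) {
> > >>>>                 e.printStackTrace();
> > >>>>             }
> > >>>>         }
> > >>>>     }
> > >>>>
> > >>>>     public KafkaProducer<String, String> current() {
> > >>>>         return producer;
> > >>>>     }
> > >>>> }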
> > >>>>
> > >>>> Is there a best practice for achieving this?
> > >>>>
> > >>>> Is there a way to dynamically update bootstrap.servers?
> > >>>>
> > >>>> Does the producer always go to the same machine from bootstrap.servers
> > >>>> when it refreshes the MetaData after metadata.max.age.ms has expired?
> > >>>>
> > >>>> Thanks!
> > >>
> >
>
