The producer in 0.8 doesn't depend on a VIP to connect to the brokers.
Instead, it obtains metadata from a list of brokers (or a VIP in front of
the brokers) and finds out the host/port of the broker for the leader
replica of each partition in a topic. It then establishes socket
connections to those brokers directly. If a message doesn't provide a key,
the producer picks a random available partition to send the data to.
On send failures, metadata will be refreshed to pick up the new leaders of
the partitions. Metadata is also refreshed periodically to pick up any
newly added partitions.
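
For illustration, here is a minimal sketch of a 0.8 producer wired this way
(the broker list, topic name and retry count are placeholders, not
recommendations):

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    // Seed brokers (or a VIP) used only to bootstrap metadata; actual sends
    // go directly to the leader broker of each partition.
    props.put("metadata.broker.list", "broker1:9092,broker2:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    // On a send failure the producer refreshes metadata and retries against
    // the new leader.
    props.put("message.send.max.retries", "3");

    Producer<String, String> producer =
        new Producer<String, String>(new ProducerConfig(props));
    // No key is supplied, so the producer picks an available partition.
    producer.send(new KeyedMessage<String, String>("my-topic", "hello"));
    producer.close();
  }
}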

Thanks,

Jun


On Fri, Sep 27, 2013 at 8:12 AM, Nicolae Marasoiu <nmara...@adobe.com> wrote:

> Hi,
>
> Thank you for answer.
>
> How is the balancing done without zookeeper? Is it done locally in the
> producer thread with round robin, or do we need to use a VIP with a TCP
> balancer for each partition, fronting the hosts that hold that partition's
> replicas, or by other means?
>
> Indeed, the producers reconnect, and by increasing the maximum number of
> send attempts we achieved the desired effect, namely queuing messages even
> when the Kafka brokers are down and, at the latest on producer.close(),
> waiting for them to come back up and then sending them the messages. We
> close the producer because that backend process is a map-reduce job, not a
> streaming one, due to Storm's in-memory queuing limitations. When Samza
> comes out with persistent channels based on Kafka, this will likely become
> a streaming process with no close involved.
>
> We use an async producer. By blocking I meant that the client's write into
> the producer's queue blocks when the queue is full, as we don't want to
> lose messages (queue.enqueue.timeout.ms < 0).
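> For reference, the relevant producer properties look roughly like this
> (values are illustrative rather than our exact settings):
>
> # async producer whose client-side queue blocks the caller when full
> producer.type=async
> # a negative enqueue timeout means block rather than drop when the queue is full
> queue.enqueue.timeout.ms=-1
> # raised so sends keep retrying until the brokers come back
> message.send.max.retries=10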
>
> Thank you again,
> Nicu Marasoiu
> Adobe
>
>
> On 9/27/13 5:57 PM, "Jun Rao" <jun...@gmail.com> wrote:
>
> >Yes, the zk option is no longer available in the producer in 0.8 to
> >simplify the logic in the client. Load balancing in the producer is still
> >supported though.
> >
> >I am not sure what you mean by "blocking async" producer. The producer has
> >an async mode, but then the send() calls are not blocking. The expected
> >behavior is that if all brokers are down, all sends will fail. However,
> >when the brokers are up again, sends will succeed again. There is no need
> >to close the producer in between. Is that not what you are seeing?
> >
> >Thanks,
> >
> >Jun
> >
> >
> >On Fri, Sep 27, 2013 at 7:15 AM, Nicolae Marasoiu
> ><nmara...@adobe.com> wrote:
> >
> >> Hi,
> >>
> >> We have a blocking async producer.
> >> We noticed:
> >>
> >>  1.  when the metadata brokers are down, the client no longer attempts
> >> to reconnect to any of them.
> >>  2.  zookeeper.connect is no longer recognized as a valid configuration,
> >> hinting that zookeeper-based balancing is no longer supported for
> >> producing?
> >>
> >> The net effect is that producer.close() returns immediately while the
> >> Kafka cluster is down. Our expectation is that it would keep waiting
> >> until some nodes come back up and then try to send the data.
> >>
> >> I know that in 0.7.x this was the behaviour, based on zookeeper
> >> whitelisting, balancing and watching.
> >>
> >> Please advise,
> >> Nicu Marasoiu, Adobe
> >>
>
>
