[ https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15087309#comment-15087309 ]

Eno Thereska commented on KAFKA-3068:
-------------------------------------

[~junrao], [~hachikuji] I understand the concerns. What I don't like about 
using the bootstrap servers is that the problem is punted to the user: they 
have to provide enough bootstrap servers, keep track of whether those servers 
have moved, and restart producers when they do. For a long-running cluster of 
100+ machines that is hard to do. [~junrao]: between these two non-ideal 
solutions, do we have a sense of which one is the least bad? I can change the 
code to use the bootstrap brokers, but I am worried that an equal number of 
users may be dissatisfied with that. 
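
To illustrate the burden: a minimal producer setup looks roughly like the 
sketch below (hostnames and the topic name are hypothetical). With the 
bootstrap-servers approach, that list is what the user has to keep current, 
and the producer has to be reconfigured and restarted whenever those hosts 
move.

{code:java}
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BootstrapServersSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The user must keep this list pointing at brokers that still exist;
        // if these hosts are retired or reassigned, the producer has to be
        // reconfigured and restarted.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092,broker3:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
        }
    }
}
{code}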

> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> ---------------------------------------------------------------------------------
>
>                 Key: KAFKA-3068
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3068
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 0.9.0.0
>            Reporter: Jun Rao
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we pick one of the brokers we 
> have seen before (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> reuse its host in a different Kafka cluster. The producer client then uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> of a different Kafka cluster and start producing data there.
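
For readers skimming the thread, the fallback described above can be sketched 
roughly as follows. This is not the actual NetworkClient code; the class, 
fields, and method bodies are simplified stand-ins meant only to show how a 
stale entry in nodesEverSeen can send a metadata refresh (and subsequent 
produce requests) to a different cluster.

{code:java}
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for the node selection in NetworkClient.leastLoadedNode().
class NodesEverSeenSketch {
    // Brokers listed in the most recent metadata response.
    private final List<String> currentMetadataBrokers = new ArrayList<>();
    // Every broker (id -> host:port) the client has ever seen; never evicted.
    private final Map<Integer, String> nodesEverSeen = new LinkedHashMap<>();

    String pickNodeForMetadataRefresh() {
        // Prefer a connectable broker from the current metadata.
        for (String broker : currentMetadataBrokers) {
            if (isConnectable(broker)) {
                return broker;
            }
        }
        // Fallback: any broker we have ever seen. If that host has since been
        // reassigned to a different Kafka cluster, the metadata refresh (and
        // everything that follows) will talk to the wrong cluster.
        for (String broker : nodesEverSeen.values()) {
            return broker;
        }
        return null;
    }

    private boolean isConnectable(String broker) {
        return false; // placeholder; the real client consults connection states
    }
}
{code}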



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
