[ https://issues.apache.org/jira/browse/KAFKA-3205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15217346#comment-15217346 ]

ASF GitHub Bot commented on KAFKA-3205:
---------------------------------------

GitHub user bondj opened a pull request:

    https://github.com/apache/kafka/pull/1166

    KAFKA-3205 Support passive close by broker

    An attempt to fix KAFKA-3205. The problem appears to be that the broker 
has closed the connection passively, and the client should react appropriately.
    
    In NetworkReceive.readFrom(), rather than throwing an EOFException (which 
signals that the end of stream was reached unexpectedly during input), return 
the negative byte count, signifying an acceptable end of stream.
    
    In Selector, if the channel is being passively closed, don't try to read 
any more data, don't try to write, and close the key.
    
    I believe this will fix the problem.
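    The change might be sketched roughly as below. This is a simplified model 
of the idea, not the actual patch — the real NetworkReceive and Selector are 
considerably more involved, and the names here are only illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Channels;
import java.nio.channels.ReadableByteChannel;

public class PassiveCloseSketch {

    // Simplified stand-in for NetworkReceive.readFrom(): instead of
    // throwing EOFException when the peer has closed the socket, the
    // channel's negative byte count is propagated to the caller.
    static long readFrom(ReadableByteChannel channel, ByteBuffer sizeBuffer)
            throws IOException {
        int bytesRead = channel.read(sizeBuffer);
        // Old behavior: if (bytesRead < 0) throw new EOFException();
        // New behavior: return the negative count, signifying an
        // acceptable end of stream (a passive close by the broker).
        return bytesRead;
    }

    public static void main(String[] args) throws IOException {
        // A channel over an exhausted stream behaves like a socket the
        // peer has closed: read() returns -1 instead of data.
        ReadableByteChannel closedByPeer =
                Channels.newChannel(new ByteArrayInputStream(new byte[0]));
        long result = readFrom(closedByPeer, ByteBuffer.allocate(4));
        if (result < 0) {
            // This is where the Selector would stop reading and writing
            // and close the key, rather than logging an EOFException.
            System.out.println("passive close detected");
        }
    }
}
```

    The caller (the Selector, in the real code) decides what a negative count 
means; nothing is thrown, so a passive close is no longer reported as an error.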

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/bondj/kafka passiveClose

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/1166.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1166
    
----
commit 5dc11015435a38a0d97efa2f46b4d9d9f41645b5
Author: Jonathan Bond <jb...@netflix.com>
Date:   2016-03-30T03:57:11Z

    Support passive close by broker

----


> Error in I/O with host (java.io.EOFException) raised in producer
> ----------------------------------------------------------------
>
>                 Key: KAFKA-3205
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3205
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 0.8.2.1, 0.9.0.0
>            Reporter: Jonathan Raffre
>
> In a situation with a Kafka broker on 0.9 and producers still on 0.8.2.x, 
> producers seem to raise the following after a variable amount of time after 
> start:
> {noformat}
> 2016-01-29 14:33:13,066 WARN [] o.a.k.c.n.Selector: Error in I/O with 
> 172.22.2.170
> java.io.EOFException: null
>         at 
> org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:62)
>  ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
>         at org.apache.kafka.common.network.Selector.poll(Selector.java:248) 
> ~[org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
>         at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:192) 
> [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:191) 
> [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
>         at 
> org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:122) 
> [org.apache.kafka.kafka-clients-0.8.2.0.jar:na]
>         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_66-internal]
> {noformat}
> This can be reproduced successfully by doing the following:
>  * Start a 0.8.2 producer connected to the 0.9 broker
>  * Wait exactly 15 minutes
>  * See the error in the producer logs
> Oddly, this also shows up in an active producer, but after 10 minutes of 
> activity.
> Kafka's server.properties:
> {noformat}
> broker.id=1
> listeners=PLAINTEXT://:9092
> port=9092
> num.network.threads=2
> num.io.threads=2
> socket.send.buffer.bytes=1048576
> socket.receive.buffer.bytes=1048576
> socket.request.max.bytes=104857600
> log.dirs=/mnt/data/kafka
> num.partitions=4
> auto.create.topics.enable=false
> delete.topic.enable=true
> num.recovery.threads.per.data.dir=1
> log.retention.hours=48
> log.retention.bytes=524288000
> log.segment.bytes=52428800
> log.retention.check.interval.ms=60000
> log.roll.hours=24
> log.cleanup.policy=delete
> log.cleaner.enable=true
> zookeeper.connect=127.0.0.1:2181
> zookeeper.connection.timeout.ms=1000000
> {noformat}
> Producer's configuration:
> {noformat}
>       compression.type = none
>       metric.reporters = []
>       metadata.max.age.ms = 300000
>       metadata.fetch.timeout.ms = 60000
>       acks = all
>       batch.size = 16384
>       reconnect.backoff.ms = 10
>       bootstrap.servers = [127.0.0.1:9092]
>       receive.buffer.bytes = 32768
>       retry.backoff.ms = 500
>       buffer.memory = 33554432
>       timeout.ms = 30000
>       key.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>       retries = 3
>       max.request.size = 5000000
>       block.on.buffer.full = true
>       value.serializer = class 
> org.apache.kafka.common.serialization.StringSerializer
>       metrics.sample.window.ms = 30000
>       send.buffer.bytes = 131072
>       max.in.flight.requests.per.connection = 5
>       metrics.num.samples = 2
>       linger.ms = 0
>       client.id = 
> {noformat}
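
A plausible explanation for the 10- and 15-minute timings above (an assumption 
on my part, not confirmed in this ticket): the 0.9 broker introduced an 
idle-connection reaper governed by connections.max.idle.ms, which defaults to 
600000 ms (10 minutes). The broker closes an idle socket after that interval, 
and the 0.8.2 client surfaces that passive close as the EOFException shown. If 
so, the relevant broker setting is:

```properties
# Broker-side idle-connection timeout (Kafka 0.9 default shown).
# Client sockets idle longer than this are closed by the broker.
connections.max.idle.ms=600000
```

Raising this value would only delay the symptom; the proposed client-side 
handling of the passive close addresses it directly.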



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)