[ https://issues.apache.org/jira/browse/KAFKA-3703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15457896#comment-15457896 ]

ASF GitHub Bot commented on KAFKA-3703:
---------------------------------------

GitHub user rajinisivaram opened a pull request:

    https://github.com/apache/kafka/pull/1817

    KAFKA-3703: Flush outgoing writes before closing client selector

    Close client connections only after outgoing writes complete or timeout.
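As a rough illustration of the idea (not the actual patch), the close path can
keep servicing the selector until buffered bytes have been written or a
deadline passes. FlushBeforeClose, flushAndClose and timeoutMs below are
made-up names for this sketch:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;

    public class FlushBeforeClose {
        // Hypothetical helper, not taken from the PR: try to drain any
        // buffered outgoing bytes before closing the channel, giving up
        // after timeoutMs.
        static void flushAndClose(SocketChannel channel, ByteBuffer pending,
                                  long timeoutMs) throws IOException {
            long deadline = System.currentTimeMillis() + timeoutMs;
            channel.configureBlocking(false);
            try (Selector selector = Selector.open()) {
                channel.register(selector, SelectionKey.OP_WRITE);
                while (pending.hasRemaining()
                        && System.currentTimeMillis() < deadline) {
                    // Wait until the socket is writable, then push more bytes.
                    long remaining = deadline - System.currentTimeMillis();
                    selector.select(Math.max(1, remaining));
                    channel.write(pending);
                }
            } finally {
                // Close only after the flush attempt (or timeout).
                channel.close();
            }
        }
    }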

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/rajinisivaram/kafka KAFKA-3703

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/1817.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1817
    
----
commit 50f009bebe0beaf55cb5e00f9db8fcb626f1399a
Author: Rajini Sivaram <rajinisiva...@googlemail.com>
Date:   2016-09-02T07:55:49Z

    KAFKA-3703: Flush outgoing writes before closing client selector

----


> PlaintextTransportLayer.close() doesn't complete outgoing writes
> ----------------------------------------------------------------
>
>                 Key: KAFKA-3703
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3703
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Rajini Sivaram
>            Assignee: Rajini Sivaram
>
> Outgoing writes may be discarded when a connection is closed. For instance,
> when running a producer with acks=0, an application that writes data and then
> closes the producer would expect all writes to complete if there are no
> errors. But close() simply closes the channel and socket, which could result
> in outgoing data being discarded.
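
A minimal sketch of the scenario described above (the broker address, topic
name and serializer choices are placeholders, not taken from the report):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AcksZeroExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
            props.put("acks", "0");                            // no broker acknowledgement
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            producer.send(new ProducerRecord<>("test-topic", "key", "value"));
            // With acks=0 there is no broker response to wait for, so the caller
            // relies on close() flushing the buffered request. If close() drops
            // the pending socket write, the record is silently lost.
            producer.close();
        }
    }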



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
