[ https://issues.apache.org/jira/browse/KAFKA-5376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16037415#comment-16037415 ]

ASF GitHub Bot commented on KAFKA-5376:
---------------------------------------

GitHub user hachikuji opened a pull request:

    https://github.com/apache/kafka/pull/3239

    KAFKA-5376: Ensure aborted transactions are propagated in DelayedFetch

    

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/hachikuji/kafka KAFKA-5376

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/kafka/pull/3239.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #3239
    
----
commit dd358917d3ee5233f477061af01be6a202138d83
Author: Jason Gustafson <ja...@confluent.io>
Date:   2017-06-05T18:59:29Z

    KAFKA-5376: Ensure aborted transactions are propagated in DelayedFetch

----


> Transactions: Concurrent transactional consumer reads aborted messages
> ----------------------------------------------------------------------
>
>                 Key: KAFKA-5376
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5376
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: Apurva Mehta
>            Assignee: Jason Gustafson
>            Priority: Blocker
>              Labels: exactly-once
>         Attachments: KAFKA-5376.tar.gz
>
>
> This may be a dup of KAFKA-5355, but the system tests in KAFKA-5366 show 
> that a concurrent transactional consumer reads aborted messages. For the test 
> in question, the clients are bounced 6 times. With a transaction size of 500, 
> we expect 3000 aborted messages (6 bounces x 500 messages per transaction). 
> The concurrent consumer regularly over-counts by 1000 to 1500 messages, 
> suggesting that some aborted transactions are consumed. 
> {noformat}
> --------------------------------------------------------------------------------
> test_id:    
> kafkatest.tests.core.transactions_test.TransactionsTest.test_transactions.failure_mode=clean_bounce.bounce_target=clients
> status:     FAIL
> run time:   1 minute 56.102 seconds
>     Detected 1000 dups in concurrently consumed messages
> Traceback (most recent call last):
>   File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", 
> line 123, in run
>     data = self.run_test()
>   File 
> "/usr/local/lib/python2.7/dist-packages/ducktape/tests/runner_client.py", 
> line 176, in run_test
>     return self.test_context.function(self.test)
>   File "/usr/local/lib/python2.7/dist-packages/ducktape/mark/_mark.py", line 
> 321, in wrapper
>     return functools.partial(f, *args, **kwargs)(*w_args, **w_kwargs)
>   File "/opt/kafka-dev/tests/kafkatest/tests/core/transactions_test.py", line 
> 235, in test_transactions
>     assert num_dups_in_concurrent_consumer == 0, "Detected %d dups in 
> concurrently consumed messages" % num_dups_in_concurrent_consumer
> AssertionError: Detected 1000 dups in concurrently consumed messages
> {noformat}
> This behavior continues even after https://github.com/apache/kafka/pull/3221 
> was merged. 
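For context on why the fix targets DelayedFetch: a read_committed consumer can only skip aborted data if the broker's fetch response includes the list of aborted transactions overlapping the fetched offsets; if a delayed fetch omits that list, the client has nothing to filter on and delivers the aborted records. The sketch below is a simplified, hypothetical model of that client-side filtering (the record and aborted-transaction shapes, and the omission of ABORT control markers, are assumptions for illustration, not the actual client code):

```python
from collections import namedtuple

# Hypothetical, simplified shapes; real fetch responses carry much more state.
Record = namedtuple("Record", ["offset", "producer_id"])
AbortedTxn = namedtuple("AbortedTxn", ["producer_id", "first_offset"])

def filter_aborted(records, aborted_txns):
    """Drop records that fall inside an aborted transaction.

    Simplification: a record is treated as aborted if its producer has an
    aborted transaction starting at or before the record's offset. (A real
    client also tracks the ABORT control marker that ends the transaction;
    that is omitted here.)
    """
    starts_by_producer = {}
    for txn in aborted_txns:
        starts_by_producer.setdefault(txn.producer_id, []).append(txn.first_offset)
    kept = []
    for rec in records:
        starts = starts_by_producer.get(rec.producer_id, [])
        if any(rec.offset >= s for s in starts):
            continue  # inside an aborted transaction: skip under read_committed
        kept.append(rec)
    return kept

records = [Record(0, 1), Record(1, 1), Record(2, 2)]
# If the broker (e.g. via the DelayedFetch path) omits the aborted-transaction
# list, the consumer keeps everything, including aborted data:
print(len(filter_aborted(records, [])))                  # 3
# With the list propagated, producer 1's aborted records are dropped:
print(len(filter_aborted(records, [AbortedTxn(1, 0)])))  # 1
```

This models the observed symptom: the duplicate counts in the test come from aborted batches that the consumer could not identify, not from a filtering bug on the client side.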



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
