[ https://issues.apache.org/jira/browse/KAFKA-736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Neha Narkhede updated KAFKA-736:
--------------------------------

    Attachment: kafka-736-draft.patch

This is a draft patch that changes the behavior of request.required.acks=0 so that the producer does not wait for a response from the broker. Since the producer can send batched requests without waiting for a network round trip, its throughput is very high and matches that of the 0.7 producer. I haven't run full-fledged performance tests to produce a detailed report, but I've seen a single producer's throughput increase from 11 MB/s to 45 MB/s with the same config.

Initially, I thought that without any changes to the socket server, it would not read more than one request from a producer on the same connection. That's because after reading a request completely, we set the interest ops back to READ only after the response is written to the socket. Since we have essentially gotten rid of the response, my expectation was that the producer would keep writing to the socket and its socket buffer would eventually fill up, because the broker would no longer be reading from that socket. But this is not how the socket server actually behaves, which works in favor of this feature. When the socket server accepts a connection, it registers READ interest for that channel. Even after a request has been read completely, if more requests are waiting on that socket, the server continues to select that key for READ because the interest ops on that key have not been changed.

However, the current socket server design will reorder pipelined requests. All requests sent to the broker end up in a common request queue. Say there are two requests, R1 followed by R2, from the same socket in the request queue. Two different io threads can handle those requests, and the response for R2 can get written to the socket before the response for R1. For ordering to work correctly, we need to maintain stickiness between the requests from one key and the corresponding io/request handler thread.

One way of solving this problem is to replace the common request queue with a per-io-thread request queue (see the sketch below). The network thread maps a key to an io thread when it accepts a new connection and maintains this mapping until the connection is closed and the key becomes invalid. One problem with this design is that if one client sends requests at a very high rate, the corresponding io thread's request queue will fill up and the respective network thread will block. But the current single-request-queue approach suffers from the same drawback.

This draft patch is meant for design review. I would like to save the following improvements for the v1 patch, depending on which way we decide to go:
1. Add more unit tests for request.required.acks=0
2. Cache the key->io thread mapping instead of recomputing it on each request
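To make the per-io-thread queue idea concrete, here is a minimal, hypothetical sketch in Scala. It is not taken from the patch; the RequestDispatcher, Request, and ioThreadFor names are illustrative, and the real SocketServer would key the mapping off the accepted SelectionKey rather than a connection id string.

{code}
import java.util.concurrent.{ArrayBlockingQueue, BlockingQueue}

// Illustrative stand-in for a request read off a connection.
case class Request(connectionId: String, payload: Array[Byte])

// Hypothetical sketch of per-io-thread request queues; not the actual
// kafka.network.SocketServer code from the patch.
class RequestDispatcher(numIoThreads: Int, queueCapacity: Int) {

  // One bounded queue per io thread instead of a single shared request queue.
  private val queues: Array[BlockingQueue[Request]] =
    Array.fill(numIoThreads)(new ArrayBlockingQueue[Request](queueCapacity))

  // Pin a connection to one io thread (recomputed here on every call; the v1
  // patch would cache this mapping when the connection is accepted).
  def ioThreadFor(connectionId: String): Int =
    (connectionId.hashCode & Int.MaxValue) % numIoThreads

  // Called by the network thread after it reads a complete request. put()
  // blocks when the chosen queue is full, which is the back-pressure drawback
  // mentioned above.
  def dispatch(request: Request): Unit =
    queues(ioThreadFor(request.connectionId)).put(request)

  // Each io thread drains only its own queue, so requests from a given
  // connection are handled, and their responses written, in arrival order.
  def nextRequest(ioThreadId: Int): Request = queues(ioThreadId).take()
}
{code}

Because every request from a given socket lands on the same queue, R1 is always handled before R2, at the cost of possible load imbalance across io threads.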
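For completeness, a rough sketch of how a client would exercise the fire-and-forget path with the 0.8 Scala producer API; the property names are the 0.8-era ones, and the broker address and topic are placeholders.

{code}
import java.util.Properties
import kafka.producer.{KeyedMessage, Producer, ProducerConfig}

object NoAckProducerExample {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("metadata.broker.list", "localhost:9092")             // placeholder broker
    props.put("serializer.class", "kafka.serializer.StringEncoder")
    // 0 = fire-and-forget: with this patch the producer does not wait for a
    // broker acknowledgement at all, mimicking the 0.7 behavior.
    props.put("request.required.acks", "0")

    val producer = new Producer[String, String](new ProducerConfig(props))
    producer.send(new KeyedMessage[String, String]("test-topic", "hello"))
    producer.close()
  }
}
{code}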
> Add an option to the 0.8 producer to mimic 0.7 producer behavior
> ----------------------------------------------------------------
>
>                 Key: KAFKA-736
>                 URL: https://issues.apache.org/jira/browse/KAFKA-736
>             Project: Kafka
>          Issue Type: Improvement
>          Components: producer
>    Affects Versions: 0.8
>            Reporter: Neha Narkhede
>            Assignee: Neha Narkhede
>            Priority: Blocker
>              Labels: p2, replication-performance
>         Attachments: kafka-736-draft.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> I profiled a producer throughput benchmark between a producer and a remote broker. It turns out that the background send thread spends ~97% of its time waiting to read the acknowledgement from the broker.
> I propose we change the current behavior of request.required.acks=0 to mean no acknowledgement from the broker. This will mimic the 0.7 producer behavior and will enable tuning the producer for very high throughput.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira