Re: KIP-408: Add Asynchronous Processing to Kafka Streams

2018-12-26 Thread Richard Yu
…when the heavy RPC is done, we commit this record to remove the barrier and make all 500 records available for downstream. So here we still need to guarantee the ordering within the 500 records, while at the same time the consumer semantics have nothing to change.
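The commit-barrier idea described above can be sketched roughly as follows. This is a hypothetical illustration, not KIP-408's actual design: the `OffsetBarrier` class and its method names are invented for this example. The point it shows is that a slow record acts as a barrier, so the committable offset never advances past an incomplete record even when later records finish their async work first, preserving ordering without changing consumer semantics.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: a slow record at offset N acts as a barrier, so
// offsets > N are not committed even if their async work finishes first.
public class OffsetBarrier {
    private record InFlight(long offset, CompletableFuture<Void> done) {}

    private final Deque<InFlight> inFlight = new ArrayDeque<>();
    private long committable = -1L; // highest offset that is safe to commit

    // Register a record when its async work (e.g. a heavy RPC) starts.
    public CompletableFuture<Void> track(long offset) {
        CompletableFuture<Void> done = new CompletableFuture<>();
        inFlight.addLast(new InFlight(offset, done));
        return done;
    }

    // Advance past every leading completed record, in offset order.
    public long committableOffset() {
        while (!inFlight.isEmpty() && inFlight.peekFirst().done().isDone()) {
            committable = inFlight.removeFirst().offset();
        }
        return committable;
    }

    public static void main(String[] args) {
        OffsetBarrier barrier = new OffsetBarrier();
        CompletableFuture<Void> slow = barrier.track(0); // heavy RPC
        CompletableFuture<Void> fast = barrier.track(1); // cheap record
        fast.complete(null); // offset 1 finishes first...
        if (barrier.committableOffset() != -1L)
            throw new AssertionError("offset 0 must block the commit");
        slow.complete(null); // ...then the barrier record completes
        if (barrier.committableOffset() != 1L)
            throw new AssertionError("both offsets should now be committable");
        System.out.println("committable=" + barrier.committableOffset()); // committable=1
    }
}
```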

Re: KIP-408: Add Asynchronous Processing to Kafka Streams

2018-12-24 Thread Richard Yu
…the consumer semantics have nothing to change. Am I making the point clear here? I just want to have more discussion on the ordering guarantee, since I feel it wouldn't be a good idea to break the consumer ordering guarantee by default. Best,

Re: KIP-408: Add Asynchronous Processing to Kafka Streams

2018-12-24 Thread Richard Yu
…it wouldn't be a good idea to break the consumer ordering guarantee by default. Best, Boyang

From: Richard Yu | Sent: Saturday, December 22, 2018 9:08 AM | To: dev@kafka.apache.org | Subject: Re: KIP-408: Add Asynchronous Processing to Kafka Streams

Hi Boyang, …

Re: KIP-408: Add Asynchronous Processing to Kafka Streams

2018-12-24 Thread Boyang Chen
…break the consumer ordering guarantee by default. Best, Boyang

From: Richard Yu | Sent: Saturday, December 22, 2018 9:08 AM | To: dev@kafka.apache.org | Subject: Re: KIP-408: Add Asynchronous Processing to Kafka Streams

Hi Boyang, Thanks for pointing out the possibility o…

Re: KIP-408: Add Asynchronous Processing to Kafka Streams

2018-12-21 Thread Richard Yu
Hi Boyang, Thanks for pointing out the possibility of skipping bad records (it never crossed my mind). I suppose we could make it an option for the user to skip a bad record. Whether or not to do that was never the intention of this KIP, though. I could log a JIRA on such an issue, bu…
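The opt-in skip behavior mentioned above could look something like the sketch below. The class name `SkippingProcessor` and the `skipOnError` flag are invented for this illustration; they are not part of KIP-408 or the Kafka Streams API. The design point is that fail-fast stays the default, and skipping is an explicit user choice.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical illustration of an opt-in "skip bad records" mode.
public class SkippingProcessor {
    private final boolean skipOnError;
    private final List<String> skipped = new ArrayList<>();

    public SkippingProcessor(boolean skipOnError) {
        this.skipOnError = skipOnError;
    }

    public void process(String record, Consumer<String> work) {
        try {
            work.accept(record);
        } catch (RuntimeException e) {
            if (!skipOnError) throw e; // default: preserve fail-fast behavior
            skipped.add(record);       // opt-in: record the skip and move on
        }
    }

    public List<String> skipped() {
        return skipped;
    }

    public static void main(String[] args) {
        SkippingProcessor p = new SkippingProcessor(true);
        List<String> good = new ArrayList<>();
        for (String r : List.of("a", "bad", "c")) {
            p.process(r, rec -> {
                if (rec.equals("bad")) throw new IllegalStateException("corrupt record");
                good.add(rec);
            });
        }
        System.out.println(good + " skipped=" + p.skipped()); // [a, c] skipped=[bad]
    }
}
```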

Re: KIP-408: Add Asynchronous Processing to Kafka Streams

2018-12-21 Thread Boyang Chen
Thanks, Richard, for proposing this feature! We have also encountered a similar feature request, where we wanted to define a generic async processing API. However, I guess the motivation here is that we should skip big records during normal processi…
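A "generic async processing API" as mentioned above could be imagined along the following lines. This is purely illustrative: the interface name and signature are invented for this sketch and do not appear in Kafka Streams. The key idea is that returning a future lets the framework defer committing a record until its async work completes.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncApiSketch {
    // Invented interface for illustration; not part of Kafka Streams.
    interface AsyncProcessor<K, V> {
        // Returning a future lets the framework defer committing the
        // record until the async work (e.g. a heavy RPC) completes.
        CompletableFuture<String> processAsync(K key, V value);
    }

    public static void main(String[] args) {
        // A trivial processor that does its work on another thread.
        AsyncProcessor<String, String> upper =
            (k, v) -> CompletableFuture.supplyAsync(() -> k + "->" + v.toUpperCase());
        System.out.println(upper.processAsync("key", "value").join()); // key->VALUE
    }
}
```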