RE: RE: Maintaining same offset while migrating from Confluent Replicator to Apache Mirror Maker 2.0

2020-12-07 Thread Amit.SRIVASTAV
Hi Ning, It did not work. Here are the logs from the Replicator and MirrorMaker 2, respectively: Replicator: [2020-12-08 05:11:06,611] INFO [Consumer clientId=onprem-aws-replicator-0, groupId=onprem-aws-replicator] Seeking to offset 83 for partition ONPREM.AWS.REPLICA.TOPIC.P3R3-0 (org.a
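For context on this thread: carrying consumer offsets across from Replicator to MirrorMaker 2 normally relies on MM2's checkpoint mechanism rather than on raw offsets matching. A minimal, hypothetical MM2 config sketch (cluster aliases and the topic name are taken from the log line above for illustration; this is not the poster's actual config):

```properties
# mm2.properties -- illustrative sketch only
clusters = onprem, aws
onprem->aws.enabled = true
onprem->aws.topics = ONPREM.AWS.REPLICA.TOPIC.P3R3

# emit checkpoints so consumer group offsets can be translated
# between the source and target clusters
emit.checkpoints.enabled = true

# from Kafka 2.7 onward, MM2 can apply translated group offsets
# on the target cluster automatically (KIP-545)
sync.group.offsets.enabled = true
```

Because offsets on the target cluster generally differ from the source, the translated (checkpointed) offsets, not the literal source offsets, are what a migrated consumer group should resume from.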

Re: Previous State of a Kafka Message

2020-12-07 Thread Suresh Chidambaram
Hi Andrew, Thank you for the suggestion. Creating the custom Processor helped me achieve the requirement. Thank you once again. Thanks C Suresh On Sunday, December 6, 2020, Andrew Grant wrote: > Hi Suresh, > > It seems the keys are different for each message - I see 0, 1 and 2. When > you sa
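The "previous state of a message" pattern discussed in this thread boils down to keeping the last value seen per key in a state store and emitting it alongside the new one. A minimal, language-agnostic sketch of the idea (plain Python standing in for a Kafka Streams Processor and its KeyValueStore; all names are illustrative, not the actual Streams API):

```python
class PreviousStateProcessor:
    """Tracks, per key, the value that was seen before the current one."""

    def __init__(self):
        # stands in for a Kafka Streams KeyValueStore
        self.store = {}

    def process(self, key, value):
        previous = self.store.get(key)   # None the first time a key is seen
        self.store[key] = value          # overwrite with the new state
        return key, previous, value      # downstream sees both old and new

p = PreviousStateProcessor()
print(p.process(0, "created"))   # (0, None, 'created')
print(p.process(0, "updated"))   # (0, 'created', 'updated')
```

In an actual Kafka Streams topology the dictionary would be a persistent, changelog-backed state store so the "previous" values survive restarts and rebalances.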

Re: RE: Maintaining same offset while migrating from Confluent Replicator to Apache Mirror Maker 2.0

2020-12-07 Thread Ning Zhang
Hi Amit, After looking into it a little, do you mind overriding `connector.consumer.group`? The default consumer group may be called 'connect-MirrorSourceConnector' or similar. On 2020/12/07 03:32:30, wrote: > Hi Ning, > > Thank you for the response. > > I probably tried to change every po
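A sketch of the override Ning suggests, using the property name from the thread (the group name shown is an example taken from the Replicator logs earlier in the thread, not a known-good value):

```properties
# override the internal consumer group used by the mirroring connector,
# so it matches the group the old Replicator consumer committed under
# (the default is reportedly something like 'connect-MirrorSourceConnector')
connector.consumer.group = onprem-aws-replicator
```

Whether this property is honored, and at which level (worker vs. connector config) it must be set, is exactly what the thread is trying to establish, so treat this as a hypothesis to test rather than a documented setting.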

Re: [VOTE] 2.7.0 RC4

2020-12-07 Thread Bill Bejeck
Hi Boyang, Thanks for the heads-up; I agree this is a regression for streams and should go into 2.7. Thanks, Bill On Mon, Dec 7, 2020 at 1:00 AM Boyang Chen wrote: > Hey Bill, > > Unfortunately we have found another regression in 2.7 streams, which I have > filed a blocker here

Re: GlobalKTable restoration - Unexplained performance penalty

2020-12-07 Thread Nitay Kufert
Regarding the NULLs not being deleted - I saw this https://issues.apache.org/jira/browse/KAFKA-8522 which might explain this case On Sun, Dec 6, 2020 at 3:02 PM Nitay Kufert wrote: > Hey, > First of all I want to apologize for thinking our own implementation > of StateRestoreListener was a defau
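For readers following the KAFKA-8522 link: the NULLs in question are tombstones. In a compacted changelog, a record with a null value marks its key for deletion, and a restore that replays the log is expected to drop such keys rather than surface them. A minimal sketch of that replay semantics (plain Python, not the actual restoration code):

```python
def restore(changelog):
    """Replay (key, value) records; a None value is a tombstone deleting the key."""
    state = {}
    for key, value in changelog:
        if value is None:
            state.pop(key, None)  # tombstone: remove the key if present
        else:
            state[key] = value
    return state

print(restore([("a", 1), ("b", 2), ("a", None)]))  # {'b': 2}
```

If tombstones linger in the topic longer than expected (the behavior the JIRA discusses), restoration has to churn through records that contribute nothing to the final state, which is one plausible source of the performance penalty observed in this thread.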

Sub Tasks being processed only after restart

2020-12-07 Thread Nitay Kufert
Hey, We are running a kafka-streams based app in production where the input, intermediate and global topics have 36 partitions. We have 17 sub-tasks (2 of them are for global stores so they won't generate tasks). More tech details: 6 machines with 16 CPUs, 30 threads each, so: 6 * 30 = 180 stream-threads
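The expected task distribution for the setup above can be sanity-checked: in Kafka Streams, each task-generating sub-topology creates one task per partition of its input, and tasks are spread across all stream-threads. A quick check using only the figures from this message (the arithmetic is mine, hedged as a back-of-envelope estimate):

```python
partitions = 36
sub_topologies = 17
global_stores = 2           # global stores do not generate tasks
machines, threads_each = 6, 30

tasks = (sub_topologies - global_stores) * partitions
stream_threads = machines * threads_each

print(tasks, stream_threads, tasks / stream_threads)  # 540 180 3.0
```

So on paper every thread should carry about three tasks; sub-tasks sitting idle until a restart would then point at assignment or liveness issues rather than a shortage of threads.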