Re: Kafka Streams 2.1.0, 3rd time data loss investigation

2019-01-08 Thread Nitay Kufert
Added the log file (in the previous mail I saw the lines were cut). On Tue, Jan 8, 2019 at 2:39 PM Nitay Kufert wrote: > Thanks, it seems promising. It sounds a lot like the problems we are having. > Do you know when the fix will be released? > > BTW, it just happened to us again, this time when I m…

Re: Kafka Streams 2.1.0, 3rd time data loss investigation

2019-01-08 Thread Nitay Kufert
Thanks, it seems promising. It sounds a lot like the problems we are having. Do you know when the fix will be released? BTW, it just happened to us again, this time when I manually added 2 new instances (we had 4 and I increased to 6). This is the compacted topic showing the data loss: CreateTime:1…
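
For reference, here is a minimal consumer sketch (not from the thread; the broker address, group id and topic name are placeholders) that re-reads a compacted topic from the beginning and prints each record's CreateTime, key and value, which is one way to check whether a key's latest value is missing or stale:

    // Not from the thread: a minimal sketch for re-reading a compacted topic and
    // printing each record's CreateTime, key and value. Broker address, group id
    // and topic name below are placeholders.
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class CompactedTopicDump {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");           // placeholder
            props.put("group.id", "data-loss-investigation");           // throwaway group
            props.put("auto.offset.reset", "earliest");                 // read from the beginning
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-compacted-topic")); // placeholder
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> record : records) {
                        // For topics configured with CreateTime, record.timestamp() is the CreateTime
                        System.out.printf("CreateTime:%d key:%s value:%s partition:%d offset:%d%n",
                                record.timestamp(), record.key(), record.value(),
                                record.partition(), record.offset());
                    }
                }
            }
        }
    }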

Re: Kafka Streams 2.1.0, 3rd time data loss investigation

2019-01-07 Thread John Roesler
Hi Nitay, > I will provide extra logs if it happens again (I really, really hope it > won't, hehe :)) Yeah, I hear you. Reproducing errors in production is a real double-edged sword! Thanks for the explanation. It makes sense now. This may be grasping at straws, but it seems like your frequen…
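
As an aside, below is a minimal, self-contained sketch (my own illustration, not a recommendation from this thread; the application id and bootstrap servers are assumptions) of the Streams setting most directly tied to instances coming and going, num.standby.replicas, which keeps warm copies of local state on other instances so a rebalance after scaling does not have to rebuild stores from the changelog from scratch:

    // Illustration only, not from this thread. Application id and broker address
    // are placeholders.
    import org.apache.kafka.streams.StreamsConfig;
    import java.util.Properties;

    public class ScalingFriendlyConfig {
        public static Properties build() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");      // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder
            // Standby replicas keep warm copies of local state on other instances,
            // so a rebalance triggered by scaling does not have to restore whole
            // stores from the changelog before processing resumes.
            props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);
            return props;
        }
    }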

Re: Kafka Streams 2.1.0, 3rd time data loss investigation

2019-01-05 Thread Nitay Kufert
Hey John, thanks for the response! I will provide extra logs if it happens again (I really, really hope it won't, hehe :)). Some clarification regarding the previous mail: the only thing that shows the data loss is the messages from the compacted topic, which I consumed a couple of hours after th…

Re: Kafka Streams 2.1.0, 3rd time data loss investigation

2019-01-03 Thread John Roesler
Hi Nitay, I'm sorry to hear of these troubles; it sounds frustrating. No worries about spamming the list, but it does sound like this might be worth tracking as a bug report in Jira. Obviously, we do not expect to lose data when instances come and go, regardless of the frequency, and we do have t…

Kafka Streams 2.1.0, 3rd time data loss investigation

2018-12-30 Thread Nitay Kufert
Hey everybody, we have been running Kafka Streams in production for the last year or so. We are currently using the latest version (2.1.0) and we have suffered from data loss several times before. The first time we noticed a data loss, we were able to trace it back to an exception that we were getting in the code…
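
For context, since that first incident was traced back to an exception in the code, here is a minimal sketch (placeholders throughout: application id, broker address, topic names, and a trivial pass-through topology) of how a Streams application can at least surface stream-thread exceptions loudly via KafkaStreams#setUncaughtExceptionHandler:

    // Not Nitay's code: a generic sketch of surfacing stream-thread exceptions so a
    // failure is noticed rather than a thread dying quietly. Application id, broker
    // address and topic names are placeholders; the topology is a trivial pass-through.
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    import java.util.Properties;

    public class LoudFailureApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");      // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic").to("output-topic");                      // placeholder topology

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            // In 2.1.0 this takes a plain Thread.UncaughtExceptionHandler; at minimum,
            // log the error so a dead stream thread does not go unnoticed.
            streams.setUncaughtExceptionHandler((thread, throwable) ->
                    System.err.println("Stream thread " + thread.getName() + " died: " + throwable));
            streams.start();
        }
    }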