Bumping on the off chance that during this time some sort of bug was
reported that might explain this behaviour. I will feel more comfortable
bumping our Kafka versions this way :)
On Wed, Feb 24, 2021 at 12:48 PM Nitay Kufert wrote:
> I guess it's possible but very unlikely to be `null`, and hence `_.split`
> throws?
>
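To make the `null` guess above concrete, here is a minimal Scala sketch (the field name and delimiter are hypothetical, not the actual service code) showing how a `null` String makes `_.split` throw, plus a defensive variant:

```scala
// Hypothetical reproduction of the NPE guess above: calling split on a
// null String throws a NullPointerException.
def unsafeSplit(raw: String): Array[String] =
  raw.split(",") // throws NPE when raw is null

// A total variant: wrap the possibly-null value in Option first.
def safeSplit(raw: String): Array[String] =
  Option(raw).map(_.split(",")).getOrElse(Array.empty[String])
```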
> On Tue, Feb 23, 2021 at 8:23 AM Nitay Kufert wrote:
>
> > Hey, missed your reply - but the code I've shared above the logs is the
> > code around those lines (removed some identifiers to make it a little bit
> > more generic)
Stream[Windowed[String], SingleInputMessage]
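A stream typed `Stream[Windowed[String], ...]` usually comes out of a windowed aggregation. As a hedged sketch of that shape (the topic name, window size, and the use of `count` are assumptions, a plain `Long` count stands in for the poster's `SingleInputMessage`, and import paths vary slightly across Kafka versions):

```scala
import java.time.Duration
import org.apache.kafka.streams.kstream.{TimeWindows, Windowed}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.kstream.KStream

val builder = new StreamsBuilder()

// Group by key and count in 5-minute windows; the resulting stream is
// keyed by Windowed[String].
val windowedCounts: KStream[Windowed[String], Long] = builder
  .stream[String, String]("input-topic")
  .groupByKey
  .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
  .count()
  .toStream
```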
On Fri, Jan 29, 2021 at 9:01 AM Guozhang Wang wrote:
> Could you share your code around
>
> >
>
> com.app.consumer.Utils$.$anonfun$buildCountersStream$1(ServiceUtils.scala:91)
>
> That seems to be where NPE is thrown.
>
>
> at
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:690)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:551)
> at
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:510)
>
> Also, if you're using RocksDB
> there are some RocksDB metrics in newer versions of Kafka that could be
> helpful for diagnosing the issue.
>
> Cheers,
> Leah
>
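For reference, the RocksDB metrics mentioned above (added around Kafka 2.4 via KIP-471) are only recorded when the metrics recording level is raised to DEBUG. A configuration sketch (the application id and bootstrap servers are placeholders; the property name is real):

```scala
import java.util.Properties
import org.apache.kafka.streams.StreamsConfig

val props = new Properties()
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app")     // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")  // placeholder
// RocksDB metrics are exposed only at the DEBUG metrics recording level.
props.put(StreamsConfig.METRICS_RECORDING_LEVEL_CONFIG, "DEBUG")
```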
> On Mon, Dec 7, 2020 at 8:59 AM Nitay Kufert wrote:
>
> > Hey,
> > We are running a kafka-stream
Regarding the NULLs not being deleted - I saw this
https://issues.apache.org/jira/browse/KAFKA-8522 which might explain this
case
On Sun, Dec 6, 2020 at 3:02 PM Nitay Kufert wrote:
> Hey,
> First of all I want to apologize for thinking our own implementation
> of StateRestoreListener
problem in the work allocation since the
machines are not loaded at all, and have enough threads (more than double
the CPUs).
Any idea what's going on there?
--
Nitay Kufert
Backend Team Leader
[image: ironSource] <http://www.ironsrc.com>
email nita...@ironsrc.com
mobile +
> From where do you infer the
> elapsed time and the total number of records restored?
>
>
> Guozhang
>
>
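One way to answer the question above is with a `StateRestoreListener` (a real Kafka Streams interface; the timing logic here is an assumption about how one might measure it, not the poster's implementation):

```scala
import java.util.concurrent.ConcurrentHashMap
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.streams.processor.StateRestoreListener

// Sketch (assumes kafka-streams on the classpath): derive elapsed time and
// total records restored per store/partition.
class TimingRestoreListener extends StateRestoreListener {
  private val startTimes = new ConcurrentHashMap[TopicPartition, Long]()

  override def onRestoreStart(tp: TopicPartition, storeName: String,
                              startingOffset: Long, endingOffset: Long): Unit =
    startTimes.put(tp, System.currentTimeMillis())

  override def onBatchRestored(tp: TopicPartition, storeName: String,
                               batchEndOffset: Long, numRestored: Long): Unit = ()

  override def onRestoreEnd(tp: TopicPartition, storeName: String,
                            totalRestored: Long): Unit = {
    val elapsedMs = System.currentTimeMillis() - startTimes.remove(tp)
    println(s"Restored $totalRestored records for $storeName/$tp in ${elapsedMs}ms")
  }
}
// Registered before start(), e.g.:
//   streams.setGlobalStateRestoreListener(new TimingRestoreListener)
```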
> On Tue, Nov 24, 2020 at 3:44 AM Nitay Kufert wrote:
>
> > Hey,
> > I get the log *after* the restart was triggered for my app (and my app
> > actually restarted, mean
Restored", is it
> before the restarting was triggered?
>
> Guozhang
>
> On Mon, Nov 23, 2020 at 4:15 AM Nitay Kufert wrote:
>
> > Hey all,
> > We have been running a Kafka Streams based service in production for the
> > last couple of years (we have 4 brokers o
we have some underlying problem which can explain it.
Let me know if you need some more info
Thanks!
Opened a Jira ticket: https://issues.apache.org/jira/browse/KAFKA-9824
On Mon, Apr 6, 2020 at 10:43 AM Nitay Kufert wrote:
> 2.3.1 - Both broker & clients upgrade
>
> On Sun, Apr 5, 2020 at 8:52 PM Ismael Juma wrote:
>
>> Hi Nitay,
>>
>> What version were you using?
> Ideally consumer and broker logs from the
> period where the issue happened.
>
> Ismael
>
> On Sun, Apr 5, 2020, 10:13 AM Nitay Kufert wrote:
>
> > Hey,
> >
> > We have been using Kafka Streams across our tech stack for the last 2-3 years.
> >
> >
020-04-02T07:00:00.000')
Any idea what can be causing this? It has happened to us at least 5 times
since the upgrade, and before that I don't remember it ever happening to us.
Let me know if you need more data from me
Thanks
Also posted it at apache jira:
https://issues.apache.org/jira/browse/KAFKA-9335
On Thu, Dec 26, 2019 at 12:41 PM Nitay Kufert wrote:
> I have made a "toy" example to reproduce this error, this is more or less
> what's going on in our application:
>>
>> pa
at
org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231)
at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
at
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:843)
at
org.apa
Debugged a little, and I think the problem occurs when using *outerJoin*, and
it is always the repartition topic of the 2nd KTable of the join.
I also didn't mention we are working with Scala 2.11.12.
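For context, a minimal sketch of the shape described (topic names, value types, and the aggregation are placeholders, not the actual topology): two KTables outer-joined, with the second built through a `groupBy`/`reduce` that introduces the repartition topic in question.

```scala
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.kstream.KTable

val builder = new StreamsBuilder()

val left: KTable[String, Long] = builder.table[String, Long]("left-topic")

// KTable.groupBy always routes through a repartition topic - this is the
// topic that showed the errors in our case.
val right: KTable[String, Long] = builder
  .table[String, Long]("right-topic")
  .groupBy((k, v) => (k, v))
  .reduce(_ + _, _ - _)

// Outer join: either side may be absent for a given key; a real app would
// model the missing side explicitly (e.g. with Option) rather than this sketch.
val joined: KTable[String, String] =
  left.outerJoin(right)((l, r) => s"$l|$r")
```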
On Tue, Dec 24, 2019 at 6:15 PM Nitay Kufert wrote:
> Hey, thanks for the
; make as well.
>
> On Mon, Dec 23, 2019 at 3:03 PM Guozhang Wang wrote:
>
> > Hello Nitay,
> >
> > Could you share the topology description on both 2.4 and 2.3.1, and also
> > could you elaborate on the feature flag you turned on / off?
> >
> >
> > Guozhang
returns false).
This specific stream operation has a feature flag - so if I turn the
feature flag OFF everything seems to work.
Am I missing something? Maybe a new feature requires some new
configurations?
Any known bug?
Help would be much appreciated
Let me know if you need more information (I can
>
> Best,
> Sophie
>
> On Mon, Jul 8, 2019 at 5:14 AM Nitay Kufert wrote:
>
> > Hey,
> > Following https://issues.apache.org/jira/browse/KAFKA-7918 I had to
> change
> > the current implementation of our unit tests.
> >
> > Befo
which in turn keeps putTime uninitialized.
Am I missing something?
> Sounds likely: if you changed the number of partitions then the
> hashing of the keys will change their destination. You need to either clear the
> data (i.e. change retention to very small and roll the logs) or recreate the
> topic.
>
> /svante
>
> On Fri, May 17, 2019 at 12:32
resetting.
If this sounds like a possible explanation (it does to me) - then I guess
this whole thread is redundant hehe :)
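The partition-count explanation above is easy to see with a toy model. A hedged Scala sketch (Kafka's default partitioner actually uses murmur2 over the serialized key bytes; plain `hashCode` stands in here only to show the principle that changing the partition count changes a key's destination):

```scala
// Toy stand-in for the default partitioner: non-negative key hash mod count.
def partitionFor(key: String, numPartitions: Int): Int =
  (key.hashCode & 0x7fffffff) % numPartitions

// After a partition-count change, the same key can land on a different
// partition, so old records remain on the "wrong" partition until the
// topic is cleaned or recreated.
val before = partitionFor("ab", 8)
val after  = partitionFor("ab", 12)
```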
On Fri, May 17, 2019 at 1:01 PM Nitay Kufert wrote:
> Hey all,
>
> I am trying to understand a situation I came across and can't find an
> explanation...
>
3599916489
CreateTime:1557871319283 unique_key_123 14450.3599916489
CreateTime:1557872731646 unique_key_123 14451.2599916489
I would really appreciate an explanation or reassurance that this is not
expected behavior.
Let me know if I can supply more information
Thanks!
Added the log file (In the previous mail I saw the lines are cut)
On Tue, Jan 8, 2019 at 2:39 PM Nitay Kufert wrote:
> Thanks, it seems promising. Sounds a lot like the problems we are having.
> Do you know when the fix will be released?
>
> BTW, It just happened to us again, this
> it's going to re-initialize the state
> store, followed immediately by a transition to "running". Perhaps you can
> check your Streams logs to see if you see anything similar.
>
> Thanks,
> -John
>
> On Sat, Jan 5, 2019 at 10:48 AM Nitay Kufert wrote:
>
> >
> > current active tasks: [0_2, 0_5]
> > current standby tasks: [0_1, 0_4]
> > previous active tasks: []
> > (org.apache.kafka.streams.processor.internals.StreamThread)
>
>
>
> I look forward to hearing back from you (either with more detailed logs or
> just a clarification a
nse of what I am writing and maybe shed some light on
the possible reasons for this strange behavior.
For now, as a temporary solution, we are moving to "on-demand" instances
(which basically means that machines won't go up and down often), so I hope
it will solve our problems.
T
*Update in case it is relevant for someone:*
It seems that the problem is related to the "Too many open files" errors we
were getting.
We changed the open files limitations on our instances and it looks like
the problem is gone.
Thanks
On Wed, Nov 28, 2018 at 2:32 PM Nitay Kufert wrote:
some logic on the app side not to "start" consuming unless
the app is in RUNNING state?
Thanks
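One way to implement the gating asked about above (a sketch; assumes an already-built `streams: KafkaStreams` instance, not the poster's actual app code): register a state listener and block until the instance reaches RUNNING before serving traffic.

```scala
import java.util.concurrent.CountDownLatch
import org.apache.kafka.streams.KafkaStreams

// Assumption: `streams` is the application's built KafkaStreams instance.
def startAndAwaitRunning(streams: KafkaStreams): Unit = {
  val running = new CountDownLatch(1)
  streams.setStateListener(new KafkaStreams.StateListener {
    override def onChange(newState: KafkaStreams.State,
                          oldState: KafkaStreams.State): Unit =
      if (newState == KafkaStreams.State.RUNNING) running.countDown()
  })
  streams.start()
  running.await() // only begin consuming / serving once RUNNING is reached
}
```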
On Tue, Nov 27, 2018 at 7:08 PM Nitay Kufert wrote:
> Hey everyone,
> We are running Kafka Streams ver 2.1.0 (Scala).
>
> The specific use-case I am talking about can be simpli
s disabled we don't want to go back to old behavior and not
> purge data.
>
> Hope this helps.
>
>
> -Matthias
>
> On 11/22/18 8:27 AM, Nitay Kufert wrote:
> > Thanks for the fast response Bill.
> > I have read the comments on the Jira and I understood that th
"high rate" keys
4. When it happens - it happens on several keys at the same time
5. Around the same time, we had a Spot-Instance replacement.
6. Checked the details of the reduce function and saw that only on the first
invocation does it not actually apply the reduce function.
Maybe we
reading the Jira comments https://issues.apache.org/jira/browse/KAFKA-7190
> and you can read the associated KIP
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-360%3A+Improve+handling+of+unknown+producer
> .
>
> Thanks,
> Bill
>
> On Wed, Nov 21, 2018 at 5:33 AM Ni
current: {epoch:-1, offset:-1} for Partition:
> >> > >
> >> apache-wordcount-KSTREAM-AGGREGATE-STATE-STORE-03-repartition-0.
> >> > > Cache now contains 0 entries.
> (kafka.server.epoch.LeaderEpochFileCache)
> >> > > [2018-07-18 21:10:
Hey,
I described the problem I am having with Kafka Streams 1.1.0 on
Stack Overflow, so I hope it's fine to cross-reference.
https://stackoverflow.com/questions/49968123/unknown-producer-id-when-using-apache-kafka-streams-scala