I would recommend investigating broker and network health.
-Matthias
On 10/19/19 7:28 PM, Xiyuan Hu wrote:
> Hi,
>
> I'm running Kafka Streams v2.3.0. During peak hours, I noticed that some
> nodes hit a timeout exception, which marks the node status as DEAD.
> Even though I implemented a CustomPro
>> 1) The app has a huge consumer lag on both the source topic and the internal
>> repartition topics, ~5M messages and growing. Will the lag
>> lead to this timeout exception? My understanding is that the app polls too
>> many messages before it can send them out, even though the lag indicates it still
>> polls
The problem is that the first `reduce()` uses a windowed store and
applies a retention time to expire old windows.
However, the second `reduce()` is _not_ windowed. Windowed aggregations
over KTables (the result of the first aggregation is a KTable) are
currently not supported. Therefore, the
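For illustration, a minimal sketch of such a topology (topic names, types, and window size are my assumptions, not the original code):

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class TwoReduceSketch {
    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, Long> input = builder.stream("input-topic");

        // First reduce(): windowed -- backed by a windowed store whose
        // retention time expires old windows.
        KTable<Windowed<String>, Long> firstAgg = input
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
                .reduce(Long::sum);

        // The result is a KTable; a windowed aggregation on top of a
        // KTable is not supported, so a second reduce() over it must be
        // non-windowed.
        return builder.build();
    }
}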
We had some discussion in the community Slack. As we don't have debug-level
logs, it's hard to tell what happened exactly.
On Wed, Oct 23, 2019 at 9:52 PM Matthias J. Sax
wrote:
> Seems like Amuthan created a ticket:
> https://issues.apache.org/jira/browse/KAFKA-9073
>
> Let's move the discussion
Hello Boyang,
If I start the Kafka Streams process in the middle of a day, say 10/24
at 16:00, with a tumbling window size of 1 day (24 hours), would the next
aggregation run at 10/25 00:00 or at 10/25 16:00?
On 2019-10-24 11:06:24, "Boyang Chen" wrote:
>Hey Zongzhen,
>
>I have im
Did you try to test your code using `TopologyTestDriver`? Maybe this
helps to figure out the root cause of the issue.
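For reference, a minimal sketch of such a test using the 2.3 testing API (the topology, topic names, and serdes are placeholders):

import java.util.Properties;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.test.ConsumerRecordFactory;

public class TopologySmokeTest {
    static void verify(Topology topology) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");

        try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
            ConsumerRecordFactory<String, String> factory =
                new ConsumerRecordFactory<>("input-topic",
                    new StringSerializer(), new StringSerializer());

            // Pipe a record in and inspect what (if anything) comes out.
            // With suppress(), stream time only advances via record
            // timestamps, so pipe a later record to close the window.
            driver.pipeInput(factory.create("input-topic", "key", "value"));
            System.out.println(driver.readOutput("output-topic",
                new StringDeserializer(), new StringDeserializer()));
        }
    }
}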
We have many unit/integration tests in place, and many people use
suppress() successfully in production. Hence, I am sure it basically
works -- of course, there might still be an unknown bug.
One issue to consider is timezones, though. Tumbling windows align at
timestamp zero, but zero is the start of the day in UTC only. If you are
in a different timezone, you would need to "shift" the timestamps
accordingly.
For example, you can shift them using a custom TimestampExtractor, or
you use
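For the extractor approach, a minimal sketch (the class name and offset are placeholders; a fixed UTC+2 timezone is assumed here):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// Shifts record timestamps so that 24h windows align to local midnight
// instead of UTC midnight.
public class ShiftedTimestampExtractor implements TimestampExtractor {
    private static final long OFFSET_MS = Duration.ofHours(2).toMillis();

    @Override
    public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
        return record.timestamp() + OFFSET_MS;
    }
}

You would register it via the `default.timestamp.extractor` config. Keep in mind that the shifted timestamps also show up in the window boundaries of the result, so you need to shift them back when reading.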
Seems like Amuthan created a ticket:
https://issues.apache.org/jira/browse/KAFKA-9073
Let's move the discussion to the ticket.
-Matthias
On 10/21/19 1:19 PM, Boyang Chen wrote:
> Hey Amuthan,
>
> I replied to you in the Slack community. Feel free to either continue
> the discussion here or
Hey Zongzhen,
I have implemented similar functionality with KStreams before. You
could just set a tumbling window of 24 hours to get daily aggregation results.
Since you just need calendar dates, this works: tumbling-window
computation starts from system time 0 (the epoch), which gives exactly
daily cut-offs.
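A minimal sketch of that setup (the topic name and value type are placeholders):

import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class DailyAggregationSketch {
    static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // 24-hour tumbling windows align to epoch time 0, so each window
        // covers one calendar day in UTC.
        KTable<Windowed<String>, Double> dailySales = builder
                .<String, Double>stream("sales-orders")
                .groupByKey()
                .windowedBy(TimeWindows.of(Duration.ofDays(1)))
                .reduce(Double::sum);

        return builder.build();
    }
}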
Boyang
On We
Hello,
I want to run Kafka Streams on my system to aggregate users' sales-order
transactions on a daily basis.
I know that Kafka Streams provides a mechanism called tumbling windows, but
it seems to just set an interval for running the aggregation function. What I
want is to aggregate b
+1 (binding)
- downloaded and compiled source code
- verified signatures for source code and Scala 2.11 binary
- ran core/connect/streams quickstart using Scala 2.11 binaries
-Matthias
On 10/23/19 2:43 PM, Colin McCabe wrote:
> + d...@kafka.apache.org
>
> On Tue, Oct 22, 2019, at 15:48, Colin
+ d...@kafka.apache.org
On Tue, Oct 22, 2019, at 15:48, Colin McCabe wrote:
> +1. I ran the broker, producer, consumer, etc.
>
> best,
> Colin
>
> On Tue, Oct 22, 2019, at 13:32, Guozhang Wang wrote:
> > +1. I've run the quick start and unit tests.
> >
> >
> > Guozhang
> >
> > On Tue, Oct 22
As long as you have more than one broker (2, 3, whatever) and `min.insync.replicas` is
set to 1 (in server.properties), you should be able to stop the affected
broker(s), delete all their data files, and restart them.
They will recreate all the files from the live leader.
Before you do that, please ensure that y
Hey,
Thank you for the reply.
Unfortunately, an upgrade is not possible for us in this environment.
What would happen if I deleted the broker files when RF < 3?
Data loss is something we can handle in order to bring this back up.
Currently we have a large number of under-replicated partitions as
I have the following Kafka cluster:
- Brokers: 13 (each broker: 14 cores & 36GB memory)
- Kafka cluster version: 2.0.0
- Kafka Java client version: 2.0.0
- Number of topics: ~15 (3 replicas, min 2 in-sync replicas, 8 partitions)
- Number of consumers: 7K (all independent and manually assigned all
Hello David, thanks for the release,
+1
I have run the tests (all passed) and executed some internal apps
successfully after starting one broker following the quick start guide.
Cheers!
--
Jonathan
On Tue, Oct 22, 2019 at 11:49 PM Colin McCabe wrote:
> +1. I ran the broker, producer, con