how to unsubscribe?
On Sat, Apr 16, 2022 at 3:21 AM Jai Patel wrote:
> Hi Nico,
>
> Wanted to close the loop here. We did end up finding a number of problems
> in our code:
> 1. Our operator was slow. It was iterating over several large Protobufs
> in a MapState and then filtering them down to one.
I dug into this further and I no longer suspect the window described
previously, as it does not use a MergingWindowAssigner. However, I did
identify four windows in our code that do. They all use
ProcessingTimeSessionWindows.withGap: three of them use a 500 ms gap,
while one uses a 100 ms gap. Based on t...
Here's our custom trigger. We thought about switching to
ProcessingTimeoutTrigger.of(CountTrigger.of(100), Duration.ofMinutes(1)),
but I'm not sure that will fire properly when the window closes.
Thanks.
Jai
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.triggers.ProcessingTimeoutTrigger;
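For reference, a minimal sketch of how that trigger composition could be wired onto one of the session windows described above; the input stream, identity key selector, and placeholder reduce function are illustrative, not from the original code:

import java.time.Duration;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.windowing.assigners.ProcessingTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.triggers.CountTrigger;
import org.apache.flink.streaming.api.windowing.triggers.ProcessingTimeoutTrigger;

public class SessionTriggerSketch {

    // Wires a 500 ms processing-time session window (one of the gaps named
    // in the thread) to a count trigger wrapped in a processing-time timeout.
    public static DataStream<String> windowed(DataStream<String> input) {
        return input
                .keyBy(value -> value)
                .window(ProcessingTimeSessionWindows.withGap(Time.milliseconds(500)))
                // Fires after 100 elements, or after 1 minute of processing
                // time, whichever comes first.
                .trigger(ProcessingTimeoutTrigger.of(
                        CountTrigger.of(100), Duration.ofMinutes(1)))
                // Placeholder aggregation so the fragment is complete.
                .reduce((a, b) -> a + b);
    }
}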
We are encountering the following error when running our Flink job. We
have several processing windows, but it appears to be related to a
TumblingProcessingTimeWindow. Checkpoints are failing to complete midway.
The code block for the window is:
.keyBy(order -> getKey(order))
.window(TumblingProcessingTimeWindows.of(...
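A runnable sketch along the lines of that fragment; the 5-second window size is purely a placeholder (the original message was cut off before the size was shown), and a toy key/value stream stands in for the order stream:

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class TumblingWindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(Tuple2.of("a", 1), Tuple2.of("b", 2), Tuple2.of("a", 3))
                .keyBy(t -> t.f0)
                // Hypothetical window size; the original fragment did not
                // show the actual value.
                .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
                .sum(1)
                .print();

        env.execute("tumbling-window-sketch");
    }
}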
Hi Nico,
Wanted to close the loop here. We did end up finding a number of problems in
our code:
1. Our operator was slow. It was iterating over several large Protobufs in
a MapState and then filtering them down to one. We were able to identify
that one up-front and significantly improve the runtime of the...
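For illustration, a minimal sketch of the kind of fix described in point 1, assuming the relevant entry can be fetched by key instead of scanning every entry; all names here are hypothetical, not from the thread:

import org.apache.flink.api.common.state.MapState;
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class LookupInsteadOfScan extends KeyedProcessFunction<String, String, String> {

    private transient MapState<String, String> cache;

    @Override
    public void open(Configuration parameters) {
        cache = getRuntimeContext().getMapState(
                new MapStateDescriptor<>("cache", String.class, String.class));
    }

    @Override
    public void processElement(String value, Context ctx, Collector<String> out)
            throws Exception {
        // Slow pattern from the thread: iterate every entry, then filter
        // down to one, e.g.
        //   for (Map.Entry<String, String> e : cache.entries()) { ... }

        // Faster: a point lookup, since MapState supports get(key) directly.
        String hit = cache.get(value);
        if (hit != null) {
            out.collect(hit);
        }
    }
}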
Hi Flink community,
*Here is the context:*
Theoretically, I would like to write the following query, but it won't work,
since we can only define a WATERMARK in a table DDL:
INSERT INTO tableC
SELECT tableA.field1,
  SUM(1) AS `count`,
  getEventTimeInNS(tableA.timestamp, tab...) AS time_ltz
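One workaround consistent with that constraint is to declare the event-time column and its watermark in the table DDL and keep the aggregation in the query. A sketch, with the built-in TO_TIMESTAMP_LTZ standing in for the custom getEventTimeInNS function; the table name, columns, connector, and 5-second bound are all illustrative:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class WatermarkDdlSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // The WATERMARK clause is only accepted in a table DDL, so the
        // event-time column has to be declared when the table is created,
        // not inside the INSERT/SELECT.
        tEnv.executeSql(
                "CREATE TABLE tableA ("
                        + "  field1 STRING,"
                        + "  `timestamp` BIGINT,"
                        + "  time_ltz AS TO_TIMESTAMP_LTZ(`timestamp`, 3),"
                        + "  WATERMARK FOR time_ltz AS time_ltz - INTERVAL '5' SECOND"
                        + ") WITH ("
                        + "  'connector' = 'datagen'"
                        + ")");
    }
}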
Hello Flink community,
I am getting this error when writing data to a Vertica table:
Exception in thread "main" org.apache.flink.table.api.ValidationException:
Unable to create a sink for writing table
'default_catalog.default_database.VerticaSink'.
...
Caused by: java.lang.IllegalStateException:
Hi Jose,
I assume you are using the DataStream API.
In general, for any UDF exception in a DataStream job, only the developer
of the job knows whether the exception can be tolerated, because in some
cases tolerating exceptions can cause errors in the final result. So you
still have...
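One common shape for that tolerate-or-fail decision is to catch the exception inside the function and divert bad records to a side output instead of failing the job. A sketch, with all names illustrative:

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class TolerantParse {

    // Side output for records the UDF could not handle.
    public static final OutputTag<String> FAILED =
            new OutputTag<String>("failed-records") {};

    public static SingleOutputStreamOperator<Integer> parse(DataStream<String> input) {
        return input.process(new ProcessFunction<String, Integer>() {
            @Override
            public void processElement(String value, Context ctx, Collector<Integer> out) {
                try {
                    out.collect(Integer.parseInt(value));
                } catch (NumberFormatException e) {
                    // Tolerated: the bad record is diverted instead of
                    // failing the job. Whether this is safe depends on the
                    // semantics of the final result.
                    ctx.output(FAILED, value);
                }
            }
        });
    }
}

The diverted records are then available via parse(stream).getSideOutput(TolerantParse.FAILED) for logging or dead-lettering.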
Thanks Martijn. The issue is much clearer to me now.
I'll try to see if the State Processor API could help convert the 1.9
savepoints into a 1.13-compatible form (a sketch follows below this message).
BR,
On Thu, Apr 14, 2022 at 11:47 AM, Martijn Visser wrote:
> Hi Qinghui,
>
> If you're using SQL, please be aware that there are unfor...
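For reference, a rough sketch of loading an old savepoint with the State Processor API as it existed around Flink 1.13 (still DataSet-based at the time); the path, state backend, and everything downstream are placeholders, and whether a 1.9 savepoint reads cleanly should be verified:

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;

public class SavepointReadSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Load the old savepoint; the path and state backend are placeholders.
        ExistingSavepoint savepoint = Savepoint.load(
                env, "hdfs://namenode/savepoints/savepoint-from-1.9",
                new MemoryStateBackend());

        // From here, state can be read per operator uid with
        // savepoint.readKeyedState(...) and bootstrapped into a new,
        // version-compatible savepoint via Savepoint.create(...).
    }
}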