Hi,
To collect the elements of a DataStream (usually only meant for testing
purposes), you can take a look at `DataStreamUtils#collect(DataStream)`.
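For example, a minimal sketch (assuming a recent 1.x version; in older releases the class lived in the `flink-streaming-contrib` module under a different package):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.DataStreamUtils;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class CollectExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<Integer> stream = env.fromElements(1, 2, 3);

        // Pulls the stream's elements back to the client; collect() triggers
        // execution itself, so no separate env.execute() call is needed.
        Iterator<Integer> it = DataStreamUtils.collect(stream);

        List<Integer> collected = new ArrayList<>();
        it.forEachRemaining(collected::add);
        System.out.println(collected);
    }
}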
Cheers,
Gordon
Hi,
Could you let me know what data source / connector you are using?
My gut feeling is that some sources may have already reached EOF and
terminated, which would explain the exception (and that would be expected
behaviour).
Cheers,
Gordon
Hi,
Your observations are correct.
It is expected that the result of `KafkaDeserializationSchema#isEndOfStream`
triggers a single subtask to escape its fetch loop. Therefore, if a subtask
is assigned multiple partitions, as soon as one record (regardless of which
partition it came from) signals end of stream, that subtask stops fetching
from all of its assigned partitions.
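For illustration, a minimal schema along those lines could look like the following; the "END" marker value is made up:

import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Deserializes records as UTF-8 strings and signals end-of-stream when a
// (hypothetical) "END" marker record is seen. As described above, the first
// matching record stops the whole subtask, i.e. all partitions assigned to it.
public class MarkerTerminatedSchema implements KafkaDeserializationSchema<String> {

    @Override
    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
        return new String(record.value(), StandardCharsets.UTF_8);
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        return "END".equals(nextElement);
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}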
Hi,
I want to achieve the following using event time session windows:
1. When the difference between window.getStart() and the last event
timestamp in the window is greater than MIN_WINDOW_SIZE milliseconds, I want
to emit a message "Window started @ timestamp".
2. When the session window ends, i.e. the watermark passes the end of the
session window, I want to emit a corresponding "Window ended" message.
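For reference, the baseline event-time session window setup this use case starts from could look roughly like the sketch below (toy data, placeholder gap and names; the early "Window started" notification would additionally need a custom Trigger or a ProcessFunction with timers on top of this):

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.assigners.EventTimeSessionWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

public class SessionWindowSketch {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        // Toy (key, event-time) records standing in for the real source.
        env.fromElements(
                Tuple2.of("user-1", 1_000L),
                Tuple2.of("user-1", 2_000L),
                Tuple2.of("user-1", 120_000L))
            .assignTimestampsAndWatermarks(
                new AscendingTimestampExtractor<Tuple2<String, Long>>() {
                    @Override
                    public long extractAscendingTimestamp(Tuple2<String, Long> e) {
                        return e.f1;
                    }
                })
            .keyBy(new KeySelector<Tuple2<String, Long>, String>() {
                @Override
                public String getKey(Tuple2<String, Long> e) {
                    return e.f0;
                }
            })
            // Sessions close after a 30s event-time gap (placeholder value).
            .window(EventTimeSessionWindows.withGap(Time.seconds(30)))
            .process(new ProcessWindowFunction<Tuple2<String, Long>, String, String, TimeWindow>() {
                @Override
                public void process(String key, Context ctx,
                                    Iterable<Tuple2<String, Long>> elements,
                                    Collector<String> out) {
                    // Fires only once the watermark passes the session end,
                    // which covers the "window ended" notification.
                    out.collect("Window ended @ " + ctx.window().getEnd()
                            + " (started @ " + ctx.window().getStart() + ") for " + key);
                }
            })
            .print();

        env.execute("session-window-sketch");
    }
}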
Hi Maxim Parkachov,
The user files have also been shipped to the JobManager and TaskManagers.
However, they are not added to the classpath directly. Instead, their parent
directory is added to the classpath. This change was made so that resource
classloading works. You can find more information here [1].
[1
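For illustration, that is what makes the following kind of lookup work on the JobManager/TaskManager side; the properties file name here is made up:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class ResourceLoadingSketch {

    public static Properties loadShippedConfig() throws IOException {
        // Because the parent directory of the shipped files is on the
        // classpath, a shipped file (here a hypothetical "job.properties")
        // can be read as a plain classpath resource by its file name.
        try (InputStream in = ResourceLoadingSketch.class
                .getClassLoader()
                .getResourceAsStream("job.properties")) {
            if (in == null) {
                throw new IOException("job.properties not found on the classpath");
            }
            Properties props = new Properties();
            props.load(in);
            return props;
        }
    }
}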
Hi, sunfulin
Using a constant key in a `group by` query is unusual and inefficient; for
now, you can work around this bug by bubbling your constant key up out of
the `group by`.
BTW, godfrey is ready to resolve the issue.
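For what it's worth, one way to read that workaround, sketched with a made-up table; whether this exact query shape hits the bug depends on your original query:

// Hypothetical example: instead of grouping by the constant key directly,
// aggregate without it and add the constant back in an outer SELECT.
public class ConstantKeyWorkaroundSketch {

    // Query shape with a constant key in GROUP BY, as described in the thread.
    static final String ORIGINAL =
        "SELECT 'ALL' AS region, COUNT(*) AS cnt FROM orders GROUP BY 'ALL'";

    // Constant key bubbled up out of the GROUP BY into an outer projection.
    static final String WORKAROUND =
        "SELECT 'ALL' AS region, cnt FROM (SELECT COUNT(*) AS cnt FROM orders)";
}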
> On Feb 17, 2020, at 10:15, sunfulin wrote:
>
> Hi,
> Wow, really thankful for the tracking and
Hi,
Wow, really thankful for the tracking and debugging of this problem. I can
see the constant key issue. Appreciate your kind help : )
At 2020-02-15 21:06:58, "Leonard Xu" wrote:
Hi, sunfulin
I reproduced your case; this should be a bug in extracting the unique key
from the plan, and I created
Hi Soheil,
I think the root cause is that, during cancellation, the task was stuck in
*org.postgresql.jdbc.PgStatement.killTimerTask(PgStatement.java:999)*.
The TaskManager process exit is expected in this case, to enforce a failure
and recovery.
To be specific, when a task on the TM is to be cancelled but the cancellation
does not finish in time, the TaskManager process is terminated to enforce
that failure and recovery.
Dear community,
happy to share this week's community digest with the release of Flink 1.10,
a proposal for better changelog support in Flink SQL, a documentation style
guide, the Flink Forward San Francisco schedule and a bit more.
Flink Development
=================
* [releases] Apache Flink 1.10