Hi Scott,
Yes, the window trigger fires for every single late element.
If you only want the window to be triggered once, you can:
- Remove the allowedLateness()
- Use a BoundedOutOfOrdernessTimestampExtractor to emit watermarks that
lag behind the elements.
The code (Scala) looks like:
> c
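The Scala snippet was truncated in the archive. As a stand-in, here is a plain-Java sketch of the semantics that BoundedOutOfOrdernessTimestampExtractor documents (the class below is illustrative, not Flink's implementation): the emitted watermark trails the highest timestamp seen so far by a fixed bound, so the window fires once when the watermark passes its end.

```java
// Illustrative stand-in for BoundedOutOfOrdernessTimestampExtractor's behavior:
// the watermark lags the maximum observed event timestamp by a fixed bound.
class BoundedOutOfOrderness {
    private final long maxOutOfOrdernessMs;
    private long currentMaxTimestamp = Long.MIN_VALUE;

    BoundedOutOfOrderness(long maxOutOfOrdernessMs) {
        this.maxOutOfOrdernessMs = maxOutOfOrdernessMs;
    }

    // Called per element; tracks the highest timestamp seen so far.
    long extractTimestamp(long elementTimestamp) {
        currentMaxTimestamp = Math.max(currentMaxTimestamp, elementTimestamp);
        return elementTimestamp;
    }

    // The watermark trails the max timestamp by the configured bound,
    // so late elements within the bound do not move it backwards.
    long getCurrentWatermark() {
        return currentMaxTimestamp - maxOutOfOrdernessMs;
    }
}
```

With allowedLateness() removed, an element arriving behind the current watermark no longer re-fires the window; it is simply dropped.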
Hi Ahmad,
Which version of Flink do you use?
Thanks, vino.
Ahmad Hassan wrote on Friday, October 19, 2018 at 11:32 PM:
> Hi,
>
> Initializing MapState hangs in the window function. However, if I use
> ValueState then it is initialized successfully. I am using RocksDB to
> store the state.
>
> public class MyWindowFunc
Hi,
I'm writing a custom `SourceFunction` which wraps an underlying
`InputFormatSourceFunction`. When I try to use this `SourceFunction` in a
stream (via `env.addSource` and a subsequent sink) I get errors related to
the `InputSplitAssigner` not being initialized for a particular vertex ID.
Full e
Thanks for the answer Hequn!
To be honest I'm still trying to wrap my head around this solution, and
to think through whether it has advantages over mine.
My original thought was that my design is "backwards", because logically I
would want to:
1. collect raw records
2. partition them b
I'm using event-time windows of 1 hour that have an allowed lateness of
several hours. This supports the processing of access logs that can be
delayed by several hours. The windows aggregate data over the 1 hour period
and write to a database sink. Pretty straightforward.
Will the event-time trigg
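The interaction being asked about can be sketched as a small acceptance rule (a hypothetical helper mirroring Flink's documented semantics, not Flink code): a late element is still assigned to its window, and re-triggers it, until the watermark passes the window end plus the allowed lateness.

```java
// Hypothetical helper mirroring the documented late-element rule for
// event-time tumbling windows: an element is accepted (and re-fires the
// window) as long as watermark < window end + allowed lateness.
class LatenessCheck {
    // Align a timestamp to the start of its tumbling window.
    static long windowStart(long ts, long sizeMs) {
        return ts - (ts % sizeMs);
    }

    static boolean isAccepted(long elementTs, long watermark,
                              long sizeMs, long allowedLatenessMs) {
        long windowEnd = windowStart(elementTs, sizeMs) + sizeMs;
        return watermark < windowEnd + allowedLatenessMs;
    }
}
```

For a 1-hour window with several hours of allowed lateness, each accepted late element fires the trigger again, which is why the sink sees repeated writes for the same window.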
Hi,
Initializing MapState hangs in the window function. However, if I use
ValueState then it is initialized successfully. I am using RocksDB to store the state.
public class MyWindowFunction extends RichWindowFunction
{
    private transient MapStateDescriptor productsDescriptor =
        new MapStateDescriptor<
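One thing worth checking in cases like this is where the descriptor and the state handle are created: a common pattern is to build both inside open() rather than as field initializers. The sketch below uses simplified stand-in classes (not Flink's MapStateDescriptor/MapState) so it is self-contained; in real Flink code the handle would come from getRuntimeContext().getMapState(descriptor).

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for Flink's classes, just to show the structure.
class MapStateDescriptor<K, V> {
    final String name;
    MapStateDescriptor(String name, Class<K> keyType, Class<V> valueType) {
        this.name = name;
    }
}

class MapState<K, V> {
    private final Map<K, V> backing = new HashMap<>();
    V get(K k) { return backing.get(k); }
    void put(K k, V v) { backing.put(k, v); }
}

// The usual pattern: transient fields, created in open(), not as
// field initializers that run during (de)serialization.
class MyWindowFunction {
    transient MapStateDescriptor<String, Long> productsDescriptor;
    transient MapState<String, Long> products;

    void open() {
        productsDescriptor =
            new MapStateDescriptor<>("products", String.class, Long.class);
        // In Flink: products = getRuntimeContext().getMapState(productsDescriptor);
        products = new MapState<>();
    }
}
```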
Hi to all,
is there any better way to get the list of parameters required by a Flink
job, other than implementing the ProgramDescription interface and writing a
long (and somewhat structured) getDescription() method?
Wouldn't it be better to add a List getJobOptions to such an interface in
order to allow
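The proposal could look roughly like this (getJobOptions() is hypothetical; it is the interface change being suggested here, not an existing Flink API):

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical extension of the idea: expose required parameters as a
// structured list instead of a free-form description string.
interface DescribedJob {
    String getDescription();
    List<String> getJobOptions(); // NOT a Flink API; the proposed addition
}

class MyJob implements DescribedJob {
    public String getDescription() {
        return "Loads access logs into the warehouse.";
    }
    public List<String> getJobOptions() {
        return Arrays.asList("--input", "--output", "--parallelism");
    }
}
```

A caller (e.g. a job portal or CLI) could then enumerate the options programmatically instead of parsing the description text.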
Hi Bing,
Pinging Chesnay for you.
Thanks, vino.
Bing Lin wrote on Friday, October 19, 2018 at 5:46 AM:
> Hi, is there anyone working with pyflink streaming who can help me with
> import errors? For example, when I include the paths for my libraries it gives
> me an ImportError (type_check). Has anyone encountered anything
Then I would suggest using the FlinkClusterClient to submit jobs.
On 19.10.2018 11:02, Flavio Pompermaier wrote:
Many times we have had the need for a special job (driver) that
orchestrates the other jobs.
The problem is that it's not possible to create something like
a Flink context/ses
Many times we have had the need for a special job (driver) that orchestrates
the other jobs.
The problem is that it's not possible to create something like a
Flink context/session that allows monitoring of its sub-jobs.
I think this would be extremely useful. An alternative would be
t
As the documentation states, this is an effectively internal REST call
used by the CLI.
To submit jobs via REST, I would suggest checking out the jar submission
routine.
https://ci.apache.org/projects/flink/flink-docs-master/monitoring/rest_api.html#jars-upload
https://ci.apache.org/projects/fl