Hi Jose,
It seems that you have enabled externalized checkpoints in your streaming job.
If enabling externalized checkpoints is really what you want,
'state.checkpoints.dir' must be set in flink-conf.yaml.
For your second question, yes, the only way is to modify the
flink-conf.yaml. See the refer
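For reference, a minimal sketch of both pieces (the HDFS path and the
60 s interval are placeholders, and I'm assuming the 1.3-era
CheckpointConfig API):

    # flink-conf.yaml
    state.checkpoints.dir: hdfs:///flink/external-checkpoints

    // in the job (Scala)
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.enableCheckpointing(60000) // checkpoint every 60 s
    // retain the checkpoint files when the job is cancelled
    env.getCheckpointConfig.enableExternalizedCheckpoints(
      ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)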
Hi,
After enabling checkpoints and setting env.setStateBackend(new
FsStateBackend(url)), I am getting the following exception:
Caused by: java.lang.IllegalStateException: CheckpointConfig says to
persist periodic checkpoints, but no checkpoint directory has been
configured. You can configure conf
Hi,
Excuse me for the unclear title, but I don't know how to summarise the
question.
I'm using an external library integrated with Flink called Flink-HTM. It is
still a prototype.
Internally, it performs everything I want, but I have a problem returning
the evaluated values in a printable DataStream.
I
I have been trying out various scenarios with Flink with various sources and
sinks. I want to know how folks have been using Flink in production. What are
the common use cases? Essentially, I have implemented similar use cases in
Spark and now I find it fairly straightforward to convert thes
Yes, I enabled checkpointing and now the files do not have the .pending extension.
Thank you, Urs.
On Saturday, September 2, 2017, 3:10:28 AM PDT, Urs Schoenenberger wrote:
Hi Peter,
I just omitted the filter part. Sorry for that.
Actually, as the javadoc explains, by default a DataStream with an iteration
will never terminate. That's because in a
streaming environment with an iteration, the operator never knows whether the
feedback stream has reached its end
(though th
> On 02.09.2017, at 04:45, Xingcan Cui wrote:
>
> In your code, all the long values will subtract 1 and be sent back to
> the iterate operator, endlessly.
Is this true? Shouldn't
val iterationResult2 = env.generateSequence(1, 4).iterate(it => {
  (it.filter(_ > 0).map(_ - 1), it.filter(_ <= 0))
})
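For anyone following along, here is a self-contained variant of that
snippet (the completed filter and the surrounding boilerplate are my
reconstruction, not the original code); passing a finite wait time lets
the otherwise never-terminating iteration shut down:

    import org.apache.flink.streaming.api.scala._

    object IterateExample {
      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment

        // the step function returns (feedback, output)
        val result: DataStream[Long] = env.generateSequence(1, 4).iterate(
          (it: DataStream[Long]) => {
            val decremented = it.map(_ - 1)   // subtract 1 on each round
            (decremented.filter(_ > 0),       // still positive: feed back
             decremented.filter(_ <= 0))      // reached zero: emit
          },
          5000) // maxWaitTimeMillis: stop when no feedback arrives for 5 s

        result.print()
        env.execute("Iterate example")
      }
    }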
Hi all,
I'm trying to do a streaming process like the one below:
1. collect sensor events from a source
2. collect rule events defined for a device (which streams sensor events)
3. rules may be defined with window information, so aggregations are
processed differently for each device
4. when a
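The mail is cut off here, but for this kind of setup a common pattern is
to key both the sensor stream and the rule stream by device id and
connect them. Everything in the sketch below (event and rule types,
field names, the threshold semantics) is my own illustration, not from
the original mail:

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction
    import org.apache.flink.util.Collector

    case class SensorEvent(deviceId: String, value: Double)
    case class Rule(deviceId: String, threshold: Double)

    class RuleEvaluator extends CoFlatMapFunction[SensorEvent, Rule, String] {
      // NOTE: a real job should keep this in keyed ValueState; a plain
      // field is shared by all keys that land on the same subtask
      private var currentRule: Option[Rule] = None

      override def flatMap1(e: SensorEvent, out: Collector[String]): Unit =
        currentRule.foreach { r =>
          if (e.value > r.threshold)
            out.collect(s"device ${e.deviceId}: ${e.value} > ${r.threshold}")
        }

      override def flatMap2(rule: Rule, out: Collector[String]): Unit =
        currentRule = Some(rule)
    }

    // wiring: each device sees only its own rule and its own events
    // val alerts = sensorEvents
    //   .keyBy(_.deviceId)
    //   .connect(rules.keyBy(_.deviceId))
    //   .flatMap(new RuleEvaluator)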
Hi,
you need to enable checkpointing for your job. Flink uses ".pending"
extensions to mark parts that have been completely written, but are not
included in a checkpoint yet.
Once you enable checkpointing, the .pending extensions will be removed
whenever a checkpoint completes.
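For example (a sketch assuming the BucketingSink from
flink-connector-filesystem and placeholder paths):

    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink

    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // checkpoint every 10 s; on each completed checkpoint the sink
    // finalizes finished part files and drops the .pending suffix
    env.enableCheckpointing(10000)

    val lines: DataStream[String] = env.socketTextStream("localhost", 9999)
    lines.addSink(new BucketingSink[String]("hdfs:///tmp/flink-output"))

    env.execute("BucketingSink example")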
Regards,
Urs
On