No, I didn't, because it's inconvenient for us to have two different Docker
images for streaming and batch jobs.
Ok, thanks for the explanation, now it makes sense. I hadn't noticed before
that those snapshot state calls visible in your stack trace come from the
State Processor API. We will try to reproduce it, so we might have more
questions later, but this information might be enough.
One more question
The problem happens in batch jobs (the ones that use ExecutionEnvironment)
that use the State Processor API for bootstrapping an initial savepoint for
a streaming job.
We are building a single Docker image for the streaming and batch versions of
the job. In that image we put both the Presto library (which we use for
checkpoints) and the Hadoop library (which we use for file sinks) into the
plugins folder.
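For reference, the bootstrap job is shaped roughly like the sketch below
(simplified; the Account/AccountBootstrapper names, the bucket, the uid and
the max parallelism of 128 are placeholders, not our real code):

    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.runtime.state.filesystem.FsStateBackend;
    import org.apache.flink.state.api.BootstrapTransformation;
    import org.apache.flink.state.api.OperatorTransformation;
    import org.apache.flink.state.api.Savepoint;
    import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;

    public class BootstrapJob {

        // Placeholder POJO used as the bootstrap input.
        public static class Account {
            public int id;
            public double amount;
        }

        // Writes one ValueState entry per key into the new savepoint.
        public static class AccountBootstrapper
                extends KeyedStateBootstrapFunction<Integer, Account> {

            private transient ValueState<Double> total;

            @Override
            public void open(Configuration parameters) {
                total = getRuntimeContext().getState(
                        new ValueStateDescriptor<>("total", Types.DOUBLE));
            }

            @Override
            public void processElement(Account value, Context ctx) throws Exception {
                total.update(value.amount);
            }
        }

        public static void main(String[] args) throws Exception {
            // Batch environment, as described above.
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

            DataSet<Account> accounts = env.fromElements(new Account(), new Account());

            BootstrapTransformation<Account> transformation = OperatorTransformation
                    .bootstrapWith(accounts)
                    .keyBy(acc -> acc.id)
                    .transform(new AccountBootstrapper());

            // The savepoint is written via the s3a:// scheme, i.e. the
            // flink-s3-fs-hadoop plugin.
            Savepoint
                    .create(new FsStateBackend("s3a://my-bucket/savepoints"), 128)
                    .withOperator("account-operator-uid", transformation)
                    .write("s3a://my-bucket/savepoints/bootstrap");

            env.execute("bootstrap initial savepoint");
        }
    }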
But from the stack trace that you have posted, it looks like you are using
Hadoop's S3 implementation for the checkpointing. If so, can you try using
Presto and check whether you still encounter the same issue?
Also, could you explain how to reproduce the issue? What configuration are you
using?
Actually, I forgot to mention that it happens when there's also the Presto
library in the plugins folder (we are using Presto for checkpoints and Hadoop
for file sinks in the job itself).
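To make the split concrete, in the job itself the two filesystems are selected
purely by the URI scheme, along these lines (a sketch with placeholder paths
and intervals, not our actual job):

    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.runtime.state.filesystem.FsStateBackend;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

    public class StreamingJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000);

            // Checkpoints go through the s3p:// scheme, i.e. the flink-s3-fs-presto plugin.
            env.setStateBackend(new FsStateBackend("s3p://my-bucket/checkpoints"));

            DataStream<String> input = env.socketTextStream("localhost", 9999);

            // The file sink writes through the s3a:// scheme, i.e. the flink-s3-fs-hadoop plugin.
            StreamingFileSink<String> sink = StreamingFileSink
                    .forRowFormat(new Path("s3a://my-bucket/output"),
                            new SimpleStringEncoder<String>("UTF-8"))
                    .build();

            input.addSink(sink);
            env.execute("streaming job");
        }
    }

Using the explicit s3a:// and s3p:// schemes (rather than plain s3://, which
both plugins can register) is what keeps the two implementations apart when
both jars sit in the plugins folder.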
Hi,
Thanks for reporting the issue, I've created a JIRA ticket for it [1]. We
will investigate it and try to address it somehow.
Could you check whether the same issue happens when you use flink-s3-fs-presto [2]?
Piotrek
[1] https://issues.apache.org/jira/browse/FLINK-14574
[2] https://ci.ap
We've added the flink-s3-fs-hadoop library to the plugins folder and are
trying to bootstrap state to S3 using the S3A protocol. The following
exception happens (unless the Hadoop library is put into the lib folder
instead of plugins). It looks like the S3A filesystem is trying to use the
"local" filesystem for temporary files and fails.
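In case it matters, the failing setup has the filesystem jar in its own
subfolder under the plugins directory, roughly like this (the version is a
placeholder):

    flink/
      lib/        (no S3 filesystem jars here)
      plugins/
        s3-fs-hadoop/
          flink-s3-fs-hadoop-<version>.jar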