Hi all,
Wondering if anyone else has run into this.
We write files to S3 using the SerializedOutputFormat. When we
read them back, sometimes we get deserialization errors where the data seems to
be corrupt.
After a lot of logging, the weathervane of blame pointed towards the block
size somehow being involved.
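For reference, a minimal sketch of one way to pin the block size explicitly
on both sides, assuming the DataSet API (MyRecord, the bucket path, and the
64 MB value are placeholders; the parameter keys come from
BinaryInputFormat/BinaryOutputFormat, which the serialized formats extend).
The idea is that writer and reader must agree on one block size rather than
deriving it from the filesystem, since S3 reports no native block size:

import org.apache.flink.api.common.io.BinaryInputFormat;
import org.apache.flink.api.common.io.BinaryOutputFormat;
import org.apache.flink.api.common.io.SerializedInputFormat;
import org.apache.flink.api.common.io.SerializedOutputFormat;
import org.apache.flink.configuration.Configuration;

// Given an ExecutionEnvironment env and a DataSet<MyRecord> data, where
// MyRecord implements IOReadableWritable.
final long blockSize = 64 * 1024 * 1024;

// Write side: pin the block size instead of letting the format pick one.
Configuration writeConf = new Configuration();
writeConf.setLong(BinaryOutputFormat.BLOCK_SIZE_PARAMETER_KEY, blockSize);
data.output(new SerializedOutputFormat<MyRecord>()).withParameters(writeConf);

// Read side: use exactly the same value, otherwise the record boundaries
// drift and deserialization fails on seemingly corrupt data.
Configuration readConf = new Configuration();
readConf.setLong(BinaryInputFormat.BLOCK_SIZE_PARAMETER_KEY, blockSize);
SerializedInputFormat<MyRecord> inputFormat = new SerializedInputFormat<>();
inputFormat.setFilePath("s3://my-bucket/my-path");
env.createInput(inputFormat).withParameters(readConf);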
Thanks for your email, Aleksandar! Sorry for the late reply.
May I ask a question: do you configure high-availability.storageDir in your
case?
That is, do you persist and retrieve the job graph & checkpoints entirely in
MapDB or, as the ZooKeeper implementation does, persist them in an external
filesystem and just keep pointers to them in MapDB?
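For reference, the ZooKeeper-style split I mean looks like this in
flink-conf.yaml (hosts and paths are placeholders): only lightweight pointers
go to ZooKeeper, while the actual job graph and checkpoint data land under
the storageDir:

high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
high-availability.storageDir: hdfs:///flink/ha/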
Hi Steven,
The root cause is that the *InputDependencyConstraint* is null.
Did you ever invoke
*ExecutionConfig#setDefaultInputDependencyConstraint(null)* in your job
code?
If not, this should not happen according to current code paths, as the
*InputDependencyConstraint* is initially assigned with a non-null default,
*InputDependencyConstraint.ANY*.
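For reference, the constraint can also be pinned explicitly in the job code;
a minimal sketch (ANY is the default; ALL would make a task wait for all of
its inputs before being scheduled):

import org.apache.flink.api.common.InputDependencyConstraint;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env =
    StreamExecutionEnvironment.getExecutionEnvironment();
// Never pass null here; use one of the enum constants.
env.getConfig().setDefaultInputDependencyConstraint(InputDependencyConstraint.ANY);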
Hi All,
We are using Flink on Kubernetes, and we want to support a functionality
where different pipelines that read data from SSL-enabled Kafka can be
running at any point.
Since Kafka expects the SSL certificate to be in a file, we are thinking of a
solution where we create a volume which is available to every pod.
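For the consumer side, a minimal sketch of what each pipeline would set once
the certificates are visible on such a volume (the mount path, passwords,
topic, and servers are placeholders):

import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

Properties props = new Properties();
props.setProperty("bootstrap.servers", "kafka-broker:9093");
props.setProperty("group.id", "my-pipeline");
props.setProperty("security.protocol", "SSL");
// Kafka reads the stores from the local filesystem, hence the shared volume
// mounted into every TaskManager pod.
props.setProperty("ssl.truststore.location", "/etc/kafka-certs/truststore.jks");
props.setProperty("ssl.truststore.password", "changeit");
props.setProperty("ssl.keystore.location", "/etc/kafka-certs/keystore.jks");
props.setProperty("ssl.keystore.password", "changeit");

FlinkKafkaConsumer<String> consumer =
    new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props);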
All,
I was wondering what the expected default behavior is when the same app is
deployed in 2 separate clusters but with the same group ID. In theory the
idea was to create active-active across separate clusters, but it seems like
both apps are getting all the data from Kafka.
Has anyone else tried something similar?
Hi all,
I noticed that ConfigConstants.HDFS_SITE_CONFIG is deprecated, though
YarnFileStageTestS3ITCase.setupCustomHadoopConfig() still uses it to specify a
file that provides AWS credentials for S3 integration testing.
What’s the recommended approach for S3 integration tests, without
HDFS_SITE_CONFIG?
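For what it's worth, outside of the tests the shaded S3 filesystems read
credentials from the Flink configuration, so one possible direction (not
necessarily the recommended one for the ITCase) would be keys like these in
flink-conf.yaml, assuming flink-s3-fs-hadoop or flink-s3-fs-presto is used:

s3.access-key: <your-access-key>
s3.secret-key: <your-secret-key>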
I am trying to update a cluster running in HA mode from 1.7.2 to 1.9.0. I
am attempting to just update the Docker images to the new ones and restart
the cluster. Is this something that is supported? Or do I need to destroy
the HA setup and build the cluster from scratch?
Here is the error I get.
2
Yes, this is exactly what happens. As a workaround, I created a small jar
file which has code to load the dylib, and I placed it under the lib folder.
This library is in provided scope in my actual job, so the dylib gets loaded
only once when the TM/JM JVM starts.
What I found interesting in my ol
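For anyone trying the same workaround, a minimal sketch of what the loader
jar can contain (the library name is a placeholder). Because the jar sits in
lib/, the static initializer runs in the parent classloader, exactly once per
TM/JM JVM:

public final class NativeLibLoader {

    static {
        // Runs once, when the class is first touched in this JVM. Resolves
        // libmynative.dylib (macOS) / libmynative.so (Linux) from
        // java.library.path.
        System.loadLibrary("mynative");
    }

    private NativeLibLoader() {}

    /** No-op hook that job code can call to force class initialization. */
    public static void ensureLoaded() {}
}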
Hi Vishwas,
There is a known issue in the Flink Jira project [1].
Is it possible that you have encountered the same problem?
[1]: https://issues.apache.org/jira/browse/FLINK-11402
Regards,
Aleksey
On Tue, Aug 27, 2019 at 8:03 AM Vishwas Siravara wrote:
> Hi Jörn,
> I tried that. Here is my s