Hi.
I found that the problem was that I didn't have
flink-s3-fs-hadoop-.jar in the Flink lib directory; with it in place I can
use the 's3a' protocol.
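For anyone hitting the same thing: Flink ships that jar in its opt/ directory, and copying it into lib/ is what makes the s3a:// scheme resolvable at startup. A minimal sketch of the fix (the paths and the 1.8.0 jar version below are illustrative assumptions; match them to your install):

```shell
# Sketch under assumed paths: simulate a Flink distribution layout,
# then apply the actual fix, which is the single cp of the S3
# filesystem jar from opt/ into lib/. Jar version is illustrative.
FLINK_HOME=$(mktemp -d)                      # stand-in for e.g. /usr/lib/flink
mkdir -p "$FLINK_HOME/opt" "$FLINK_HOME/lib"
touch "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.8.0.jar"   # simulate the shipped jar

# The fix itself:
cp "$FLINK_HOME/opt/flink-s3-fs-hadoop-1.8.0.jar" "$FLINK_HOME/lib/"
ls "$FLINK_HOME/lib/"
```

A restart of the cluster is needed after the copy so the jar is on the classpath.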
On Tue, Jun 11, 2019 at 4:48 PM Ken Krugler
wrote:
The code in HadoopRecoverableWriter is:

if (!"hdfs".equalsIgnoreCase(fs.getScheme()) ||
        !HadoopUtils.isMinHadoopVersion(2, 7)) {
    throw new UnsupportedOperationException(
            "Recoverable writers on Hadoop are only supported for HDFS and for Hadoop version 2.7 or newer");
}
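To make the gate above concrete, here is a self-contained sketch (not Flink's actual class; the Hadoop version check is elided and the names are mine) showing why any non-hdfs URI scheme trips this exception:

```java
import java.net.URI;

// Hypothetical stand-in for the check in HadoopRecoverableWriter:
// reject recoverable writers unless the filesystem scheme is "hdfs".
public class SchemeCheck {
    static void checkRecoverableWriterSupported(URI fsUri) {
        if (!"hdfs".equalsIgnoreCase(fsUri.getScheme())) {
            throw new UnsupportedOperationException(
                    "Recoverable writers on Hadoop are only supported for HDFS");
        }
    }

    public static void main(String[] args) {
        // An hdfs:// URI passes the check.
        checkRecoverableWriterSupported(URI.create("hdfs://namenode/path"));
        // An s3:// URI (served by Hadoop's filesystem) is rejected,
        // which is the exception discussed in this thread.
        try {
            checkRecoverableWriterSupported(URI.create("s3://bucket/path"));
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

This is also why loading flink-s3-fs-hadoop and using s3a:// avoids the problem: the path is then handled by Flink's own S3 filesystem rather than falling through to this Hadoop-recoverable-writer path.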
Hi.
I'm a bit confused:
When launching my Flink streaming application on EMR release 5.24 (which
ships Flink 1.8) that writes Kafka messages to s3 parquet files, I'm
getting the exception below, but when I install Flink 1.8 on EMR
manually it works.
What could be the difference between the two setups?