Hi,
To do that you can set the environment variable HADOOP_CONF_DIR:
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#hdfs
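For example (the path below is only an illustration; point it at wherever your
core-site.xml / hdfs-site.xml actually live), set it before starting the Flink
processes:

    export HADOOP_CONF_DIR=/etc/hadoop/conf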
Best,
Gary
On Wed, Jan 17, 2018 at 2:27 AM, cw7k wrote:
> Hi, question on this page:
> "You need to point Flink to a valid Hadoop configuration..."https:
Thanks a lot, Eron. I'll draft a proposal and share it with the community.
On Thu, Jan 18, 2018 at 4:18 PM, Eron Wright wrote:
> I would suggest that you draft a proposal that lays out your goals and the
> technical challenges that you perceive. Then the community can provide
> some feedback on potential solutions to those challenges, culminating in a
> concrete improvement proposal.
Ok, I have the factory working in the WordCount example. I had to move the
factory code and META-INF into the WordCount project.
For general Flink jobs, I'm assuming the goal would be to be able to import the
factory from the job itself instead of needing to copy the factory .java file
into the project.
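For anyone following along, the piece that lets Flink pick the factory up from a
dependency jar (rather than from copied source) is the ServiceLoader descriptor
packaged next to the factory class. Roughly (the package and class name below are
placeholders for whatever the real factory is called):

    # file: src/main/resources/META-INF/services/org.apache.flink.core.fs.FileSystemFactory
    com.example.fs.oci.OciFileSystemFactory

If the factory class and that descriptor ship in their own jar, a job project can
depend on it like any other library; whether it is also found at runtime then
depends on the jar being on the right classpath (more on that below in the thread).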
I would suggest that you draft a proposal that lays out your goals and the
technical challenges that you perceive. Then the community can provide
some feedback on potential solutions to those challenges, culminating in a
concrete improvement proposal.
Thanks
On Wed, Jan 17, 2018 at 7:29 PM, Shuy
Hi, just a bit more info, I have a test function working using oci://, based
on the S3 test:
https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-hadoop/src/test/java/org/apache/flink/fs/s3hadoop/HadoopS3FileSystemITCase.java#L169
However, when I try to get the WordCount example to use oci://, it doesn't work.
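For anyone who wants to reproduce the working part, the test essentially just
resolves the file system for the new scheme through Flink's FileSystem API and
writes something. A minimal sketch along those lines (the oci:// bucket and path
are made up, and this targets the Flink 1.4-era API rather than copying the
actual test code):

    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    import org.apache.flink.core.fs.FSDataOutputStream;
    import org.apache.flink.core.fs.FileSystem;
    import org.apache.flink.core.fs.Path;

    public class OciSmokeTest {
        public static void main(String[] args) throws IOException {
            // made-up bucket/path; the point is only that the scheme resolves
            Path path = new Path("oci://some-bucket/flink-test/hello.txt");
            FileSystem fs = path.getFileSystem();  // fails here if no factory is found for "oci"
            try (FSDataOutputStream out = fs.create(path, FileSystem.WriteMode.OVERWRITE)) {
                out.write("hello".getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("exists: " + fs.exists(path));
        }
    }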
Can we add these two? They can make fine-grained recovery more consistent.
https://issues.apache.org/jira/browse/FLINK-8042
https://issues.apache.org/jira/browse/FLINK-8043
On Tue, Jan 16, 2018 at 8:35 AM, Timo Walther wrote:
> @Jincheng: Yes, I think we should include the two Table API PRs.
>
Thanks. I now have the 3 requirements fulfilled but the scheme isn't being
loaded; I get this error:
"Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException:
Could not find a file system implementation for scheme 'oci'. The scheme is not
directly supported by Flink and no Hadoop file system to support this scheme could be loaded."
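One thing worth checking here: Flink discovers file system factories through
java.util.ServiceLoader when the FileSystem classes are initialized, so the jar
containing the factory and its META-INF/services entry has to be on the classpath
of the JobManager/TaskManager processes (e.g. dropped into Flink's lib/ directory);
as far as I know a factory that only lives inside the user jar is not necessarily
picked up on a cluster. A quick diagnostic sketch (plain ServiceLoader, not part of
any Flink API) to see which schemes are visible:

    import java.util.ServiceLoader;

    import org.apache.flink.core.fs.FileSystemFactory;

    public class ListFileSystemFactories {
        public static void main(String[] args) {
            // prints every FileSystemFactory visible on the current classpath and its scheme
            for (FileSystemFactory factory : ServiceLoader.load(FileSystemFactory.class)) {
                System.out.println(factory.getScheme() + " -> " + factory.getClass().getName());
            }
        }
    }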
In fact, there are two S3FileSystemFactory classes, one for Hadoop and
another one for Presto.
In both cases an external file system class is wrapped in Flink's
HadoopFileSystem class [1] [2].
Best, Fabian
[1]
https://github.com/apache/flink/blob/master/flink-filesystems/flink-s3-fs-hadoop/src/ma
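To make that pattern concrete, here is a stripped-down sketch of such a factory,
following the shape of the Hadoop-based S3 factory (the oci names are placeholders,
shading and configuration plumbing are left out, and the interface and packages
shown are the Flink 1.4-era ones, so worth double-checking against the linked code):

    package com.example.fs.oci;   // placeholder package, matching the service descriptor above

    import java.io.IOException;
    import java.net.URI;

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.core.fs.FileSystem;
    import org.apache.flink.core.fs.FileSystemFactory;
    import org.apache.flink.runtime.fs.hdfs.HadoopFileSystem;

    public class OciFileSystemFactory implements FileSystemFactory {

        private final org.apache.hadoop.conf.Configuration hadoopConfig =
                new org.apache.hadoop.conf.Configuration();

        @Override
        public String getScheme() {
            return "oci";
        }

        @Override
        public void configure(Configuration config) {
            // forward relevant Flink options into hadoopConfig here if needed
        }

        @Override
        public FileSystem create(URI fsUri) throws IOException {
            // Let Hadoop resolve the implementation for the scheme (assumes an
            // fs.oci.impl entry in the Hadoop configuration); the S3 factories
            // instantiate their Hadoop file system class directly instead.
            org.apache.hadoop.fs.FileSystem hadoopFs =
                    org.apache.hadoop.fs.FileSystem.get(fsUri, hadoopConfig);
            // Wrap the external file system in Flink's HadoopFileSystem, as in [1].
            return new HadoopFileSystem(hadoopFs);
        }
    }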
Are there any more comments on the FLIP?
Otherwise, I'd suggest moving the FLIP to the accepted FLIPs [1] and
continue with the implementation.
Also, is there a committer who'd like to shepherd the FLIP and review the
corresponding PRs?
Of course, everybody is welcome to review the code, but we need a committer to
eventually merge the changes.