Samra, as I was quickly looking at your code, I only saw the
ExecutionEnvironment for the read and not the StreamExecutionEnvironment
for the write. Glad to hear that this worked for batching. Like you, I am very
much a Flink beginner who just happened to have tried out the batch write to
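(As a minimal sketch of that distinction; the class names are the real Flink
APIs, but the bucket and paths are placeholders of mine, not from Samra's
actual code:)

import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3WriteSketch {
    public static void main(String[] args) throws Exception {
        // Batch job: the DataSet API uses ExecutionEnvironment.
        ExecutionEnvironment batch = ExecutionEnvironment.getExecutionEnvironment();
        batch.fromElements("a", "b", "c")
             .writeAsText("s3://my-bucket/batch-out");   // placeholder bucket/path
        batch.execute("batch write");

        // Streaming job: the DataStream API uses StreamExecutionEnvironment.
        StreamExecutionEnvironment streaming = StreamExecutionEnvironment.getExecutionEnvironment();
        streaming.fromElements("a", "b", "c")
                 .writeAsText("s3://my-bucket/stream-out");   // placeholder bucket/path
        streaming.execute("stream write");
    }
}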
Hi Markus,
Thanks! This was very helpful! I realize what the issue is now. I followed
what you did and I am able to write data to s3 if I do batch processing,
but not stream processing. Do you know what the difference is and why it
would work for one and not the other?
Sam
Sam, don't point the variables at files; point them at the directories
containing the files. Do you have the fs.s3.impl property defined?
Concrete example:
The /home/markus/hadoop-config directory has one file "core-site.xml" with
the following content:

<property>
  <name>fs.s3.impl</name>
  <value>org.apache.hadoop.f
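(The value is cut off above. A complete core-site.xml along these lines might
look like the sketch below; the concrete implementation class is an assumption
on my part, since the original message is truncated:)

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.s3.impl</name>
    <!-- Assumed implementation class; the original message is cut off here.
         S3AFileSystem ships with the hadoop-aws module. -->
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
</configuration>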
Hi Markus,
Thanks for your help. I created an environment variable FLINK_CONF_DIR in
IntelliJ to point to the flink-conf.yaml, and in it I defined
fs.hdfs.hadoopconf to point to the core-site.xml, but when I do that, I get
the error: java.io.IOException: No file system found with scheme s3,
refer
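(For reference, the fix Markus describes above amounts to pointing both
settings at directories rather than files. A sketch of the flink-conf.yaml
entry, reusing the example path from his message:)

# In flink-conf.yaml (FLINK_CONF_DIR must point at the directory that
# contains this file, not at the file itself):
fs.hdfs.hadoopconf: /home/markus/hadoop-config    # directory holding core-site.xml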
Sam, I just happened to answer a similar question on Stackoverflow: "Does
Apache Flink AWS S3 Sink require Hadoop for local testing?". I also submitted a
PR to make that a little clearer (for me, at least) in the Apache Flink
documentation (https://github.com/apache/flink/pull/3054/files).
Hi,
I am new to Flink and I've written two small test projects: 1) to read data
from s3 and 2) to push data to s3. However, I am getting two different
errors for the projects relating, I think, to how the core-site.xml file is
being read. I am running the projects locally in IntelliJ. I have the
en
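(The message is cut off above. For context, minimal versions of the two test
projects might look roughly like the sketch below; the bucket and paths are
placeholders of mine, not from the original code:)

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class S3ReadWriteTest {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Project 1: read data from s3 (placeholder bucket/path).
        DataSet<String> lines = env.readTextFile("s3://my-bucket/input");

        // Project 2: push data to s3 (placeholder bucket/path).
        lines.writeAsText("s3://my-bucket/output");

        env.execute("s3 read/write test");
    }
}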