Hi,

I am reading messages off a Kafka topic and want to process the messages
through Flink and save them into S3. It was pointed out to me that the
processed Kafka stream can't be written to S3 incrementally, because S3
doesn't allow data to be appended to an existing file, so I want to convert
the Kafka stream into batches that can be written out as complete files.
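One way to do this is a rolling/bucketing file sink, which writes the
stream to S3 as a series of finite part files instead of appending to a
single object. Below is a minimal sketch, assuming the Flink 1.2-era
BucketingSink (flink-connector-filesystem) and the 0.9 Kafka connector;
the topic name, bucket path, and Kafka properties are placeholders, not
values from this thread:

import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.flink.streaming.connectors.fs.bucketing.DateTimeBucketer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaToS3 {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder Kafka connection settings.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "flink-s3-test");

        DataStream<String> events = env.addSource(
                new FlinkKafkaConsumer09<>("my-topic", new SimpleStringSchema(), props));

        // The sink rolls the stream into finite part files (one bucket per
        // hour here), so nothing is ever appended to an existing S3 object.
        BucketingSink<String> sink = new BucketingSink<>("s3://my-bucket/flink-output");
        sink.setBucketer(new DateTimeBucketer<String>("yyyy-MM-dd--HH"));
        sink.setBatchSize(64 * 1024 * 1024); // start a new part file every 64 MB

        events.addSink(sink);
        env.execute("kafka-to-s3");
    }
}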
> [...] everything is
> pointing to directories where the code looks for well-known filenames.
>
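The truncated part presumably refers to Flink's Hadoop configuration
lookup: in Flink 1.x, the fs.hdfs.hadoopconf entry in flink-conf.yaml
points Flink at the directory holding the well-known Hadoop files such as
core-site.xml. A one-line sketch (the path is a placeholder):

fs.hdfs.hadoopconf: /path/to/dir-containing-core-site.xml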
> With that, the following works to write to S3 (maybe load the events
> from a collection at first to keep the test simple):
>
> events.writeAsText("s3:///")
>
> env.execute()
>
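Spelled out as a complete snippet, that suggestion might look like the
following; the bucket path and the sample events are placeholders rather
than values from the thread:

import java.util.Arrays;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class WriteToS3Test {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Load a few events from a collection first, as suggested, so the
        // S3 write can be tested without Kafka in the loop.
        DataStream<String> events = env.fromCollection(
                Arrays.asList("event-1", "event-2", "event-3"));

        events.writeAsText("s3://my-bucket/test-output");

        env.execute("write-to-s3-test");
    }
}

With a bounded source like fromCollection the job finishes and the output
files are closed, which sidesteps the append problem for a quick test.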
> On Wednesday, J[...]
> [...]1388003/does-apache-flink-aws-s3-sink-require-hadoop-for-local-testing>
>
> Let me know if that works for you.
>
> Thanks,
> Markus
>
>
> On Tuesday, January 10, 2017 3:17 PM, Samra Kasim
> <samra.ka...@thehumangeo.com> wrote:
>
>
> Hi,
>
> I am new to Flink and I've written two small test projects: 1) to read
> data from S3 and 2) to push data to S3. However, I am getting two
> different errors for the projects relating to, I think, how the
> core-site.xml file is being read. I am running the project locally in
> IntelliJ. I have the en[...]
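For reference, the core-site.xml that both test projects depend on
typically looks something like this for S3 access through the s3a
filesystem (the property names are standard Hadoop ones; the key values
are placeholders):

<configuration>
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>YOUR_ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>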