I have backported this at
https://github.com/instructure/flink/tree/s3_recover_backport by
cherry-picking all the relevant code. I am not sure how backports are
usually done with Flink (whether you squash and merge), but there were a
few minor conflicts, and it involved quite a few changes from master.

Going to try with my branch tomorrow and will report any issues.
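
For reference, the sink construction that hits this error looks roughly
like the snippet below. This is only a minimal sketch: the bucket name,
the output path, and the surrounding DataStream ("stream") are
placeholders, not our actual job.

    // Sketch only: "stream" stands in for an existing DataStream<String>,
    // and the s3:// path is a placeholder bucket.
    import org.apache.flink.api.common.serialization.SimpleStringEncoder;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

    StreamingFileSink<String> sink = StreamingFileSink
        .forRowFormat(new Path("s3://my-bucket/output"),
                      new SimpleStringEncoder<String>("UTF-8"))
        .build();

    stream.addSink(sink);

The exception is thrown from HadoopRecoverableWriter as soon as the sink
initializes its state, which matches the stack trace quoted below.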



On Tue, Oct 30, 2018 at 8:44 PM Mike Mintz <mikemi...@gmail.com> wrote:

> FWIW, I also tried this on Flink 1.6.2 today and got the same error. Here
> is my full stack trace:
>
> java.lang.UnsupportedOperationException: Recoverable writers on Hadoop
> are only supported for HDFS and for Hadoop version 2.7 or newer
>         at org.apache.flink.fs.s3hadoop.shaded.org.apache.flink.runtime.fs.hdfs.HadoopRecoverableWriter.<init>(HadoopRecoverableWriter.java:57)
>         at org.apache.flink.fs.s3hadoop.shaded.org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.createRecoverableWriter(HadoopFileSystem.java:202)
>         at org.apache.flink.core.fs.SafetyNetWrapperFileSystem.createRecoverableWriter(SafetyNetWrapperFileSystem.java:69)
>         at org.apache.flink.streaming.api.functions.sink.filesystem.Buckets.<init>(Buckets.java:111)
>         at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink$RowFormatBuilder.createBuckets(StreamingFileSink.java:242)
>         at org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.initializeState(StreamingFileSink.java:327)
>         at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
>         at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
>         at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
>         at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:277)
>         at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:738)
>         at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:289)
>         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:711)
>         at java.lang.Thread.run(Thread.java:748)
>
>
> On Tue, Oct 30, 2018 at 4:12 PM Till Rohrmann <trohrm...@apache.org>
> wrote:
>
> > Hi Addison,
> >
> > I think the idea was to also backport this feature to 1.6 since we
> > considered it a bug that S3 was not supported in 1.6. I've pulled in
> > Kostas who worked on the S3 writer. @Klou did we intentionally not
> > backport this feature?
> >
> > I think there should be nothing special about backporting this feature
> > and building your own version of Flink.
> >
> > Cheers,
> > Till
> >
> > On Tue, Oct 30, 2018 at 10:54 PM Addison Higham <addis...@gmail.com>
> > wrote:
> >
> > > Hi all,
> > >
> > > Been hitting my head against a wall for the last few hours. The release
> > > notes for 1.6.2 show https://issues.apache.org/jira/browse/FLINK-9752
> > > as resolved in 1.6.2. I am trying to upgrade and switch some things to
> > > use the StreamingFileSink against s3. However, when doing so, I get the
> > > following error:
> > >
> > > Recoverable writers on Hadoop are only supported for HDFS and for
> > > Hadoop version 2.7 or newer
> > >
> > >
> > > I started digging into the code, and I don't see the changes from
> > > https://github.com/apache/flink/pull/6795 as having been backported to
> > > 1.6.2.
> > >
> > > Was the Fix Version erroneous? If so, are there plans to backport it? If
> > > not, it seems like that should be fixed in the release notes.
> > >
> > > I have been waiting for that functionality, and we already build our own
> > > Flink, so I am tempted to backport it onto 1.6.2... is there anything
> > > tricky about that?
> > >
> > > Thanks!
> > >
> >
>
