Re: S3 for state backend in Flink 1.4.0

2018-06-01 Thread Stephan Ewen
> Thanks
> Hayden
>
> -----Original Message-----
> From: Edward Rojas [mailto:edward.roja...@gmail.com]
> Sent: Thursday, February 01, 2018 6:09 PM
> To: user@flink.apache.org
> Subject: RE: S3 for state backend in Flink 1.4.0
>
> Hi Hayden,
>
> It seems li…

RE: S3 for state backend in Flink 1.4.0

2018-02-07 Thread Marchant, Hayden
From: Edward Rojas [mailto:edward.roja...@gmail.com]
Sent: Thursday, February 01, 2018 6:09 PM
To: user@flink.apache.org
Subject: RE: S3 for state backend in Flink 1.4.0

Hi Hayden,

It seems like a good alternative. But I see it's intended to work with Spark; did you manage to get it working with Flink? I ran some tests but I get…

RE: S3 for state backend in Flink 1.4.0

2018-02-01 Thread Edward Rojas
Hi Hayden,

It seems like a good alternative. But I see it's intended to work with Spark; did you manage to get it working with Flink? I ran some tests but I get several errors when trying to create a file, either for checkpointing or saving data.

Thanks in advance,
Regards,
Edward

--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

RE: S3 for state backend in Flink 1.4.0

2018-02-01 Thread Marchant, Hayden
…r-the-fast-lane-connecting-object-stores-to-spark/

Good luck!!

-----Original Message-----
From: Edward Rojas [mailto:edward.roja...@gmail.com]
Sent: Wednesday, January 31, 2018 3:02 PM
To: user@flink.apache.org
Subject: RE: S3 for state backend in Flink 1.4.0

Hi,

We are having a similar problem…

Re: S3 for state backend in Flink 1.4.0

2018-01-31 Thread Edward Rojas
Hi Aljoscha,

Thinking a little bit more about this: although IBM Object Storage is compatible with Amazon's S3 API, it is not an eventually consistent file system but rather an immediately consistent one. So we won't need the support for eventually consistent file systems for our use case to work; we would only…

Re: S3 for state backend in Flink 1.4.0

2018-01-31 Thread Aljoscha Krettek
Hi,

Unfortunately not yet, though it's high on my personal list of stuff that I want to get resolved. It won't make it into 1.5.0, but I think 1.6.0.

Best,
Aljoscha

> On 31. Jan 2018, at 16:31, Edward Rojas wrote:
>
> Thanks Aljoscha. That makes sense.
> Do you have a more specific date for…

Re: S3 for state backend in Flink 1.4.0

2018-01-31 Thread Edward Rojas
Thanks Aljoscha. That makes sense.
Do you have a more specific date for the changes on BucketingSink and/or the PR to be released?

--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/

Re: S3 for state backend in Flink 1.4.0

2018-01-31 Thread Aljoscha Krettek
Hi Edward,

The problem here is that readTextFile() and writeAsText() use the Flink FileSystem abstraction underneath, which will pick up the s3 filesystem from opt. The BucketingSink, on the other hand, uses the Hadoop FileSystem abstraction directly, meaning that there has to be some Hadoop FileSystem…
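[Editor's note] The distinction Aljoscha describes can be sketched as a short job against the Flink 1.4 streaming API. The readTextFile()/writeAsText() calls resolve `s3://` paths through Flink's own FileSystem abstraction (and so use the shaded filesystem jar from opt/), while BucketingSink instantiates a Hadoop FileSystem directly. This is a minimal sketch, not a working deployment; the bucket name and paths are hypothetical.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

public class S3PathsSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // These go through Flink's FileSystem abstraction, so the shaded
        // flink-s3-fs-hadoop jar (copied from opt/ to lib/) handles "s3://".
        env.readTextFile("s3://my-bucket/input/")        // hypothetical bucket
           .writeAsText("s3://my-bucket/output/");

        // BucketingSink bypasses Flink's abstraction and asks Hadoop for a
        // FileSystem for the scheme directly -- it therefore needs a Hadoop
        // S3 implementation (and its configuration) on the classpath, which
        // is why it fails where the calls above succeed:
        // new BucketingSink<String>("s3://my-bucket/bucketed/");

        env.execute("s3 paths sketch");
    }
}
```

The commented-out BucketingSink line marks where the failure discussed in this thread would occur.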

RE: S3 for state backend in Flink 1.4.0

2018-01-31 Thread Edward Rojas
Hi,

We are having a similar problem when trying to use Flink 1.4.0 with IBM Object Storage for reading and writing data. We followed https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/aws.html and the suggestion on https://issues.apache.org/jira/browse/FLINK-851. We put…
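[Editor's note] An endpoint override of the kind discussed in this thread would look roughly like the following flink-conf.yaml fragment. This assumes the shaded flink-s3-fs-hadoop filesystem forwards `s3.`-prefixed keys to the underlying Hadoop s3a client; the endpoint host and credentials are placeholders, not values from the thread.

```yaml
# flink-conf.yaml -- sketch with placeholder values.
# Keys with the "s3." prefix are assumed to be forwarded to the shaded
# Hadoop s3a client as "fs.s3a." properties.
s3.endpoint: https://<your-object-storage-endpoint>
s3.access-key: <your-access-key>
s3.secret-key: <your-secret-key>
# Some S3-compatible stores require path-style access rather than
# virtual-hosted-style bucket addressing:
s3.path.style.access: true
```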

RE: S3 for state backend in Flink 1.4.0

2018-01-28 Thread Marchant, Hayden
…n [ICG-IT]
Cc: user@flink.apache.org
Subject: Re: S3 for state backend in Flink 1.4.0

Hi,

Did you try overriding that config and it didn't work? That dependency is in fact still using the Hadoop S3 FS implementation but is shading everything to our own namespace so that there can't be…

Re: S3 for state backend in Flink 1.4.0

2018-01-25 Thread Aljoscha Krettek
Hi,

Did you try overriding that config and it didn't work? That dependency is in fact still using the Hadoop S3 FS implementation but is shading everything to our own namespace so that there can't be any version conflicts. If that doesn't work then we need to look into this further. The way you…
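[Editor's note] The deployment step behind this exchange, per the Flink 1.4 docs referenced earlier in the thread: the shaded S3 filesystem jar ships in the distribution's opt/ directory and must be copied into lib/ before the runtime picks it up. The exact jar name and version below are illustrative.

```shell
# From the root of the Flink 1.4.0 distribution: make the shaded S3
# filesystem available to the runtime by copying it from opt/ to lib/.
cp ./opt/flink-s3-fs-hadoop-1.4.0.jar ./lib/
```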

S3 for state backend in Flink 1.4.0

2018-01-24 Thread Marchant, Hayden
Hi,

We have a Flink Streaming application that uses S3 for storing checkpoints. We are not using 'regular' S3, but rather IBM Object Storage, which has an S3-compatible connector. We had quite some challenges in overriding the endpoint from the default s3.amazonaws.com to our internal IBM Object Storage…
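[Editor's note] For context, the checkpoint configuration involved in this question is roughly the following flink-conf.yaml fragment under the Flink 1.4-era key names; the bucket path is a placeholder. The endpoint override (the part that proved difficult here) is discussed in the replies above.

```yaml
# flink-conf.yaml -- sketch with a placeholder bucket.
state.backend: filesystem
state.backend.fs.checkpointdir: s3://my-bucket/flink/checkpoints
```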