Re: [DISCUSS] Strong read-after-write consistency of Flink FileSystems

2021-12-14 Thread David Morávek
Any other thoughts on the topic? If there are no concerns, I'd proceed with creating a FLIP to change the "written" contract of the Flink FileSystems to reflect this. Best, D. On Wed, Dec 8, 2021 at 5:53 PM David Morávek wrote: > Hi Martijn, > > I simply wasn't aware of that one :) It seem

Re: [DISCUSS] Strong read-after-write consistency of Flink FileSystems

2021-12-08 Thread David Morávek
Hi Martijn, I simply wasn't aware of that one :) It seems to provide the guarantees that we need [1]. > Of course, Azure Storage is built on a platform grounded in strong > consistency guaranteeing that writes are made durable before acknowledging > success to the client. This is critically i

Re: [DISCUSS] Strong read-after-write consistency of Flink FileSystems

2021-12-08 Thread Martijn Visser
Hi David, Just to be sure, since you've already included Azure Blob Storage, but did you deliberately skip Azure Data Lake Store Gen2? That's currently supported and also used by Flink users [1]. There's also MapR FS, but I doubt if that is still used. Best regards, [1] https://nightlies.apache.

[DISCUSS] Strong read-after-write consistency of Flink FileSystems

2021-12-06 Thread David Morávek
Hi Everyone, as outlined in the FLIP-194 discussion [1], for the future directions of Flink HA services I'd like to verify my thoughts around the guarantees of the distributed filesystems used with Flink. Currently, some of the services (*JobGraphStore*, *CompletedCheckpointStore*) are implemented using
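
To make the discussed guarantee concrete, the following is a minimal sketch against the org.apache.flink.core.fs.FileSystem API. The probe path and payload are purely illustrative and not part of the proposal; the snippet only spells out what "strong read-after-write consistency" would require of an implementation: once create()/close() has returned successfully, an immediate open() of the same path must observe the complete, just-written data.

import org.apache.flink.core.fs.FSDataInputStream;
import org.apache.flink.core.fs.FSDataOutputStream;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.FileSystem.WriteMode;
import org.apache.flink.core.fs.Path;

import java.nio.charset.StandardCharsets;

public class ReadAfterWriteProbe {

    public static void main(String[] args) throws Exception {
        // Illustrative path only; any FileSystem implementation (S3, GCS, ABFS, ...)
        // registered for the scheme could back it.
        Path path = new Path("file:///tmp/flink-ha/read-after-write-probe");
        FileSystem fs = path.getFileSystem();

        byte[] payload = "job-graph-bytes".getBytes(StandardCharsets.UTF_8);

        // Write and close the stream; under the discussed contract the data must be
        // visible to readers as soon as close() returns successfully.
        try (FSDataOutputStream out = fs.create(path, WriteMode.OVERWRITE)) {
            out.write(payload);
        }

        // Strong read-after-write consistency: an immediate read must see the full
        // content, with no "eventually consistent" window.
        byte[] readBack = new byte[payload.length];
        try (FSDataInputStream in = fs.open(path)) {
            int off = 0;
            while (off < readBack.length) {
                int n = in.read(readBack, off, readBack.length - off);
                if (n < 0) {
                    throw new IllegalStateException("File visible but shorter than written");
                }
                off += n;
            }
        }
    }
}

Under an only eventually consistent store, the open() above could fail with a FileNotFoundException or return stale or partial data, which is exactly what filesystem-backed HA services such as the JobGraphStore and CompletedCheckpointStore cannot tolerate.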