Hi Ufuk,
It seems I messed it up a bit :)
I cannot comment on jira, since it's temporarily locked...
1. org.apache.hadoop.fs.PathIsNotEmptyDirectoryException:
`/flink/checkpoints_test/570d6e67d571c109daab468e5678402b/chk-62 is non
empty': Directory is not empty - this seems to be expected behavior
thanks,
I'll try to reproduce it in some test by myself...
maciek
On 12/05/2016 18:39, Ufuk Celebi wrote:
The issue is here: https://issues.apache.org/jira/browse/FLINK-3902
(My "explanation" before doesn't actually make sense and I don't see a
reason why this should be related to having many state handles.)
On Thu, May 12, 2016 at 3:54 PM, Ufuk Celebi wrote:
Hey Maciek,
thanks for reporting this. Having files linger around looks like a bug to me.
The idea behind having the recursive flag set to false in the
AbstractFileStateHandle.discardState() call is that the
FileStateHandle is actually just a single file and not a directory.
The second call then tries to delete the parent checkpoint directory,
which can fail while other state files are still in it...
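To make the described behaviour concrete, here is a minimal sketch of that discard pattern using plain java.nio (a hypothetical mirror of the logic, not Flink's actual FileStateHandle code): delete the single state file, then attempt a non-recursive delete of the parent checkpoint directory, which only succeeds once the directory is empty.

```java
import java.io.IOException;
import java.nio.file.DirectoryNotEmptyException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DiscardSketch {

    // Hypothetical mirror of the discard logic: remove the state file,
    // then try a non-recursive delete of its parent directory.
    // Returns true if the parent directory was removed as well.
    static boolean discardState(Path stateFile) throws IOException {
        Files.deleteIfExists(stateFile);
        try {
            // Non-recursive, like the recursive=false flag discussed above:
            // only succeeds if no other state handles remain in the directory.
            Files.delete(stateFile.getParent());
            return true;
        } catch (DirectoryNotEmptyException e) {
            // Other handles are still present; leave the directory in place.
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        Path chk = Files.createTempDirectory("chk-62");
        Path a = Files.createFile(chk.resolve("handle-a"));
        Path b = Files.createFile(chk.resolve("handle-b"));
        System.out.println(discardState(a)); // handle-b remains, so false
        System.out.println(discardState(b)); // directory now empty, so true
    }
}
```

Under this sketch, the last handle to be discarded is the one that removes the checkpoint directory; a leftover directory would mean some handle was never discarded.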
Hi,
we have a stream job with quite a large state (a few GB); we're using
FSStateBackend and we're storing checkpoints in HDFS.
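For reference, a setup like this is typically configured along these lines in flink-conf.yaml (the exact keys are an assumption here and vary by Flink version; the path matches the one in the exception above):

```yaml
# Use the filesystem state backend and keep checkpoints in HDFS
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs:///flink/checkpoints_test
```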
What we observe is that very often old checkpoints are not discarded
properly. In the Hadoop logs I can see:
2016-05-10 12:21:06,559 INFO BlockStateChange: BLOCK* addToInvalidat