Hi! I think to some extent this is expected. There is cleanup code that deletes files and then issues a delete request for the parent directory. It relies on the fact that HDFS only removes a directory if it is empty, i.e., after the last file in it has been deleted (see the sketch below).
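Roughly, the pattern looks like the following sketch against the plain Hadoop FileSystem API. The class name, helper method, and file path are made up for illustration; this is not the actual Flink cleanup code.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.fs.PathIsNotEmptyDirectoryException;

    // Hypothetical helper illustrating the cleanup pattern described above.
    public class ParentDirCleanup {

        static void deleteWithParent(FileSystem fs, Path file) throws IOException {
            // Remove the state file itself (non-recursive delete of a file).
            fs.delete(file, false);

            try {
                // Non-recursive delete of the parent: HDFS only accepts this
                // if the directory is empty, so whichever deleter removes the
                // last file also removes the directory.
                fs.delete(file.getParent(), false);
            } catch (PathIsNotEmptyDirectoryException e) {
                // Expected when other files remain; this is the exception
                // that shows up as INFO in the NameNode log.
            }
        }

        public static void main(String[] args) throws IOException {
            FileSystem fs = FileSystem.get(new Configuration());
            // Illustrative path only.
            deleteWithParent(fs, new Path("/flink/recovery/completedCheckpoint-0001"));
        }
    }

The nice property of this pattern is that it needs no extra bookkeeping or locking: the NameNode's emptiness check arbitrates which deleter actually removes the directory, and everyone else just gets the rejected request you see in the log.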
Is this a problem right now, or just a confusing behavior?

Greetings,
Stephan

On Tue, Oct 11, 2016 at 5:25 PM, static-max <flasha...@googlemail.com> wrote:

> Hi,
>
> I get many (multiple times per minute) errors in my NameNode HDFS logfile:
>
> 2016-10-11 17:17:07,596 INFO ipc.Server (Server.java:logException(2401))
> - IPC Server handler 295 on 8020, call
> org.apache.hadoop.hdfs.protocol.ClientProtocol.delete
> from datanode1:34872 Call#2361 Retry#0
> org.apache.hadoop.fs.PathIsNotEmptyDirectoryException: `/flink/recovery
> is non empty': Directory is not empty
>     at org.apache.hadoop.hdfs.server.namenode.FSDirDeleteOp.delete(FSDirDeleteOp.java:89)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3829)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:1071)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:619)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:640)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2313)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2309)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2307)
>
> That is the directory I configured for JobManager HA. I deleted it before
> starting the YARN session, but that did not help. The folder gets created
> by Flink without problems.
>
> I'm using the latest Flink master (commit 6731ec1) and built it for
> Hadoop 2.7.3.
>
> Any idea is highly appreciated. Thanks a lot!