[ https://issues.apache.org/jira/browse/FLINK-6020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16010227#comment-16010227 ]
ASF GitHub Bot commented on FLINK-6020:
---------------------------------------

Github user StefanRRichter commented on a diff in the pull request:

    https://github.com/apache/flink/pull/3888#discussion_r116446914

    --- Diff: flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobServerConnection.java ---
    @@ -235,8 +235,54 @@ else if (contentAddressable == CONTENT_ADDRESSABLE) {
            return;
        }

    -   // from here on, we started sending data, so all we can do is close the connection when something happens
    +   readLock.lock();
    +
    +   try {
    +       try {
    +           if (!blobFile.exists()) {
    +               // first we have to release the read lock in order to acquire the write lock
    +               readLock.unlock();
    +               writeLock.lock();
    --- End diff --

    In between upgrading from the read lock to the write lock, multiple threads can reach this point and, as far as I can see, a file can then be written more often than required. I assume the code still produces a correct result, but it could do duplicate work. An obvious fix would be to re-check `blobFile.exists()` under the write lock, but I am not sure whether the cost of another metadata query per write would offset the occasional, though unlikely, duplicate work.

> Blob Server cannot handle multiple job submits (with same content) parallelly
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-6020
>                 URL: https://issues.apache.org/jira/browse/FLINK-6020
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Distributed Coordination
>            Reporter: Tao Wang
>            Assignee: Till Rohrmann
>            Priority: Critical
>
> In yarn-cluster mode, if we submit the same job multiple times in parallel,
> the tasks will encounter class-loading problems and lease occupation.
> The blob server stores user jars under names generated from the SHA-1 sum of
> their contents: it first writes a temp file and then moves it to its final
> name. For recovery it also puts the files to HDFS under the same file name.
> At the same time, when multiple clients submit the same job with the same jar,
> the local jar files in the blob server and the files on HDFS are handled by
> multiple threads (BlobServerConnection) and interfere with each other.
> It would be better to have a way to handle this; two ideas come to mind:
> 1. lock the write operation, or
> 2. use some unique identifier as the file name instead of (or in addition to)
> the sha1sum of the file contents.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
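The re-check suggested in the review comment above can be sketched as a small self-contained example. This is not the actual BlobServerConnection code: the class, the `writeCount` counter, and the `createNewFile()` stand-in for the real blob write are hypothetical, and only the `blobFile.exists()` check, the read/write locks, and the lock-upgrade sequence mirror the diff. Since `ReentrantReadWriteLock` cannot upgrade a read lock in place, the read lock is released first, and the file's existence is checked again once the write lock is held, so threads that raced through the upgrade window skip the duplicate write.

```java
import java.io.File;
import java.io.IOException;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class BlobWriteSketch {
    private static final ReadWriteLock lock = new ReentrantReadWriteLock();
    static int writeCount = 0; // counts actual writes, for demonstration only

    static void fetchOrWrite(File blobFile) throws IOException {
        lock.readLock().lock();
        try {
            if (!blobFile.exists()) {
                // Upgrade: release the read lock before taking the write lock
                // (ReentrantReadWriteLock does not support in-place upgrade).
                lock.readLock().unlock();
                lock.writeLock().lock();
                try {
                    // Re-check under the write lock: another thread may have
                    // created the file while we were between the two locks.
                    if (!blobFile.exists()) {
                        blobFile.createNewFile(); // stand-in for the real blob write
                        writeCount++;
                    }
                } finally {
                    // Downgrade: reacquire the read lock before releasing the
                    // write lock, so the outer finally stays balanced.
                    lock.readLock().lock();
                    lock.writeLock().unlock();
                }
            }
            // ... serve the file contents while holding the read lock ...
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("blob-sketch", ".bin");
        f.delete(); // start from a missing file
        fetchOrWrite(f); // performs the write
        fetchOrWrite(f); // sees the file under the read lock, skips the write
        System.out.println("writes=" + writeCount);
        f.delete();
    }
}
```

The extra `exists()` call per write is the metadata-query cost the reviewer weighs against the unlikely duplicate work; either outcome is correct, so this is purely a performance trade-off.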