zhuzhurk commented on code in PR #24747:
URL: https://github.com/apache/flink/pull/24747#discussion_r1593457699


##########
flink-filesystems/flink-oss-fs-hadoop/src/main/java/org/apache/flink/fs/osshadoop/writer/OSSCommitter.java:
##########
@@ -75,8 +75,7 @@ public void commitAfterRecovery() throws IOException {
                 ossAccessor.completeMultipartUpload(objectName, uploadId, partETags);
             } catch (Exception e) {
                 LOG.info(
-                        "Failed to commit after recovery {} with multi-part upload ID {}. "
-                                + "exception {}",
+                        "Failed to commit after recovery {} with multi-part upload ID {}. ",

Review Comment:
   Better to remove the trailing space.
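   A minimal sketch of what the call could look like without the trailing space. The argument list is assumed from the surrounding diff (objectName, uploadId, and the caught exception), so treat it as illustrative rather than the exact code in the PR:
   ```java
   // Sketch only: two placeholders, no trailing space, exception passed last
   // so SLF4J also logs its stack trace.
   LOG.info(
           "Failed to commit after recovery {} with multi-part upload ID {}.",
           objectName,
           uploadId,
           e);
   ```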



##########
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:
##########
@@ -1427,7 +1427,9 @@ private static String encodeYarnLocalResourceDescriptorListToString(
      */
     private void failSessionDuringDeployment(
             YarnClient yarnClient, YarnClientApplication yarnApplication) {
-        LOG.info("Killing YARN application");
+        LOG.info(

Review Comment:
   See my other comment nearby. I think this enrichment is not necessary.



##########
flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java:
##########
@@ -1792,6 +1794,8 @@ public void run() {
                 if (!fs.delete(yarnFilesDir, true)) {
                     throw new IOException(
                             "Deleting files in " + yarnFilesDir + " was 
unsuccessful");
+                } else {
+                    LOG.info("Deleting files in {} was successful", yarnFilesDir);

Review Comment:
   There is no need to log an info message when no error occurs.
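   For reference, a sketch with only the failure path kept (essentially the pre-change code shown in the diff):
   ```java
   // Only the failure case needs to be surfaced; success does not warrant an info log.
   if (!fs.delete(yarnFilesDir, true)) {
       throw new IOException(
               "Deleting files in " + yarnFilesDir + " was unsuccessful");
   }
   ```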



##########
flink-runtime/src/main/java/org/apache/flink/runtime/blob/BlobClient.java:
##########
@@ -130,7 +130,12 @@ static void downloadFromBlobServer(
             throws IOException {
 
         final byte[] buf = new byte[BUFFER_SIZE];
-        LOG.info("Downloading {}/{} from {}", jobId, blobKey, serverAddress);
+        LOG.info(
+                "Downloading {}/{} from {} (retry {})",

Review Comment:
   I think the `retry` is not needed here. If a retry happens, the retry attempt is already logged (in the code below).
   Besides that, `numFetchRetries` is not the current retry attempt but the maximum retry count.
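   To illustrate the point, here is a hypothetical retry loop. The names `RetryLoggingSketch`, `downloadOnce`, and `fetchWithRetries` are made up for this sketch and are not the actual `BlobClient` code; the idea is that the current attempt is logged when a retry actually happens, while `numFetchRetries` only bounds the loop.
   ```java
   import java.io.IOException;

   /** Hypothetical sketch, not the actual BlobClient implementation. */
   class RetryLoggingSketch {

       /** Placeholder for a single download attempt. */
       static void downloadOnce() throws IOException {
           throw new IOException("simulated failure");
       }

       /**
        * Logs the current attempt number on each retry; numFetchRetries is
        * only the configured maximum, not the attempt counter.
        */
       static void fetchWithRetries(int numFetchRetries) throws IOException {
           for (int attempt = 0; attempt <= numFetchRetries; attempt++) {
               try {
                   downloadOnce();
                   return;
               } catch (IOException e) {
                   if (attempt >= numFetchRetries) {
                       throw e;
                   }
                   System.out.printf(
                           "Download failed, retrying (attempt %d of %d)%n",
                           attempt + 1, numFetchRetries);
               }
           }
       }
   }
   ```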



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
