1996fanrui commented on code in PR #22028:
URL: https://github.com/apache/flink/pull/22028#discussion_r1123963326
########## flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java: ##########

```diff
@@ -1441,13 +1444,17 @@ public String getClusterDescription() {
                 totalMemory += res.getMemory();
                 totalCores += res.getVirtualCores();
                 ps.format(format, "NodeID", rep.getNodeId());
-                ps.format(format, "Memory", res.getMemory() + " MB");
+                ps.format(format, "Memory", getDisplayMemory(res.getMemory()));
```

Review Comment:
I prefer using Flink's `MemorySize` instead of Hadoop's `StringUtils.byteDesc`, because Flink already has a similar feature. WDYT?

```suggestion
                ps.format(format, "Memory", MemorySize.ofMebiBytes(res.getMemory()).toHumanReadableString());
```

And I have a demo here.

########## flink-yarn/src/test/java/org/apache/flink/yarn/YarnClusterDescriptorTest.java: ##########

```diff
@@ -912,4 +913,17 @@ private Map<String, String> getTestMasterEnv(
                 appId.toString());
         }
     }
+
+    @Test
+    public void testByteDesc() {
+        long bytesInMB = 1024 * 1024;
+        // 128 MB
+        assertThat(StringUtils.byteDesc(bytesInMB * 128)).isEqualTo("128 MB");
+        // 512 MB
+        assertThat(StringUtils.byteDesc(bytesInMB * 512)).isEqualTo("512 MB");
+        // 1 GB
+        assertThat(StringUtils.byteDesc(bytesInMB * 1024)).isEqualTo("1 GB");
+        // 128 GB
+        assertThat(StringUtils.byteDesc(bytesInMB * 131072)).isEqualTo("128 GB");
+    }
 }
```

Review Comment:
A Flink unit test shouldn't verify the behavior of other services, such as YARN, HDFS, ZooKeeper, or the JDK. Flink unit tests should verify behavior on the Flink side.
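For illustration, a minimal, self-contained sketch of the `MemorySize`-based formatting the first comment suggests. The class name and the sample container sizes (128 MiB and 1 GiB) are assumptions for the demo, not values from the PR:

```java
import org.apache.flink.configuration.MemorySize;

// Hypothetical demo class; only MemorySize itself comes from Flink.
public class MemorySizeDemo {

    public static void main(String[] args) {
        // YARN reports container memory as a mebibyte count, so
        // MemorySize.ofMebiBytes is the natural entry point.
        MemorySize small = MemorySize.ofMebiBytes(128);
        MemorySize large = MemorySize.ofMebiBytes(1024);

        // toHumanReadableString() chooses a suitable unit automatically,
        // so callers don't need to hard-code a " MB" suffix.
        System.out.println(small.toHumanReadableString());
        System.out.println(large.toHumanReadableString());
    }
}
```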
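Following the second comment's reasoning, a Flink-side counterpart of `testByteDesc` would pin down Flink's own `MemorySize` behavior rather than Hadoop's `StringUtils`. A sketch with a hypothetical test class name, using assertions on byte arithmetic and parsing rather than on the exact human-readable string (whose formatting may vary across Flink versions):

```java
import static org.assertj.core.api.Assertions.assertThat;

import org.apache.flink.configuration.MemorySize;
import org.junit.Test;

// Hypothetical test class; it exercises only Flink's own MemorySize.
public class MemorySizeFormattingTest {

    @Test
    public void testMemorySizeOnFlinkSide() {
        MemorySize size = MemorySize.ofMebiBytes(128);

        // Byte arithmetic is a Flink-side contract that is safe to assert.
        assertThat(size.getBytes()).isEqualTo(128L * 1024 * 1024);
        assertThat(size.getMebiBytes()).isEqualTo(128);

        // Parsing the shorthand "128m" should yield the same value.
        assertThat(MemorySize.parse("128m")).isEqualTo(size);
    }
}
```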