ZanderXu commented on code in PR #6591:
URL: https://github.com/apache/hadoop/pull/6591#discussion_r1507313404


##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java:
##########
@@ -1182,10 +1182,12 @@ public void run() {
             if (begin != null) {
               long duration = Time.monotonicNowNanos() - begin;
             if (TimeUnit.NANOSECONDS.toMillis(duration) > dfsclientSlowLogThresholdMs) {
-                LOG.info("Slow ReadProcessor read fields for block " + block
+                final String msg = "Slow ReadProcessor read fields for block " + block
                     + " took " + TimeUnit.NANOSECONDS.toMillis(duration) + "ms (threshold="
                     + dfsclientSlowLogThresholdMs + "ms); ack: " + ack
-                    + ", targets: " + Arrays.asList(targets));
+                    + ", targets: " + Arrays.asList(targets);
+                LOG.warn(msg);
+                throw new IOException(msg);

Review Comment:
   Thanks @xleoken for involving me.
   
   The problem you reported should indeed be fixed, but I don't think this modification is a good solution.
   
   Maybe DataStreamer can identify this case and recover from it through PipelineRecovery. But there are two questions that should be confirmed first:
   
   - How can this case be identified?
   - Which datanode should be marked as a bad or slow DN?
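   
   As a rough illustration of both questions, here is a hedged, standalone sketch (not Hadoop's actual API; `SlowAckTracker`, `maxSlowAcks`, and the reset-on-fast-ack policy are all assumptions for discussion): count consecutive slow acks per datanode against the existing `dfsclientSlowLogThresholdMs`-style threshold, and only once a DN repeatedly exceeds it, nominate it for exclusion before triggering pipeline recovery.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   // Hypothetical sketch only -- not DataStreamer's real implementation.
   // Tracks consecutive slow acks per datanode so a single slow ack does
   // not trigger recovery, but a persistently slow DN can be identified.
   public class SlowAckTracker {
       private final long thresholdMs;   // analogous to dfsclientSlowLogThresholdMs
       private final int maxSlowAcks;    // assumed limit before recovery kicks in
       private final Map<String, Integer> slowCounts = new HashMap<>();
   
       public SlowAckTracker(long thresholdMs, int maxSlowAcks) {
           this.thresholdMs = thresholdMs;
           this.maxSlowAcks = maxSlowAcks;
       }
   
       /** Record one ack round-trip; returns true once a DN crosses the limit. */
       public boolean recordAck(String datanode, long durationMs) {
           if (durationMs <= thresholdMs) {
               slowCounts.remove(datanode);   // a fast ack resets the streak
               return false;
           }
           int n = slowCounts.merge(datanode, 1, Integer::sum);
           return n >= maxSlowAcks;           // candidate for pipeline recovery
       }
   
       /** The DN with the longest current slow-ack streak: mark it bad/slow. */
       public String slowestDatanode() {
           return slowCounts.entrySet().stream()
               .max(Map.Entry.comparingByValue())
               .map(Map.Entry::getKey)
               .orElse(null);
       }
   
       public static void main(String[] args) {
           SlowAckTracker t = new SlowAckTracker(30000, 3);
           t.recordAck("dn1", 10);       // fast ack, ignored
           t.recordAck("dn2", 40000);    // slow ack #1
           t.recordAck("dn2", 45000);    // slow ack #2
           boolean recover = t.recordAck("dn2", 50000);  // slow ack #3
           System.out.println(recover + " " + t.slowestDatanode());
           // prints "true dn2"
       }
   }
   ```
   
   The reset-on-fast-ack choice is deliberate in this sketch: it distinguishes a transient GC pause (one slow ack) from a genuinely degraded DN, which is the identification problem raised above.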



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

