cnauroth commented on code in PR #50173:
URL: https://github.com/apache/spark/pull/50173#discussion_r1982628947


##########
resource-managers/yarn/src/test/scala/org/apache/spark/deploy/yarn/AmIpFilterSuite.scala:
##########
@@ -162,8 +162,10 @@ class AmIpFilterSuite extends SparkFunSuite {
     assert(!filter.getProxyAddresses.isEmpty)
     // waiting for configuration update
     eventually(timeout(5.seconds), interval(500.millis)) {
-      assertThrows[ServletException] {
+      try {
         filter.getProxyAddresses.isEmpty

Review Comment:
   No, there are two possible outcomes, and both must be considered valid for 
the assertion.
   
   1) "unknownhost" does not exist on the network. `AmIpFilter` detects this as an `UnknownHostException`, so the end result is an empty set of viable proxy addresses, and `AmIpFilter` then throws `ServletException`.
   
   2) "unknownhost" does exist on the network and does resolve through DNS, but there is no routable path to the host. In my case, as mentioned in the original description, that's because I only have an IPv4 address while the "unknownhost" on my network only has an IPv6 address. In this case, no exception is thrown, and the expectation is for `getProxyAddresses()` to return an empty set.
   
   This logic is perhaps a little unexpected, but I believe the intent of [SPARK-48238](https://issues.apache.org/jira/browse/SPARK-48238) was to match the pre-existing behavior of Hadoop.
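   To illustrate, the either/or expectation could be written roughly as below. This is a hypothetical, self-contained sketch, not the PR's actual test code: `getProxyAddresses` here is a stub standing in for `AmIpFilter.getProxyAddresses`, and the local `ServletException` class stands in for `javax.servlet.ServletException` so the snippet has no external dependencies.

```scala
// Hypothetical stand-in for javax.servlet.ServletException, so this
// sketch compiles without the servlet API on the classpath.
class ServletException(msg: String) extends Exception(msg)

// Stub modeling AmIpFilter.getProxyAddresses: it either throws
// ServletException (outcome 1: the host does not resolve at all) or
// returns an empty set (outcome 2: the host resolves but is unroutable).
def getProxyAddresses(hostResolves: Boolean): Set[String] =
  if (hostResolves) Set.empty[String]
  else throw new ServletException("Could not locate any viable proxies")

// The assertion must accept both outcomes as success: either an empty
// result comes back, or a ServletException is thrown.
def proxyLookupBehavesAsExpected(hostResolves: Boolean): Boolean =
  try {
    getProxyAddresses(hostResolves).isEmpty
  } catch {
    case _: ServletException => true // also a valid outcome
  }
```

   Either branch yields success, so an `eventually` block wrapping such a check would pass in both network environments.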



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

