[ https://issues.apache.org/jira/browse/HIVE-12568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15046411#comment-15046411 ]
Lefty Leverenz commented on HIVE-12568:
---------------------------------------

Doc note: When this gets merged to master, *hive.spark.client.rpc.server.address* will need to be documented in the Spark section of Configuration Properties. For now, it has a TODOC-SPARK label.

Should *hive.spark.client.rpc.server.address* also be mentioned in Hive on Spark: Getting Started?

* Hive on Spark: Getting Started
** [Configuring Spark | https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started#HiveonSpark:GettingStarted-ConfiguringSpark]
** [Recommended Configuration | https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started#HiveonSpark:GettingStarted-RecommendedConfiguration]
* [Configuration Properties -- Spark | https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-Spark]

By the way, the parameter description doesn't have any line breaks (\n). Perhaps that could be corrected when another Spark parameter gets added.

> Provide an option to specify network interface used by Spark remote client [Spark Branch]
> ------------------------------------------------------------------------------------------
>
>                 Key: HIVE-12568
>                 URL: https://issues.apache.org/jira/browse/HIVE-12568
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.1.0
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>              Labels: TODOC-SPARK
>             Fix For: spark-branch
>
>         Attachments: HIVE-12568.0-spark.patch, HIVE-12568.1-spark.patch, HIVE-12568.2-spark.patch, HIVE-12568.2-spark.patch
>
>
> The Spark client sends a host name and port number pair to the remote driver so that the driver can connect back to HS2, where the user session is. The Spark client has its own way of determining the host name, and picks one network interface if the host happens to have multiple network interfaces. This can be problematic. For that, there is a parameter, hive.spark.client.server.address, with which a user can pick an interface. Unfortunately, this parameter isn't exposed.
> Instead of exposing this parameter, we can use the same logic as Hive for determining the host name. The remote driver would then connect to HS2 over the same network interface that an HS2 client would use.
> There might be a case where a user wants the remote driver to use a different network, but this is rare, if it happens at all. Thus, for now it should be sufficient to use the same network interface.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
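For illustration, a minimal sketch of the address-resolution behavior the description talks about: prefer an explicitly configured server address, otherwise fall back to the local host name (the "same logic as Hive" idea). It uses only the standard java.net API; the class and method names are hypothetical and are not Hive's actual implementation of *hive.spark.client.rpc.server.address*.

{code:java}
import java.net.InetAddress;
import java.net.UnknownHostException;

public class RpcServerAddressSketch {

  // Hypothetical helper: if the user configured an address (e.g. via
  // hive.spark.client.rpc.server.address), verify it resolves and use it;
  // otherwise fall back to the local host name, which is the interface
  // an HS2 client would normally reach.
  public static String resolveServerAddress(String configuredAddress)
      throws UnknownHostException {
    if (configuredAddress != null && !configuredAddress.isEmpty()) {
      return InetAddress.getByName(configuredAddress).getHostName();
    }
    return InetAddress.getLocalHost().getHostName();
  }

  public static void main(String[] args) throws UnknownHostException {
    // No configured address: the local host name is handed to the remote driver.
    System.out.println(resolveServerAddress(null));
  }
}
{code}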