[ https://issues.apache.org/jira/browse/HIVE-20969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16699343#comment-16699343 ]

Sahil Takiar commented on HIVE-20969:
-------------------------------------

The intention of HIVE-19008 was to simplify the session id logic in HoS. Before 
HIVE-19008, the HoS session id was a UUID that was completely independent of 
the Hive session id. After HIVE-19008, the HoS session id is a counter that is 
incremented for each new Spark session created for a given Hive session.

{quote} I would assume that it would be good to connect the spark session to 
the hive session in every log message so it would be good if the sparkSessionId 
would contain the hive session id too. {quote}

Adding the Hive session id to the Spark session id sounds like a reasonable 
idea to me. Logically, that is what HIVE-19008 already does: after HIVE-19008, 
any Spark session is globally identifiable by the Hive session id plus the 
Spark session id. Again, prior to HIVE-19008 the sparkSessionId was a UUID 
that was independent of the Hive session id.
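
For illustration, the counter-based scheme described above could be sketched 
roughly like this (class and method names here are purely illustrative, not 
Hive's actual API):

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the post-HIVE-19008 scheme: each Hive session
// holds its own counter, so a Spark session is globally identified by
// hiveSessionId + "_" + sparkSessionId.
class SparkSessionIdSketch {
    private final String hiveSessionId;
    private final AtomicInteger counter = new AtomicInteger(0);

    SparkSessionIdSketch(String hiveSessionId) {
        this.hiveSessionId = hiveSessionId;
    }

    // Counter increments for each new Spark session within this Hive session.
    String nextSparkSessionId() {
        return hiveSessionId + "_" + counter.getAndIncrement();
    }
}
{code}

Under this scheme two Hive sessions can never collide on an HDFS upload path, 
since the Hive session id prefix keeps their Spark session ids distinct.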

> HoS sessionId generation can cause race conditions when uploading files to 
> HDFS
> -------------------------------------------------------------------------------
>
>                 Key: HIVE-20969
>                 URL: https://issues.apache.org/jira/browse/HIVE-20969
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 4.0.0
>            Reporter: Peter Vary
>            Assignee: Peter Vary
>            Priority: Major
>
> The observed exception is:
> {code}
> Caused by: java.io.FileNotFoundException: File does not exist: 
> /tmp/hive/_spark_session_dir/0/hive-exec-2.1.1-SNAPSHOT.jar (inode 21140) 
> [Lease.  Holder: DFSClient_NONMAPREDUCE_304217459_39, pending creates: 1]
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2781)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.analyzeFileState(FSDirWriteFileOp.java:599)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.validateAddBlock(FSDirWriteFileOp.java:171)
>       at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2660)
>       at 
> org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:872)
>       at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:550)
>       at 
> org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>       at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
>       at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
>       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:869)
>       at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:815)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:422)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
>       at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2675)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
