[ https://issues.apache.org/jira/browse/HIVE-5523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13947105#comment-13947105 ]

Sushanth Sowmyan commented on HIVE-5523:
----------------------------------------

Thanks!

The reason I flipped the input/output logic is that Hive always executes the
input part, whereas HCat executes the input path for input and the output path
for output. For HCat, then, it makes no difference whether we check for input
or for output. Hive has no concept of input versus output, so it executes a
"default" path that is the same as input. It therefore felt a bit cleaner to
treat output as a special case that defaults to false than to treat input as a
selector that defaults to true. I'm still not completely happy with it, to be
honest.
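
To make the trade-off concrete, here is a rough sketch of the shape I mean.
The class and method names are invented for illustration; this is not the
patch's actual code:

{noformat}
// Hypothetical illustration of the selector discussed above; the names
// are made up for this example and do not come from the patch.
public class TableJobConfigurer {

  // Output is the special case: it defaults to false, so Hive's single
  // "default" path and HCat's input path fall through to the same behavior.
  private boolean isOutputJob = false;

  public void configureForInput() {     // HCat, input side
    applyProperties();
  }

  public void configureForOutput() {    // HCat, output side
    isOutputJob = true;
    applyProperties();
  }

  public void configureDefault() {      // Hive has no input/output notion;
    applyProperties();                  // its one path behaves like input
  }

  private void applyProperties() {
    if (isOutputJob) {
      // output-only setup (e.g. write-side credentials)
    }
    // setup common to input and the default path
  }
}
{noformat}

The alternative, an isInput selector defaulting to true, would make Hive's
default path depend on a flag that is almost always true, which is the part
that felt less clean.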

> HiveHBaseStorageHandler should pass Kerberos credentials down to HBase
> -----------------------------------------------------------------------
>
>                 Key: HIVE-5523
>                 URL: https://issues.apache.org/jira/browse/HIVE-5523
>             Project: Hive
>          Issue Type: Bug
>          Components: HBase Handler
>    Affects Versions: 0.11.0
>            Reporter: Nick Dimiduk
>            Assignee: Sushanth Sowmyan
>         Attachments: HIVE-5523.patch, Task Logs_ 'attempt_201310110032_0023_r_000000_0'.html
>
>
> Running on a secured cluster, I have an HBase table defined thusly
> {noformat}
> CREATE TABLE IF NOT EXISTS pagecounts_hbase (rowkey STRING, pageviews STRING, bytes STRING)
> STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
> WITH SERDEPROPERTIES ('hbase.columns.mapping' = ':key,f:c1,f:c2')
> TBLPROPERTIES ('hbase.table.name' = 'pagecounts');
> {noformat}
> and a query to populate that table
> {noformat}
> -- ensure hbase dependency jars are shipped with the MR job
> SET hive.aux.jars.path = file:///etc/hbase/conf/hbase-site.xml,file:///usr/lib/hive/lib/hive-hbase-handler-0.11.0.1.3.2.0-111.jar,file:///usr/lib/hbase/hbase-0.94.6.1.3.2.0-111-security.jar,file:///usr/lib/zookeeper/zookeeper-3.4.5.1.3.2.0-111.jar;
> -- populate our hbase table
> FROM pgc INSERT INTO TABLE pagecounts_hbase SELECT pgc.* WHERE rowkey LIKE 'en/q%' LIMIT 10;
> {noformat}
> The reduce tasks fail with what boils down to the following exception:
> {noformat}
> Caused by: java.lang.RuntimeException: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
>       at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$1.run(SecureClient.java:263)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
>       at org.apache.hadoop.hbase.security.User.call(User.java:590)
>       at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
>       at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:444)
>       at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.handleSaslConnectionFailure(SecureClient.java:224)
>       at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:313)
>       at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1124)
>       at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:974)
>       at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:104)
>       at $Proxy10.getProtocolVersion(Unknown Source)
>       at org.apache.hadoop.hbase.ipc.SecureRpcEngine.getProxy(SecureRpcEngine.java:146)
>       at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:208)
>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1346)
>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1305)
>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1292)
>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1001)
>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:896)
>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:998)
>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:900)
>       at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:857)
>       at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:234)
>       at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:174)
>       at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:133)
>       at org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat.getHiveRecordWriter(HiveHBaseTableOutputFormat.java:83)
>       at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getRecordWriter(HiveFileFormatUtils.java:250)
>       at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveRecordWriter(HiveFileFormatUtils.java:237)
>       ... 17 more
> {noformat}
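
The fix this points at is to obtain an HBase delegation token at
job-submission time, while the submitter's Kerberos ticket is still available,
so that tasks authenticate with the token rather than via SASL/GSSAPI. Below
is a minimal sketch against the HBase 0.94 security API
(User.isHBaseSecurityEnabled and User.obtainAuthTokenForJob are real methods;
the helper class, and the assumption that the storage handler calls it during
job configuration, are illustrative, not a description of the attached patch):

{noformat}
import java.io.IOException;
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.mapred.JobConf;

public class HBaseTokenHelper {  // hypothetical helper, for illustration

  // Obtain an HBase delegation token for the job being submitted and
  // stash it in the job's credentials, so that map/reduce tasks (which
  // hold no Kerberos ticket of their own) can still talk to HBase.
  public static void addHBaseDelegationToken(JobConf jobConf) throws IOException {
    if (User.isHBaseSecurityEnabled(jobConf)) {
      try {
        User.getCurrent().obtainAuthTokenForJob(jobConf);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        throw new IOException("Interrupted obtaining HBase auth token", ie);
      }
    }
  }
}
{noformat}

A storage handler would call something like this from its job-configuration
hook so the token rides along with the submitted job.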


