Nathan,

When Sentry is enabled, all files are supposed to be owned by hive:hive or
impala:impala, and permissions are managed via HDFS ACLs that are
coordinated with Sentry.
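
A quick way to confirm what the NameNode sees is to print the owner of the
partition directory. A minimal sketch using the Hadoop FileSystem API (the
path below is illustrative; substitute the directory from your error):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Minimal sketch: report owner, group, and permissions of a directory.
    // HDFS denies setOwner/setPermission unless the caller owns the inode
    // or is a superuser, which is why ownership matters here.
    public class OwnerCheck {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Illustrative path, not taken from the original report.
        Path dir = new Path("/user/hive/warehouse/mytable/state=CO");
        FileStatus st = fs.getFileStatus(dir);
        System.out.println(st.getOwner() + ":" + st.getGroup()
            + " " + st.getPermission());
      }
    }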

- Alex

On Mon, Sep 24, 2018 at 1:15 PM Nathan Bamford <
nathan.bamf...@redpointglobal.com> wrote:

> Hi,
>
>   We use HCatWriter to write records to Hive, and I've recently run into a
> problem that seems intractable.
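>
>   For context, our write path follows the usual HCatalog data transfer
> API, roughly like the sketch below (the database, table, and partition
> values are illustrative, and the builder method names are from memory):
>
> import java.util.Collections;
> import java.util.HashMap;
> import java.util.Iterator;
> import java.util.Map;
> import org.apache.hive.hcatalog.data.HCatRecord;
> import org.apache.hive.hcatalog.data.transfer.DataTransferFactory;
> import org.apache.hive.hcatalog.data.transfer.HCatWriter;
> import org.apache.hive.hcatalog.data.transfer.WriteEntity;
> import org.apache.hive.hcatalog.data.transfer.WriterContext;
>
> // Sketch: the master prepares the write, a writer pushes records,
> // and the master commits (where the partition is added to the metastore
> // and the 2006/AccessControlException surfaces for us).
> Map<String, String> partSpec = new HashMap<>();
> partSpec.put("state", "CO"); // partition key/value, illustrative
> WriteEntity entity = new WriteEntity.Builder()
>     .withDatabase("mydb")
>     .withTable("mytable")
>     .withPartition(partSpec)
>     .build();
> HCatWriter master = DataTransferFactory.getHCatWriter(
>     entity, new HashMap<String, String>());
> WriterContext ctx = master.prepareWrite();
> HCatWriter writer = DataTransferFactory.getHCatWriter(ctx);
> Iterator<HCatRecord> records = Collections.emptyIterator(); // stand-in
> writer.write(records);
> master.commit(ctx);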
>
>   We can write tables without partitions all the live-long day, but any
> attempt to write to a partition results in the following error:
>
>
> "net/redpoint/hiveclient/DMHCatWriter.closeWriter:org.apache.hive.hcatalog.common.HCatException
> : 2004 : HCatOutputFormat not initialized, setOutput has to be called.
> Cause : org.apache.hive.hcatalog.common.HCatException : 2006 : Error adding
> partition to metastore. Cause :
> org.apache.hadoop.security.AccessControlException: Permission denied.
> user=nbamford is not the owner of inode=state=CO"
>
>   Digging into the source for
> org.apache.hive.hcatalog.mapreduce.FileOutputCommitterContainer.constructPartition,
> I find the following lines:
> for (FieldSchema partKey : table.getPartitionKeys()) {
>   if (i++ != 0) {
>     // Attempt to make the path in case it does not exist before we check
>     fs.mkdirs(partPath);
>     applyGroupAndPerms(fs, partPath, perms, grpName, false);
>   }
>   partPath = constructPartialPartPath(partPath,
>       partKey.getName().toLowerCase(), partKVs);
> }
>
>   The error is thrown from the applyGroupAndPerms call, which, you will
> note, never checks whether the directory already exists with the right
> permissions (in this case, it does).
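>
>   Just to illustrate what I would have expected (this is my own sketch,
> not the shipped code): a guard that leaves existing directories alone,
> along these lines:
>
> // Hypothetical guard, for illustration only: create and re-own the
> // partition directory only when it is missing, so directories that
> // already exist (owned by hive:hive under Sentry) are never touched.
> if (!fs.exists(partPath)) {
>   fs.mkdirs(partPath);
>   applyGroupAndPerms(fs, partPath, perms, grpName, false);
> }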
>
>
>   I am at a complete loss for how to proceed. I can't even think of a
> workaround. It seems to me HCatWriter simply cannot write partitions when
> Sentry and the HDFS ACL plugin are in force.
>
