nfiguration);
job.addCacheFile(new URI("hdfs://tmp/my.truststore"));
.. and the Distributed Cache directly, but I do not see them in the
directory listing of a Tez log.
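One thing worth double-checking before digging further into the Tez logs: in a URI like `hdfs://tmp/my.truststore`, the `tmp` segment parses as the NameNode *host*, not a directory, so the cache file may be registered against a filesystem that doesn't exist. A minimal pure-JDK sketch of the difference (the `job.addCacheFile(...)` call mentioned in the comment is the real MapReduce API; the rest is illustration):

```java
import java.net.URI;
import java.net.URISyntaxException;

public class CacheFileUri {
    public static void main(String[] args) throws URISyntaxException {
        // Two slashes: "tmp" becomes the authority (host), and the path
        // is only "/my.truststore" -- probably not what was intended.
        URI twoSlashes = new URI("hdfs://tmp/my.truststore");
        System.out.println("host=" + twoSlashes.getHost()
                + " path=" + twoSlashes.getPath());
        // → host=tmp path=/my.truststore

        // Three slashes (empty authority): /tmp is part of the path on
        // the default filesystem.
        URI threeSlashes = new URI("hdfs:///tmp/my.truststore");
        System.out.println("host=" + threeSlashes.getHost()
                + " path=" + threeSlashes.getPath());
        // → host=null path=/tmp/my.truststore

        // The corrected URI would then be handed to the job, e.g.:
        //   job.addCacheFile(threeSlashes);
        // after which the file should appear in the task's usercache.
    }
}
```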
On Tue, Aug 6, 2019 at 1:44 PM Kristopher Kane wrote:
>
> Does anyone have a pointer to how I can copy non-jar f
Does anyone have a pointer to how I can copy non-jar files from a
storage handler such that they are accessible by the map task executor
in usercache?
Thanks,
Kris
I'm trying to add protected SSL credentials to the Kafka Storage
Handler. This is my first jump into the pool.
I have it working where the creds for the keystore/truststore are in
JCEKS files in HDFS and the KafkaStorageHandler class loads them into
the job configuration based on some new TBLPROP
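For what it's worth, the JCEKS files that Hadoop's credential provider reads are ordinary Java keystores, so the round trip can be sketched with the plain JDK. The alias below is hypothetical (use whatever your TBLPROPERTIES point at), and `none` is, to my understanding, the JavaKeyStoreProvider default store password; on the Hadoop side, `Configuration.getPassword(alias)` performs this lookup when `hadoop.security.credential.provider.path` points at the store:

```java
import java.security.KeyStore;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;

public class JceksRoundTrip {
    public static void main(String[] args) throws Exception {
        char[] storePw = "none".toCharArray();       // assumed provider default
        String alias = "kafka.truststore.password";  // hypothetical alias name
        String secret = "changeit";                  // the SSL password to protect

        // Hadoop's JavaKeyStoreProvider stores each credential as an AES
        // SecretKeyEntry whose raw bytes are the UTF-8 password.
        KeyStore ks = KeyStore.getInstance("JCEKS");
        ks.load(null, storePw); // fresh, empty in-memory store
        ks.setEntry(alias,
                new KeyStore.SecretKeyEntry(
                        new SecretKeySpec(secret.getBytes("UTF-8"), "AES")),
                new KeyStore.PasswordProtection(storePw));

        // Reading it back is essentially what Configuration.getPassword(alias)
        // does for the job configuration.
        KeyStore.SecretKeyEntry entry = (KeyStore.SecretKeyEntry)
                ks.getEntry(alias, new KeyStore.PasswordProtection(storePw));
        SecretKey key = entry.getSecretKey();
        System.out.println(new String(key.getEncoded(), "UTF-8"));
        // → changeit
    }
}
```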
Authorization, rather.
On Thu, Jun 13, 2019 at 10:51 AM Kristopher Kane wrote:
>
> You really have no choice with storage based authentication.
>
> On Fri, Jun 7, 2019 at 12:24 PM Mainak Ghosh wrote:
> >
> > Hey Alan,
> >
> > Thanks for replying.
You really have no choice with storage based authentication.
On Fri, Jun 7, 2019 at 12:24 PM Mainak Ghosh wrote:
>
> Hey Alan,
>
> Thanks for replying. We are currently using storage based authorization and
> Hive 2.3.2. Unfortunately, we found that the default warehouse path requires
> a 777 f
'hive.query.results.cache.max.size' - Is this limit per query result,
total for all users across all HS2 instances, or per HS2 instance?
Thanks,
Kris
The JDBC storage handler wiki states:
"You will need to protect the keystore file by only authorize targeted
user to read this file using authorizer (such as ranger). Hive will
check the permission of the keystore file to make sure user has read
permission of it when creating/altering table."
I c
If using a default external table location, in a cluster with Ranger
Authorization, the table location and data are owned by the `hive`
user.
Since the table is external, there doesn't seem to be a way to delete
this data other than impersonating or becoming the `hive` or `hdfs`
principal. Is the
Gopal. That was exactly it.
As always, a succinct, accurate answer.
Thanks,
-Kris
On Mon, Feb 26, 2018 at 8:06 PM, Gopal Vijayaraghavan
wrote:
> Hi,
>
> > Caused by: java.lang.ArrayIndexOutOfBoundsException
> > at org.apache.hadoop.mapred.MapTask$MapOutputBuffer$
> Buffer.write(MapTask.java:14
I have a highly compressed single ORC file based table generated from Hive
DDL. Raw size reports 120GB; ORC/Snappy compresses it down to 990 MB (ORC with
no compression is still only 1.3GB). Hive on MR is throwing
ArrayIndexOutOfBoundsException like the following:
Diagnostic Messages for this Task:
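The trace is truncated here, but a commonly reported trigger for an ArrayIndexOutOfBoundsException inside MapOutputBuffer$Buffer.write is mapreduce.task.io.sort.mb pushed close to its 2047 MB ceiling, where the int-indexed sort buffer wraps. A hedged per-query workaround (the value is illustrative; tune it to your heap):

```sql
-- hedged sketch: keep the map-side sort buffer well below the 2047 MB limit
SET mapreduce.task.io.sort.mb=512;
```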
I see that Hive doesn't seem to know about an Avro SerDe compressed table
(Hive 1.2.1) in 'describe extended' when determining compression with the
following:
SET hive.exec.compress.output=true;
SET avro.output.codec=snappy;
-- likely because you set those on INSERT and there isn't any DDL
refe
Is there a variable that can be used for the user principal in scratchdir
instead of the JVM user.name?
https://cwiki.apache.org/confluence/display/Hive/Configuration+Properties#ConfigurationProperties-hive.exec.scratchdir
Kris
Is there a list of possible return codes as logged by the
TempletonJobController's map task?
I'm getting an RC of 6 for a pig+hcat job that works from the CLI:
o.a.h.hcatalog.templeton.tool.launchMapper: templeton: Writing exit value 6
to...
-Kris
Clay,
Keep in mind that setting this to false in the global hive-site.xml will
mean that you will not do any client-side hash table generation and will miss
out on optimizations for other joins. You should set this in your query
directly. Another option is to increase the client-side heap to allow fo
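Assuming the property under discussion is hive.auto.convert.join (the message is truncated above, so this is a guess), the per-query form would look like:

```sql
-- per-session, instead of globally in hive-site.xml:
SET hive.auto.convert.join=false;

-- or keep conversion on and give the local task more headroom, e.g.:
SET hive.mapjoin.localtask.max.memory.usage=0.99;
```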
, Kristopher Kane wrote:
> Hive .12 on Hadoop 2
>
> I have a table with a mix of STRING and DECIMAL fields that is stored as
> ORC no compression or partitions.
> I wanted to create a copy of this table with CTAS, stored also as ORC.
> The job fails with NumberFormatException at the HiveD
Hive .12 on Hadoop 2
I have a table with a mix of STRING and DECIMAL fields that is stored as
ORC, with no compression or partitions.
I wanted to create a copy of this table with CTAS, stored also as ORC.
The job fails with NumberFormatException at the HiveDecimal class but I
can't narrow it down the th