Subject: RE: Resources/Distributed Cache on Spark
Without using ADD FILES, we'd have to make sure these resources exist on every
node, and would configure a Hive session like this:
set myCustomProperty=/path/to/directory/someSubDir/;
select myCustomUDF('param1','param2');
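For contrast, here's a rough sketch of what the same session could look like if the shared-resources path worked under Spark. The resource file name is hypothetical, and it assumes an added file is exposed in the task working directory the way MR's distributed cache exposes it:

add files /local/path/someSubDir/myResource.dat;
-- The file is shipped to every node, so the UDF can resolve it by bare
-- name from its working directory rather than a pre-staged path.
set myCustomProperty=myResource.dat;
select myCustomUDF('param1','param2');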
With the shared resources
From: [mailto:...@gmail.com]
Sent: Thursday, February 8, 2018 12:45 PM
To: user@hive.apache.org
Subject: Re: Resources/Distributed Cache on Spark
It should work. We have tests such as groupby_bigdata.q that run on HoS and
work. They use the "add file" command. What are the exact commands you are
running? What error are you seeing?
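For reference, the "add file" pattern those tests exercise looks roughly like this; the script path and query below are illustrative, not the actual test fixture:

add file /tmp/dump_rows.py;
-- Hive ships the added file to each executor's working directory, so the
-- TRANSFORM script can be referenced by bare name.
select transform (key, value) using 'python dump_rows.py' as (key, value) from src;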
On Thu, Feb 8, 2018 at 6:28 AM, Ray Navarette wrote:
> Hello,
>
> I'm hoping to find some information about using "ADD FILES" when using
> the Spark execution engine. I've seen some JIRA tickets reference this
> functionality, but little else. We have written some custom UDFs which
> require some external resources. When using the MR execution engine, we