as an HDFS path rather than a local path."
Is there a way to verify my libjars setting for a MapReduce job?
Please help!
Regards
Arthur
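One rough way to check this, assuming the job is launched from the Hive CLI (a sketch only, not specific to this setup):

hive> LIST JARS;
hive> SET hive.aux.jars.path;

LIST JARS shows the jars added to the current session with ADD JAR, and SET prints a property's value if it is defined. For the submitted MapReduce job itself, the shipped jars can usually be inspected in the job configuration (for example the tmpjars property) through the JobHistory or ResourceManager web UI.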
On 11 Jan, 2015, at 5:35 pm, arthur.hk.c...@gmail.com
wrote:
> Hi,
>
>
>
> mysql>
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
What could be wrong?
Regards
Arthur
On 11 Jan, 2015, at 5:18 pm, arthur.hk.c...@gmail.com
wrote:
> Hi,
>
>
> 2015-01-04 08:57:12,154 ERROR [main]: DataNucleus.Datastore
> (Lo
(like all jars added with the ADD JAR
>>> command) should then be added to the distributed cache. It looks like this
>>> is where the issue is occurring, but based on path in the error message I
>>> suspect that either Hive or Hadoop is mistaking what should be
Hi,
A question: why does it need to copy the jar file to a temp folder? Why
couldn't it use the file given in the USING JAR clause,
'hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar', directly?
Regards
Arthur
On 4 Jan, 2015, at 7:48 am, arthur.hk.c...@gmail.com
wrote:
> Hi,
>
an alternative
>> to ADD JAR, Hive auxiliary path functionality should be used as described
>> below.
>>
>> Refer:
>> http://www.cloudera.com/content/cloudera/en/documentation/cloudera-manager/v4-8-0/Cloudera-Manager-Managing-Clusters/cmmc_hive_udf.html
>>
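For reference, a minimal sketch of the auxiliary path approach mentioned above, assuming the jar has been copied to a local directory on the machine running the Hive CLI or HiveServer2 (the local path below is a placeholder):

export HIVE_AUX_JARS_PATH=/usr/local/hive/aux-jars/nexr-hive-udf-0.2-SNAPSHOT.jar
hive

The equivalent is setting hive.aux.jars.path in hive-site.xml; either way the value is generally read at startup, not per session, so the service has to be restarted after changing it.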
ee it will live there
> forever based on your cluster configuration.
>
> You may want to put it in a place where all users can access it, for example by
> making a folder and keeping read permission on it.
>
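A possible sketch of that, with placeholder paths (the jar name is the one from this thread):

hdfs dfs -mkdir -p /apps/hive/udf-jars
hdfs dfs -put nexr-hive-udf-0.2-SNAPSHOT.jar /apps/hive/udf-jars/
hdfs dfs -chmod -R 755 /apps/hive/udf-jars

Mode 755 keeps the directory traversable and the jar readable by all users while leaving write access with the owner.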
> On Wed, Dec 31, 2014 at 11:40 AM, arthur.hk.c...@gmail.com
>
> 1) Put the jar file on HDFS, say
> hdfs:///home/nirmal/udf/hiveUDF-1.0-SNAPSHOT.jar.
> 2) In Hive, run CREATE FUNCTION zeroifnull AS 'com.test.udf.ZeroIfNullUDF'
> USING JAR 'hdfs:///home/nirmal/udf/hiveUDF-1.0-SNAPSHOT.jar';
>
> The function definition should be saved in the metastore.
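A quick way to sanity-check the registration afterwards might be something like the following (the table and column in the last query are hypothetical):

hive> SHOW FUNCTIONS;
hive> DESCRIBE FUNCTION zeroifnull;
hive> SELECT zeroifnull(amount) FROM sales LIMIT 10;

If the CREATE FUNCTION succeeded, zeroifnull should appear in the function list, and the SELECT should pull the jar from HDFS into the session when the function is first used.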
> At 2014-12-30 11:01:06, "arthur.hk.c...@gmail.com"
> wrote:
> Hi,
>
> I am using Hive 0.13.1 on Hadoop 2.4.1 and need to automatically load an extra
> JAR file into Hive for a UDF; below are my steps to create the UDF function. I
> have tried the following but
Hi,
I am using Hive 0.13.1 on Hadoop 2.4.1 and need to automatically load an extra
JAR file into Hive for a UDF; below are my steps to create the UDF function. I have
tried the following but still have had no luck getting through.
Please help!!
Regards
Arthur
Step 1: (make sure the jar is in HDFS)
hive> dfs
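For what it is worth, the full sequence presumably intended here would look roughly like this (the HDFS path is the jar mentioned elsewhere in this thread, and the UDF class name is assumed from the NexR hive-udf project, so both may need adjusting):

hive> dfs -ls hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar;
hive> CREATE FUNCTION sysdate AS 'com.nexr.platform.hive.udf.UDFSysDate'
    > USING JAR 'hdfs://hadoop/hive/nexr-hive-udf-0.2-SNAPSHOT.jar';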
Hi,
Please help!
I am using HiveServer2 on Hive 0.13 on Hadoop 2.4.1, together with
nexr-hive-udf-0.2-SNAPSHOT.jar.
I can run a query from the CLI, e.g.
hive> SELECT add_months(sysdate(), +12) FROM DUAL;
Execution completed successfully
MapredLocal task succeeded
OK
2015-12-17
Time taken: 7.393 seconds, Fetche
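Since that query was run from the Hive CLI, one hedged way to check whether HiveServer2 sees the same function is to repeat it through beeline (host, port and user below are placeholders; DUAL is the table already used above):

beeline -u jdbc:hive2://localhost:10000 -n hive
0: jdbc:hive2://localhost:10000> SHOW FUNCTIONS;
0: jdbc:hive2://localhost:10000> SELECT add_months(sysdate(), +12) FROM DUAL;

If the function is missing there, the jar is probably only registered in the CLI session (ADD JAR / .hiverc) rather than via CREATE FUNCTION ... USING JAR or the auxiliary path.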
Hi,
I have managed to resolve the issue by tuning the SQL.
Regards
Arthur
On 12 Oct, 2014, at 6:49 am, arthur.hk.c...@gmail.com
wrote:
> Hi,
>
> My Hive version is 0.13.1. I tried a smoke test; after 1 day 7 hours 5
> minutes 19 seconds 70 msec, the job failed with error: GC overhead limit exceeded
Hi,
My Hive version is 0.13.1. I tried a smoke test; after 1 day 7 hours 5 minutes
19 seconds 70 msec, the job failed with: Error: GC overhead limit exceeded
LOG:
2014-10-12 06:16:07,288 Stage-6 map = 100%, reduce = 50%, Cumulative CPU
425.35 sec
2014-10-12 06:16:12,431 Stage-6 map = 1
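Not the route taken here (the thread above says the issue was resolved by tuning the SQL), but for reference a common first mitigation for GC overhead errors in MapReduce tasks is to raise the task heap sizes, e.g. (values are illustrative):

hive> SET mapreduce.map.memory.mb=4096;
hive> SET mapreduce.map.java.opts=-Xmx3276m;
hive> SET mapreduce.reduce.memory.mb=4096;
hive> SET mapreduce.reduce.java.opts=-Xmx3276m;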