the IP
>> address availability and run the job.
>>
>>
>>
>> *Thanks,*
>>
>> *S.RagavendraGanesh*
>>
>> ViSolve Hadoop Support Team
>> ViSolve Inc. | San Jose, California
>> Website: www.visolve.com
>>
>> email: servi...@visolve.c
Hi,
we occasionally run into a BindException that causes long-running jobs to
fail.
The stacktrace is below.
Any ideas what this could be caused by?
Cheers,
Krishna
Stacktrace:
379969 [Thread-980] ERROR org.apache.hadoop.hive.ql.exec.Task - Job
Submission failed with exception 'jav
Last time I looked there wasn't much info available on how to reduce the
size of the logs written here (the only suggestions being delete them after
a day).
Is there anything I can do now to reduce what's logged there in the first
place?
Cheers,
Krishna
Hi all,
we've experienced a bug which seems to be caused by having a query
constraint involving partitioned columns. The following query results in
"FAILED: NullPointerException null" being returned nearly instantly:
EXPLAIN SELECT
col1
FROM
tbl1
WHERE
(part_col1 = 2014 AND part_col2 >= 2)
OR
Hi all,
We've been running into this problem a lot recently on a particular reduce
task. I'm aware that I can work around it by upping the
"mapred.task.timeout".
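For anyone hitting the same thing: that workaround can be applied per session
or at the top of a script. The property is in milliseconds; the value below is
an illustration, not from the original mail:

```sql
-- Raise the task timeout to 20 minutes for this session only
SET mapred.task.timeout=1200000;
```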
However, I would like to know what the underlying problem is. How can I
find this out?
Alternatively, can I force a generated hive task
uxlib dir. There always is the
> HIVE_AUX_JARS_PATH environment variable (but this introduces a dependency
> on the environment).
>
>
> On Wed, Mar 13, 2013 at 10:26 AM, Krishna Rao wrote:
>
>> Hi all,
>>
>> I'm using the hive json serde and need to run: "
Hi all,
I'm using the hive json serde and need to run: "ADD JAR
/usr/lib/hive/lib/hive-json-serde-0.2.jar;", before I can use tables that
require it.
Is it possible to have this jar available automatically?
I could do it via adding the statement to a .hiverc file, but I was
wondering if there is
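For reference, the .hiverc route mentioned above would look something like
this (reusing the jar path from the question):

```sql
-- ~/.hiverc: statements here run at the start of every Hive CLI session
ADD JAR /usr/lib/hive/lib/hive-json-serde-0.2.jar;
```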
Hi Sai,
just use the "-f" arg together with the file name. For details see:
https://cwiki.apache.org/Hive/languagemanual-cli.html
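For example (the file name below is a placeholder; this is just the usual CLI
shape and can't stand in for the wiki page):

```shell
# Run every HiveQL statement in the file, in order
hive -f /path/to/queries.hql

# For a single ad-hoc statement, -e works too
hive -e 'SHOW TABLES;'
```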
Krishna
On 4 March 2013 10:24, Sai Sai wrote:
> Just wondering if it is possible to run a bunch of hive commands from a
> file rather than one a time.
> For ex:
>
Hi all,
I'm occasionally getting the following error, usually after running an
expensive Hive query (creating 20 or so MR jobs):
***
Error during job, obtaining debugging information...
Examining task ID: task_201301291405_1640_r_01 (and more) from job
job_201301291405_1640
Exception in threa
ive table definition of both the tables?
>
> are both the columns of same type ?
>
>
> On Wed, Jan 9, 2013 at 5:15 AM, Krishna Rao wrote:
>
>> Hi all,
>>
>> On running a statement of the form "INSERT INTO TABLE tbl1 PARTITION(p1)
>> SELECT x1 FROM tb
Hi all,
On running a statement of the form "INSERT INTO TABLE tbl1 PARTITION(p1)
SELECT x1 FROM tbl2", I get the following error:
"Failed with exception java.lang.ClassCastException:
org.apache.hadoop.hive.metastore.api.InvalidOperationException cannot be
cast to java.lang.RuntimeException"
How
f thumb for Hive:
> count of operators * 4 + n (n for file ops and other stuff).
>
> cheers,
> Alex
>
>
> On Jan 2, 2013, at 10:35 AM, Krishna Rao wrote:
>
> > A particular query that I run fails with the following error:
> >
> > ***
> >
On 18 December 2012 02:05, Mark Grover wrote:
> I usually put it in my home directory and that works. Did you try that?
I need it to work for all users. So the cleanest non-duplicating solution,
seems to be in the hive bin directory (and then conf dir, when I upgrade
hive).
>> alternatively, you can create a .hiverc into your home directory and set
>> the parameters you want, these will be included in each session
>>
>>
>> On Fri, Dec 14, 2012 at 4:05 PM, Krishna Rao wrote:
>>
>>> Hi all,
>>>
>>> is
Hi all,
is it possible to set: mapreduce.map.log.level &
mapreduce.reduce.log.level, within some config file?
At the moment I have to remember to set these at the start of a hive
session, or script.
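A sketch of what setting these at session start looks like in a .hiverc (the
DEBUG level is chosen arbitrarily for illustration):

```sql
-- ~/.hiverc: read by the Hive CLI at the start of every session
SET mapreduce.map.log.level=DEBUG;
SET mapreduce.reduce.log.level=DEBUG;
```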
Cheers,
Krishna
> Currently suggested workaround is to use JDBC based import by dropping the
> "--direct" argument.
>
> Links:
> 1: https://issues.apache.org/jira/browse/SQOOP-654
>
> On Tue, Dec 04, 2012 at 05:04:56PM +, Krishna Rao wrote:
> > Hi all,
> >
> >
Hi all,
I'm having trouble transferring NULLs in a VARCHAR column in a table in
PostgreSQL into Hive. A NULL value ends up as an empty value in Hive,
rather than NULL.
I'm running the following command:
sqoop import --username -P --hive-import --hive-overwrite
--null-string='\\N' --null-non-stri
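For reference, the general shape of Sqoop's null-handling flags (the
connection details and table name below are placeholders, not taken from the
original command):

```shell
sqoop import \
  --connect jdbc:postgresql://dbhost/dbname \
  --username dbuser -P \
  --table some_table \
  --hive-import --hive-overwrite \
  --null-string '\\N' \
  --null-non-string '\\N'
```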
are in block
> format are always split table regardless of what compression for the
> block is chosen.The Programming Hive book has an entire section
> dedicated to the permutations of compression options.
>
> Edward
> On Mon, Nov 5, 2012 at 10:57 AM, Krishna Rao
> wrote:
> >
Hi all,
I'm looking into finding a suitable format to store data in HDFS, so that
it's available for processing by Hive. Ideally I would like to satisfy the
following:
1. store the data in a format that is readable by multiple Hadoop projects
(eg. Pig, Mahout, etc.), not just Hive
2. work with a