> src: /172.23.108.105:57388, dest: /172.23.106.80:50010, bytes: 6733,
> op: HDFS_WRITE, cliID: DFSClient_attempt_201205261626_0011_r_01_0,
> offset: 0, srvID: DS-1416163861-172.23.106.80-50010-1335859555961,
> blockid: blk_4133062118632896877_497881, duration: 17580129
>
> Regards,
-----Original Message-----
From: Philip Tromans [mailto:philip.j.trom...@gmail.com]
Sent: Tuesday, May 29, 2012 3:16 PM
To: user@hive.apache.org
Subject: Re: dynamic partition import
Is there anything interesting in the datanode logs?
Phil.
On 29 May 2012 10:37, Nitin Pawar wrote:
> can you check that at least one datanode is running and is not part of the
> blacklisted nodes
>
> On Tue, May 29, 2012 at 3:01 PM, Nimra Choudhary wrote:
>>
>> We are using Dynamic partitioning
All my data nodes are up and running with none blacklisted.
Regards,
Nimra
From: Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Tuesday, May 29, 2012 3:07 PM
To: user@hive.apache.org
Subject: Re: dynamic partition import
can you check that at least one datanode is running and is not part of the
blacklisted nodes
On Tue, May 29, 2012 at 3:01 PM, Nimra Choudhary wrote:
We are using Dynamic partitioning and facing a similar problem. Below is the
jobtracker error log. We have a Hadoop cluster of 6 nodes, 1.16 TB capacity
with over 700 GB still free.
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException:
org.apache.hadoop.ipc.RemoteException: java.io.IOException
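
A first thing worth ruling out with dynamic partition inserts is the per-job
limits on partitions and created files. The knobs are roughly these (setting
names per the Hive docs; the values shown are the usual defaults, quoted from
memory, so treat them as assumptions):

  -- total dynamic partitions one job may create
  SET hive.exec.max.dynamic.partitions=1000;
  -- dynamic partitions a single mapper/reducer may create
  SET hive.exec.max.dynamic.partitions.pernode=100;
  -- HDFS files the whole job may create
  SET hive.exec.max.created.files=100000;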
So we got it, I hope!

We did take care of the ulimit max-open-files setting (e.g. section 1.3.1.6.1,
"ulimit on Ubuntu", in http://hbase.apache.org/book/notsoquick.html). But after
the switch from "native" Hadoop to the Cloudera distribution cdh3u0 we forgot
to do this for the users "hdfs", "hbase" AND "mapred".
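
For reference, the change itself is a few lines in /etc/security/limits.conf,
something like the following (example values only; the HBase book section
linked above explains the reasoning):

  # raise the open-file limit for the Hadoop service users
  hdfs    -    nofile    32768
  hbase   -    nofile    32768
  mapred  -    nofile    32768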
I'm beginning to suspect this myself. We have an import job which has
many small files. We've been merging them into a single log file and
partitioning by day; however, I've seen this and other errors (usually
memory-related errors) posted by Hive, and the load fails.

Our latest error has been
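
On the many-small-files side, Hive can also merge its own output at the end of
a job. The relevant settings are roughly these (standard Hive setting names;
the values are only examples, not our production ones):

  SET hive.merge.mapfiles=true;               -- merge output of map-only jobs
  SET hive.merge.mapredfiles=true;            -- merge output of map-reduce jobs
  SET hive.merge.size.per.task=256000000;     -- target size of merged files, in bytes
  SET hive.merge.smallfiles.avgsize=16000000; -- merge when the average output file is smaller than this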
Subject: Re: dynamic partition import
Hi,
I always get
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException:
Hive Runtime Error while processing row (tag=0)
{"key":{},"value":{"_col0":"1129","_col1":"Campaign","_col2":"34811433","_col3":"group","_col4":"1271859453","_col5":"Soundso","_col6":"93709590","_col
Hello,

I can't import files with dynamic partitioning. The query looks like this:

FROM cost c
INSERT OVERWRITE TABLE costp PARTITION (accountId, day)
SELECT c.clientId, c.campaign, c.accountId, c.day
DISTRIBUTE BY c.accountId, c.day

The strange thing is: sometimes it works, sometimes mapred fails with something
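
Since both partition columns here are dynamic, the session has to be prepared
with the usual switches before the INSERT; roughly (exact values from memory,
treat this as a sketch):

  SET hive.exec.dynamic.partition=true;           -- enable dynamic partitions
  SET hive.exec.dynamic.partition.mode=nonstrict; -- allow all partition columns to be dynamic

The DISTRIBUTE BY on the partition keys sends all rows of one (accountId, day)
pair to the same reducer, so each partition is written by a single writer; the
trade-off is that one oversized account can overload a single reducer.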