Try changing one of the following properties, in the file named before each:
hive-exec-log4j.properties:hive.log.dir=/tmp/${user.name}
hive-log4j.properties:hive.log.dir=/tmp/${user.name}
On Thu, Jan 26, 2012 at 3:40 PM, Tucker, Matt wrote:
> We’ve started noticing that some of the hive job logs
> (hive_job_log_mtucker_201201251355_374625982.txt) can become very big, some
> upwards of 800MB.
Thanks for the quick reply.
I will download and try the same.
I am curious to find out the reason for the issue: is the version of
HBase/Hive the problem for the issue I mentioned below, or is there an
issue with the way I was creating the Hive table?
PS: I am using cdh3u2 (Cloudera VM)
Regards
http://dl.dropbox.com/u/19454506/HadoopHIveHbaseReady.tar.gz
Download this; it's a preconfigured Hive and HBase. You will need to change
some settings to match your Linux environment.
On Fri, Jan 27, 2012 at 11:07 AM, Madhusudhana Rao Podila <
madhusudhana_pod...@infosys.com> wrote:
> Hi
Hi
I have a problem creating a Hive table over an existing HBase table (using
the external table concept) with multiple columns from a column family (not
mapped as a Map).
Case-1 :
I have created a table in HBase and was able to map it to Hive as an external
table using only one column from the column family
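A mapping of the kind described above can be sketched in HiveQL with the HBase storage handler. All table and column names below are placeholders, not the poster's actual schema:

```sql
-- Sketch: map an existing HBase table 'mytable' (column family 'cf')
-- into Hive, pulling two named columns rather than a whole-family Map.
CREATE EXTERNAL TABLE hbase_mapped (
  rowkey STRING,
  col1   STRING,
  col2   STRING
)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  "hbase.columns.mapping" = ":key,cf:col1,cf:col2"
)
TBLPROPERTIES ("hbase.table.name" = "mytable");
```

Each `cf:qualifier` entry in `hbase.columns.mapping` binds one HBase column to one Hive column, which is how several columns from one family are mapped without resorting to a Map type.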
We've started noticing that some of the hive job logs
(hive_job_log_mtucker_201201251355_374625982.txt) can become very big, some
upwards of 800MB.
I've tried modifying the Log4J settings to write to a different directory, but
the job logs still end up being written to /tmp/`whoami`/. Am I overlooking
something?
Hello All,
I have a MapReduce job that does a transformation and outputs to a compressed
SequenceFile (using org.apache.hadoop.mapred.SequenceFileOutputFormat).
I am able to attach the output to an external Hive table (stored as
SEQUENCEFILE). When I query it, it ignores the first column value from the file
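For reference, attaching such output to Hive usually looks like the sketch below (table name, columns, delimiter, and path are all assumptions). One common cause of a "missing first column" symptom is that Hive reads only the value part of each SequenceFile record and discards the key, so any field the MapReduce job writes into the key is invisible to queries:

```sql
-- Sketch (names and path are placeholders): attach MapReduce output
-- stored as a compressed SequenceFile to an external Hive table.
CREATE EXTERNAL TABLE seq_output (
  id    STRING,
  value STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS SEQUENCEFILE
LOCATION '/user/hadoop/job-output';
```

If the first field is indeed going into the SequenceFile key, writing a NullWritable key and packing all fields into the value is the usual fix on the MapReduce side.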
It might be worth noting that this is already a workaround: get_json_object()
doesn't return an array when the key we're specifying is an array, but
instead returns a string. The BI user wants to do the following:
select get_json_object(roster_json, '$.memberList.playerId') players
And he wants
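One possible sketch of a string-based workaround, assuming memberList is an array of objects and that the `[*]` JsonPath form yields a JSON-array string in this Hive version (both assumptions worth verifying; the table name `roster` is also a placeholder):

```sql
-- Sketch: pull the array as a JSON string with a [*] path, then strip
-- brackets/quotes and split it into a Hive array of playerId strings.
SELECT split(
         regexp_replace(
           get_json_object(roster_json, '$.memberList[*].playerId'),
           '\\[|\\]|"',
           ''
         ),
         ','
       ) AS players
FROM roster;
```

This stays entirely inside HiveQL, at the cost of treating the JSON array as text rather than as structured data.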
I don't think there is a better way to implement your query using standard
SQL/HiveQL.
A python reducer (or a java UDF) is the way to go.
I don't think clustering would help since there is no way to specify what
you want in HiveQL alone.
igor
decide.com
On Thu, Jan 26, 2012 at 3:23 AM, wrote:
Dear all,
I am struggling with a Hive query in which I am trying to get the last value
of a column.
Let's say I have a table T with three columns: user_id, time, colour, and I
want to know, for each user_id, its last colour value.
At the moment I am using the following (naïve) query:
SELECT T
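The poster's query is cut off above; one common pre-window-function workaround (a sketch, not the poster's actual query) is a self-join against the per-user maximum time, assuming (user_id, time) pairs are unique:

```sql
-- Sketch: for each user_id, keep the row whose time equals that
-- user's maximum time, i.e. the last colour value.
SELECT t.user_id, t.colour
FROM T t
JOIN (
  SELECT user_id, MAX(`time`) AS max_time
  FROM T
  GROUP BY user_id
) m
  ON t.user_id = m.user_id AND t.`time` = m.max_time;
```

As noted in the reply above, a custom reducer or UDF is the other route when the logic can't be expressed as a join like this.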
www.rainstor.com
Subject: Re: rainstor
From: swil...@monetate.com
Date: Wed, 25 Jan 2012 22:14:12 -0500
To: user@hive.apache.org
Google?
Sent from my iPhone
On Jan 25, 2012, at 7:34 PM, Dalia Sobhy wrote:
Does anyone have any idea about rainstor?
Open source? How to download? How to use it?