mapreduce.fileoutputcommitter.marksuccessfuljobs=false
MAPREDUCE-947, I guess..
~Aniket
On Thu, Oct 4, 2012 at 11:06 PM, Balaraman, Anand <
anand_balara...@syntelinc.com> wrote:
> Hi
>
> While using Map reduce programs, the output folder where reducer writes
> out the result cont
Hi
While using MapReduce programs, the output folder where the reducer writes
out the results contains two auto-generated entries: a _SUCCESS file and a _logs folder.
To avoid generation of the _logs folder, I can set the configuration
parameter "hadoop.job.history.user.location" to the value "none".
But, I don't know
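Taken together, the two settings mentioned in this thread would go into the job configuration roughly like this (a sketch for Hadoop 0.20/1.x-era releases; exact behavior may vary by version):

```xml
<!-- mapred-site.xml (or set per-job with -D on the command line) -->
<property>
  <name>hadoop.job.history.user.location</name>
  <value>none</value> <!-- suppresses the _logs folder in the output dir -->
</property>
<property>
  <name>mapreduce.fileoutputcommitter.marksuccessfuljobs</name>
  <value>false</value> <!-- suppresses the _SUCCESS marker file -->
</property>
```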
Hi,
I have a question about using lateral views with multi-table insert.
I have a table that holds raw log data, whose structure makes it
onerous to query directly, largely because it requires
UNIONTYPE columns. So I transform that raw table into 3 new tables,
a primary table
I suggest you store Unix timestamps in hive, so you can compare them
as BIGINT without worrying about STRING comparison.
And if your data is queried on a daily basis, you can split one
big file into small files, say, one file per day, then add them as
partitions of soj_session_container. This
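The daily-partition suggestion might look like this (a sketch: soj_session_container comes from the thread, but the column names and paths are invented for illustration):

```sql
-- Partition the container table by day so a query only reads the files it needs
CREATE EXTERNAL TABLE soj_session_container (
  session_id STRING,
  event_ts   BIGINT   -- Unix timestamp, compared numerically rather than as STRING
)
PARTITIONED BY (dt STRING)
STORED AS TEXTFILE;

-- Register one day's file as a partition
ALTER TABLE soj_session_container ADD PARTITION (dt = '2012-10-04')
  LOCATION '/data/soj/2012-10-04';

-- Partition pruning plus a BIGINT range comparison (one day's range, UTC)
SELECT session_id
FROM soj_session_container
WHERE dt = '2012-10-04'
  AND event_ts BETWEEN 1349308800 AND 1349395199;
```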
Can you try creating a table like this:
CREATE EXTERNAL TABLE hbase_table_2(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
TBLPROPERTIES ("hbase.table.name" = "xyz");
Now do a select * from hbase
Hi,
In the HBase table I do not see a column qualifier, only the family.
For testing connection to hbase I also created a table using
CREATE TABLE hbase_table_1(key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,c
> "hbase.columns.mapping" = ":key,mtdt:string,il:string,ol:string"
This doesn't look right. The mapping should be of the form
COLUMN_FAMILY:COLUMN_QUALIFIER. In this case it seems to be
COLUMN_FAMILY:TYPE, which is not valid.
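For the mapping quoted above, a corrected version might look like this (a sketch: the qualifier names value and links, and the HBase table name xyz, are invented placeholders; they must match whatever actually exists in the HBase table):

```sql
CREATE EXTERNAL TABLE myextrenaltable (
  key      STRING,
  metadata STRING,
  inlinks  STRING,
  outlinks STRING)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES (
  -- family:qualifier for each non-key column, never family:type
  "hbase.columns.mapping" = ":key,mtdt:value,il:links,ol:links")
TBLPROPERTIES ("hbase.table.name" = "xyz");
```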
On Thu, Oct 4, 2012 at 3:25 PM, wrote:
> Hi,
>
> In hive shell I did
>
> c
The update did not succeed with this error.
Did anyone have similar case before or know anything about this?
On Thu, Oct 4, 2012 at 10:23 AM, Feng Lu wrote:
> Thanks for your reply, Edward.
> But for this case, the update did not succeed.
>
>
>
> On Thu, Oct 4, 2012 at 9:27 AM, Edward Capriolo wr
The issue apparently is not just the number of levels of nesting. I just
created a Hive table with 20 levels of structs nested within each other. It was
created fine. That is more levels than the table that was failing for me. The
failing table had many more fields throughout the levels.
Chuck
Hi,
In hive shell I did
create external table myextrenaltable (key string, metadata string, inlinks
string, outlinks string) stored by
'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
with serdeproperties ("hbase.columns.mapping" =
":key,mtdt:string,il:string,ol:string")
tblproperties ("
Can you tell us how you created mapping for the existing table ?
In task log, do you see any connection attempt to HBase ?
Cheers
On Thu, Oct 4, 2012 at 11:30 AM, wrote:
> Hello,
>
> I use hive-0.9.0 with hadoop-0.20.2 and hbase -0.92.1. I have created
> external table, mapping it to an existi
Hello,
I use hive-0.9.0 with hadoop-0.20.2 and hbase-0.92.1. I have created an external
table, mapping it to an existing table in hbase. When I do "select * from
myextrenaltable" it returns no results, although a scan in hbase shows data, and
I do not see any errors in the jobtracker log.
Any ideas ho
Thanks. So is the nesting limit 10 now? Does your 2nd paragraph mean that this
limit cannot easily be raised?
Chuck
-Original Message-
From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: Thursday, October 04, 2012 11:57 AM
To: user@hive.apache.org
Subject: Re: Limit to columns or
There is an open jira ticket on this. There is a hard coded limit but
it could be raised with some mostly minor code changes.
One of the bigger problems is that hive stores the definition of a
column in a JDBC "column", and for some databases larger nested structs
can cause issues.
Edward
On Thu, Oc
I am trying to create a large Hive table, with many columns and deeply nested
structs. It is failing with java.lang.ArrayIndexOutOfBoundsException: 10.
Before I spend a lot of time debugging my table declaration, is there some
limit here I should know about? Max number of columns? Max depth of s
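For reference, the kind of declaration under discussion looks roughly like this (a sketch: table and field names are invented; on affected versions, adding enough nesting levels and fields eventually fails with the ArrayIndexOutOfBoundsException mentioned above):

```sql
-- Each STRUCT<...> adds one level of nesting
CREATE TABLE nested_demo (
  id STRING,
  payload STRUCT<
    level1: STRUCT<
      level2: STRUCT<
        level3: STRUCT<
          leaf: STRING>>>>
);
```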
Thanks for your reply, Edward.
But for this case, the update did not succeed.
On Thu, Oct 4, 2012 at 9:27 AM, Edward Capriolo wrote:
> Even with this exception I think the update still succeeds. I do not
> think arc is working 100% correctly for anyone (for any version of it).
>
>
>
> On Wed, Oct
Even with this exception I think the update still succeeds. I do not
think arc is working 100% correctly for anyone (for any version of it).
On Wed, Oct 3, 2012 at 11:05 PM, Feng Lu wrote:
> Hi,
>
> I was trying to do "arc diff --update ..." under Ubuntu and got this error:
>
>
> PHP Fatal error: