After a lot of trial and error and doubt...
it's a memory hardware problem (confirmed by memtest) :(
The 130GB file gets corrupted when moving/writing/reading it.
Thank you for your help, and thanks to #hadoop@freenode.
--
Laurent "ker2x" Laborde
Sysadmin & DBA at http://www.over-blog.com/
I just noticed that your input file is actually a text file. There is a
SkipBadRecords feature in Hadoop for text files, but I think Hive does
not support it yet. I think you can hack around that by doing the
setting yourself, though.
Just look at the SkipBadRecords class's code to find the conf names and
values, and set them yourself.
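For example, something like this from the Hive CLI might work (a sketch
only; the mapred.skip.* names below are the constants I remember from
Hadoop's SkipBadRecords class, so verify them against your Hadoop
version, and the values are purely illustrative):

-- start skipping mode after one failed task attempt (illustrative value)
set mapred.skip.attempts.to.start.skipping=1;
-- allow up to one record to be skipped around each bad record
set mapred.skip.map.max.skip.records=1;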
Hi all,
Please consider submitting to:
The Fourth IEEE International Scalable Computing Challenge (SCALE 2011),
sponsored by the IEEE Computer Society Technical Committee on Scalable
Computing (TCSC).
Objective and Focus:
The objective of the Fourth IEEE International Scalable Computing C
Thank you for your replies.
I reinstalled Hadoop and Hive, switched from Cloudera CDH3 to CDH2, and
restarted everything from scratch.
I've set io.skip.checksum.errors=true
and I still have the same error :(
What's wrong? :(
The dataset comes from a PostgreSQL database and is consistent.
On Tue, Feb
Local tables are like Hive tables in all other senses except that they are on
the local disk rather than HDFS. The only other difference I know of is that
when you call "drop table" on a local table, only the table's metadata
gets deleted. For tables on HDFS, the table data gets deleted as well.
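One way to see that drop behavior (a sketch; the table name and path are
made up, and this uses an EXTERNAL table over a file:// location, which
may not be exactly the mechanism meant above):

-- table data lives on the local filesystem, not HDFS
create external table local_logs (line string)
location 'file:///data/logs';
-- removes only the metastore entry; files under /data/logs remain
drop table local_logs;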
HDFS gives you the ability to distribute disk access for jobs across
computers.
You don't need to have the file in HDFS to run a Hive job.
create table test_table (test string);
load data local inpath '/root/1.csv' into table test_table;
-- find the hdfs dir of table test_table; in my case the data of
-- table test_table is saved under hdfs /root/hive/warehouse/test_table
describe extended test_table;
dfs -ls /root/hive/warehouse/test_table;
Thanks Ajo.
Please confirm whether my understanding is correct.
That means when I do "LOAD DATA *LOCAL* INPATH 'filepath' [OVERWRITE] INTO
TABLE tablename", the data is in the local file system. If I need to run Hive
queries (which in turn are converted to Map Reduce jobs), I need to pull the
data some other way.
Look for "local" in:
http://wiki.apache.org/hadoop/Hive/GettingStarted
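In short (a sketch of my understanding; file paths and the table name
are made up): LOCAL means the inpath is on the client's local filesystem
and the file is copied into the table's directory under
hive.metastore.warehouse.dir on HDFS; without LOCAL, the inpath is an
HDFS path and the file is moved into that directory.

-- copies the local file into the table's HDFS warehouse directory
load data local inpath '/home/amlan/data.csv' into table test_table;
-- moves an already-uploaded HDFS file into that same directory
load data inpath '/user/amlan/data.csv' into table test_table;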
-Ajo.
On Tue, Feb 1, 2011 at 3:15 AM, Amlan Mandal wrote:
> Hi All,
> I am a hive newbie.
>
> LOAD DATA *LOCAL* INPATH 'filepath' [OVERWRITE] INTO TABLE tablename
>
> When I use LOCAL keyword does hive create a hdfs file for
Hi All,
I am a Hive newbie.
LOAD DATA *LOCAL* INPATH 'filepath' [OVERWRITE] INTO TABLE tablename
When I use the LOCAL keyword, does Hive create an HDFS file for it?
I used the above statement to put data into a Hive table.
But I could not see any HDFS file in my hive.metastore.warehouse.dir (which
comes f
Hi,
I updated the JIRA. Kindly give your suggestions so that I can go
ahead and complete the task.
Thanks
On Tue, Feb 1, 2011 at 12:25 PM, bharath vissapragada
wrote:
> Thanks for replying, Namit.
>
> It is motivating to receive a mail from the authors of Hive :).
>
> I filed the JIRA based