Not sure--I just truncated the file list from the ls. That was the first file
(I just obfuscated the name).

The command I used to create the directories was:

hdfs dfs -mkdir 715 
then 
hdfs dfs -put myfile.csv 715

[cloudera@localhost data]$ hdfs dfs -ls /715
Found 1 items
-rw-r--r--   1 cloudera supergroup    7853975 2013-02-14 17:03 /715
[cloudera@localhost data]$ hdfs dfs -ls 715
Found 13 items
-rw-r--r--   1 cloudera cloudera    7853975 2013-02-15 00:41 715/40-file.csv
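
One thing worth noting when comparing those two listings: the /715 output
lists /715 itself, with -rw-r--r-- (plain file) permissions and no child path,
while 715 lists files inside a directory. hdfs dfs -test can confirm which one
is actually a directory:

hdfs dfs -test -d /715 && echo "directory" || echo "not a directory"
hdfs dfs -test -d 715 && echo "directory" || echo "not a directory"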

Thanks

________________________________
 From: Dean Wampler <dean.wamp...@thinkbiganalytics.com>
To: user@hive.apache.org; Joseph D Antoni <jdant...@yahoo.com> 
Sent: Friday, February 15, 2013 11:50 AM
Subject: Re: CREATE EXTERNAL TABLE Fails on Some Directories
 

Something's odd about this output; why is there no / in front of 715? I always 
get the full path when I run a -ls command. I would expect either:

/715/file.csv
or
/user/<me>/715/file.csv

Or is that what you meant by "(didn't leave rest of ls results)"?
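
To spell out the distinction (a sketch; the HDFS home directory is
/user/cloudera on the Cloudera VM, but yours may differ):

hdfs dfs -ls 715     # relative: resolves to /user/cloudera/715
hdfs dfs -ls /715    # absolute: resolves to /715 at the HDFS root

If those resolve to different places, your two listings are describing two
different things.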

dean


On Fri, Feb 15, 2013 at 10:45 AM, Joseph D Antoni <jdant...@yahoo.com> wrote:

>[cloudera@localhost data]$ hdfs dfs -ls 715
>Found 13 items
>-rw-r--r--   1 cloudera cloudera    7853975 2013-02-15 00:41 715/file.csv 
>(didn't leave rest of ls results)
>
>
>Thanks on the directory--I wasn't clear on that.
>
>Joey
>
>________________________________
> From: Dean Wampler <dean.wamp...@thinkbiganalytics.com>
>To: user@hive.apache.org; Joseph D Antoni <jdant...@yahoo.com> 
>Sent: Friday, February 15, 2013 11:37 AM
>Subject: Re: CREATE EXTERNAL TABLE Fails on Some Directories
> 
>
>
>You confirmed that 715 is an actual directory? It didn't become a file by 
>accident?
>
>
>By the way, you don't need to include the file name in the LOCATION. It will 
>read all the files in the directory.
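>
>Something along these lines, for example (a sketch based on the script in
>your message; the table name is just a placeholder):
>
>create external table table_715
>(col1 string,
>col2 string)
>row format
>delimited fields terminated by ','
>lines terminated by '\n'
>stored as textfile
>location '/715';  -- the directory only, no file name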
>
>
>dean
>
>
>On Fri, Feb 15, 2013 at 10:29 AM, Joseph D Antoni <jdant...@yahoo.com> wrote:
>
>>I'm trying to create a series of external tables for a time series of data
>>(using the prebuilt Cloudera VM).
>>
>>
>>The directory structure in HDFS is as such:
>>
>>
>>/711
>>/712
>>/713
>>/714
>>/715
>>/716
>>/717
>>
>>
>>Each directory contains the same set of files, from a different day. They 
>>were all put into HDFS using the following script:
>>
>>
>>for i in *;do hdfs dfs -put $i in $dir;done
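>>
>>(A note on that loop: as written, -put receives "$i", the literal word "in",
>>and "$dir", so "in" is treated as a second local source. If the stray "in"
>>is a typo in this message, the intended loop would be something like:
>>
>>for i in *; do hdfs dfs -put "$i" "$dir"; done
>>
>>with $dir set to the target day directory.)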
>>
>>
>>They all show up with the same ownership/perms in HDFS.
>>
>>
>>Going into Hive to build the tables, I built a set of scripts to do the
>>loads--then ran a sed (changing 711 to 712, 713, etc.) to generate a file for
>>each day. All of my loads work, EXCEPT for 715 and 716.
>>
>>
>>Script is as follows:
>>
>>
>>create external table 715_table_name
>>(col1 string,
>>col2 string)
>>row format
>>delimited fields terminated by ','
>>lines terminated by '\n'
>>stored as textfile
>>location '/715/file.csv';
>>
>>
>>This is failing with:
>>
>>
>>Error in Metadata MetaException(message:Got except: 
>>org.apache.hadoop.fs.FileAlreadyExistsException Parent Path is not a 
>>directory: /715 715...
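>>
>>(For context: "Parent Path is not a directory" generally means a component
>>of the path--here /715--already exists in HDFS as a regular file, so nothing
>>can be created beneath it.)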
>>
>>
>>Like I mentioned, it works for all of the other directories except 715 and
>>716. Any thoughts on a troubleshooting path?
>>
>>
>>Thanks
>>
>>
>>Joey D'Antoni
>
>
>
>-- 
>Dean Wampler, Ph.D.
>thinkbiganalytics.com
>+1-312-339-1330


-- 
Dean Wampler, Ph.D.
thinkbiganalytics.com
+1-312-339-1330
