why is the apache hive 0.10 documentation not found?

2013-03-05 Thread 周梦想
Since version 0.8.0, the release documentation has not been found. http://hive.apache.org/docs/r0.10.0/ returns: Not Found. The requested URL /docs/r0.10.0/ was not found on this server. -- Apache/2.4.4 (Unix) OpenSSL/1.0.0g Server at hive.apache.org Port 80

Re: Not able to use the timestamp columns

2013-03-05 Thread Morgan Reece
It looks like your row is of this format: 2415022|OKJNECAA|1900-01-02 02:00:21.0|0|1|1|1900|1|1|2|1|1900|1|1|Monday|1900Q1|N|N|Y|2415021|2415020|2414657|2414930|N|N|N|N|N| Your timestamp is in the third field; however, your table only has a single column. Hive is reading…
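
The fix implied here is to declare one column per delimited field. A minimal sketch covering just the leading fields, assuming this is the TPC-DS date_dim dump the other messages in this thread suggest (column names are illustrative):

  create external table date_ts (
    d_date_sk       int,       -- 2415022
    d_date_id       string,    -- OKJNECAA
    d_datetimestamp timestamp  -- 1900-01-02 02:00:21.0
    -- ...remaining pipe-delimited fields omitted...
  )
  row format delimited fields terminated by '|'
  location '/hive/tpcds/date_ts';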

Re: Hive insert into RCFILE issue with timestamp columns

2013-03-05 Thread Mark Grover
Dileep, Can you use a more contemporary timestamp? Something after Jan 1, 1970 GMT, say Jan 1st, 2013? Let us know what you see. On Tue, Mar 5, 2013 at 2:56 PM, Dileep Kumar wrote: > --hdfs dfs -mkdir /hive/tpcds/date_ts > > create external table date_ts > ( > d_datetimest…

Not able to use the timestamp columns

2013-03-05 Thread Dileep Kumar
Hi All, I am looking for some help in using a timestamp column and am not sure why I am getting this error. Here is how I created the table and how I am querying it: --hdfs dfs -mkdir /hive/tpcds/date_ts create external table date_ts ( d_datetimestamp ) row format delimited fi…
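
As shown, the DDL appears to declare a column with no type, which Hive will reject. A minimal corrected sketch, assuming the single column is meant to hold the timestamp:

  create external table date_ts (d_datetimestamp timestamp)
  row format delimited fields terminated by '|'
  location '/hive/tpcds/date_ts';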

Re: Hive insert into RCFILE issue with timestamp columns

2013-03-05 Thread Dileep Kumar
--hdfs dfs -mkdir /hive/tpcds/date_ts create external table date_ts ( d_datetimestamp ) row format delimited fields terminated by '|' location '/hive/tpcds/date_ts'; [cloudera@localhost tmp-work]$ hive -e "select * from date_ts" Logging initialized using configuration in f…

Re: Hive sample test

2013-03-05 Thread Dean Wampler
Nice, yeah, that would do it. On Tue, Mar 5, 2013 at 1:26 PM, Mark Grover wrote: > I typically change my query to select from a limited version of the whole > table. > > Change > > select really_expensive_select_clause > from > really_big_table > where > something=something > group by something=some…

Re: Hive sample test

2013-03-05 Thread Mark Grover
I typically change my query to select from a limited version of the whole table. Change select really_expensive_select_clause from really_big_table where something=something group by something=something to select really_expensive_select_clause from ( select * from really_big_table limit 100 )t w…
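
Spelled out, the rewrite looks something like this (a sketch; table and column names are placeholders):

  -- before: scans the full table
  select really_expensive_select_clause
  from really_big_table
  where something = something_else
  group by some_col;

  -- after: the same query shape against a 100-row slice
  select really_expensive_select_clause
  from (select * from really_big_table limit 100) t
  where something = something_else
  group by some_col;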

Re: Hive sample test

2013-03-05 Thread Dean Wampler
Unfortunately, it will still go through the whole thing, then just limit the output. However, there's a flag that I think only works in more recent Hive releases: set hive.limit.optimize.enable=true This is supposed to apply limiting earlier in the data stream, so it will give different results t…
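
For reference, the flag is set per session; note Dean's caveat that limiting earlier in the stream can change which rows you see:

  set hive.limit.optimize.enable=true;
  select * from really_big_table limit 100;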

RE: Hive sample test

2013-03-05 Thread Connell, Chuck
Using the Hive sampling feature would also help. This is exactly what that feature is designed for. Chuck From: Kyle B [mailto:kbi...@gmail.com] Sent: Tuesday, March 05, 2013 1:45 PM To: user@hive.apache.org Subject: Hive sample test Hello, I was wondering if there is a way to quick-verify a…
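
The sampling feature Chuck refers to is TABLESAMPLE. A minimal sketch that needs no pre-bucketed table (the table name is a placeholder):

  -- pull roughly 1/100th of the rows, hashing on rand():
  select * from really_big_table tablesample(bucket 1 out of 100 on rand()) t;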

Re: Hive sample test

2013-03-05 Thread Joey D'Antoni
Just add a limit 1 to the end of your query. On Mar 5, 2013, at 1:45 PM, Kyle B wrote: > Hello, > > I was wondering if there is a way to quick-verify a Hive query before it is > run against a big dataset? The tables I am querying against have millions of > records, and I'd like to verify m…

Hive sample test

2013-03-05 Thread Kyle B
Hello, I was wondering if there is a way to quick-verify a Hive query before it is run against a big dataset? The tables I am querying against have millions of records, and I'd like to verify my Hive query before I run it against all records. Is there a way to test the query against a small subse…

Re: show tables in bin does not display the tables

2013-03-05 Thread Mark Grover
Sai, This is because you are using the default embedded Derby database as the metastore. When using the embedded Derby metastore, the metadata is stored in a relative location, so each working directory you start hive from gets its own metastore_db. See the value of javax.jdo.option.ConnectionURL. By default, its value is jdbc:derby:;databaseName=metastore_db;create=true, m…
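
One common fix is to point the embedded Derby metastore at an absolute path in hive-site.xml, so every working directory sees the same metadata (a sketch; the path is illustrative):

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:derby:;databaseName=/home/myUser/metastore_db;create=true</value>
  </property>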

Re: Location of external table in hdfs

2013-03-05 Thread bharath vissapragada
When you create an external table, the original data ('/tmp/states' in this case) is NOT copied to the warehouse folder (or, in fact, any other folder for that matter). So you can find it in '/tmp/states' itself. On Tue, Mar 5, 2013 at 10:26 PM, Sai Sai wrote: > I have created an external table like bel…

Re: Location of external table in hdfs

2013-03-05 Thread Sai Sai
Thanks, I figured out that this is in /tmp/states. Thanks for your attention. From: Sai Sai To: "user@hive.apache.org" Sent: Tuesday, 5 March 2013 8:56 AM Subject: Re: Location of external table in hdfs I have created an external table like below and wondering where…

Re: Location of external table in hdfs

2013-03-05 Thread Dean Wampler
/tmp/states in HDFS. On Tue, Mar 5, 2013 at 10:56 AM, Sai Sai wrote: > I have created an external table like below and wondering where (folder) > in hdfs i can find this: > > CREATE EXTERNAL TABLE states(abbreviation string, full_name string) ROW > FORMAT DELIMITED FIELDS TERMINATED BY '\t' LOCA…

Re: Location of external table in hdfs

2013-03-05 Thread Sai Sai
I have created an external table like below and am wondering in which HDFS folder I can find it: CREATE EXTERNAL TABLE states(abbreviation string, full_name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LOCATION '/tmp/states' ; Any help is really appreciated. Thanks Sai
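
A quick way to confirm where the data lives: an external table keeps its data at whatever LOCATION points to, so it can be listed directly (a sketch using the Hadoop CLI of that era):

  hadoop dfs -ls /tmp/states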

Re: Error while exporting table data from hive to Oracle through Sqoop

2013-03-05 Thread Dean Wampler
From the exceptions near the bottom, it looks like you're inserting data that doesn't have unique keys, so it could be a data problem. On Tue, Mar 5, 2013 at 7:54 AM, Ajit Kumar Shreevastava < ajit.shreevast...@hcl.com> wrote: > Hi All, > > I am facing the following issue while export…

Error while exporting table data from hive to Oracle through Sqoop

2013-03-05 Thread Ajit Kumar Shreevastava
Hi All, I am facing the following issue while exporting a table from Hive to Oracle. Importing tables from Oracle to Hive and HDFS works fine. Please let me know where I went wrong. I am pasting my screen output here. [hadoop@NHCLT-PC44-2 sqoop-oper]$ sqoop export --connect jdbc:oracle:thin:@10.99.42.…
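
For context, a typical Sqoop export invocation looks something like this (host, credentials, table, and path are placeholders, not the poster's values):

  sqoop export \
    --connect jdbc:oracle:thin:@//db-host:1521/ORCL \
    --username scott --password tiger \
    --table TARGET_TABLE \
    --export-dir /user/hive/warehouse/source_table \
    --input-fields-terminated-by '\001'

A unique-constraint violation during such an export usually means the exported rows collide with keys already present in the Oracle table, as Dean suggests in his reply.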

Re: Done SemanticException Line 1:17 issue

2013-03-05 Thread Sai Sai
Thanks for your help Nitin. I have restarted my VM and tried again and it appears to work. Thanks again. Sai From: Sai Sai To: "user@hive.apache.org" Sent: Tuesday, 5 March 2013 4:42 AM Subject: Re: SemanticException Line 1:17 issue Thanks for your help…

Re: SemanticException Line 1:17 issue

2013-03-05 Thread Nitin Pawar
It looks like the file /tmp/o_small.tsv exists on your local filesystem; try LOAD DATA LOCAL INPATH and it should work. On Tue, Mar 5, 2013 at 6:12 PM, Sai Sai wrote: > Thanks for your help Nitin, here is what it displays: > > satish@ubuntu:~/work/hadoop-1.0.4/bin$ $HADOOP_HOME/bin/hadoop dfs -l…

Re: SemanticException Line 1:17 issue

2013-03-05 Thread Sai Sai
Thanks for your help Nitin, here is what it displays: satish@ubuntu:~/work/hadoop-1.0.4/bin$ $HADOOP_HOME/bin/hadoop dfs -ls /tmp/ Warning: $HADOOP_HOME is deprecated. Found 3 items drwxr-xr-x   - satish supergroup  0 2013-03-05 04:12 /tmp/hive-satish -rw-r--r--   1 satish supergroup…

Re: Get the job id for a hive query

2013-03-05 Thread Nitin Pawar
A select statement without a where clause is just an HDFS cat command; it will not run MapReduce. On Tue, Mar 5, 2013 at 5:48 PM, Tim Bittersohl wrote: > Ok, it works. > > For testing, I fired a create table command and a select without a where > clause. Neither resulted in a MapReduce…
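
To illustrate the distinction on Hive of this era (the table name is a placeholder):

  -- served as a simple fetch from HDFS, no MapReduce job:
  select * from my_table;

  -- the filter forces a MapReduce job, so it appears on the JobTracker:
  select * from my_table where id = 42;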

Re: SemanticException Line 1:17 issue

2013-03-05 Thread Nitin Pawar
It exists, but where? On your HDFS or your local Linux filesystem? If you checked the file with ls -l /tmp/, then add the keyword LOCAL. Can you provide the output of $HADOOP_HOME/bin/hadoop dfs -ls /tmp/ ? The syntax is LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename. If the keyword LOCAL is spec…
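
A concrete sketch of the two variants, using the file from this thread (the table name is a placeholder):

  -- file lives on the local Linux filesystem:
  load data local inpath '/tmp/o_small.tsv' into table my_table;

  -- file already lives on HDFS:
  load data inpath '/tmp/o_small.tsv' into table my_table;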

AW: Get the job id for a hive query

2013-03-05 Thread Tim Bittersohl
Ok, it works. For testing, I fired a create table command and a select without a where clause. Neither resulted in a MapReduce job... with a where clause, a job is created now. Thanks From: Nitin Pawar [mailto:nitinpawar...@gmail.com] Sent: Tuesday, March 5, 2013 12:48 To: use…

Re: SemanticException Line 1:17 issue

2013-03-05 Thread Sai Sai
Yes, Nitin, it exists... but I am still getting the same issue. From: Nitin Pawar To: user@hive.apache.org; Sai Sai Sent: Tuesday, 5 March 2013 4:14 AM Subject: Re: SemanticException Line 1:17 issue Is this file /tmp/o_small.tsv on your local filesystem or HDFS?

Re: SemanticException Line 1:17 issue

2013-03-05 Thread Nitin Pawar
Is this file /tmp/o_small.tsv on your local filesystem or HDFS? On Tue, Mar 5, 2013 at 5:39 PM, Sai Sai wrote: > Hello > > I have been stuck on this issue for quite some time and was wondering if > anyone sees any problem with this that I am not seeing: > > I have verified the file exists here…

Re: SemanticException Line 1:17 issue

2013-03-05 Thread Sai Sai
Hello, I have been stuck on this issue for quite some time and was wondering if anyone sees any problem with this that I am not seeing: I have verified the file exists here and have also manually pasted the file into the tmp folder, but am still running into the same issue. I am also wondering whic…

Re: show tables in bin does not display the tables

2013-03-05 Thread Sai Sai
Hello, I have noticed that when I execute the following command from the hive shell in different folders it behaves in different ways, and was wondering if this is right: show tables; From the bin folder under my hive install folder it just shows tab_name: myUser@ubuntu:~/work/hive-0.1…

Re: Get the job id for a hive query

2013-03-05 Thread Nitin Pawar
If the job is submitted to Hadoop, it will come up on the JobTracker. Unless you check too late and your job-history retention is 0, each Hive query submitted to the JobTracker will be there in the job history. On Tue, Mar 5, 2013 at 4:58 PM, Tim Bittersohl wrote: > Hi, > > I do have the…

AW: Get the job id for a hive query

2013-03-05 Thread Tim Bittersohl
Hi, I have the following problem monitoring my Hive queries in Hadoop. I create a server using the Hive library, which connects to a Hadoop cluster (the file system, job tracker, and Hive metastore are set up on this cluster). The needed parameters for the Hive server I've set in the configura…