From version 0.8.0 onward, the release documentation is not found:
http://hive.apache.org/docs/r0.10.0/
Not Found
The requested URL /docs/r0.10.0/ was not found on this server.
--
Apache/2.4.4 (Unix) OpenSSL/1.0.0g Server at hive.apache.org Port 80
It looks like your row is of this format:
2415022|OKJNECAA|1900-01-02 02:00:21.0|0|1|1|1900|1|1|2|1|1900|1|1|Monday|1900Q1|N|N|Y|2415021|2415020|2414657|2414930|N|N|N|N|N|
where your timestamp is in the third field; however, your table only has a
single column, so Hive is reading your first field into that column, not the timestamp.
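For illustration, a sketch of a DDL matching that row layout (the first two column names and types are assumptions, not from the thread; Hive's delimited text format simply ignores the trailing extra fields):

create external table date_ts
(
d_date_sk int,             -- first field: 2415022 (assumed name/type)
d_date_id string,          -- second field: OKJNECAA (assumed name/type)
d_datetimestamp timestamp  -- third field: 1900-01-02 02:00:21.0
)
row format delimited fields terminated by '|'
location '/hive/tpcds/date_ts';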
Dileep,
Can you use a more contemporary timestamp? Something after Jan 1, 1970
GMT, say Jan 1st, 2013?
Let us know what you see.
On Tue, Mar 5, 2013 at 2:56 PM, Dileep Kumar wrote:
> --hdfs dfs -mkdir /hive/tpcds/date_ts
>
> create external table date_ts
> (
> d_datetimest…
Hi All,
I am looking for some help in using a timestamp column and I'm not sure why I
am getting this error.
Here is how I created the table and how I am querying it:
--hdfs dfs -mkdir /hive/tpcds/date_ts
create external table date_ts
(
d_datetimestamp timestamp
)
row format delimited fi…
--hdfs dfs -mkdir /hive/tpcds/date_ts
create external table date_ts
(
d_datetimestamp timestamp
)
row format delimited fields terminated by '|'
location '/hive/tpcds/date_ts';
[cloudera@localhost tmp-work]$ hive -e "select * from date_ts"
Logging initialized using configuration in f…
Nice, yeah, that would do it.
On Tue, Mar 5, 2013 at 1:26 PM, Mark Grover wrote:
> I typically change my query to query from a limited version of the whole
> table.
>
> Change
>
> select really_expensive_select_clause
> from
> really_big_table
> where
> something=something
> group by something=some…
I typically change my query to query from a limited version of the whole table.
Change
select really_expensive_select_clause
from
really_big_table
where
something=something
group by something=something
to
select really_expensive_select_clause
from
(
select
*
from
really_big_table
limit 100
)t
where
something=something
group by something=something
Unfortunately, it will still go through the whole thing, then just limit
the output. However, there's a flag that I think only works in more recent
Hive releases:
set hive.limit.optimize.enable=true
This is supposed to apply limiting earlier in the data stream, so it will
give different results than limiting just the output.
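For illustration, a sketch combining the flag with the query shape from Mark's example (whether the flag is honored depends on your Hive version):

set hive.limit.optimize.enable=true;
select really_expensive_select_clause
from really_big_table
where something=something
group by something=something
limit 100;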
Using the Hive sampling feature would also help. This is exactly what that
feature is designed for.
Chuck
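For illustration, a sketch using bucket sampling over rand(), which samples rows even on an unbucketed table (query shape reused from Mark's example):

select really_expensive_select_clause
from really_big_table tablesample(bucket 1 out of 100 on rand()) t
where something=something
group by something=something;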
From: Kyle B [mailto:kbi...@gmail.com]
Sent: Tuesday, March 05, 2013 1:45 PM
To: user@hive.apache.org
Subject: Hive sample test
Hello,
I was wondering if there is a way to quick-verify a…
Just add a limit 1 to the end of your query.
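For illustration, with the query shape from earlier in the thread:

select really_expensive_select_clause
from really_big_table
where something=something
group by something=something
limit 1;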
On Mar 5, 2013, at 1:45 PM, Kyle B wrote:
> Hello,
>
> I was wondering if there is a way to quick-verify a Hive query before it is
> run against a big dataset? The tables I am querying against have millions of
> records, and I'd like to verify m…
Hello,
I was wondering if there is a way to quick-verify a Hive query before it is
run against a big dataset? The tables I am querying against have millions
of records, and I'd like to verify my Hive query before I run it against
all records.
Is there a way to test the query against a small subset of the data?
Sai,
This is because you are using the default embedded derby database as
metastore. When using the embedded derby metastore, the metadata is
stored in a relative location.
See the value of javax.jdo.option.ConnectionURL. By default, its value
is jdbc:derby:;databaseName=metastore_db;create=true, so metastore_db is
created relative to whatever directory you start Hive from.
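For illustration, a sketch of pinning the metastore to an absolute path in hive-site.xml (the path shown here is an assumption; pick any fixed directory):

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/home/cloudera/metastore_db;create=true</value>
</property>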
When you create an external table, the original data ('/tmp/states' in
this case) is NOT copied to the warehouse folder (or in fact any other
folder for that matter). So you can find it in '/tmp/states' itself.
On Tue, Mar 5, 2013 at 10:26 PM, Sai Sai wrote:
> I have created an external table like bel…
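For illustration, two ways to confirm where the table's data lives (assuming the 'states' table from this thread):

hdfs dfs -ls /tmp/states
hive -e "describe formatted states;"

The Location: field in the describe formatted output shows the table's HDFS path.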
Thanks, I figured out this is in /tmp/states.
Thanks for your attention.
From: Sai Sai
To: "user@hive.apache.org"
Sent: Tuesday, 5 March 2013 8:56 AM
Subject: Re: Location of external table in hdfs
I have created an external table like below and am wondering where
/tmp/states is in HDFS.
On Tue, Mar 5, 2013 at 10:56 AM, Sai Sai wrote:
> I have created an external table like below and wondering where (folder)
> in hdfs i can find this:
>
> CREATE EXTERNAL TABLE states(abbreviation string, full_name string) ROW
> FORMAT DELIMITED FIELDS TERMINATED BY '\t' LOCA…
I have created an external table like below and am wondering in which folder
in HDFS I can find this:
CREATE EXTERNAL TABLE states(abbreviation string, full_name string) ROW FORMAT
DELIMITED FIELDS TERMINATED BY '\t' LOCATION '/tmp/states' ;
Any help is really appreciated.
Thanks
Sai
From the exceptions near the bottom, it looks like you're inserting data
that doesn't have unique keys, so it could be a data problem.
On Tue, Mar 5, 2013 at 7:54 AM, Ajit Kumar Shreevastava <
ajit.shreevast...@hcl.com> wrote:
> Hi All,
>
> ** **
>
> I am facing following issue while export…
Hi All,
I am facing the following issue while exporting a table from Hive to Oracle.
Importing tables from Oracle to Hive and HDFS works fine. Please let me know
where I am going wrong. I am pasting my screen output here.
[hadoop@NHCLT-PC44-2 sqoop-oper]$ sqoop export --connect
jdbc:oracle:thin:@10.99.42.
Thanks for your help Nitin.
I have restarted my VM and tried again and it appears to work.
Thanks again.
Sai
From: Sai Sai
To: "user@hive.apache.org"
Sent: Tuesday, 5 March 2013 4:42 AM
Subject: Re: SemanticException Line 1:17 issue
Thanks for your help
This file /tmp/o_small.tsv looks like it exists on your local filesystem.
Try load data local inpath;
it should work.
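For illustration (the table name here is assumed):

load data local inpath '/tmp/o_small.tsv' into table o_small;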
On Tue, Mar 5, 2013 at 6:12 PM, Sai Sai wrote:
> Thanks for your help Nitin, here is what it displays:
>
> satish@ubuntu:~/work/hadoop-1.0.4/bin$ $HADOOP_HOME/bin/hadoop dfs -l…
Thanks for your help Nitin, here is what it displays:
satish@ubuntu:~/work/hadoop-1.0.4/bin$ $HADOOP_HOME/bin/hadoop dfs -ls /tmp/
Warning: $HADOOP_HOME is deprecated.
Found 3 items
drwxr-xr-x - satish supergroup 0 2013-03-05 04:12 /tmp/hive-satish
-rw-r--r-- 1 satish supergroup
A select statement without a where clause is just an HDFS cat command. Hive
will not run MapReduce for that.
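For illustration (using the states table from the other thread as a stand-in):

select * from states;                            -- plain fetch, no MapReduce job
select * from states where abbreviation = 'CA';  -- launches a MapReduce job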
On Tue, Mar 5, 2013 at 5:48 PM, Tim Bittersohl wrote:
> Ok, it works.
>
> For testing, I fired a create table command and a select without a where
> clause. Both don’t result in MapReduce
It exists, but where? On your HDFS or local Linux filesystem? If you are
checking the file with ls -l /tmp/, then add the word LOCAL to your load
statement. Can you provide the output of $HADOOP_HOME/bin/hadoop dfs -ls /tmp/?
LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename
If the keyword LOCAL is specified, the file is loaded from the local
filesystem; otherwise Hive looks for it on HDFS.
Ok, it works.
For testing, I fired a create table command and a select without a where
clause. Both don't result in MapReduce jobs... with a where clause, there is
a job created now.
Thanks
From: Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Tuesday, March 5, 2013 12:48
To: user@hive.apache.org
Yes Nitin, it exists... but I am still getting the same issue.
From: Nitin Pawar
To: user@hive.apache.org; Sai Sai
Sent: Tuesday, 5 March 2013 4:14 AM
Subject: Re: SemanticException Line 1:17 issue
this file /tmp/o_small.tsv is on your local filesystem or hdfs?
this file /tmp/o_small.tsv is on your local filesystem or hdfs?
On Tue, Mar 5, 2013 at 5:39 PM, Sai Sai wrote:
> Hello
>
> I have been stuck on this issue for quite some time and was wondering if
> anyone sees any problem with this that i am not seeing:
>
> I have verified the file exists here
Hello
I have been stuck on this issue for quite some time and was wondering if anyone
sees any problem with this that I am not seeing:
I have verified the file exists here and have also manually pasted the file
into the tmp folder, but I am still running into the same issue.
I am also wondering whic…
Hello
I have noticed that when I execute the following command from the Hive shell in
different folders it behaves in different ways, and was wondering if this is right:
show tables;
From the bin folder under my Hive install folder it just shows tab_name:
myUser@ubuntu:~/work/hive-0.1…
If the job is submitted to Hadoop, it will come up on the JobTracker.
Unless you are slow checking it and your job history retention is 0, each
Hive query submitted to the JobTracker will be there in the job history.
On Tue, Mar 5, 2013 at 4:58 PM, Tim Bittersohl wrote:
> Hi,
>
> ** **
>
> I do have the…
Hi,
I have the following problem monitoring my Hive queries in Hadoop:
I create a server using the Hive library which connects to a Hadoop cluster
(file system, job tracker and Hive metastore are set up on this cluster).
The needed parameters for the Hive server I've set in the configura…