Try
ALTER TABLE <table_name> SET TBLPROPERTIES('EXTERNAL'='TRUE');
It worked for me.
igor
decide.com
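A fuller sketch of igor's fix (the table name here is hypothetical; DESCRIBE FORMATTED lets you confirm the table type before and after):

```sql
-- Check the current table type (look for "Table Type: MANAGED_TABLE")
DESCRIBE FORMATTED my_table;

-- Flip the managed table to external; note that 'TRUE' should be
-- upper-case, as the check is case-sensitive in older Hive releases
ALTER TABLE my_table SET TBLPROPERTIES('EXTERNAL'='TRUE');

-- Verify: Table Type should now read EXTERNAL_TABLE
DESCRIBE FORMATTED my_table;
```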
On Mon, Aug 6, 2012 at 11:08 PM, Babe Ruth wrote:
> Hello,
> I created a managed table in Hive when I intended for it to be external,
> is it possible for me to change the table back to external?
>
> OR
Hi George,
You can save yourself one copy operation. Just create a new external table with
a different name, fill it with data (either by copying the files or with a query
like INSERT OVERWRITE DIRECTORY '/new/table/path' SELECT * FROM oldtable), drop
the old one and then rename the new one to the desired name:
ALTER TABLE newtable RENAME TO oldtable;
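The whole copy-and-rename workflow described above, sketched end to end with hypothetical table names and columns:

```sql
-- 1. Create the new external table at the desired location
CREATE EXTERNAL TABLE newtable (id INT, name STRING)
LOCATION '/new/table/path';

-- 2. Fill it from the old managed table
INSERT OVERWRITE TABLE newtable SELECT * FROM oldtable;

-- 3. Drop the old managed table (dropping a managed table deletes
--    its data, which is why the copy is done first)
DROP TABLE oldtable;

-- 4. Rename the new table to the old name
ALTER TABLE newtable RENAME TO oldtable;
```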
I tested that function in main by printing its result, and it works fine.
I am trying to get yesterday's date.
I need my query to be like this: since today's date is Aug 6th, the query
should be for Aug 5th. And this works fine for me:
SELECT * FROM REALTIME WHERE dt='20120805' LIMIT 10;
Hi, George,
I think that's the only way you can do now.
--
Best Regards,
longmans
At 2012-08-07 14:08:09,"Babe Ruth" wrote:
Hello,
I created a managed table in Hive when I intended for it to be external. Is
it possible for me to change the table back to external?
OR do I have to copy the
Hello, I created a managed table in Hive when I intended for it to be
external. Is it possible for me to change the table back to external?
Or do I have to copy the data to a new directory, drop the table, then copy it
back?
Thanks,
George
Hi Jamal,
Check if the function really returns what it should and that your data really
are in yyyyMMdd format. You can do this with a simple query like this:
SELECT dt, yesterdaydate('yyyyMMdd') FROM REALTIME LIMIT 1;
I don't see anything wrong with the function itself, it works well for me
(althou
Yes, I created that file manually. But the other files are fine; only that
particular file is having a problem.
Is there any way I can fix that file?
On Mon, Aug 6, 2012 at 9:51 PM, shashwat shriparv wrote:
> There are some extra information about which file system does not know,
> have you build th
If the output file is not too big, then ^A can be replaced using a simple
command like:
$ tr "\001" "," < src_file > out_file
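A minimal demonstration of Vinod's tr approach (the file names are hypothetical):

```shell
# Create a sample file with ^A (octal 001) field separators,
# which is what Hive writes by default
printf 'a\001b\001c\n' > src_file

# Replace every ^A with a comma
tr "\001" "," < src_file > out_file

cat out_file
# a,b,c
```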
Thanks,
Vinod
On Tue, Aug 7, 2012 at 10:27 AM, zuohua zhang wrote:
> Thanks so much! that did work. I have 200+ columns so it is quite
> an ugly thing. No shortcut?
Thanks so much! that did work. I have 200+ columns so it is quite
an ugly thing. No shortcut?
On Mon, Aug 6, 2012 at 9:50 PM, Vinod Singh wrote:
> Change the query to something like-
>
> INSERT OVERWRITE DIRECTORY '/outputable.txt'
> select concat(col1, ',', col2, ',', col3) from myoutp
There is some extra information in that file which the file system does not
understand. Did you build that file manually?
On Tue, Aug 7, 2012 at 6:01 AM, Techy Teck wrote:
> Yup that makes sense. But when I tried opening that file using-
>
> hadoop fs -text
> /apps/hdmi-technology/b_apdpds/real-time_new/20120
Change the query to something like-
INSERT OVERWRITE DIRECTORY '/outputable.txt'
select concat(col1, ',', col2, ',', col3) from myoutputtable;
That way the columns will be separated by commas.
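When the column list is long, concat_ws shortens this a little by taking the separator once instead of between every pair of columns; each column still has to be named, though. A sketch, reusing the table from this thread:

```sql
INSERT OVERWRITE DIRECTORY '/outputable.txt'
SELECT concat_ws(',', col1, col2, col3) FROM myoutputtable;
```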
Thanks,
Vinod
On Tue, Aug 7, 2012 at 10:16 AM, zuohua zhang wrote:
> I used the following that it won't help
I used the following, but it didn't help:
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
On Mon, Aug 6, 2012 at 9:43 PM, Vinod Singh wrote:
> Columns of a Hive table are separated by ^A character. Instead of doing a
> "SELECT * ", you may like to use concat function to have a separator of
> your
Columns of a Hive table are separated by ^A character. Instead of doing a
"SELECT * ", you may like to use concat function to have a separator of
your choice.
Thanks,
Vinod
On Tue, Aug 7, 2012 at 9:39 AM, zuohua zhang wrote:
> I have used the following to output a hive table to a file:
> DROP T
I have used the following to output a hive table to a file:
DROP TABLE IF EXISTS myoutputtable;
CREATE TABLE myoutputtable
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
AS
select
*
from originaltable;
INSERT OVERWRITE DIRECTORY '/outputable.txt'
select * from myoutputtable;
then
There is no built-in support for such things in Hive. You may like to explore
the possibility of doing this via a shell script or something else to
calculate the date dynamically.
Thanks,
Vinod
On Tue, Aug 7, 2012 at 12:09 AM, Techy Teck wrote:
> I am running *Hive 0.6 *and below is the content I have in
Yup that makes sense. But when I tried opening that file using-
hadoop fs -text
/apps/hdmi-technology/b_apdpds/real-time_new/20120731/PDS_HADOOP_REALTIME_EXPORT-part-3-2
I can see my file contents there. So what's wrong with that file? And is
there any way I can fix that error in that file usin
It could be that the file corresponding to the partition dt='20120731' got
corrupted.
This file as pointed in the error logs should be the culprit.
hdfs://ares-nn/apps/hdmi-technology/b_apdpds/real-time_new/20120731/PDS_HADOOP_REALTIME_EXPORT-part-3-2
Regards
Bejoy KS
Sent from handheld,
In the case here it literally takes the UNIX timestamp, formats it in
yyyy-MM-dd format and then subtracts the specified number of days (in this
case 1).
Sent from my Lumia 900
From: ext Techy Teck
Sent: 8/6/2012 3:37 PM
To: user@hive.apache.org
Subject: Re: (Get
I am writing a simple query on our hive table and I am getting some
exception-
select count(*) from table1 where dt='20120731';
java.io.IOException: IO error in map input file
hdfs://ares-nn/apps/hdmi-technology/b_apdpds/real-time_new/20120731/PDS_HADOOP_REALTIME_EXPORT-part-3-2
at
org
Thanks, Carla, for the suggestion. I am currently using Hive 0.6, and that
Hive version doesn't support variable substitution with hiveconf variables,
so that is the reason I was looking for some other alternative.
So you are saying, basically, that I should add your suggestion to my query
like below-
*select
If you are just using it in a query, you can do this:
date_sub(FROM_UNIXTIME(UNIX_TIMESTAMP(), 'yyyy-MM-dd'), 1)
I generally do my date calculations in a shell script and pass them in with a
hiveconf variable.
Carla
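A sketch of the shell-script approach Carla describes (the variable name and query file are hypothetical, and GNU date's -d option is assumed; note that ${hiveconf:...} substitution needs a Hive release newer than 0.6):

```shell
# Compute yesterday's date with GNU date, in the yyyyMMdd
# format used by the dt partition column in this thread
YESTERDAY=$(date -d "yesterday" +%Y%m%d)
echo "$YESTERDAY"

# Pass it to Hive, to be referenced inside the query as
# ${hiveconf:dt} (daily_report.hql is a hypothetical file):
# hive -hiveconf dt="$YESTERDAY" -f daily_report.hql
```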
-Original Message-
From: ext Yue Guan [mailto:pipeha...@gmail.com]
Sen
I guess you can use date_sub, but you have to get today's date from some
outside script.
On 08/06/2012 02:10 PM, Techy Teck wrote:
Is there any way to get the current date minus 1 in Hive, i.e. always
yesterday's date?
I am running Hive 0.6, and below is the content I have in the hivetest1.hql file.
set mapred.job.queue.name=hdmi-technology;
set mapred.output.compress=true;
set mapred.output.compression.type=BLOCK;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.LzoCodec;
add jar UserDefinedFunct
Oye, got it. Sorry.
RTFM: hive.exec.drop.ignorenonexistent
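The two options that follow from that setting, sketched with a hypothetical table name:

```sql
-- Option 1: make the statement itself tolerant of missing tables
DROP TABLE IF EXISTS maybe_missing_table;

-- Option 2: ask Hive to raise an error for missing tables,
-- so the web tool can catch and report it
set hive.exec.drop.ignorenonexistent=false;
DROP TABLE maybe_missing_table;  -- now fails if the table is absent
```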
On Aug 6, 2012, at 11:06 , Keith Wiley wrote:
> I'm wrapping hive in a web tool and would like to do some basic
> error-checking. If an attempt is made to drop a table that doesn't exist, I
> would like to show an error message. The
Is there any way to get the current date minus 1 in Hive, i.e. always
yesterday's date?
I'm wrapping Hive in a web tool and would like to do some basic error-checking.
If an attempt is made to drop a table that doesn't exist, I would like to show
an error message. The problem is, Hive doesn't seem to produce any sort of
error when dropping a table that doesn't exist. Furthermor
Hi
I am facing an issue while viewing special characters (such as é) using Hive.
If I view the file in HDFS (using hadoop fs -cat command), it is displayed
correctly as 'é', but when I select the data using Hive, this character alone
gets replaced by a question mark.
Do we have any solut
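One common cause of the question-mark symptom is an encoding mismatch: the bytes on disk are in one encoding (say Latin-1) while the Hive client decodes them as another (UTF-8), or vice versa. As a first check, a hedged sketch of inspecting and converting such a file with iconv (file names are hypothetical):

```shell
# 'é' in Latin-1 is the single byte 0xE9 (octal 351)
printf 'caf\351\n' > latin1_file

# Convert the file to UTF-8
iconv -f ISO-8859-1 -t UTF-8 latin1_file > utf8_file

cat utf8_file
# café
```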
If you don't want to manage Hive tables, it doesn't necessarily mean you
need to use vanilla MapReduce.
If your workflow is complex in Hive, it won't be easy to maintain if
everything is implemented directly in MapReduce.
I would recommend looking at libraries such as Cascading