btw, what error are you getting with ANTLR or HiveParser? Is a dependency
class file missing?
Thanks
Alok
On Thu, Nov 6, 2014 at 12:59 PM, Alok Kumar wrote:
> As Devopam suggested, do it the right way from the start.
>
> PS: if the queries are written manually, you could get the tables manually too. :)
That would be great!
On Nov 5, 2014, at 10:49 PM, Nitin Pawar wrote:
> Maybe a JIRA?
>
> I remember having my own UDF for doing this. If possible, I will share the
> code.
>
> On Thu, Nov 6, 2014 at 6:22 AM, Jason Dere wrote:
> Hive should probably at least provide a timezone option to from_unixtime().
As Devopam suggested, do it the right way from the start.
PS: if the queries are written manually, you could get the tables manually too. :)
Thanks
Alok
On Wed, Nov 5, 2014 at 6:44 PM, Devopam Mittra wrote:
> hi Ritesh,
> Please reconsider your entire design; it is better to do it now than to let
> it become unmanageable later.
Maybe a JIRA?
I remember having my own UDF for doing this. If possible, I will share the
code.
On Thu, Nov 6, 2014 at 6:22 AM, Jason Dere wrote:
> Hive should probably at least provide a timezone option to
> from_unixtime().
> As you mentioned, Hive doesn't really do any timezone handling; it just
> assumes things are in the system's local timezone.
Hive should probably at least provide a timezone option to from_unixtime().
As you mentioned, Hive doesn't really do any timezone handling; it just
assumes things are in the system's local timezone. It will be a bit of a
bigger project to add better timezone handling to Hive timestamps.
On Nov 5
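For reference, a minimal sketch of what such a UDF could look like. It is
illustrative only (not Nitin's actual code) and assumes the reflection-based
org.apache.hadoop.hive.ql.exec.UDF API; the class and function names are made
up for the example.

import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Hypothetical UDF: behaves like from_unixtime(), but takes an explicit
// timezone ID instead of assuming the system's local timezone.
public final class FromUnixTimeTZ extends UDF {
  public Text evaluate(Long epochSeconds, Text tzId) {
    if (epochSeconds == null || tzId == null) {
      return null;
    }
    SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    // Note: TimeZone.getTimeZone() silently falls back to GMT for unknown IDs.
    fmt.setTimeZone(TimeZone.getTimeZone(tzId.toString()));
    return new Text(fmt.format(new Date(epochSeconds * 1000L)));
  }
}

It could then be wired up the usual way with ADD JAR and CREATE TEMPORARY
FUNCTION and called as, e.g., from_unixtime_tz(0, 'Europe/Dublin').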
Hi,
Can anyone help me with the following error?
39552 [Thread-31] INFO org.apache.sqoop.hive.HiveImport - FAILED:
SemanticException Line 2:17 Invalid path
''hdfs://bwdhdbpr059.hadoop.b2wdigital:8020/user/hdfs/RDW12DM.CHAIN_QG''
39905 [main] ERROR org.apache.sqoop.tool.ImportTool - Encounter
The following worked for me:
> CREATE TABLE dropme(key int, value string) PARTITIONED BY (yr int, mth int);
> SET hive.exec.dynamic.partition.mode=nonstrict;
> INSERT INTO TABLE dropme PARTITION(yr, mth)
> SELECT stack(1, 2, 'val', 2014, 5) AS (key, value, yr, mth) FROM singlerow;
Note that you nee
Based on the documentation
https://cwiki.apache.org/confluence/display/Hive/DynamicPartitions
the following CTAS should work:
CREATE TABLE dropme(key int, value string) PARTITIONED BY (yr int, mth int) AS
SELECT 2 key, 'val' value, 2014 yr, 5 mth FROM singlerow;
but instead it gives me the error:
I see⦠and confirm, it's consistent with Linux/Unix output I get:
date -r 0
Thu 1 Jan 1970 01:00:00 IST
date
Wed 5 Nov 2014 14:49:52 GMT
Did some digging and it actually makes sense. It turns out Ireland didn't
observe daylight saving time in the years 1968-1971, as it was permanently
set to GMT+1 (IST).
An
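For what it's worth, the same thing can be reproduced with JDK 8's java.time
(a quick cross-check, assuming the JDK's bundled tz database):

import java.time.Instant;
import java.time.ZoneId;

// Reproduces the "date -r 0" observation above: tzdata puts Ireland on
// GMT+1 (IST) year-round in that period, so the Unix epoch is 01:00 local.
public class EpochInDublin {
  public static void main(String[] args) {
    System.out.println(Instant.EPOCH.atZone(ZoneId.of("Europe/Dublin")));
    // prints: 1970-01-01T01:00+01:00[Europe/Dublin]
  }
}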
hi Ritesh,
Please reconsider your entire design; it is better to do it now than to let
it become unmanageable later.
If unavoidable, please use a metadata-based approach: pre-calculate and keep
the list of tables that you need to refresh prior to firing a query on
them (?)
Hope it helps.
hey Alok,
I want to do this so that I can refresh the dependent tables before I run my
query, so that the query runs on current data.
The queries are written manually, so the only way to do this is to parse the
query.
Isn't Hive somewhat different from SQL? I have already tr
Hi,
I'm trying to use Hive (0.13)'s msck repair table command to recover
partitions, and it only lists the partitions not added to the metastore
instead of adding them to the metastore as well.
Here's the output of the command:
partitions not in metastore externalexample:CreatedAt=26 04%3A50%3A56 UTC
2014/pro
Hi,
Why would you want this in the first place? (just curious)
A few thoughts -
a) Try to get it from the piece of code where these queries are being
generated [if not static in code!]; that would be the best place to get it.
b) [if you don't have access to a)] - try http://zql.sourceforge.net/ ,
i
Hello,
I am trying to parse Hive queries so that I can get the table names on which
the query depends.
I have tried the following:
1) downloaded the grammar and used ANTLR to generate the lexer and parser,
but there are errors like the following when I try to build it:
..
symbol:
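Instead of regenerating the parser from the grammar, it may be simpler to
reuse Hive's own parser from hive-exec. A minimal sketch of the idea (assumes
a Hive 0.13-era hive-exec on the classpath; the class name here is
illustrative):

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.HiveParser;
import org.apache.hadoop.hive.ql.parse.ParseDriver;

// Collects every table reference (TOK_TABNAME node) from the AST produced
// by Hive's own ParseDriver. Note it does not distinguish source tables
// from INSERT/CTAS targets, and CTE names will show up as "tables" too.
public class TableNameExtractor {
  public static Set<String> tables(String query) throws Exception {
    ASTNode root = new ParseDriver().parse(query);
    Set<String> names = new HashSet<String>();
    collect(root, names);
    return names;
  }

  private static void collect(ASTNode node, Set<String> names) {
    if (node == null) {
      return;
    }
    if (node.getType() == HiveParser.TOK_TABNAME) {
      // Children are [dbName,] tableName; join them with a dot.
      StringBuilder sb = new StringBuilder();
      for (int i = 0; i < node.getChildCount(); i++) {
        if (i > 0) {
          sb.append('.');
        }
        sb.append(node.getChild(i).getText());
      }
      names.add(sb.toString());
    }
    for (int i = 0; i < node.getChildCount(); i++) {
      collect((ASTNode) node.getChild(i), names);
    }
  }
}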
looks good to me
thanks for the share
On Wed, Nov 5, 2014 at 5:15 PM, Devopam Mittra wrote:
> hi Nitin,
> Thanks for the vital input around Hadoop Home addition. At times such
> things totally go off the radar when you have customized your own
> environment.
>
> As suggested I have shared this
hi Nitin,
Thanks for the vital input around Hadoop Home addition. At times such
things totally go off the radar when you have customized your own
environment.
As suggested I have shared this on github :
https://github.com/devopam/hadoopHA
apologies if there is any problem on github as I have limit
Hello Juan,
As you can see, the problem comes from the permission roles; I have had an
error like this before and got past it.
Check and compare:
1. Your Hadoop installation: was it done as 'root' or as another user (and
is this the super user)?
2. Your Hive execution: which user runs the Hive script?
3.
I have a secured and HA HDFS cluster, and I have been trying to execute a
join operation with the beeline CLI.
My issue is that it tries to execute MapReduce locally instead of via YARN.
I set the parameters
mapreduce.framework.name
yarn
mapred.job.tracker
anythin
Note: I've looked into the HiveRunner and hive_test projects as mentioned in
my SO post. However, neither of these supports a CDH Hadoop version, which is
what I need to use. Specifically, my CDH version is 2.0.0-cdh4.7.0.
Best Regards,
Nishant Kelkar
On Wed, Nov 5, 2014 at 12:29 AM, Nishant Kelkar wrote:
Hey All,
I was looking into integration testing Hive for some of my code. What I need
is something that creates an in-memory HDFS ecosystem and an on-the-fly
HiveServer2 instance along with the Hive metastore. It is interesting to
note that HBase currently does have something that you can
use to
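One possible starting point is Hadoop's own MiniDFSCluster for the in-memory
HDFS part. A rough sketch under assumptions (Hadoop 2.x hadoop-hdfs test-jar
on the classpath; wiring a HiveServer2/metastore on top is version-specific
and only hinted at in comments):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Spins up an in-memory HDFS for tests. A HiveServer2/metastore would still
// have to be pointed at this cluster; that step is omitted here because the
// embedded-HiveServer2 APIs vary across Hive/CDH versions.
public class MiniClusterSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster dfs = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1)
        .build();
    FileSystem fs = dfs.getFileSystem();
    fs.mkdirs(new Path("/user/hive/warehouse"));
    System.out.println("mini HDFS running at " + fs.getUri());
    // e.g. set fs.defaultFS and hive.metastore.warehouse.dir in a HiveConf,
    // then start an embedded HiveServer2 against it (version-specific).
    dfs.shutdown();
  }
}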