Thanks for your reply, Edward.
I see you use CommandProcessorFactory, and then you pass the actual commands
in as strings, such as:
doHiveCommand("create table bla (id int)", c)
However, I assume this somewhere gets translated into normal Java calls
inside of Hive? Something like Someclass.creat
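For plain SQL statements like that CREATE TABLE, CommandProcessorFactory hands back the query Driver, which parses and executes the statement in-process. A rough sketch of that path, assuming a Hive 0.7-era classpath and a configured metastore (class and method signatures differ between Hive releases, so treat this as illustrative only):

```java
// Illustrative only: running a HiveQL statement in-process via the Driver.
// Requires Hive jars on the classpath and a reachable metastore; the
// Driver API changed between releases, so the exact calls may differ.
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Driver;
import org.apache.hadoop.hive.ql.session.SessionState;

public class InProcessHiveSketch {
    public static void main(String[] args) throws Exception {
        HiveConf conf = new HiveConf(SessionState.class);
        SessionState.start(new SessionState(conf));
        Driver driver = new Driver(conf);
        // The Driver parses the SQL string, runs semantic analysis, and
        // eventually dispatches DDL work to metastore client calls.
        driver.run("create table bla (id int)");
    }
}
```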
If I give grants to the user that is specified in my hive-site.xml to
connect to the metastore (javax.jdo.option.ConnectionUserName), then I can
create tables and such using a remote Hive connection. So it seems it is
doing the authorization checks against that user, and not the user that is
actually logg
Using a normal Hive connection and authorization, it seems to work for me:
hive> revoke all on database default from user koert;
OK
Time taken: 0.043 seconds
hive> create table tmp(x string);
Authorization failed:No privilege 'Create' found for outputs {
database:default}. Use show grant to get more
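For completeness, the reverse direction in the same legacy authorization model, restoring the privilege that the REVOKE above removed (user and database names taken from the example; untested sketch):

```sql
-- Legacy Hive authorization: restore the Create privilege and verify it.
GRANT Create ON DATABASE default TO USER koert;
SHOW GRANT USER koert ON DATABASE default;
```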
Yes, Bejoy,
it was the data. I also have to be strict with GROUP BY and not select any
bare fields alongside an aggregate function (unlike MySQL).
Thank you,
Mark
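As an illustration of that strictness (a sketch against the trans table from this thread; untested): every selected column must either appear in the GROUP BY list or sit inside an aggregate.

```sql
-- MySQL would accept a bare log_timestamp here; Hive will not.
-- Non-grouped columns must go through an aggregate function.
SELECT property_id, max(log_timestamp)
FROM trans
GROUP BY property_id;
```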
On Wed, Oct 19, 2011 at 11:54 AM, wrote:
> Looks like some data problem. Were you using the GROUP BY query on the same
> data set?
> But if
Thank you, Igor, understood.
Mark
On Wed, Oct 19, 2011 at 3:44 PM, Igor Tatarinov wrote:
> Yes, MySQL is not SQL-standard compliant here but I really like this
> feature. A lot of times I know that a group-by column is a key for some
> other columns I am selecting. It seems pointless to add all
On Wed, Oct 19, 2011 at 5:18 PM, Koert Kuipers wrote:
> I have the need to do some cleanup on my hive warehouse from java, such as
> deleting tables (both in metastore and the files on hdfs)
>
> I found out how to do this using remote connection:
> org.apache.hadoop.hive.service.HiveClient connec
I have the need to do some cleanup on my hive warehouse from java, such as
deleting tables (both in metastore and the files on hdfs)
I found out how to do this using remote connection:
org.apache.hadoop.hive.service.HiveClient connects to a hive server with
only a few lines of code, and it provide
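The remote route looks roughly like this (a sketch, assuming a Hive 0.7-era Thrift server on the default port 10000; host, port, and table name are placeholders):

```java
// Sketch: talking to a running Hive server via the Thrift HiveClient.
// Host, port, and table name are placeholders; requires the Hive and
// Thrift jars on the classpath and a Hive server actually listening.
import org.apache.hadoop.hive.service.HiveClient;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class RemoteCleanupSketch {
    public static void main(String[] args) throws Exception {
        TTransport transport = new TSocket("hiveserver-host", 10000);
        transport.open();
        HiveClient client = new HiveClient(new TBinaryProtocol(transport));
        // Dropping a managed table removes the metastore entry and the
        // table's files on HDFS in one go.
        client.execute("DROP TABLE IF EXISTS old_tmp_table");
        transport.close();
    }
}
```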
Yes, MySQL is not SQL-standard compliant here but I really like this
feature. A lot of times I know that a group-by column is a key for some
other columns I am selecting. It seems pointless to add all those other
columns to the group-by list.
Other times, I use this 'hole' in MySQL to pick a (rand
Hi,
I think I've isolated my Hive GROUP BY problem to this question.
In Hive, the GROUP BY needs to be strict. I mean:
hive> select property_id["property_id"], log_timestamp from trans group by
property_id["property_id"];
FAILED: Error in semantic analysis: Line 1:35 Expression not in GROUP BY ke
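For reference, two ways that query can be made to pass semantic analysis (sketches, untested): either add the extra column to the GROUP BY list, or wrap it in an aggregate.

```sql
-- Option 1: group by both expressions.
SELECT property_id["property_id"], log_timestamp
FROM trans
GROUP BY property_id["property_id"], log_timestamp;

-- Option 2: aggregate the non-grouped column.
SELECT property_id["property_id"], max(log_timestamp)
FROM trans
GROUP BY property_id["property_id"];
```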
Looks like some data problem. Were you using the GROUP BY query on the same
data set?
But if count(*) also throws an error, then we are back to square one: an
installation/configuration problem with Hive or MapReduce.
Regards
Bejoy K S
-Original Message-
From: Mark Kerzner
Date: Wed, 19 Oct 2011
Bejoy,
I've been using this install of Hive for some time now, and simple queries
and joins work fine. It's the GROUP BY that I have problems with, sometimes
even with COUNT(*).
I am trying to isolate the problem now, and reduce it to the smallest query
possible. I am also trying to find a workar
Mark
To ensure your hive installation is fine run two queries
SELECT * FROM trans LIMIT 10;
SELECT * FROM trans WHERE ***;
You can try this for couple of different tables. If these queries return
results and work fine as desired then your hive could be working good.
If it works good as the s
Vikas,
I am using Cloudera CDHU1 on Ubuntu. I get the same results on RedHat CDHU0.
Mark
On Wed, Oct 19, 2011 at 9:47 AM, Vikas Srivastava <
vikas.srivast...@one97.net> wrote:
> Install Hive with RPM; this one is corrupted!!
>
> On Wed, Oct 19, 2011 at 8:01 PM, Mark Kerzner wrote:
>
>> Here is w
Install Hive with RPM; this one is corrupted!!
On Wed, Oct 19, 2011 at 8:01 PM, Mark Kerzner wrote:
> Here is what my hive logs say
>
> hive -hiveconf hive.root.logger=DEBUG
>
> 2011-10-19 09:24:35,148 ERROR DataNucleus.Plugin
> (Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" require
Here is what my hive logs say
hive -hiveconf hive.root.logger=DEBUG
2011-10-19 09:24:35,148 ERROR DataNucleus.Plugin
(Log4JLogger.java:error(115)) - Bundle "org.eclipse.jdt.core" requires
"org.eclipse.core.resources" but it cannot be resolved.
2011-10-19 09:24:35,150 ERROR DataNucleus.Plugin
(Log
Hi Mark
What do your MapReduce job logs say? Try figuring out the error from
there. From the Hive CLI you can hardly find out the root cause of your errors.
From job tracker web UI < http://hostname:50030/jobtracker.jsp> you can easily
browse to failed tasks and get the actual exception fr
Hi Ankit,
I have verified the trunk code base: if the property "hive.metastore.local"
is true, the flow should not reach this location:
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:127)
It should return with the client object before getting there.
Hope it helps
Hi,
I am trying to figure out what I am doing wrong with this query and the
unusual error I am getting. Also suspicious is the reduce % going up and
down.
select trans.property_id, day(trans.log_timestamp) from trans JOIN opts on
trans.ext_booking_id["ext_booking_id"] = opts.ext_booking_id group
Mohit,
I use Hive 0.7.1 and am able to access the file from the distributed cache just
by its filename. Did you try that?
Mark
- Original Message -
From: "Chinna Rao Lalam 72745"
To: user@hive.apache.org
Sent: Wednesday, October 19, 2011 6:56:38 AM
Subject: Re: Accessing distributed cache in tr
Hello,
I have been trying to learn the Hive query compiler and
I am wondering if there is a way to see the result of semantic
analysis (query block tree)
and non-optimized logical query plan.
I know we can get AST and optimized logical query plan with "explain",
but I want to know the intermediate
Hi ,
I can run queries over Hive through the Hive shell and using a JDBC connection.
But I got the below error when I tried to access the Hive metadata using the
Hive API.
java.lang.IllegalArgumentException: URI: does not have a scheme
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMeta
Hi Rakesh,
Thanks for the reply. I have tried the hive-0.7.1 API, but the following error
occurred.
java.lang.IllegalArgumentException: URI: does not have a scheme
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:127)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreC
Hi,
Check whether you have configured some value for the "hive.metastore.uris"
property in hive-default.xml or hive-site.xml,
and whether it is valid or not.
Hope it helps
Chinna Rao Lalam
- Original Message -
From: kiranprasad
Date: Wednesday, October 19, 2011 3:08 pm
Subject: Re: When t
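The "URI: does not have a scheme" error earlier in this thread typically points at a metastore URI missing its thrift:// prefix. A valid remote-metastore entry looks roughly like this (the host is a placeholder; 9083 is the conventional metastore port):

```xml
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host:9083</value>
</property>
```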
Can I run Hive in local mode, for example like Pig's local mode?
-Original Message-
From: kiranprasad
Sent: Wednesday, October 19, 2011 3:24 PM
To: user@hive.apache.org
Subject: Re: When trying to create table Iam getting exception
Please find the hive-site.xml below. There is no password for the DB.
Hi,
Can you post some more details, like what command you used for the
"list file"?
- Original Message -
From: Mohit Gupta
Date: Wednesday, October 19, 2011 3:16 pm
Subject: Re: Accessing distributed cache in transform scripts
To: user@hive.apache.org
> Please help... any pointers!
>
Please find the hive-site.xml below. There is no password for the DB.
hive.metastore.local = false
javax.jdo.option.ConnectionURL = jdbc:mysql://10.0.2.65/metastore_db?createDatabaseIfNotExist=true
javax.jdo.option.ConnectionDriverName = com.mysql.jdbc.Driver
javax.jdo.option.ConnectionUserName
How is your metastore set up? Looks like some incorrect or incomplete
hive-site.xml config for the metastore settings.
On 2011/10/19, at 11:38, kiranprasad wrote:
>
> Below is the table which I've tried to create.
>
> $ bin/hive
> Hive history
> file=/tmp/kiranprasad.g/hive_job_log_ki
Please help... any pointers!
On 10/19/11, Mohit Gupta wrote:
> Hi All,
>
> I want some read-only data to be available at the reducers / transform
> scripts. I am trying to use distributed cache to achieve this using
> the following steps:
> 1. add file s3://bucket_name/prefix/testfile
> then
> 2.
Below is the table which I've tried to create.
$ bin/hive
Hive history
file=/tmp/kiranprasad.g/hive_job_log_kiranprasad.g_201110191249_2139146680.txt
hive> CREATE TABLE arpu (msisdn INT, arpu INT) ROW FORMAT DELIMITED FIELDS
TERMINATED BY ',' STORED AS TEXTFILE;
FAILED: Error in meta
You should provide more information in order to get proper support, such as
the exact command you used to create the table.
On 2011/10/19, at 11:16, kiranprasad wrote:
> Hi
>
> I am new to Hive; when trying to create a table I get the below exception.
> FAILED: Error in metadata: java.lang.Illega
Hi
I am new to Hive; when trying to create a table I get the below exception:
FAILED: Error in metadata: java.lang.IllegalArgumentException: URI: does not
have a scheme
Regards
Kiran.G
Hi All,
I want some read-only data to be available at the reducers / transform
scripts. I am trying to use distributed cache to achieve this using
the following steps:
1. add file s3://bucket_name/prefix/testfile
then
2. "list file" to find out the location of local copy of testfile.
it shows th
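Typed at the Hive CLI, the two steps above look like this (the bare-filename detail at the end is an assumption about typical behavior):

```sql
-- Step 1: register the file with the distributed cache.
ADD FILE s3://bucket_name/prefix/testfile;
-- Step 2: show the registered files and the local copy's location.
LIST FILE;
```

A transform script launched from the same session can then typically open the file by its bare name (testfile).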