Thanks Shashwat.
I added that and it works.
Now my program is running. I have created a JAR of it and tried to execute it,
but I get errors again, only while executing the JAR.
The error is:
hadoop@ubuntu:~$ java -jar PES.jar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
Have you added *hive_hbase-handler.jar* to your project?
On Fri, May 11, 2012 at 10:13 AM, Bhavesh Shah wrote:
> I performed all the above steps, and when I tried to run it in Eclipse I got
> an error.
> The error is:
>
> *java.sql.SQLException: Method not supported
> at
> org.apache.hadoop.hive.jdbc
I performed all the above steps, and when I tried to run it in Eclipse I got
an error.
The error is:
*java.sql.SQLException: Method not supported
at
org.apache.hadoop.hive.jdbc.HiveStatement.executeUpdate(HiveStatement.java:210)
at TestSP.quarterTable(TestSP.java:598)
at TestSP.main(TestSP.java
I think that if I create an index on a table, then when I execute
"select c1,c2 from tab where index_col=1", it should not start a MapReduce
job.
But it does start one.
So how can I use an index without MapReduce?
I tested both the compact index and the bitmap index; both need MapReduce.
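For reference, a minimal sketch of how such an index is created and enabled, using the table and column names from the query above (the handler class and settings are standard Hive of this era, but treat the exact values as assumptions for your version):

-- Building the index itself runs a MapReduce job.
CREATE INDEX tab_idx ON TABLE tab (index_col)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD;
ALTER INDEX tab_idx ON tab REBUILD;

-- Let the optimizer use the index automatically, even on small inputs.
SET hive.optimize.index.filter=true;
SET hive.optimize.index.filter.compact.minsize=0;

SELECT c1, c2 FROM tab WHERE index_col = 1;

Note that even with the index in place, Hive still launches a (smaller) MapReduce job for the indexed query; the index prunes the input rather than eliminating the job.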
It's simpler than this. All files look the same -- and are often very simple
delimited text -- whether managed or external. The only difference is that the
files associated with a managed table are dropped when the table is dropped and
files that are loaded into a managed table are moved into
The only actual differences are:
If you drop a managed table the LOCATION it refers to will be deleted.
If you drop an external table the LOCATION it refers to will not be deleted.
Confusion happens because when Hive creates a managed table, its LOCATION defaults to:
fs.default.name+/user/hive/warehouse/+t
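A quick illustration of exactly that difference (the table names and the external path are made up):

-- Managed: Hive owns the files under its default warehouse LOCATION.
CREATE TABLE managed_logs (line STRING);

-- External: Hive only records the LOCATION; the files are yours.
CREATE EXTERNAL TABLE external_logs (line STRING)
LOCATION '/user/me/logs';

DROP TABLE managed_logs;   -- deletes the metadata AND the files
DROP TABLE external_logs;  -- deletes the metadata only; the files remain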
Hello,
My thoughts are rather straightforward: it is best not to think of Hive
as a data warehouse at all. Period.
It is better to think of it as a SQL-to-MapReduce translation layer with some
metadata to help guide the process.
With this in mind, and if you really have lots of data, what you
Sure, makes sense.
Here's what I was trying to achieve: a sort-merge-bucket join.
As I understand it, to get an SMB join, the bucket columns have to be the join
key columns.
I am joining two tables; the key for each row in those tables is a composite
key: acct + date.
Now, does that mean I cannot
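For context, here is one way such a setup is usually sketched, assuming both tables are bucketed and sorted on the full composite key (table names, column types, and the bucket count are made up; the settings are the standard SMB-join flags, though exact requirements vary by Hive version):

CREATE TABLE t1 (acct STRING, dt STRING, val DOUBLE)
CLUSTERED BY (acct, dt) SORTED BY (acct, dt) INTO 16 BUCKETS;

CREATE TABLE t2 (acct STRING, dt STRING, amt DOUBLE)
CLUSTERED BY (acct, dt) SORTED BY (acct, dt) INTO 16 BUCKETS;

SET hive.optimize.bucketmapjoin=true;
SET hive.optimize.bucketmapjoin.sortedmerge=true;
SET hive.input.format=org.apache.hadoop.hive.ql.io.BucketizedHiveInputFormat;

SELECT /*+ MAPJOIN(t2) */ t1.acct, t1.dt, t1.val, t2.amt
FROM t1 JOIN t2 ON t1.acct = t2.acct AND t1.dt = t2.dt;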
Whew, thanks everyone! I think wrapping quotes around that did it.
Nicole, I was going to attempt that as a last resort. But the actual query is
much longer and it would be extremely undesirable to do so.
Regards,
Saurabh
> From: nicole.ge...@memorylane
Hi Ameet,
That's the correct behaviour.
In Hive, clustering and sorting happen within a partition. Inside each
partition there is only one value of the partition column, so it has no
impact on clustering and sorting. Therefore, putting
the partition column in cluster
Another option is to do it all in the shell:

#!/bin/sh
# Two days ago, formatted to match the partition column (GNU date).
somedate=$(date -d '2 days ago' +"%Y-%m-%d")
echo "$somedate"
script="
select count(*)
from myschema.mytable
where local_dt > '$somedate'
"
echo "$script"
# Quote the variable so the statement is passed to hive -e as one argument.
hive -e "$script" > output.dat
On 5/10/12 11:34 AM, "Tucker, Matt" wrote:
>You'll want to
You'll want to wrap ${hiveconf:ref_date} in quotes, so that it's passed as a
string in the query.
SELECT "${hiveconf:ref_date}" FROM dummytable LIMIT 1;
Matt Tucker
Associate eBusiness Analyst
Walt Disney Parks and Resorts Online
Ph: 407-566-2545
Tie: 8-296-2545
CONFIDENTIAL
-Original M
I think you have to put quotes around the variable to tell Hive that you
are comparing against a string...
Ashish
On May 10, 2012 2:06 PM, "Saurabh S" wrote:
>
> I'm having a hard time passing a date as a hive environment variable.
>
> The setting is this: The table I'm querying is partitioned o
Hi All,
I am not able to create a table with the partition column also included in
the CLUSTERED BY clause.
create table abc ( col1, col2, col3 )
partitioned by (col3)
clustered by (col1,col3) sorted by (col1,col3) into 10 buckets;
fails with: FAILED: Error in semantic analysis: Invalid column refe
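As the earlier reply explains, the partition column belongs only in PARTITIONED BY; it cannot appear in the regular column list or in CLUSTERED BY / SORTED BY. A sketch of a form that should pass semantic analysis (the column types are assumed):

CREATE TABLE abc (col1 STRING, col2 STRING)
PARTITIONED BY (col3 STRING)
CLUSTERED BY (col1) SORTED BY (col1) INTO 10 BUCKETS;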
I'm having a hard time passing a date as a hive environment variable.
The setting is this: the table I'm querying is partitioned on a date column,
say, local_dt. I wish to query the last two days' worth of data. Unfortunately,
there seems to be no way of getting the current date without either sc
Whenever you execute any query except "select * from tablename", Hive runs a
MapReduce job in the background, for which it needs Hadoop to be properly
configured and proper communication between Hadoop and Hive. The error you
specified happens when Hive is not able to connect to Hadoop properly.
here is t
Hi Guys,
I have installed Hive (version 0.8.0) and HBase (0.90.5).
I had to recompile Hive for the handler.
Anyway everything works nicely when I am in command line.
Now, I need to launch hive queries from Java. I am using JDBC.
It works for all the queries except the one using the handler.
It
Some suggestions:
1. chown the Hive folder to your user
2. Change permissions on the Hive folder to 755
3. Set this in hive-site.xml:
<property>
  <name>hive.exec.scratchdir</name>
  <value>/home/yourusername/mydir</value>
  <description>Scratch space for Hive jobs</description>
</property>
4. Put these jars on your classpath:
hadoop-0.20-core.jar
hive/lib/hive-exec-0.7.1.jar
hive/lib/hive-jdbc-0.7.1.j
It looks more like a permissions problem to me. Just make sure that
whatever directories hadoop is writing to are owned by hadoop itself.
Also, it looks a little weird to me that it is using the
"RawLocalFileSystem" instead of the "DistributedFileSystem". You might want
to look at "fs.default.name"
For this error : "
*java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED:
Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuer* "
Go to this link :
http://docs.amazonwebservices.com/ElasticMapRed
Check out this:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Authorization
Anyhow, something needs to be written in the middle layer. You cannot expect a
foolproof solution from the default Hive authentication and roles.
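For a flavour of what the built-in model offers, here is a minimal sketch using the statements described on that wiki page (the role, user, and table names are made up):

SET hive.security.authorization.enabled=true;

CREATE ROLE analyst;
GRANT SELECT ON TABLE clickstream TO ROLE analyst;
GRANT ROLE analyst TO USER ranjith;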
On Thu, May 10, 2012 at 8:22 PM, Raghunath, Ranjith <
ranjith
In addition to my last reply: exit the Hive CLI before you run your JDBC
code, and don't try to connect using the CLI while your JDBC code is running.
Cheers
Shashwat.
On Thu, May 10, 2012 at 8:39 PM, shashwat shriparv <
dwivedishash...@gmail.com> wrote:
> For this error : "
> *ja
Hello,
Try to keep the set of records you need for a particular analysis in the same
table. Generally we use Pig to feed data into Hive tables, and we have
arranged our tables such that all the data required for a
particular report is present right in that table. This helps to improve
Hive perfo
Also, if most of the things you will be doing are full scans as opposed
to needle-in-a-haystack queries, there is usually no point in paying the
overhead of running HBase region servers. Only if your data is heavily
accessed by key is the overhead of HBase justified. Another case could be
when pa
For this error : "
*java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED:
Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.MapRedTask
at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuer* "
Go to this link :
http://docs.amazonwebservices.com/ElasticMapRed
Is anyone implementing authorization and roles within their Hive environment?
If so, how successful has it been?
Thanks,
Ranjith
On Thu, May 10, 2012 at 10:16 AM, Kuldeep Chitrakar
wrote:
> Does that mean all data goes in one BigTable-style de-normalized form? Then
> what's the main benefit of using Hive over HBase, as HBase also recommends a
> highly de-normalized BigTable?
>
>
> Thanks,
> Kuldeep
> -Original Message-
> F
Does that mean all data goes in one BigTable-style de-normalized form? Then
what's the main benefit of using Hive over HBase, as HBase also recommends a
highly de-normalized BigTable?
Thanks,
Kuldeep
-Original Message-
From: Edward Capriolo [mailto:edlinuxg...@gmail.com]
Sent: 10 May 2012 19:24
On Thu, May 10, 2012 at 9:26 AM, Kuldeep Chitrakar
wrote:
> Hi,
>
> I have a data warehouse implementation for clickstream data analysis on an
> RDBMS. It's a star schema (dimensions and facts).
>
> Now, if I want to move to Hive, do I need to create the same data model with
> dimensions and facts and
Hi Bhavesh,
You will have to check your Jobtracker logs for more details. If you are using
AWS, they should be in your S3 logs directory under
/daemons//hadoop-hadoop-jobtracker...log
Mark
Mark Grover, Business Intelligence Analyst
OANDA Corporation
www: oanda.com www: fxtrade.com
- Orig
Hi,
I have a data warehouse implementation for clickstream data analysis on an
RDBMS. It's a star schema (dimensions and facts).
Now, if I want to move to Hive, do I need to create the same data model with
dimensions and facts and join them?
Or should I create a big de-normalized table which contains all tex
Hello all,
I have one query: it executes fine on the Hive CLI and returns the result.
But when I execute it with the help of Hive JDBC, I get this error:
*java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED:
Execution Error, return code 2 from
org.apache.hadoop.hive.ql.ex