Re: Variable resolution Fails

2013-04-30 Thread Sanjay Subramanian
+1, agreed. Also, as a general scripting practice (nothing specific to Hive), I check that the variables I am going to use are non-empty before using them:

if [ "${freq}" == "" ]; then
  echo "variable freq is empty...exiting"
  exit 1
fi

From: Anthony Urso <antho...@cs.ucla.edu>

Re: Variable resolution Fails

2013-04-30 Thread Anthony Urso
Your shell is expanding the variable ${env:freq}, which doesn't exist in the shell's environment, so Hive gets the empty string in its place. If you always intend to run your query like this, just use ${freq}, which will be expanded as expected by bash and then passed to Hive. Cheers
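Anthony's point can be checked directly in the shell. A minimal sketch (the variable name comes from the thread; the behavior shown is bash-specific):

```shell
#!/bin/bash
export freq=MNTH

# Inside double quotes, bash itself tries to expand ${env:freq}.
# "env" is unset in the shell, so the expression collapses to the
# empty string before hive ever sees the query text.
echo "select ${env:freq} as dr from dual"

# ${freq} is a plain shell variable and expands as expected:
echo "select ${freq} as dr from dual"
```

Alternatively, single-quoting the query string keeps ${env:freq} out of the shell's hands so that Hive's own variable substitution can resolve it against the environment.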

unresolved dependency from ivy when building hive 0.10

2013-04-30 Thread Eric Chu
Hi, after upgrading Hive to 0.10 we often observe the following build error. We notice that when we run our dumpcache job (which includes deleting the ivy cache), the problem is *sometimes, but not always,* resolved. Does anyone know the cause of this problem, or know of a better solution? Thanks

[VOTE] Apache Hive 0.11.0 Release Candidate 1

2013-04-30 Thread Ashutosh Chauhan
Hey all, based on feedback from folks, I have respun the release candidate as RC1. Please take a look. It basically fixes the size bloat of the tarball. The source tag for RC1 is at: https://svn.apache.org/repos/asf/hive/tags/release-0.11.0-rc1 Source tar ball and convenience binary artifacts can be found a

Variable resolution Fails

2013-04-30 Thread sumit ghosh
Hi,

The following variable freq fails to resolve:

bash-4.1$ export freq=MNTH
bash-4.1$ echo $freq
MNTH
bash-4.1$ hive -e "select ${env:freq} as dr from dual"
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Hive history file=/hadoop1/hive_querylog/sum

Re: Very poor read performance with composite keys in hbase

2013-04-30 Thread kulkarni.swar...@gmail.com
That depends on how dynamic your data is. If it is fairly static, you can also consider using something like Create Table As Select (CTAS) to snapshot your data into HDFS and then run queries on top of that snapshot. So your query might become something like: create table my_table as select
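A sketch of the snapshot approach the reply describes (the table name `event` comes from the thread; everything else is an assumption, not the poster's actual DDL):

```sql
-- Materialize the HBase-backed table into a native Hive table on HDFS;
-- later queries scan the HDFS copy instead of going through the
-- HBaseStorageHandler on every read.
CREATE TABLE event_snapshot AS
SELECT * FROM event;

-- Queries then run against the snapshot:
SELECT COUNT(*) FROM event_snapshot;
```

The trade-off is staleness: the snapshot must be rebuilt (or partitioned and incrementally appended) whenever the HBase table changes.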

RE: Very poor read performance with composite keys in hbase

2013-04-30 Thread Rupinder Singh
Swarnim, thanks. So this means custom MapReduce is the viable option when working with HBase tables that have composite keys, since it allows setting the start and stop keys; the Hive+HBase combination is out. Regards, Rupinder From: kulkarni.swar...@gmail.com Sent:

Can a bucket be added to a partition?

2013-04-30 Thread Babe Ruth
Hello, I have a table that is already created and is partitioned dynamically by day. I would like all future partitions to be bucketed on two columns. Can I add buckets to the partitions of an already existing table? Thanks, George

Hive 0.10.0 Postgres Schema script?

2013-04-30 Thread Leena Gupta
Hello, does anyone know where I can download the Postgres schema script for Hive 0.10? Please let me know. Thanks!

Re: Very poor read performance with composite keys in hbase

2013-04-30 Thread kulkarni.swar...@gmail.com
Rupinder, Hive supports filter pushdown[1], which means that predicates in the WHERE clause are pushed down to the storage handler level, where they are either handled by the storage handler or delegated back to Hive if the handler cannot handle them. As of now, the HBaseStorageHandler only supports primi
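To illustrate the limitation being described (a sketch using the thread's `event` table; whether a given predicate is pushed down depends on the Hive/HBase versions in use): predicates on members of a composite struct key are generally not pushable to HBase, so they are evaluated on the Hive side after a full table scan.

```sql
-- Predicate on a struct member of the row key: not pushable to HBase in
-- this era of the HBaseStorageHandler, so Hive scans the whole table and
-- filters the rows itself -- hence the very slow reads in this thread.
SELECT * FROM event WHERE key.name = 'Signup';
```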

Re: Very poor read performance with composite keys in hbase

2013-04-30 Thread Sanjay Subramanian
My experience with Hive + HBase has been about 8x slower on average, so I went ahead with the Hive-only option. Sent from my iPhone On Apr 30, 2013, at 11:19 PM, "Rupinder Singh" <rsi...@care.com> wrote: Hi, I have an hbase cluster where I have a table with a composite key. I map this

RE: Very poor read performance with composite keys in hbase

2013-04-30 Thread Rupinder Singh
Here it is:

select * from event
where key.name = 'Signup'
  and key.dateCreated = '2013-03-06 16:39:55.353'
  and key.uid = '7af4c330-5988-4255-9250-924ce5864e3bf';

From: kulkarni.swar...@gmail.com Sent: Tuesday, April 30, 2013 11:25 PM To: user@hive.apache.org Cc: u..

Re: Very poor read performance with composite keys in hbase

2013-04-30 Thread kulkarni.swar...@gmail.com
Can you show your query that is taking 700 seconds? On Tue, Apr 30, 2013 at 12:48 PM, Rupinder Singh wrote: > Hi, > > I have an hbase cluster where I have a table with a composite key. I map > this table to a Hive external table using which I insert/select data > into/from this t

Very poor read performance with composite keys in hbase

2013-04-30 Thread Rupinder Singh
Hi, I have an HBase cluster with a table that has a composite key. I map this table to a Hive external table, through which I insert/select data into/from the HBase table: CREATE EXTERNAL TABLE event(key struct, {more columns here}) ROW FORMAT DELIMITED COLLECTION ITEMS TERMINATED BY '~' STORED BY
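For reference, a filled-out version of the truncated DDL above might look like the following. The struct fields are taken from the query quoted elsewhere in the thread; the column family, qualifier, and extra column are assumptions, not the poster's actual schema:

```sql
-- Sketch of mapping an HBase table with a composite (delimited) row key
-- to a Hive external table via a STRUCT key column.
CREATE EXTERNAL TABLE event (
  key STRUCT<name:STRING, dateCreated:STRING, uid:STRING>,
  payload STRING
)
ROW FORMAT DELIMITED COLLECTION ITEMS TERMINATED BY '~'
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,f:payload")
TBLPROPERTIES ("hbase.table.name" = "event");
```

With this mapping, the row key is the '~'-delimited concatenation of the struct fields, which is why range scans need explicit start/stop keys rather than per-field predicates.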

Re: Compiling Hive ODBC

2013-04-30 Thread Sebastien FLAESCH
So... I could finally compile Hive ODBC by patching the Thrift.h header. I think there is a configure header issue with the HAVE_* constants that are defined in the /include/thrift/config.h header. Thrift.h will only include the required headers if HAVE_CONFIG_H is defined: #define HAVE_CONFIG_H 1

Re: Compiling Hive ODBC

2013-04-30 Thread Sebastien FLAESCH
FYI: hive@orca:~/hive-0.10.0/src$ g++ --version g++ (Debian 4.4.5-8) 4.4.5 Seb On 04/30/2013 04:13 PM, Sebastien FLAESCH wrote: Making some progress... After disabling some options to build thrift - because it installs some files in the system directory (/usr/lib/php), and I - do not - want t

Re: Compiling Hive ODBC

2013-04-30 Thread Sebastien FLAESCH
Making some progress... After disabling some options to build thrift - because it installs some files in the system directory (/usr/lib/php), and I do not want that - I did the following configure for thrift:

./configure --prefix=/home/hive/thrift-0.9.0 \
  --with-qt4=no \
  --with-csharp=no \

Re: Compiling Hive ODBC

2013-04-30 Thread Sebastien FLAESCH
Thank you Carl, but I still have problems building Hive ODBC. I followed the instructions from this page (I believe the doc is wrong): https://cwiki.apache.org/Hive/hiveodbc.html Where it says: "Build the Hive client by running the following command from HIVE_HOME..." Here is my env:

Re: [VOTE] Apache Hive 0.11.0 Release Candidate 0

2013-04-30 Thread Carl Steinbach
I think the source tarball must be corrupted. It's 664MB in size, which is roughly 630MB larger than the 0.10.0 release tarball. I haven't been able to take a look at it yet because the apache archive site keeps throttling my connection midway through the download. On Mon, Apr 29, 2013 at 10:42