Re: How to pass the TestHplsqlDb test in hive-hplsql?

2016-08-01 Thread Zhenyi Zhao
Thank you Dmitry, I found it. If I want to integrate my database (like Hive) with hplsql, is there a test suite to assess its compatibility? Emerson 2016-08-01 20:18 GMT+08:00 Dmitry Tolpeko : > Please try to find them in ./ql/src/test/queries/clientpositive directory > (see topn.q file for example)

Re: Doubt on Hive Partitioning.

2016-08-01 Thread Qiuzhuang Lian
Is this partition pruning fixed in MR too, not just Tez, in newer Hive versions? Regards, Q On Mon, Aug 1, 2016 at 8:48 PM, Jörn Franke wrote: > It happens in old Hive versions if the filter is only in the where clause > and NOT in the join clause. This should not happen in newer Hive versions. >

Re: Hive transactional table with delta files, Spark cannot read and sends error

2016-08-01 Thread Gopal Vijayaraghavan
> I am on Spark 1.6.1 and getting the following error Ah, I realize that it's yet to be released officially. Here's the demo from HadoopSummit - I doubt this will ever be available for older Spark releases, but it will be a datasource package lik

Re: How can I force Hive to start compaction on a table immediately

2016-08-01 Thread Mich Talebzadeh
Thanks Alan. One crude solution would be to copy the data from the ACID table to a simple table and present that table to Spark to see the data. This is basically a Spark optimiser issue, not the engine itself. My Hive runs on the Spark query engine and all works fine there. HTH Dr Mich Talebzadeh
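A minimal HiveQL sketch of the workaround described here; the copy table name payees_plain is hypothetical:

    -- Copy the ACID table into a plain (non-transactional) ORC table.
    -- The SELECT runs through Hive, so the delta files are merged on read.
    CREATE TABLE payees_plain STORED AS ORC AS
    SELECT * FROM payees;
    -- Spark can then read payees_plain like any ordinary ORC table.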

Re: How can I force Hive to start compaction on a table immediately

2016-08-01 Thread Alan Gates
There’s no way to force immediate compaction. If there are compaction workers in the metastore that aren’t busy, they should pick that up immediately. But there is no way to create a worker thread and start compacting on demand. Alan. > On Aug 1, 2016, at 14:50, Mich Talebzadeh wrote: > > > Ra
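The workers Alan refers to are configured on the metastore side; a hedged sketch of the relevant settings (values illustrative) plus the queue check, in HiveQL:

    -- In hive-site.xml on the metastore host:
    --   hive.compactor.initiator.on   = true   (start the initiator thread)
    --   hive.compactor.worker.threads = 2      (illustrative worker count)
    -- Once workers exist, a queued request can be watched with:
    SHOW COMPACTIONS;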

Re: Hive transactional table with delta files, Spark cannot read and sends error

2016-08-01 Thread Mich Talebzadeh
Thanks Gopal. I am on Spark 1.6.1 and getting the following error: scala> var conn = LlapContext.newInstance(sc, hs2_url) :28: error: not found: value LlapContext var conn = LlapContext.newInstance(sc, hs2_url) Dr Mich Talebzadeh

Re: Hive transactional table with delta files, Spark cannot read and sends error

2016-08-01 Thread Gopal Vijayaraghavan
> Spark fails reading this table. What options do I have here? Would your issue be the same as https://issues.apache.org/jira/browse/SPARK-13129? LlapContext in Spark can read those tables with ACID semantics (as in, deletes and updates will work right). var conn = LlapContext.newInstance(sc, hs2_url

How can I force Hive to start compaction on a table immediately

2016-08-01 Thread Mich Talebzadeh
Rather than queuing it: hive> alter table payees COMPACT 'major'; Compaction enqueued. OK Thanks Dr Mich Talebzadeh
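The enqueue-only behaviour can be seen at the prompt; a short sketch, assuming the payees table from this thread:

    ALTER TABLE payees COMPACT 'major';  -- returns immediately: "Compaction enqueued."
    SHOW COMPACTIONS;                    -- the request stays 'initiated' until a free worker picks it up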

Hive transactional table with delta files, Spark cannot read and sends error

2016-08-01 Thread Mich Talebzadeh
Hi, This is an ORC transactional table (Hive 2, Spark 1.6.1): hive> show create table payees; OK CREATE TABLE `payees`( `transactiondescription` string, `hits` int, `hashtag` string) CLUSTERED BY ( transactiondescription) INTO 256 BUCKETS ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.orc
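The preview cuts the DDL off; a minimal sketch of an equivalent transactional ORC definition (the TBLPROPERTIES line is assumed, not shown in the message):

    CREATE TABLE payees (
      transactiondescription STRING,
      hits INT,
      hashtag STRING)
    CLUSTERED BY (transactiondescription) INTO 256 BUCKETS
    STORED AS ORC
    TBLPROPERTIES ('transactional'='true');  -- required for ACID delta files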

Re: Doubt on Hive Partitioning.

2016-08-01 Thread Gopal Vijayaraghavan
> WHERE p IN (SELECT p FROM t2) > here we could argue that Hive could optimize this by computing the subquery first, and then do the partition pruning, but sadly I don't think this optimisation has been implemented yet It is implemented already -
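The case in question, spelled out with the tables from Abhishek's thread:

    -- Hive computes the subquery first, then prunes t's partitions
    -- down to the p values that t2 actually contains.
    SELECT * FROM t
    WHERE p IN (SELECT p FROM t2);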

Re: Doubt on Hive Partitioning.

2016-08-01 Thread Jörn Franke
It happens in old Hive versions if the filter is only in the where clause and NOT in the join clause. This should not happen in newer Hive versions. You can check it by executing an explain dependency query. > On 01 Aug 2016, at 11:07, Abhishek Dubey wrote: > > Hi All, > > I have a very big table
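A minimal example of the explain dependency check Jörn suggests, using the table and partition from this thread:

    EXPLAIN DEPENDENCY
    SELECT * FROM t WHERE p = '201604';
    -- The JSON output lists the input partitions; if pruning worked,
    -- only the p=201604 partition of t should appear.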

Re: Some dates add/less a day...

2016-08-01 Thread Julián Arocena
Thank you so much! I'm testing it. Best regards, *Arocena Julian* | Developer www.temperies.com 2016-07

Re: How to pass the TestHplsqlDb test in hive-hplsql?

2016-08-01 Thread Dmitry Tolpeko
Please try to find them in the ./ql/src/test/queries/clientpositive directory (see the topn.q file for example). Thanks, Dmitry On Mon, Aug 1, 2016 at 11:34 AM, Zhenyi Zhao wrote: > Hi Dmitry, > > Thank you for your answer. You said “*src* and *sample_07* are > sample tables supplied with Hive”,

Re: Doubt on Hive Partitioning.

2016-08-01 Thread Furcy Pin
Hi Abhishek, Yes, it can happen. The only such scenarios I can think of are when you use a WHERE clause with a non-constant predicate. As far as I know, partition pruning only works on constant predicates, because Hive has to evaluate them *before* starting the query in order to prune the partitions. For instance:
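A sketch of the contrast Furcy is drawing, with the tables from this thread (Gopal's reply above quotes the continuation of this very example):

    -- Constant predicate: known before execution, so only one partition is read.
    SELECT * FROM t WHERE p = '201604';
    -- Non-constant predicate: the p values come from another table and are
    -- unknown until run time, so pruning cannot happen up front.
    SELECT * FROM t WHERE p IN (SELECT p FROM t2);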

Doubt on Hive Partitioning.

2016-08-01 Thread Abhishek Dubey
Hi All, I have a very big table t with billions of rows, and it is partitioned on a column p. Column p has datatype text, with values like '201601', '201602', ... up to '201612'. And I am running a query like: Select columns from t where p='201604'. My question is: can there be a scenario/conditio

Re: How to pass the TestHplsqlDb test in hive-hplsql?

2016-08-01 Thread Zhenyi Zhao
Hi Dmitry, Thank you for your answer. You said “*src* and *sample_07* are sample tables supplied with Hive”; where can I find information about these tables? Emerson 2016-08-01 16:27 GMT+08:00 Dmitry Tolpeko : > Hi Emerson, > > I did not commit TestHplsqlDb.java since Apache Pre-commit t

Re: How to pass the TestHplsqlDb test in hive-hplsql?

2016-08-01 Thread Dmitry Tolpeko
Hi Emerson, I did not commit TestHplsqlDb.java since the Apache Pre-commit test starts executing it, and I have not figured out how to make it pass (there are connection errors). I can commit it as java_ to prevent execution, or someone can help with the connection errors. Some table DDLs are here: https://github.c

Re: Hive on spark

2016-08-01 Thread Mich Talebzadeh
Hi, You can download the pdf from here. HTH Dr Mich Talebzadeh

Sample programs, for read/write of "timestamp data" to Parquet-files

2016-08-01 Thread Ravi Tatapudi
Hello, I have a test application (a stand-alone Java program) for reading (and writing) Parquet files. The program is built using the Parquet-Avro API. With it, I can read datatypes such as CHAR, VARCHAR, INT, FLOAT and DOUBLE, but I fail to read "timestamp" data from Parque