Venkat,
The goal of this project is to execute existing PL/SQL in Hive as much as
possible, not to migrate it. Where design restrictions are hit, the affected
code has to be redesigned, but hopefully most of the remaining code stays
untouched, so there is no need to convert everything to bash/Python, etc.
2015-03-03 1:03 GMT+08:00 P lva :
> Hello Everyone,
>
> I was able to look up a Hive query using hive.query.name from the job
> history server. I wasn't able to find a similar parameter for Tez.
>
> Is there a way to find out all the queries that ran in a Tez session?
>
> Thanks
>
Hi,
I notice there's an example folder which contains sample data and sample
queries, but I didn't find any documentation on how to use this data and
these queries. Could anyone point me to it? Thanks
It seems that the remote Spark context failed to come up. I saw you're
using a Spark standalone cluster. Please make sure the Spark cluster is up.
You may try spark.master=local first.
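For a quick sanity check, running in local mode from the Hive CLI looks
roughly like this (a sketch, assuming Hive on Spark is already configured;
the property names are standard Hive on Spark settings, the master URL is
illustrative):

  -- run the Spark driver locally to rule out cluster-side problems
  SET hive.execution.engine=spark;
  SET spark.master=local;
  -- once this works, point back at the standalone master, e.g.
  -- SET spark.master=spark://your-master-host:7077;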
On Mon, Mar 2, 2015 at 5:15 PM, scwf wrote:
> yes, have placed spark-assembly jar in hive lib folder.
>
> hive.log---
Is there a simple way to migrate from PL/SQL to PL/HQL?
Regards,
Venkat
From: Dmitry Tolpeko [mailto:dmtolp...@gmail.com]
Sent: Friday, February 27, 2015 1:36 PM
To: user@hive.apache.org
Subject: PL/HQL - Procedural SQL-on-Hadoop
Let me introduce PL/HQL, an open source tool that implements proce
yes, have placed spark-assembly jar in hive lib folder.
hive.log---
bmit.2317151720491931059.properties --class
org.apache.hive.spark.client.RemoteDriver
/opt/cluster/apache-hive-1.2.0-SNAPSHOT-bin/lib/hive-exec-1.2.0-SNAPSHOT.jar
--remote-host M151 --remote-port 56996 --conf
hive.spark.clien
hi, folks,
I am using the HBase integration feature of Hive (
https://cwiki.apache.org/confluence/display/Hive/HBaseIntegration) to load
TPCH tables into HBase, with Hive 0.13 and HBase 0.98.6.
The load works well. However, as documented here:
https://cwiki.apache.org/confluence/display/Hive/HBaseIn
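For readers following along, an HBase-backed table of the kind that wiki
page describes looks roughly like this (a sketch; the table, column family,
and column names are illustrative, not taken from the original post):

  CREATE TABLE hbase_lineitem (rowkey STRING, l_quantity DOUBLE, l_comment STRING)
  STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
  WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:l_quantity,cf:l_comment")
  TBLPROPERTIES ("hbase.table.name" = "lineitem");

  -- load from an existing Hive-side TPCH table into the HBase-backed table
  INSERT OVERWRITE TABLE hbase_lineitem
  SELECT concat_ws('_', cast(l_orderkey AS STRING), cast(l_linenumber AS STRING)),
         l_quantity, l_comment
  FROM lineitem;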
Hi, I just moved from MR1 to YARN (CDH 4.x to CDH 5.2). After this, I see
that all the loading jobs, which are mostly like the following, are running
really slowly.
insert overwrite table desttable partition (partname) select * from sourcetable
From what I can see, even if I set the number of redu
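(For context, these are the knobs usually involved in sizing the reduce side
of a dynamic-partition insert; a sketch with illustrative values, not
settings taken from the original post:)

  -- fix the number of reduce tasks explicitly (MR2/YARN property name)
  SET mapreduce.job.reduces=64;
  -- or let Hive derive it from input size
  SET hive.exec.reducers.bytes.per.reducer=268435456;
  SET hive.exec.reducers.max=256;
  -- required for the partitioned INSERT OVERWRITE above
  SET hive.exec.dynamic.partition=true;
  SET hive.exec.dynamic.partition.mode=nonstrict;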
Thanks Alexander!
On Mon, Mar 2, 2015 at 10:31 AM, Alexander Pivovarov
wrote:
> yes, we even have a ticket for that
> https://issues.apache.org/jira/browse/HIVE-9600
>
> btw can anyone test jdbc driver with kerberos enabled?
> https://issues.apache.org/jira/browse/HIVE-9599
>
>
> On Mon, Mar 2,
hive> create table test1 (c1 array<int>) row format delimited collection
items terminated by ',';
OK
hive> insert into test1 select array(1,2,3) from dual;
OK
hive> select * from test1;
OK
[1,2,3]
hive> select c1[0] from test1;
OK
1
$ hadoop fs -cat /apps/hive/warehouse/test1/00_0
1,2,3
On Su
yes, we even have a ticket for that
https://issues.apache.org/jira/browse/HIVE-9600
btw can anyone test jdbc driver with kerberos enabled?
https://issues.apache.org/jira/browse/HIVE-9599
On Mon, Mar 2, 2015 at 10:01 AM, Nick Dimiduk wrote:
> Heya,
>
> I'd like to use jmeter against HS2/JDBC a
Heya,
I'd like to use jmeter against HS2/JDBC and I'm finding the "standalone"
jar isn't actually standalone. It appears to include a number of
dependencies but not the Hadoop Common stuff. Is there a packaging of this
jar that is actually standalone? Are there instructions for using this
standalone j
Hello All,
I have a couple of Sequence files on HDFS. I now need to load these files
into an ORC table. One option is to create an external table of
SequenceFile format and then load it into the ORC table by using the INSERT
OVERWRITE command.
I am looking for an alternative without using an int
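For reference, the two-step route described above looks roughly like this
(a sketch; the table names, columns, and HDFS path are illustrative):

  -- external table over the existing SequenceFile data
  CREATE EXTERNAL TABLE staging_seq (id INT, payload STRING)
  STORED AS SEQUENCEFILE
  LOCATION '/data/seqfiles';

  -- target ORC table
  CREATE TABLE target_orc (id INT, payload STRING)
  STORED AS ORC;

  -- copy the data across, rewriting it as ORC
  INSERT OVERWRITE TABLE target_orc
  SELECT id, payload FROM staging_seq;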
Hello Everyone,
I was able to look up a Hive query using hive.query.name from the job
history server. I wasn't able to find a similar parameter for Tez.
Is there a way to find out all the queries that ran in a Tez session?
Thanks
There is no sampling for order by in Hive. Hive uses a single reducer for
order by (if you're talking about the MR execution engine).
Hive on Spark is different in this regard, though.
Thanks,
Xuefu
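(To illustrate the point above; the table and column are made up, and the
behavior described is for the MR engine:)

  -- even if more reducers are requested, a global ORDER BY still runs the
  -- final total ordering in a single reduce task
  SET mapreduce.job.reduces=10;
  SELECT * FROM sales ORDER BY amount;
  -- SORT BY only orders rows within each reducer, so it can use all 10
  SELECT * FROM sales SORT BY amount;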
On Mon, Mar 2, 2015 at 2:17 AM, Jeff Zhang wrote:
> Order by usually involves 2 steps (a sampling job and re
Could you check your hive.log and spark.log for more detailed error
messages? Quick check, though: do you have the spark-assembly jar in your
Hive lib folder?
Thanks,
Xuefu
On Mon, Mar 2, 2015 at 5:14 AM, scwf wrote:
> Hi all,
> Has anyone met this error: HiveException(Failed to create spark client.)?
>
Hi all,
Has anyone met this error: HiveException(Failed to create spark client.)?
M151:/opt/cluster/apache-hive-1.2.0-SNAPSHOT-bin # bin/hive
Logging initialized using configuration in
jar:file:/opt/cluster/apache-hive-1.2.0-SNAPSHOT-bin/lib/hive-common-1.2.0-SNAPSHOT.jar!/hive-log4j.properties
[I
Order by usually involves 2 steps (a sampling job and a repartition job), but
Hive only runs one MR job for order by, so I'm wondering when and where Hive
does the sampling? On the client side?
--
Best Regards
Jeff Zhang
Hi,
I got the attached error on a map-side join where a serialized table
contains an array column.
When the optimized map-join hash table is turned off by setting
hive.mapjoin.optimized.hashtable=false, the exceptions do not occur.
It seems that a wrong ObjectInspector was set at
CommonJoinOperator#initializeOp.
I am u
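(For anyone hitting the same stack trace, the workaround mentioned above is
just a session setting; a sketch, with the join query itself illustrative:)

  -- fall back to the non-optimized map-join hash table implementation
  SET hive.mapjoin.optimized.hashtable=false;
  -- with hive.auto.convert.join=true the small table is still map-joined,
  -- just without the optimized hash table
  SELECT b.id, s.tags FROM big_t b JOIN small_t s ON b.id = s.id;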