We use Hive 0.12 and are planning to use HiveServer2 with Cloudera Hue. The
scenario is this: users frequently add Hive UDFs, which requires frequent
Hive redeployment. To pick up these changes, we need to restart HiveServer2.
When we submit a query to HiveServer2, the job
Hi,
I have a table created by the following query
CREATE EXTERNAL TABLE IF NOT EXISTS partition_table (partkey STRING)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.binarysortable.BinarySortableSerDe'
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.had
I did some looking around on the wiki and couldn't find any examples of
connecting to the Thrift server using Java. I understand that JDBC is the
preferred way with Java, but it seems that the Thrift client has a few
additions that I would like to use (for example getQueryPlan).
For the convenien
That makes no sense. If the column is an int, it isn't going to sort like a
string. I smell a user error somewhere.
On Tue, Mar 11, 2014 at 6:21 AM, Arafat, Moiz wrote:
> Hi ,
>
> I have a table that has a partition column partition_hr. Data type is int
> (partition_hr int). When I run
What version of Hive are you using?
It would be good to know if it works in a newer version.
Yong
Date: Tue, 11 Mar 2014 08:33:06 +0100
Subject: Re: Using an UDF in the WHERE (IN) clause
From: petter.von.dolw...@gmail.com
To: user@hive.apache.org
Hi Yong,
I must argue that the partition pruning does
Hi ,
I have a table that has a partition column partition_hr. Data type is int
(partition_hr int). When I run a sort on this column the output is like
this.
0
1
10
11
12
13
14
15
16
17
18
19
2
20
21
22
23
3
4
5
6
7
8
9
I expected the output like this:
0
1
2
3
4
5
6
7
8
9
10
.
.
a
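For anyone hitting this: the output above is exactly what a lexicographic (string) sort of those values produces, which suggests the column is being compared as a string somewhere. A small Python sketch (illustrative only, not Hive itself) reproduces both orderings:

```python
# Sorting the hour values 0-23 as strings reproduces the "wrong" order
# reported above: "10" sorts before "2" because '1' < '2' character-wise.
hours = [str(h) for h in range(24)]

string_order = sorted(hours)            # lexicographic, like a STRING column
numeric_order = sorted(hours, key=int)  # numeric, like an INT column

print(string_order[:5])   # ['0', '1', '10', '11', '12']
print(numeric_order[:5])  # ['0', '1', '2', '3', '4']
```

In Hive, an explicit cast (e.g. ORDER BY CAST(partition_hr AS INT)) usually restores numeric ordering.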
Thanks for your response!
hive> LOAD DATA INPATH 's3://test.com/' OVERWRITE INTO TABLE test PARTITION
(dt='2014-01-01');
FAILED: Error in semantic analysis: line 1:17 Invalid Path 's3://test.com/':
only "file" or "hdfs" file systems accepted. s3 file system is not
supported.
My data is partition
I am running Hadoop 2.2.0.2.0.6.0-101 on a single node.
I am trying to run a Java MRD program that writes data to an existing Hive
table from Eclipse as a regular user. I get this exception:
org.apache.hadoop.security.AccessControlException: Permission denied:
user=dev, access=WRITE,
inode="/apps/hi
Hi,
Any exceptions in the log file?
Try this for loading into partitions, if a LOAD query is possible in
your scenario:
LOAD DATA INPATH '/user/myname/kv2.txt' OVERWRITE INTO TABLE invites
PARTITION (ds='2008-08-15');
Hope it helps,
Chinna
Hi,
I have a table partitioned where the partition column is of type INT. When
creating a view on this table, the partition column shows up as STRING. I
can still, however, issue queries against the view treating the partition
column as an INT, e.g. SELECT * FROM myview WHERE partitionCol=1;
What is
Hi Navis,
I suspected that the parser only accepted an expression like (value1,
value2, value3...) as input. I guess one solution as you say would be to
add an array as an allowed argument to IN. I do not know if other SQL
dialects allow this. Another way would be to introduce a new type of UDF
th
Then you should use BETWEEN, not IN. BETWEEN can be used for PPD (predicate
pushdown), afaik.
2014-03-11 16:33 GMT+09:00 Petter von Dolwitz (Hem)
:
> Hi Yong,
>
> I must argue that the partition pruning does actually work if I don't use the
> IN clause. What I wanted to achieve in my original query was to specify a
> r
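For what it's worth, the reason a simple range predicate (or BETWEEN) can drive partition pruning while an opaque expression list inside IN may not is that the planner can evaluate a plain comparison directly against each partition key before reading any data. A rough Python sketch of that idea (illustrative only; the partition names are made up, and this is not Hive's actual planner):

```python
# Illustrative sketch: partition pruning evaluates a simple range
# predicate against each partition key, so whole partitions can be
# skipped before any data is read.
partitions = ["2014-03-01", "2014-03-02", "2014-03-03",
              "2014-03-10", "2014-03-11"]

def prune(partitions, lo, hi):
    """Keep only partitions whose key falls in [lo, hi] (BETWEEN is inclusive)."""
    return [p for p in partitions if lo <= p <= hi]

print(prune(partitions, "2014-03-02", "2014-03-10"))
# ['2014-03-02', '2014-03-03', '2014-03-10']
```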
Hi Yong,
I must argue that the partition pruning does actually work if I don't use the
IN clause. What I wanted to achieve in my original query was to specify a
range of partitions in a simple way. The same query can be expressed as
SELECT * FROM mytable WHERE partitionCol >= UDF("2014-03-10") and
Hi,
I'm using hive (with external tables) to process data stored on amazon S3.
My data is partitioned as follows:
DIR s3://test.com/2014-03-01/
DIR s3://test.com/2014-03-02/
DIR s3://test.com/2014-03-03/