I use Hive version 0.13.1 in the Cloudera VM 5.3.0.
I store some (more or less) complex data structures on HDFS and want to
query this data from Hive, because I want to develop a somewhat
"relational" structure on top of it.
The corresponding avro schema has some union types in it. I created some
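For context, a minimal sketch of what such a schema can look like (the record and field names here are hypothetical, not taken from the original schema). Avro commonly expresses a nullable field as a union of null and a concrete type:

```json
{
  "type": "record",
  "name": "Event",
  "fields": [
    {"name": "id",    "type": "long"},
    {"name": "label", "type": ["null", "string"], "default": null}
  ]
}
```

Hive's AvroSerDe maps a two-branch ["null", T] union to a nullable column of type T; unions with more than one non-null branch are the case that tends to need special handling.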
Hi all,
Can experts share their views on Hive's behaviour in the scenario below?
I am facing the issue below when altering partition locations in Hive.
select count(*) from table1 where dt = 201501;
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In or
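For reference, altering a partition's location in HiveQL looks like the following sketch (the table name, partition value, and path are placeholders, not taken from the original report):

```sql
-- Hypothetical example: repoint one partition at a new HDFS directory.
ALTER TABLE table1 PARTITION (dt=201501)
SET LOCATION 'hdfs:///warehouse/table1/dt=201501_new';

-- The ALTER only updates the metastore; verify the path afterwards:
DESCRIBE FORMATTED table1 PARTITION (dt=201501);
```

Note that Hive does not move any files; the data must already exist at (or be moved to) the new location.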
I found the patch here https://issues.apache.org/jira/browse/HIVE-9119. But
upgrading is the last thing we want to do.
Also, I found another solution, which changes the
hive.server2.enable.doAs property
to false so that the same user performs all the Hive operations. But I'm not
sure whether a user here is
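For readers following along, the impersonation setting mentioned above lives in hive-site.xml; a sketch of the relevant fragment (the surrounding file content is assumed):

```xml
<!-- hive-site.xml: run queries as the HiveServer2 process user
     instead of impersonating the connected client user. -->
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>
```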
Thanks. But is there any other solution besides upgrading?
2015-04-30 10:55 GMT+08:00 Xuefu Zhang :
> This is a known issue and has been fixed in later releases.
>
> --Xuefu
>
> On Wed, Apr 29, 2015 at 7:44 PM, Shady Xu wrote:
>
>> Recently I found in the zookeeper log that there were too many c
This is a known issue and has been fixed in later releases.
--Xuefu
On Wed, Apr 29, 2015 at 7:44 PM, Shady Xu wrote:
> Recently I found in the zookeeper log that there were too many client
> connections and it was hive that was establishing more and more connections.
>
> I modified the max clie
Recently I found in the zookeeper log that there were too many client
connections and it was hive that was establishing more and more connections.
I modified the max client connection property in zookeeper and that fixed
the problem, temporarily. But the connections hive made to zookeeper were
sti
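For readers hitting the same symptom, the ZooKeeper-side cap being referred to is maxClientCnxns in zoo.cfg; a sketch (the value shown is an arbitrary example, not the poster's setting):

```properties
# zoo.cfg: maximum concurrent connections a single client IP may open
# to this ZooKeeper server (the default is 60). Raising it only hides
# a connection leak; the leaking client still needs fixing.
maxClientCnxns=200
```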
Filed https://issues.apache.org/jira/browse/HIVE-10545 for this; we're
planning on taking this up in the next couple of weeks.
On 3/30/15 4:48 PM, Andrew Mains wrote:
Hive's HBase integration doesn't currently seem to support predicate
pushdown for queries over HBase snapshots.
Yes, the first 2 ids have the same meaning. The new id (subtransaction id) is
there to support multi-statement transactions.
Each statement within a BEGIN TRANSACTION/COMMIT block that modifies data will
create a new delta dir.
For example, you may have more than one insert stmt (same table) in a
trans
Thanks for the heads-up and use case validation.
In the case of the file names, what function does the additional id perform
(presuming the first two are still transaction id bounds)?
On 29 April 2015 at 18:37, Eugene Koifman wrote:
> This is not an answer to your question, but FYI. The work
Just do the following at the OS level. Note that 10010 below is the port my
HiveServer2 is running on. By default it is 1
netstat -alnp|egrep 'Local|10010'
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Proto Recv-Q
This is not an answer to your question, but FYI. The work in
https://issues.apache.org/jira/browse/HIVE-9675 will change how the delta files
are named which may affect your work.
Once that work is complete, the deltas will be named delta_xxx_yyy_zz, so you
may have delta_002_002_1,delta_002_002
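To make the naming scheme above concrete, here is a small sketch (my own illustration, not code from HIVE-9675) that splits a delta directory name into its transaction-id bounds and the optional per-statement id:

```python
import re

def parse_delta_dir(name):
    """Split a delta directory name into (min_txn, max_txn, stmt_id).

    Post-HIVE-9675 names look like delta_xxx_yyy_zz, where zz is the
    per-statement id; older names of the form delta_xxx_yyy have no
    statement id, so stmt_id comes back as None for them.
    """
    m = re.fullmatch(r"delta_(\d+)_(\d+)(?:_(\d+))?", name)
    if m is None:
        raise ValueError("not a delta directory name: %r" % name)
    min_txn, max_txn, stmt = m.groups()
    return int(min_txn), int(max_txn), int(stmt) if stmt is not None else None

# Two inserts in the same transaction 2 yield two distinct deltas:
print(parse_delta_dir("delta_002_002_1"))  # (2, 2, 1)
print(parse_delta_dir("delta_002_002_2"))  # (2, 2, 2)
```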
Sorry, I misunderstood.
AFAIK you can't do that.
Daniel
> On 29 Apr 2015, at 18:49, Yosi Botzer wrote:
>
> Hi,
>
> I have parquet files that are the product of map-reduce job.
>
> I have used AvroParquetOutputFormat in order to produce them, so I have an
> avro schema file describing the s
You should be able to get the schema out using parquet tools:
http://blog.cloudera.com/blog/2015/03/converting-apache-avro-data-to-parquet-format-in-apache-hadoop/
Daniel
> On 29 Apr 2015, at 18:49, Yosi Botzer wrote:
>
> Hi,
>
> I have parquet files that are the product of map-reduce job.
>
Hi,
I'm implementing a tap to read Hive ORC ACID data into Cascading jobs and
I've hit a couple of issues for a particular scenario. The case I have is
when data has been written into a transactional table and a compaction has
not yet occurred. This can be recreated like so:
CREATE TABLE test_tab
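A hedged sketch of the kind of setup that reproduces this (the table name and columns are placeholders; Hive transactional tables require ORC storage and, in these versions, bucketing):

```sql
-- Hypothetical reproduction: write to a transactional table and read
-- it back before any compaction has run.
CREATE TABLE test_tab (id INT, val STRING)
CLUSTERED BY (id) INTO 2 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');

INSERT INTO TABLE test_tab VALUES (1, 'a');
-- At this point only delta_* directories exist under the table's
-- location; no base_* directory appears until a major compaction runs.
```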
Hi,
I have parquet files that are the product of a map-reduce job.
I have used AvroParquetOutputFormat in order to produce them, so I have an
avro schema file describing the structure of the data.
When I want to create an Avro-based table in Hive I can use:
TBLPROPERTIES
('avro.schema.url'='hdfs:///sc
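For reference, a complete version of such a DDL statement usually looks like the sketch below (the table name, location, and schema path are placeholders; the original message's path is truncated). STORED AS AVRO is the shorthand in newer Hive releases; older ones spell out the AvroSerDe and its input/output formats instead:

```sql
-- Hypothetical Avro-backed external table; paths are placeholders.
CREATE EXTERNAL TABLE events
STORED AS AVRO
LOCATION '/data/events'
TBLPROPERTIES ('avro.schema.url'='hdfs:///schemas/events.avsc');
```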
Hi Hive Users,
I'm executing one of my HQL queries, which has join, union and insert
overwrite operations; it works fine if I run it just once.
If I execute the same job a second time, I face this issue.
Can someone help me identify in which scenario we get this exception?
Please find attachm
Type 'ps -ef | grep hive', then you will find the hiveserver2 daemon
From: abdallah.cheb...@murex.com
To: user@hive.apache.org
Subject: RE: Stopping HiveServer2
Date: Wed, 29 Apr 2015 11:15:57 +
Typed ‘hiveserver2’ in my terminal
From: Nitin Pawar [mailto:nitinpawar...@gmail.com]
Sent: Wednesday, April 29, 2015 2:01 PM
To: user@hive.apache.org
Subject: Re: Stopping HiveServer2
How did you start it?
On Wed, Apr 29, 2015 at 4:26 PM, CHEBARO Abdallah <
abdallah.cheb...@murex.com> wrote:
Sudo service hiveserver2 stop
On Wednesday, April 29, 2015, CHEBARO Abdallah
wrote:
> Hello,
>
>
>
> How can I stop hiveserver2? I am not able to find the command.
>
>
>
> Thanks
>
> ***
>
> This e-mail contains information for the intended recipient only. It may
>
How did you start it?
On Wed, Apr 29, 2015 at 4:26 PM, CHEBARO Abdallah <
abdallah.cheb...@murex.com> wrote:
> Hello,
>
>
>
> How can I stop hiveserver2? I am not able to find the command.
>
>
>
> Thanks
>
Hello,
How can I stop hiveserver2? I am not able to find the command.
Thanks