I'm running into this error while doing a dynamic partition insert. Here's
how I created the table:
CREATE TABLE `part_table`(
`c1` bigint,
`c2` bigint,
`c3` bigint)
PARTITIONED BY (`p1` string, `p2` string)
STORED AS PARQUET;
Here is the insert statement:
SET hive.exec.dynamic.pa
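For reference, a complete dynamic-partition insert into this table usually
looks like the sketch below; the nonstrict setting and the source table
`src` are assumptions, not taken from the original message:

SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
-- the dynamic partition columns must come last in the SELECT list,
-- in the same order as declared in PARTITIONED BY
INSERT INTO TABLE part_table PARTITION (p1, p2)
SELECT c1, c2, c3, p1, p2
FROM src; -- hypothetical source table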
Thanks Elliot for the immediate reply.
But as per the Hive locking mechanism, while inserting data into a
partition, Hive acquires an exclusive lock on that partition and a shared
lock on the entire table.
How is it possible to insert data into a different partition of the same
table while holding a shared lock on it?
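One way to observe these locks directly (assuming a lock manager such as
the ZooKeeper-based one or DbTxnManager is enabled) is to run SHOW LOCKS
from a second session while the insert is in flight; the partition values
here are illustrative:

-- locks currently held on the table
SHOW LOCKS part_table;
-- locks held on one specific partition
SHOW LOCKS part_table PARTITION (p1='2015', p2='08');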
I presume you mean "into different partitions of a table at the same
time"? This should be possible. It is certainly supported by the streaming
API, which is probably where you want to look if you need to insert large
volumes of data into multiple partitions concurrently. I can't see why it
would not work.
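For context, the streaming API only writes to tables that are bucketed,
stored as ORC, and marked transactional; a minimal sketch of such a target
table, with illustrative names, might be:

CREATE TABLE stream_target (
`c1` bigint,
`c2` bigint)
PARTITIONED BY (`p1` string)
CLUSTERED BY (c1) INTO 4 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');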
Can we insert data into different partitions of a table at the same time?
Waiting for inputs.
Thanks in advance.
- suyog
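Concretely, the question is whether two sessions can run inserts like
these at the same time; the table, partition values, and staging tables
below are illustrative, not from the thread:

-- session 1
INSERT INTO TABLE sales PARTITION (dt='2015-08-20')
SELECT id, amount FROM staging_a;
-- session 2, running concurrently
INSERT INTO TABLE sales PARTITION (dt='2015-08-21')
SELECT id, amount FROM staging_b;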
Hi Friends,
Thanks, it worked now. I don't think we need to compose it into any
functions, as we are making use of the stdin read as given in examples over
the net.
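For context, the stdin-based approach mentioned here is Hive's TRANSFORM
clause: the script reads tab-separated rows from stdin and writes
tab-separated rows to stdout. A minimal sketch, where the script name,
columns, and table are illustrative:

ADD FILE my_udf.py;
SELECT TRANSFORM (c1, c2)
USING 'python my_udf.py'
AS (out1, out2)
FROM some_table;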
From: ryan.har...@zionsbancorp.com
To: user@hive.apache.org
Subject: RE: Running python UDF in hive
Date: Thu, 20 Aug 2015 19:24:24 +
Thanks to all who replied.
After several retries in vain, I felt the issue might be because of a
version mismatch. So I took Hadoop 2.7.1, Spark 1.3.1, and Hive 1.2.1. I
tried this combination and could successfully test Hive transactions
without any issue.
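For anyone trying to reproduce this, Hive transactions on the 1.2.x line
also require a handful of configuration settings beyond matching versions;
a sketch with the commonly documented values (assumptions, not settings
quoted in this thread):

SET hive.support.concurrency=true;
SET hive.enforce.bucketing=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
SET hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
-- the compactor settings belong on the metastore service side
-- (hive-site.xml); shown as SET commands here only for brevity
SET hive.compactor.initiator.on=true;
SET hive.compactor.worker.threads=1;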
Regards,
Sarath.
On Fri, Aug 7
Thanks All.
I will implement the suggested points and share the output.
Thanks again for all the help.
Thanks and Regards
Nishant Aggarwal, PMP
Cell No:- +91 99588 94305
On Fri, Aug 21, 2015 at 10:33 AM, Jörn Franke wrote:
> Additionally, although it is a PoC, you should have a realistic data m