Hi All,
We are using the following for Spark SQL:
- Hive: 1.2.1
- Spark: 1.3.1
- Hadoop: 2.7.1
Let me know if you need any other details to debug the issue.
Regards
Sanjiv Singh
Mob : +091 9990-447-339
On Sun, Mar 13, 2016 at 1:07 AM, Mich Talebzadeh wrote:
Hi,
Thanks for the input. I use Hive 2 and still have this issue.
1. Hive version 2
2. Hive on Spark engine 1.3.1
3. Spark 1.5.2
I have added the Hive user group to this as well, so hopefully we may get
some resolution.
HTH
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/pr
Hi,
I have suffered from Hive Streaming and transactions enough, so I can share
my experience with you.
1) It's not a problem with Spark. It happens because of "peculiarities" /
bugs in Hive Streaming. Hive Streaming and transactions are very raw
technologies. If you look at the Hive JIRA, you'll see several
This is an interesting one, as it appears that a Hive transactional table
1. Hive version 2
2. Hive on Spark engine 1.3.1
3. Spark 1.5.2
hive> create table default.foo(id int) clustered by (id) into 2 buckets
STORED AS ORC TBLPROPERTIES ('transactional'='true');
hive> insert into default.foo values (1);
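For anyone who wants to try the read-after-compaction behaviour discussed in
this thread, here is a minimal sketch of the follow-up steps. The compaction
command and the Spark SQL query below are my own additions rather than part of
the original message; the table name simply reuses default.foo from above:

hive> alter table default.foo compact 'major';   -- manually request a major compaction
hive> show compactions;                          -- check that the compaction was queued and completed
-- once the compaction shows as finished, try reading the table from the Spark SQL shell:
spark-sql> select * from default.foo;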
Hi All,
I am facing this issue on an HDP setup, on which COMPACTION is required
(only once) for transactional tables before Spark SQL can fetch their records.
On the other hand, the Apache setup doesn't require compaction even once.
Maybe something got triggered on the metastore after compaction, and Spark SQL
start rec
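In case it helps narrow this down, these are the ACID / compactor properties I
would compare between the HDP and the Apache setups. It is only a guess on my
part that the difference lies in the metastore-side compactor settings, and the
expected values in the comments are the usual ones for transactional tables,
not values taken from either cluster:

hive> set hive.txn.manager;               -- expected: org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
hive> set hive.support.concurrency;       -- expected: true
hive> set hive.compactor.initiator.on;    -- must be true on the metastore for automatic compaction
hive> set hive.compactor.worker.threads;  -- must be greater than 0 for compactions to actually run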