If you're using HiveContext, all table metadata lives in the Hive metastore 
defined in hive-site.xml.

Concurrent writes should be fine as long as you're using a metastore database 
that supports concurrent connections (the default embedded Derby metastore does 
not).
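For illustration, here is a minimal sketch of a hive-site.xml that points the metastore at a server-backed database (MySQL in this sketch; the host, database name, and credentials are placeholders, not values from this thread):

```xml
<configuration>
  <!-- JDBC connection for the metastore. A server-backed database
       (e.g. MySQL or PostgreSQL) allows concurrent access, unlike the
       default embedded Derby store. -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://metastore-host:3306/hive_metastore</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
  <!-- Default location where managed (non-EXTERNAL) table data is stored. -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
</configuration>
```

Managed tables created through HiveContext end up under hive.metastore.warehouse.dir, while the metastore database only holds their metadata.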
________________________________
From: Flavio Pompermaier<mailto:pomperma...@okkam.it>
Sent: ‎8/‎16/‎2014 1:26 PM
To: u...@spark.incubator.apache.org<mailto:u...@spark.incubator.apache.org>
Subject: RE: Does HiveContext support Parquet?


Hi to all, sorry for not being fully on topic, but I have two quick questions 
about Parquet tables registered in Hive/Spark:

1) where are the created tables stored?
2) If multiple HiveContexts (one per application) use the same Parquet table, 
is there any problem with all applications inserting concurrently?

Best,
FP

On Aug 16, 2014 5:29 PM, "lyc" 
<yanchen....@huawei.com<mailto:yanchen....@huawei.com>> wrote:
Thanks for your help.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Does-HiveContext-support-Parquet-tp12209p12231.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
