If you specify the path, it's automatically created as an external table. The
schema will be discovered.
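For reference, a minimal sketch of that path-only overload (Spark 1.3.x/1.4.x); the table name and path here are placeholders:

    import org.apache.spark.sql.hive.HiveContext

    val hiveContext = new HiveContext(sc)  // assumes an existing SparkContext `sc`
    // Path-only overload: the table is registered as external and the schema
    // is discovered from the Parquet footers (parquet is the default source).
    hiveContext.createExternalTable("my_table", "hdfs:///path/to/parquet")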
On Wed, Sep 9, 2015 at 9:33 PM, Mohammad Islam wrote:
Hi, I want to create an external Hive table using HiveContext. I have the following:
1. full path/location of the Parquet data directory
2. name of the new table
In addition to Cheng's comment --
I ran into a similar problem when hive-site.xml is not on the classpath. A
proper stack trace can pinpoint the problem.
In the meantime, you can add it to your environment through
HADOOP_CLASSPATH (export HADOOP_CONF_DIR=/etc/hive/conf/).
See more at
http:
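A quick way to check whether hive-site.xml is actually visible (a hedged sketch; run it on the driver before constructing the HiveContext):

    // Prints null when hive-site.xml is not on the classpath; in that case
    // HiveContext falls back to a local Derby metastore instead of yours.
    val hiveSiteUrl = getClass.getClassLoader.getResource("hive-site.xml")
    println(s"hive-site.xml found at: $hiveSiteUrl")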
Hi, I want to create an external Hive table using HiveContext. I have the following:
1. full path/location of the Parquet data directory
2. name of the new table
3. I can get the schema as well.
Which API will be the best (for 1.3.x or 1.4.x)? I can see 6
createExternalTable() APIs but am not sure which one to use.
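For what it's worth, a hedged sketch of the overload that accepts an explicit schema (the table name, fields, and path are placeholders):

    import org.apache.spark.sql.hive.HiveContext
    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    val hiveContext = new HiveContext(sc)  // assumes an existing SparkContext `sc`
    val schema = StructType(Seq(
      StructField("id", StringType, nullable = true),
      StructField("value", StringType, nullable = true)))
    hiveContext.createExternalTable(
      "my_table",                                // 2. name of the new table
      "parquet",                                 // data source
      schema,                                    // 3. the schema you already have
      Map("path" -> "hdfs:///path/to/parquet"))  // 1. location of the data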
I got a similar problem. I'm not sure if your problem is already resolved.
For the record, I solved this type of error by setting the master on the
SparkConf: conf.setMaster("yarn-cluster"). If you find another solution, please let us know.
Regards,
Mohammad
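A hedged sketch of that fix; in many setups the master is instead supplied via spark-submit's --master flag rather than hard-coded:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("hive-context-example")  // placeholder app name
      .setMaster("yarn-cluster")           // the call described above
    val sc = new SparkContext(conf)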
On Friday, March 6, 2015 2:47 PM, nitinkak001 wrote:
I am
Thanks Tobias for the answer. Does it work for the "driver" as well?
Regards,
Mohammad
On Monday, December 1, 2014 5:30 PM, Tobias Pfeiffer wrote:
Hi,
have a look at the documentation for spark.driver.extraJavaOptions (which seems
to have disappeared since I looked it up last week) and
spark.executor.extraJavaOptions.
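To make that concrete, a hedged sketch using those settings (both config keys are documented; the app name is a placeholder):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("jvm-opts-example")  // placeholder app name
      // JVM options for the executors (the YARN task containers):
      .set("spark.executor.extraJavaOptions", "-XX:MaxMetaspaceSize=100M")
    // spark.driver.extraJavaOptions generally cannot be set here, since the
    // driver JVM is already running; pass it at launch instead, e.g. in
    // spark-defaults.conf or via spark-submit's --driver-java-options flag.
    val sc = new SparkContext(conf)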
Hi, how do I pass Java options (such as "-XX:MaxMetaspaceSize=100M") when
launching the AM or task containers?
This is related to running Spark on YARN (Hadoop 2.3.0). In the MapReduce case,
setting a property such as
"mapreduce.map.java.opts" does the job.
Any help would be highly appreciated.