Does Spark 1.3.1 support Hive 1.0? If not, which version of Spark will
start supporting Hive 1.0?
--
Kannan
…within a partition).
>
> Default value of mapred.map.tasks is 2
> <https://hadoop.apache.org/docs/r1.0.4/mapred-default.html>. You may see
> that the Spark SQL result can be divided into two sorted parts from the
> middle.
>
> Cheng
>
> On 2/19/15 10:33 AM, Kannan Rajah wrote:
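The effect Cheng describes can be illustrated with a toy sketch (plain Python, not the Spark API): with mapred.map.tasks = 2 the data lands in two partitions, each sorted independently, so the concatenated output is two sorted runs rather than one globally sorted sequence.

```python
# Toy illustration of per-partition sorting (not Spark code).
data = [5, 1, 4, 2, 6, 3]
half = len(data) // 2
partitions = [data[:half], data[half:]]          # simulate 2 map tasks
concatenated = [x for p in partitions for x in sorted(p)]
print(concatenated)   # two sorted runs: [1, 4, 5, 2, 3, 6]
print(sorted(data))   # single-partition (global) order: [1, 2, 3, 4, 5, 6]
```

Coalescing to a single partition before sorting would give the globally ordered result, at the cost of parallelism.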
SparkConf.scala logs a warning saying SPARK_CLASSPATH is deprecated and we
should use spark.executor.extraClassPath instead. But the online
documentation states that spark.executor.extraClassPath is only meant for
backward compatibility.
https://spark.apache.org/docs/1.2.0/configuration.html#execu
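For what it's worth, following the deprecation warning rather than the docs, the equivalent spark-defaults.conf entries would look something like this (the jar path below is a placeholder, not from the original thread):

```
# spark-defaults.conf sketch -- /path/to/extra.jar is a placeholder
spark.executor.extraClassPath  /path/to/extra.jar
spark.driver.extraClassPath    /path/to/extra.jar
```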
On Thu, Feb 26, 2015 at 2:43 PM, Kannan Rajah wrote:
> > SparkConf.scala logs a warning saying SPARK_CLASSPATH is deprecated and
> we
> > should use spark.executor.extraClassPath instead. But the online
> > documentation states that spark.executor.extraClassPath is only meant
> > for backward compatibility.
Marcelo Vanzin wrote:
> On Thu, Feb 26, 2015 at 5:12 PM, Kannan Rajah wrote:
> > Also, I would like to know if there is a localization overhead when we
> use
> > spark.executor.extraClassPath. Again, in the case of hbase, these jars
> would
> > be typically available on all nodes.
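On the localization question, the distinction can be sketched as two spark-defaults.conf alternatives (the HBase jar path is a placeholder): spark.jars ships the listed jars to the cluster with each application, so it incurs a per-job copy, while spark.executor.extraClassPath merely prepends a path to the executor classpath and assumes the jar is already installed on every node, so there is no copy overhead.

```
# spark-defaults.conf sketch -- /opt/hbase/lib path is a placeholder
# Option A: reference a jar already installed on every node (no localization)
spark.executor.extraClassPath  /opt/hbase/lib/hbase-client.jar
# Option B: ship the jar with each application (incurs localization)
spark.jars  /opt/hbase/lib/hbase-client.jar
```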
Running a simple word count job in standalone mode as a non root user from
spark-shell. The spark master, worker services are running as root user.
The problem is that the _temporary dir under /user/krajah/output2/_temporary/0
is being created with root permissions even when the job runs as a non-root user.
Ignore the question. There was a Hadoop setting that needed to be set to
get it working.
--
Kannan
On Wed, Apr 1, 2015 at 1:37 PM, Kannan Rajah wrote:
> Running a simple word count job in standalone mode as a non root user from
> spark-shell. The spark master, worker services are running as root user.