> similar discussion if you haven't seen it already
> http://stackoverflow.com/questions/22150417/hadoop-mapreduce-java-lang-unsatisfiedlinkerror-org-apache-hadoop-util-nativec
>
> Thanks
> Best Regards
>
> On Mon, Sep 7, 2015 at 7:41 AM, dong.yajun wrote:
>
>> hi a
hi all,
I ran into a problem where I can't read files with Snappy encoding from
HDFS in Spark 1.4.1. I have set the SPARK_LIBRARY_PATH property in
conf/spark-env.sh to the Hadoop native library path and restarted the
Spark cluster:
SPARK_LIBRARY_PATH=$SPARK_LIBRARY_PATH:/opt/app/install/cloudera/parcel
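
As an alternative I'm also considering passing the native library
directory through Spark's extraLibraryPath options instead of
SPARK_LIBRARY_PATH. A minimal, untested sketch of what I mean (the
parcel path and HDFS path below are placeholders for a typical CDH
layout, not our exact values):

import org.apache.spark.{SparkConf, SparkContext}

// Point both the driver and the executors at the Hadoop native libs so
// the Snappy codec can be loaded (paths below are placeholders).
val nativeLibDir = "/opt/cloudera/parcels/CDH/lib/hadoop/lib/native"

val conf = new SparkConf()
  .setAppName("snappy-read-test")
  .set("spark.driver.extraLibraryPath", nativeLibDir)
  .set("spark.executor.extraLibraryPath", nativeLibDir)
val sc = new SparkContext(conf)

// If the native Snappy codec is picked up, a .snappy file on HDFS should
// be readable through the normal textFile API.
val lines = sc.textFile("hdfs:///path/to/file.snappy")
println(lines.count())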
e refactoring around interaction with
> Hive, such as SPARK-7491.
>
> It would not be straightforward to port ORC support to 1.3
>
> FYI
>
> On Fri, Aug 21, 2015 at 10:21 PM, dong.yajun wrote:
>
>> hi Ted,
>>
>> thanks for your reply, are there any other way
hi Ted,
thanks for your reply. Is there any other way to do this with Spark 1.3,
such as writing the ORC file manually in a foreachPartition method?
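
Roughly what I have in mind is the sketch below (untested, written
against the Hive 0.13 ORC writer API; the record class, output path,
and file naming are placeholders, and I'm not sure the reflection
ObjectInspector works with a Scala case class):

import java.util.UUID
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.hive.ql.io.orc.OrcFile
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.ObjectInspectorOptions
import org.apache.spark.{SparkConf, SparkContext}

// Placeholder record type -- stands in for our real row class.
case class Event(id: Long, name: String)

val sc = new SparkContext(new SparkConf().setAppName("orc-foreachPartition"))
val rdd = sc.parallelize(Seq(Event(1L, "a"), Event(2L, "b")))  // stand-in RDD

rdd.foreachPartition { rows =>
  val conf = new Configuration()
  // Reflection-based ObjectInspector derived from the record class.
  val inspector = ObjectInspectorFactory.getReflectionObjectInspector(
    classOf[Event], ObjectInspectorOptions.JAVA)

  // One ORC file per partition, named with a UUID to avoid collisions.
  val out = new Path(s"hdfs:///tmp/orc-out/part-${UUID.randomUUID()}.orc")
  val writer = OrcFile.createWriter(out, OrcFile.writerOptions(conf).inspector(inspector))
  try {
    rows.foreach(row => writer.addRow(row))
  } finally {
    writer.close()
  }
}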
On Sat, Aug 22, 2015 at 12:19 PM, Ted Yu wrote:
> ORC support was added in Spark 1.4
> See SPARK-2883
>
> On Fri, Aug 21, 2015 at 7:36 PM
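
(For reference, my understanding of the Spark 1.4 API mentioned above,
SPARK-2883, is that the ORC data source goes through HiveContext,
roughly as in the untested sketch below; all paths are placeholders.)

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("orc-on-1.4"))
val hiveContext = new HiveContext(sc)

// Build a DataFrame from any source, then write it out as ORC.
val df = hiveContext.read.json("hdfs:///tmp/input.json")
df.write.format("orc").save("hdfs:///tmp/orc-out")

// Read it back through the same data source.
val orcDf = hiveContext.read.format("orc").load("hdfs:///tmp/orc-out")
orcDf.show()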
Hi list,
Is there a way to save an RDD result as an ORC file in Spark 1.3? For
several reasons we can't upgrade our Spark version to 1.4 right now.
--
*Ric Dong*