You can safely ignore it. Spark doesn't pick up native libraries from
HADOOP_HOME. See the Hadoop docs on configuring the native library path
if you're curious, but you really don't need to.
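
If you do want Spark to load the native Hadoop libraries anyway, one
common approach is to point the library path at them in
conf/spark-env.sh. This is only a sketch: the /opt/hadoop install
location below is an assumption, so adjust it to wherever you unpacked
the native binaries.

```shell
# conf/spark-env.sh -- sketch; /opt/hadoop is an assumed install path,
# change it to match where your native Hadoop libraries actually live
export HADOOP_HOME=/opt/hadoop
export LD_LIBRARY_PATH="$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH"
```

With that in place the WARN about falling back to builtin-java classes
should disappear on the next shell start, but again, the fallback is
harmless.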

On Thu, Dec 24, 2015 at 12:19 PM, Bilinmek Istemiyor
<benibi...@gmail.com> wrote:
> Hello,
>
> I have Apache Spark 1.5.1 installed with the help of this user group. I
> receive the following warning when I start the pyspark shell:
>
> WARN NativeCodeLoader: Unable to load native-hadoop library for your
> platform... using builtin-java classes where applicable
>
> Later I downloaded the native binaries from the Hadoop site and defined
> the environment variable HADOOP_HOME.
>
> How can I make Spark use the Hadoop native libraries?
---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org