It means your client app is using Hadoop 2.x while your HDFS cluster is running Hadoop 1.x: RPC protocol version 9 is what a Hadoop 2.x client speaks, and version 4 is what a Hadoop 1.x namenode expects.
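If the cluster really is on Hadoop 1.0.4, the usual fix is to give the Spark application a matching Hadoop client, e.g. by rebuilding Spark against that version. A sketch, assuming a Spark 1.1.0 source checkout (downloading the prebuilt "for Hadoop 1" package is an alternative):

```shell
# Rebuild Spark 1.1.0 against the cluster's Hadoop release (1.0.4 here),
# so the driver and executors speak the Hadoop 1.x RPC protocol.
mvn -Dhadoop.version=1.0.4 -DskipTests clean package
```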

On Thu, Jan 22, 2015 at 10:32 PM, ey-chih chow <eyc...@hotmail.com> wrote:
> I looked into the namenode log and found this message:
>
> 2015-01-22 22:18:39,441 WARN org.apache.hadoop.ipc.Server: Incorrect header
> or version mismatch from 10.33.140.233:53776 got version 9 expected version
> 4
>
> What should I do to fix this?
>
> Thanks.
>
> Ey-Chih
>
> ________________________________
> From: eyc...@hotmail.com
> To: yuzhih...@gmail.com
> CC: user@spark.apache.org
> Subject: RE: spark 1.1.0 save data to hdfs failed
> Date: Wed, 21 Jan 2015 23:12:56 -0800
>
> The HDFS release should be Hadoop 1.0.4.
>
> Ey-Chih Chow
>
> ________________________________
> Date: Wed, 21 Jan 2015 16:56:25 -0800
> Subject: Re: spark 1.1.0 save data to hdfs failed
> From: yuzhih...@gmail.com
> To: eyc...@hotmail.com
> CC: user@spark.apache.org
>
> What HDFS release are you using?
>
> Can you check the namenode log around the time of the error below to see if
> there is some clue?
>
> Cheers
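The kind of namenode log entry to look for is an RPC version-mismatch warning. A self-contained sketch (the sample log line is the one reported in this thread; the `/tmp` path is illustrative, as real namenode logs live under your Hadoop log directory):

```shell
# Write a sample namenode log line, then extract the version-mismatch info.
cat > /tmp/namenode-sample.log <<'EOF'
2015-01-22 22:18:39,441 WARN org.apache.hadoop.ipc.Server: Incorrect header or version mismatch from 10.33.140.233:53776 got version 9 expected version 4
EOF
grep -o "got version [0-9]* expected version [0-9]*" /tmp/namenode-sample.log
```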
>
> On Wed, Jan 21, 2015 at 4:51 PM, ey-chih chow <eyc...@hotmail.com> wrote:
>
> Hi,
>
> I used the following fragment of a Scala program to save data to HDFS:
>
>     contextAwareEvents
>     .map(e => (new AvroKey(e), null))
>     .saveAsNewAPIHadoopFile("hdfs://" + masterHostname + ":9000/ETL/output/"
> + dateDir,
>                             classOf[AvroKey[GenericRecord]],
>                             classOf[NullWritable],
>                             classOf[AvroKeyOutputFormat[GenericRecord]],
>                             job.getConfiguration)
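For anyone reproducing this, a more self-contained sketch of the fragment above (assuming spark-core plus avro and avro-mapred on the classpath; `contextAwareEvents`, `masterHostname`, `dateDir`, and the events' Avro `schema` come from the surrounding program and are not shown here):

```scala
import org.apache.avro.generic.GenericRecord
import org.apache.avro.mapred.AvroKey
import org.apache.avro.mapreduce.{AvroJob, AvroKeyOutputFormat}
import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapreduce.Job

// Register the records' Avro schema with the Hadoop job before saving.
val job = new Job()
AvroJob.setOutputKeySchema(job, schema)

contextAwareEvents
  .map(e => (new AvroKey(e), NullWritable.get()))  // NullWritable.get() rather than null
  .saveAsNewAPIHadoopFile(
    "hdfs://" + masterHostname + ":9000/ETL/output/" + dateDir,
    classOf[AvroKey[GenericRecord]],
    classOf[NullWritable],
    classOf[AvroKeyOutputFormat[GenericRecord]],
    job.getConfiguration)
```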
>
> But it failed with the following error messages.  Is there anyone who can
> help?  Thanks.
>
> Ey-Chih Chow
>
> =============================================
>
> Exception in thread "main" java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at
> org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:40)
>         at
> org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
> Caused by: java.io.IOException: Failed on local exception:
> java.io.EOFException; Host Details : local host is:
> "ip-10-33-140-157/10.33.140.157"; destination host is:
> "ec2-54-203-58-2.us-west-2.compute.amazonaws.com":9000;
>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1415)
>         at org.apache.hadoop.ipc.Client.call(Client.java:1364)
>         at
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
>         at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
>         at
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:744)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
>         at
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
>         at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1925)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1079)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1075)
>         at
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1075)
>         at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
>         at
> org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:145)
>         at
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:900)
>         at
> org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:832)
>         at com.crowdstar.etl.ParseAndClean$.main(ParseAndClean.scala:101)
>         at com.crowdstar.etl.ParseAndClean.main(ParseAndClean.scala)
>         ... 6 more
> Caused by: java.io.EOFException
>         at java.io.DataInputStream.readInt(DataInputStream.java:392)
>         at
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1055)
>         at org.apache.hadoop.ipc.Client$Connection.run(Client.java:950)
>
> =======================================================
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
