I don't know the Spark side of this, but the Hadoop context is clear.

old API -> org.apache.hadoop.mapred
new API -> org.apache.hadoop.mapreduce

You might only need to change your import.
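
For example, something like this should work in the shell (an untested sketch; the new-API TextInputFormat lives in org.apache.hadoop.mapreduce.lib.input):

scala> import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
scala> import org.apache.hadoop.io.{LongWritable, Text}
scala> val file2 = sc.newAPIHadoopFile[LongWritable, Text, TextInputFormat]("hdfs://192.168.100.130:8020/user/hue/pig/examples/data/sonnets.txt")

If you would rather keep the old org.apache.hadoop.mapred.TextInputFormat import, call sc.hadoopFile instead; that method expects old-API InputFormats.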

Regards

Bertrand


On Wed, Mar 19, 2014 at 11:29 AM, Pariksheet Barapatre <pbarapa...@gmail.com> wrote:

> Hi,
>
> I'm trying to read an HDFS file with TextInputFormat.
>
> scala> import org.apache.hadoop.mapred.TextInputFormat
> scala> import org.apache.hadoop.io.{LongWritable, Text}
> scala> val file2 = sc.newAPIHadoopFile[LongWritable,Text,TextInputFormat]("hdfs://192.168.100.130:8020/user/hue/pig/examples/data/sonnets.txt")
>
>
> This gives me the following error:
>
> <console>:14: error: type arguments
> [org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text,org.apache.hadoop.mapred.TextInputFormat]
> conform to the bounds of none of the overloaded alternatives of
>  value newAPIHadoopFile: [K, V, F <:
> org.apache.hadoop.mapreduce.InputFormat[K,V]](path: String, fClass:
> Class[F], kClass: Class[K], vClass: Class[V], conf:
> org.apache.hadoop.conf.Configuration)org.apache.spark.rdd.RDD[(K, V)] <and>
> [K, V, F <: org.apache.hadoop.mapreduce.InputFormat[K,V]](path:
> String)(implicit km: scala.reflect.ClassTag[K], implicit vm:
> scala.reflect.ClassTag[V], implicit fm:
> scala.reflect.ClassTag[F])org.apache.spark.rdd.RDD[(K, V)]
>        val file2 = sc.newAPIHadoopFile[LongWritable,Text,TextInputFormat]("hdfs://192.168.100.130:8020/user/hue/pig/examples/data/sonnets.txt")
>
>
> What is the correct syntax if I want to use TextInputFormat?
>
> Also, how do I use a custom InputFormat? A very silly question, but I am not
> sure how and where to put the jar file containing the custom InputFormat class.
>
> Thanks
> Pariksheet
>
>
>
> --
> Cheers,
> Pari
>
