Are you missing the hadoop-lzo package? It's not part of Hadoop or Spark.
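
For a standalone program, a minimal sketch along these lines might work, assuming the Twitter hadoop-lzo jar is on both the compile classpath and the executor classpath (the Maven coordinates and versions below are assumptions; adjust them to whatever hadoop-lzo build your cluster actually ships):

// build.sbt (assumed coordinates/versions)
// libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.2" % "provided"
// libraryDependencies += "com.hadoop.gplcompression" % "hadoop-lzo" % "0.4.20"

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.hadoop.io.{LongWritable, Text}
import com.hadoop.mapreduce.LzoTextInputFormat

object ReadLzo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("ReadLzo")
    val sc = new SparkContext(conf)

    // Same call as in the shell example below; it only compiles if
    // hadoop-lzo (which provides LzoTextInputFormat) is a build dependency.
    val files = sc.newAPIHadoopFile(
      "s3://support.elasticmapreduce/spark/examples/lzodataindexed/*.lzo",
      classOf[LzoTextInputFormat],
      classOf[LongWritable],
      classOf[Text])

    println(files.count())
    sc.stop()
  }
}

At runtime you'd also need the jar (and the native LZO libraries) visible to the executors, e.g. by passing the jar with spark-submit --jars, or it will fail at execution even though it compiles.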

On Sat, Nov 19, 2016 at 4:20 AM learning_spark <
dibyendu.chakraba...@gmail.com> wrote:

> Hi Users,
>
> I am not sure about the latest status of this issue:
> https://issues.apache.org/jira/browse/SPARK-2394
>
> However, I have seen the following link:
> https://github.com/awslabs/emr-bootstrap-actions/blob/master/spark/examples/reading-lzo-files.md
>
> My experience is limited, but I had partial success from the Spark shell;
> my standalone program did not even compile. I suspect some jar file is
> required.
>
> val files = sc.newAPIHadoopFile(
>   "s3://support.elasticmapreduce/spark/examples/lzodataindexed/*.lzo",
>   classOf[com.hadoop.mapreduce.LzoTextInputFormat],
>   classOf[org.apache.hadoop.io.LongWritable],
>   classOf[org.apache.hadoop.io.Text])
>
> Does anyone know how to do this from a standalone program?
>
> Thanks and regards,
>
