Github user mateiz commented on a diff in the pull request:
https://github.com/apache/spark/pull/1658#discussion_r19453138
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaSparkContext.scala ---
@@ -220,6 +227,83 @@ class JavaSparkContext(val sc: SparkContext) extends JavaSparkContextVarargsWork
def wholeTextFiles(path: String): JavaPairRDD[String, String] =
new JavaPairRDD(sc.wholeTextFiles(path))
+ /**
+ * Read a directory of binary files from HDFS, a local file system (available on all nodes),
+ * or any Hadoop-supported file system URI. Each file is read as a single record and
+ * returned in a key-value pair, where the key is the path of the file
+ * and the value is the content of that file.
+ *
+ * <p> For example, if you have the following files:
+ * {{{
+ * hdfs://a-hdfs-path/part-00000
+ * hdfs://a-hdfs-path/part-00001
+ * ...
+ * hdfs://a-hdfs-path/part-nnnnn
+ * }}}
+ *
+ * Do
+ * `JavaPairRDD<String, PortableDataStream> rdd = sparkContext.binaryFiles("hdfs://a-hdfs-path")`,
+ *
+ * <p> then `rdd` contains
+ * {{{
+ * (a-hdfs-path/part-00000, its content)
+ * (a-hdfs-path/part-00001, its content)
+ * ...
+ * (a-hdfs-path/part-nnnnn, its content)
+ * }}}
+ *
+ * @note Small files are preferred; large files are also allowed, but may degrade performance.
+ *
+ * @param minPartitions A suggested minimum number of partitions for the input data.
+ */
+ def binaryFiles(path: String, minPartitions: Int = defaultMinPartitions):
+ JavaPairRDD[String, PortableDataStream] =
+ new JavaPairRDD(sc.binaryFiles(path, minPartitions))
+
+ /**
+ * Read a directory of files from HDFS, a local file system (available on all nodes),
+ * or any Hadoop-supported file system URI, with each file read into a single byte array.
+ * Each file is returned as a single record in a key-value pair, where the key is the
+ * path of the file and the value is its content.
+ *
+ * <p> For example, if you have the following files:
+ * {{{
+ * hdfs://a-hdfs-path/part-00000
+ * hdfs://a-hdfs-path/part-00001
+ * ...
+ * hdfs://a-hdfs-path/part-nnnnn
+ * }}}
+ *
+ * Do
+ * `JavaPairRDD<String, byte[]> rdd = sparkContext.binaryArrays("hdfs://a-hdfs-path")`,
+ *
+ * <p> then `rdd` contains
+ * {{{
+ * (a-hdfs-path/part-00000, its content)
+ * (a-hdfs-path/part-00001, its content)
+ * ...
+ * (a-hdfs-path/part-nnnnn, its content)
+ * }}}
+ *
+ * @note Small files are preferred; large files are also allowed, but may degrade performance.
+ *
+ * @param minPartitions A suggested minimum number of partitions for the input data.
+ */
+ def binaryArrays(path: String, minPartitions: Int = defaultMinPartitions):
--- End diff --
I'd still remove this. It's confusing to see the API in just one language,
and with Java 8, the extra class will be a one-liner.
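A minimal plain-Scala sketch of that point, with no Spark dependency (the paths and byte payloads below are made up for illustration, and the `Seq` of pairs stands in for the RDD that `binaryFiles` would return): once each path is paired with a stream, collapsing the pairs to `(path, byte[])` is a single `map`, which is why a dedicated `binaryArrays` method adds little.

```scala
import java.io.{ByteArrayInputStream, DataInputStream}

// Hypothetical stand-in for the (path, content) records binaryFiles would produce.
val files: Seq[(String, Array[Byte])] = Seq(
  ("hdfs://a-hdfs-path/part-00000", Array[Byte](1, 2, 3)),
  ("hdfs://a-hdfs-path/part-00001", Array[Byte](4, 5))
)

// Wrap each payload in a DataInputStream, the stream-per-file shape under discussion.
val streams: Seq[(String, DataInputStream)] = files.map { case (path, bytes) =>
  (path, new DataInputStream(new ByteArrayInputStream(bytes)))
}

// The one-liner: drain each stream back into a byte array.
// (InputStream.readAllBytes requires Java 9+; on Java 8 a small read loop is needed.)
val asArrays: Seq[(String, Array[Byte])] = streams.map { case (path, in) =>
  (path, in.readAllBytes())
}
```

On an actual RDD the same shape would be a single `mapValues` call, so the conversion stays a one-liner for the caller.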