Hi,

I would like to know the most efficient way of reading TSV files in
Scala, Python, and Java with Spark 2.0.

I believe that in Spark 2.0 CSV is a native source (based on the spark-csv
module), and we can read a .tsv file by specifying

1. .option("delimiter", "\t") in Scala
2. the sep argument in Python.

However, I am unsure of the best way to achieve this in Java.
Furthermore, are the above the most efficient ways to read a TSV file?
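For what it's worth, my understanding is that the Java API exposes the same DataFrameReader options as Scala, so a sketch along these lines should work (the file path is a placeholder, and "sep" / "delimiter" are interchangeable aliases for the CSV source's separator option):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class ReadTsv {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("ReadTsv")
                .master("local[*]")
                .getOrCreate();

        // Same builder-style call as Scala: setting the separator option
        // switches the native CSV source to tab-separated input.
        Dataset<Row> df = spark.read()
                .option("sep", "\t")        // or .option("delimiter", "\t")
                .option("header", "true")   // assuming a header row
                .csv("/path/to/file.tsv");  // hypothetical path

        df.show();
        spark.stop();
    }
}
```

Since all three languages go through the same DataFrameReader, I would not expect a performance difference between them for the read itself.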

Appreciate a response on this.

Regards.
