What are some efficient ways to read a large file into RDDs?

For example, could several executors each read a distinct portion of the
file in parallel, so that together they construct the RDD? Is this
possible to do in Spark?
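
Concretely, something like the sketch below is what I am hoping for, if
sc.textFile really does distribute the read across executors rather than
funneling it through one process. The HDFS path and the partition count
here are hypothetical:

import org.apache.spark.{SparkConf, SparkContext}

object ParallelRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("ParallelRead"))

    // sc.textFile computes input splits and schedules one task per
    // split, so each executor would read only its own portion of the
    // file. Path and minimum partition count are placeholders.
    val lines = sc.textFile("hdfs:///data/large-input.txt", 8)

    println(s"partitions = ${lines.partitions.length}")
    println(s"line count = ${lines.count()}")

    sc.stop()
  }
}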

Currently, I read the file line by line at the driver and construct the
RDD from the collected lines, which makes the driver the bottleneck.
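
For reference, a minimal sketch of what I do today (the local path is
hypothetical):

import scala.io.Source
import org.apache.spark.{SparkConf, SparkContext}

object DriverSideRead {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("DriverSideRead"))

    // The driver alone reads every line and holds the whole file in
    // memory before any executor sees a single record.
    val source = Source.fromFile("/data/large-input.txt")
    val lines = try source.getLines().toList finally source.close()

    // parallelize then ships the in-memory collection out to executors.
    val rdd = sc.parallelize(lines)
    println(s"line count = ${rdd.count()}")

    sc.stop()
  }
}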
