You can do the following; just make sure the number of executors requested equals
the number of executors in your cluster.
import scala.sys.process._

// Run the `hostname` command in each task and collect the output on the driver.
sc.parallelize(0 to 10).map { _ => ("hostname".!!).trim }.collect().foreach(println)
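If you want roughly one invocation per executor, a minimal sketch (numExecutors
here is an assumed value you would match to your cluster; Spark does not strictly
guarantee one task per executor):

import scala.sys.process._

// Assumption: the cluster was started with 4 executors. One partition per
// executor means each executor should receive one task, so the command runs
// once per node in the typical case.
val numExecutors = 4
sc.parallelize(0 until numExecutors, numExecutors)
  .map(_ => ("hostname".!!).trim)
  .collect()
  .foreach(println)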
Have you seen .pipe()?
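A minimal sketch of RDD.pipe(): each partition's elements are written to the
external command's stdin, and every line the command prints to stdout becomes an
element of the resulting RDD. The choice of `hostname` is just illustrative; it
ignores stdin, so you get one hostname line per partition.

// Pipe 4 partitions through the external `hostname` command.
sc.parallelize(0 to 10, 4).pipe("hostname").collect().foreach(println)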
On 11/2/15, 5:36 PM, "patcharee" wrote:
>Hi,
>
>Is it possible to execute native system commands (in parallel) in Spark,
>like scala.sys.process?
>
>Best,
>Patcharee
Hi,
Is it possible to execute native system commands (in parallel) in Spark,
like scala.sys.process?
Best,
Patcharee