I think you may be missing a key word here. Are you saying that the machine
has multiple interfaces and it is not using the one you expect, or that the
receiver is not running on the machine you expect?
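For reference, here is a rough sketch of what the FlumeEventCount example does
with its two arguments, based on the Spark Streaming Flume integration (the
object name and comments are mine; the structure is assumed from the example
class referenced below). The key point is that `FlumeUtils.createStream`
launches a receiver on an executor chosen by the scheduler, and the receiver
then tries to bind to the given host:port on whatever machine it landed on:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

// Hypothetical sketch of the FlumeEventCount example's flow (not the
// exact source); assumes Spark 1.x with the spark-streaming-flume module.
object FlumeEventCountSketch {
  def main(args: Array[String]): Unit = {
    val Array(host, port) = args // e.g. "10.1.15.115", "60000"

    val conf = new SparkConf().setAppName("FlumeEventCount")
    val ssc = new StreamingContext(conf, Seconds(2))

    // createStream starts a push-based Flume *receiver* on some executor;
    // the receiver binds to host:port on that executor's machine. In
    // yarn-cluster mode the scheduler picks the executor, so the receiver
    // may end up on any slave, not necessarily the host you passed in.
    val stream = FlumeUtils.createStream(ssc, host, port.toInt)
    stream.count().map(c => s"Received $c flume events.").print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```

So with the push-based approach, the host argument has to be a machine where
an executor can actually run, and the Flume agent must be configured to push
to that same machine.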
On Sep 26, 2014 3:33 AM, "centerqi hu" <[email protected]> wrote:

> Hi all
> My code is as follows:
>
> /usr/local/webserver/sparkhive/bin/spark-submit \
>   --class org.apache.spark.examples.streaming.FlumeEventCount \
>   --master yarn \
>   --deploy-mode cluster \
>   --queue online \
>   --num-executors 5 \
>   --driver-memory 6g \
>   --executor-memory 20g \
>   --executor-cores 5 \
>   target/scala-2.10/simple-project_2.10-1.0.jar \
>   10.1.15.115 60000
>
> However, the receiver does not run on 10.1.15.115; instead it is placed
> on a randomly chosen slave host.
>
> How to solve this problem?
>
> Thanks
>
>
> --
> [email protected]|齐忠
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
>
>