I'm testing the Flume + Spark Streaming integration example (the Flume event count example).

I'm deploying the job in YARN cluster mode.

I first logged into the YARN cluster, then submitted the job, passing in a
specific worker node's IP as the host the receiver should bind to. But when
I checked the Web UI, the receiver had failed to bind to that IP: it had
been deployed to a different host than the one I specified. Do you know why
this happens?

For your information, I've also tried passing the IP address the resource
manager uses to allocate resources, but no joy. However, when I set the
host to 'localhost' and deploy to the cluster, it binds on whichever worker
node the resource manager selects.
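If it helps to see the underlying constraint: a process can only bind an IP
address that is assigned to the machine it is actually running on, which is
why the receiver fails when the scheduler places it on a different worker
than the one whose IP you passed in, and why 'localhost' always works. A
minimal sketch of that OS-level behavior (the addresses here are
placeholders; 203.0.113.1 is a documentation-only address standing in for
"some other machine's IP"):

```python
import socket

def try_bind(host, port=0):
    """Attempt to bind a TCP socket to `host`; return True on success."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((host, port))
        return True
    except OSError:
        # e.g. EADDRNOTAVAIL: the address does not belong to this machine
        return False
    finally:
        s.close()

# Loopback is local to every machine, so this succeeds.
print(try_bind("127.0.0.1"))
# An IP that belongs to a different host cannot be bound here, so this fails.
print(try_bind("203.0.113.1"))
```

So whichever worker the receiver lands on, it can bind 'localhost' (or
0.0.0.0), but it can only bind a specific worker's IP if it happens to be
scheduled onto that exact worker.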
