Thanks for the info, I have managed to launch an HA cluster by adding
jobmanager.rpc.address for every job manager.
It did not work with start-cluster.sh, though; I had to set it host by host.
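For the record, a rough sketch of that per-host setup (hostnames and FLINK_HOME are placeholders, not from the thread):

```shell
# Assumed layout: FLINK_HOME points at the Flink distribution on each host.
FLINK_HOME=${FLINK_HOME:-/opt/flink}

# conf/masters lists every job manager host:port for start-cluster.sh
# (hostnames are placeholders).
cat > "$FLINK_HOME/conf/masters" <<'EOF'
jm-host-1:8081
jm-host-2:8081
EOF

# start-cluster.sh does not rewrite flink-conf.yaml per host, so each
# job manager host needs its own bind address set locally, e.g. on jm-host-1:
echo "jobmanager.rpc.address: jm-host-1" >> "$FLINK_HOME/conf/flink-conf.yaml"
```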
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi,
It will use the HA settings as long as you specify high-availability:
zookeeper. The jobmanager.rpc.address is used by the jobmanager as a binding
address. You can verify it by starting two jobmanagers and then killing the
leader.
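As a concrete illustration (the hostnames and ZooKeeper quorum below are placeholders, not from the thread; the storageDir is the one mentioned later), the HA-related part of conf/flink-conf.yaml would look roughly like:

```yaml
# Enables ZooKeeper-based high availability.
high-availability: zookeeper
# ZooKeeper ensemble used for leader election (placeholder addresses).
high-availability.zookeeper.quorum: zk-1:2181,zk-2:2181,zk-3:2181
# Shared directory where job manager metadata is persisted.
high-availability.storageDir: file:///shareflink/recovery
# Bind address of *this* job manager instance; with HA enabled, leader
# election decides which instance clients actually talk to.
jobmanager.rpc.address: jm-host-1
```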
Best,
Dawid
On Tue, 21 Aug 2018 at 17:46, mozer
wrote:
> Yeah,
Yeah, you are right. I have already tried to set jobmanager.rpc.address and
it works in that case, but if I use this setting I will not be able to use
HA, am I right?
How can the job manager register itself in ZooKeeper with the right address
instead of localhost?
Hi,
In your case the jobmanager binds itself to localhost, and that is what it
writes to ZooKeeper. Try starting the jobmanager manually with
jobmanager.rpc.address set to the IP of the machine you are running the
jobmanager on. In other words, make sure the jobmanager binds itself to the
right IP.
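A minimal sketch of that manual start, assuming the machine's IP is 192.168.1.10 (a placeholder) and you are in the Flink distribution directory:

```shell
# Set the bind address in this host's config, then start the job manager.
echo "jobmanager.rpc.address: 192.168.1.10" >> conf/flink-conf.yaml
./bin/jobmanager.sh start

# Verify it listens on the machine's IP (default RPC port is 6123),
# not on 127.0.0.1:
netstat -tlnp | grep 6123
```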
Regards
FQDN or full IP; I tried all of them, still no change ...
For ssh connection, I can connect to each machine without passwords.
Do you think that the problem can come from :
*high-availability.storageDir: file:///shareflink/recovery* ?
I don't use HDFS storage but a NAS file system which is co
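For what it's worth, file:///shareflink/recovery can work without HDFS as long as the NAS is mounted at the same path on every job manager host. A quick sanity check, assuming the mount point from the thread:

```shell
# Run on each job manager host: the recovery directory must exist and be
# writable under the same mount point everywhere.
test -d /shareflink/recovery \
  && touch /shareflink/recovery/.ha-write-test \
  && echo "writable on $(hostname)"
```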
First of all, try with the FQDN or the full IP.
Also, in order to run an HA cluster you need to make sure that you have
passwordless SSH access between the master and the slaves.
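A sketch of the usual passwordless-SSH setup (the user and hostname are placeholders):

```shell
# On the master: create a key pair once (no passphrase) and copy the
# public key to every slave/master host.
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@jm-host-2

# Verify: this must log in and return without prompting for a password.
ssh user@jm-host-2 true
```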
On Tue, Aug 21, 2018 at 4:15 PM mozer
wrote:
> I am trying to install a Flink HA cluster (Zookeeper mode) but the task