More details:

Command:
/usr/lib/flink/bin/flink run -m yarn-cluster -yn 48 -ytm 5120 -yqu batch1 -ys 4 \
  --class com.bouygtel.kubera.main.segstage.MainGeoSegStage \
  /home/voyager/KBR/GOS/lib/KUBERA-GEO-SOURCE-0.0.1-SNAPSHOT-allinone.jar \
  -j /home/voyager/KBR/GOS/log -c /home/voyager/KBR/GOS/cfg/KBR_GOS_Config.cfg


The start of the trace is:
Found YARN properties file /tmp/.yarn-properties-voyager
YARN properties set default parallelism to 24
Using JobManager address from YARN properties 
bt1shli3.bpa.bouyguestelecom.fr/172.21.125.28:36700
YARN cluster mode detected. Switching Log4j output to console


The content of /tmp/.yarn-properties-voyager is related to the streaming cluster:

#Generated YARN properties file
#Thu Dec 03 11:03:06 CET 2015
parallelism=24
dynamicPropertiesString=yarn.heap-cutoff-ratio\=0.6@@yarn.application-attempts\=10@@recovery.mode\=zookeeper@@recovery.zookeeper.quorum\=h1r1en01\:2181@@recovery.zookeeper.path.root\=/flink@@state.backend\=filesystem@@state.backend.fs.checkpointdir\=hdfs\:///tmp/flink/checkpoints@@recovery.zookeeper.storageDir\=hdfs\:///tmp/flink/recovery/
jobManager=172.21.125.28\:36700
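
One way to check whether this file is what redirects the batch submission (a diagnostic sketch only, assuming nothing else needs the file during the test): since the trace above shows the client taking the JobManager address from this properties file, moving it aside and re-running the "flink run -m yarn-cluster" command should show whether the batch job then starts its own YARN application.

mv /tmp/.yarn-properties-voyager /tmp/.yarn-properties-voyager.bak
# then re-run the flink run -m yarn-cluster command shown above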




From: LINZ, Arnaud
Sent: Thursday, December 3, 2015 11:01
To: user@flink.apache.org
Subject: RE: HA Mode and standalone containers compatibility?

Yes, it does interfere: I do have additional task managers. My batch 
application shows up in my streaming cluster's Flink GUI instead of creating its 
own container with its own GUI, despite the -m yarn-cluster option.

From: Till Rohrmann [mailto:trohrm...@apache.org]
Sent: Thursday, December 3, 2015 10:36
To: user@flink.apache.org
Subject: Re: HA Mode and standalone containers compatibility?

Hi Arnaud,

as long as you don't have HA activated for your batch jobs, HA shouldn't have 
an influence on the batch execution. If it does interfere, then you should see 
additional task managers connect to the streaming cluster when you execute the 
batch job. Could you check that? Furthermore, could you check whether a 
second YARN application is actually started when you run the batch job?
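
For example (assuming the Hadoop YARN CLI is available on the client machine), the applications currently known to YARN can be listed with:

yarn application -list

If the per-job mode works as intended, a second application should show up there for the batch run.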

Cheers,
Till

On Thu, Dec 3, 2015 at 9:57 AM, LINZ, Arnaud 
<al...@bouyguestelecom.fr> wrote:

Hello,



I have both streaming and batch applications. Since their memory needs 
are not the same, I was using a long-lived container for my streaming apps and 
new short-lived containers to host each batch execution.



For that, I submit streaming jobs with "flink run" and batch jobs with "flink 
run -m yarn-cluster".
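
For illustration, the two submission patterns look like this (the class names, jar paths and sizing flags below are placeholders, not my real jobs):

# streaming job attached to the long-lived YARN session started with yarn-session.sh:
/usr/lib/flink/bin/flink run --class com.example.MyStreamingJob streaming-job.jar

# batch job in its own short-lived YARN cluster:
/usr/lib/flink/bin/flink run -m yarn-cluster -yn 4 -ytm 4096 --class com.example.MyBatchJob batch-job.jar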



This was working fine until I turned ZooKeeper HA mode on for my streaming 
applications.

Even though I don't set it up in the Flink YAML configuration file, but only with -D 
options on the yarn-session.sh command line, my batch jobs now try to run in 
the streaming container and fail because of the lack of resources.



My HA options are:

-Dyarn.application-attempts=10 -Drecovery.mode=zookeeper 
-Drecovery.zookeeper.quorum=h1r1en01:2181 -Drecovery.zookeeper.path.root=/flink 
 -Dstate.backend=filesystem 
-Dstate.backend.fs.checkpointdir=hdfs:///tmp/flink/checkpoints 
-Drecovery.zookeeper.storageDir=hdfs:///tmp/flink/recovery/
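
Concretely, the streaming session is started with something along these lines (only the -D options are the real ones; the path to yarn-session.sh and the session sizing flags are illustrative):

/usr/lib/flink/bin/yarn-session.sh -n 6 -tm 4096 -s 4 \
  -Dyarn.application-attempts=10 -Drecovery.mode=zookeeper \
  -Drecovery.zookeeper.quorum=h1r1en01:2181 \
  -Drecovery.zookeeper.path.root=/flink \
  -Dstate.backend=filesystem \
  -Dstate.backend.fs.checkpointdir=hdfs:///tmp/flink/checkpoints \
  -Drecovery.zookeeper.storageDir=hdfs:///tmp/flink/recovery/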



Am I missing something?



Best regards,

Arnaud

________________________________


The integrity of this message cannot be guaranteed on the Internet. The company 
that sent this message cannot therefore be held liable for its content nor 
attachments. Any unauthorized use or dissemination is prohibited. If you are 
not the intended recipient of this message, then please delete it and notify 
the sender.
