Fang Jin
A question: if Schedulis and DSS are installed on different machines, with 
the standalone DSS installed on machine A (including hadoop\hive\spark) and 
Schedulis installed on machine B, then when configuring Schedulis's 
azkaban.properties, how should the following values be specified?
hadoop.home=
hadoop.conf.dir=
hive.home=
spark.home=



Zhang Huajin
The normal practice is to copy a set of the clients onto the scheduler node.
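
To make that concrete, here is a minimal sketch, assuming the clients on 
machine A are installed under /opt (all paths and hostnames below are 
hypothetical; substitute your actual locations):

  # On machine B, copy the Hadoop/Hive/Spark client installations
  # (including their conf directories) over from machine A:
  scp -r user@machineA:/opt/hadoop /opt/hadoop
  scp -r user@machineA:/opt/hive   /opt/hive
  scp -r user@machineA:/opt/spark  /opt/spark

With the copies in place, azkaban.properties on machine B can point at them:

  hadoop.home=/opt/hadoop
  hadoop.conf.dir=/opt/hadoop/etc/hadoop
  hive.home=/opt/hive
  spark.home=/opt/spark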



Fang Jin
Wouldn't copying a set mean two independent sets of hadoop\hive\spark?



Zhang Huajin
No, it isn't. I'd recommend studying the related concepts first.



Fang Jin
Do I need to build a Hadoop cluster, or is it enough to just copy the 
Hadoop-related directories over? Could you be more specific?



Zhang Huajin
For reference:



Truly offline enterprise deployment of a CDH 5.16.1 cluster
https://mp.weixin.qq.com/s/65y28Fs61IrOehHkxpWxoA



In general, every machine that deploys Linkis or the scheduler must be a 
gateway node. For adding a new gateway deployment, you can refer to this:
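
A quick way to see that the copied client directories are only a client-side 
view of the one cluster on machine A, not a second cluster (a sketch, reusing 
the hypothetical /opt paths from above):

  export HADOOP_HOME=/opt/hadoop
  export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop

  # A gateway node holds no HDFS data itself; it only carries client
  # binaries and config, so this reports the NameNode on the cluster side:
  $HADOOP_HOME/bin/hdfs getconf -confKey fs.defaultFS
  $HADOOP_HOME/bin/hdfs dfs -ls /   # lists the same HDFS root as machine A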



How to configure a Gateway node for a Kerberos environment outside a CDH 
cluster
https://mp.weixin.qq.com/s/Y6iirtXuamJGtIzllKX2Og



Fang Jin
Ok, thank you.


