It seems to be by design in YARN mode. Have you ever made it work in spark-shell?


Jhon Anderson Cardenas Diaz <jhonderson2...@gmail.com> wrote on Wed, Jan 10,
2018 at 9:17 PM:

> *Environment*:
> AWS EMR, yarn cluster.
>
> *Description*:
>
> I am trying to use a Java servlet filter to protect access to the Spark UI
> by setting the property spark.ui.filters. The problem is that when Spark is
> running in YARN mode, that property is always overridden with the
> filter org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter:
>
> *spark.ui.filters:
> org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter*
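>
> For context, such a filter is a standard javax.servlet.Filter. Here is a
> minimal sketch of the kind of filter I mean (the class name
> BasicAuthUiFilter and the "token" init parameter are hypothetical, just
> for illustration):
>
>   import java.io.IOException;
>   import javax.servlet.*;
>   import javax.servlet.http.HttpServletRequest;
>   import javax.servlet.http.HttpServletResponse;
>
>   // Hypothetical filter guarding the Spark UI with a shared token.
>   public class BasicAuthUiFilter implements Filter {
>     private String expectedToken;
>
>     @Override
>     public void init(FilterConfig conf) {
>       // Init params are supplied via spark.<filter class>.param.<name>
>       expectedToken = conf.getInitParameter("token");
>     }
>
>     @Override
>     public void doFilter(ServletRequest req, ServletResponse res,
>                          FilterChain chain)
>         throws IOException, ServletException {
>       String auth = ((HttpServletRequest) req).getHeader("Authorization");
>       if (auth != null && auth.equals("Bearer " + expectedToken)) {
>         chain.doFilter(req, res);  // authorized: pass through to the UI
>       } else {
>         ((HttpServletResponse) res)
>             .sendError(HttpServletResponse.SC_UNAUTHORIZED);
>       }
>     }
>
>     @Override
>     public void destroy() {}
>   }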
>
> And these properties are automatically added:
>
>
> *spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_HOSTS:
> ip-x-x-x-226.eu-west-1.compute.internal*
>
> *spark.org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter.param.PROXY_URI_BASES:
> http://ip-x-x-x-226.eu-west-1.compute.internal:20888/proxy/application_xxxxxxxxxxxxx_xxxx*
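>
> Those entries follow Spark's convention for filter init parameters,
> spark.<filter class name>.param.<param name>=<value>, so a custom filter
> would be wired up the same way. A sketch, again with a hypothetical class
> name and token value:
>
>   spark-submit \
>     --conf spark.ui.filters=com.example.BasicAuthUiFilter \
>     --conf spark.com.example.BasicAuthUiFilter.param.token=s3cr3t \
>     ...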
>
> Any suggestion on how to add a Java security filter so it does not get
> overridden, or perhaps how to configure the security from the Hadoop side?
>
> Thanks.
>
