Hi,
Thank you for your reply.
We are deploying on Kubernetes, and the core-site.xml is part of a ConfigMap that is
common to all our Flink jobs (or at least it was for previous versions).
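For context, that shared ConfigMap basically just ships a core-site.xml roughly along
these lines (the endpoint and credentials below are placeholders, not our real values):

    <configuration>
      <property>
        <name>fs.s3a.endpoint</name>
        <value>http://s3.example.local:9000</value>
      </property>
      <property>
        <name>fs.s3a.path.style.access</name>
        <value>true</value>
      </property>
      <property>
        <name>fs.s3a.access.key</name>
        <value>PLACEHOLDER_ACCESS_KEY</value>
      </property>
      <property>
        <name>fs.s3a.secret.key</name>
        <value>PLACEHOLDER_SECRET_KEY</value>
      </property>
    </configuration>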

This means we would now have to duplicate the configuration in the flink-conf.yaml of
each job instead of keeping it in one common ConfigMap.
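If I understand the S3 plugin docs correctly, the per-job equivalent would be something
like the following in every flink-conf.yaml (again with placeholder values), which is
exactly the duplication we are trying to avoid:

    s3.endpoint: http://s3.example.local:9000
    s3.path.style.access: true
    s3.access-key: PLACEHOLDER_ACCESS_KEY
    s3.secret-key: PLACEHOLDER_SECRET_KEY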

Thanks,
Shachar

On 2020/10/27 08:48:17, Robert Metzger <rmetz...@apache.org> wrote: 
> Hi Shachar,
> 
> Why do you want to use the core-site.xml to configure the file system?
> 
> Since we are adding the file systems as plugins, their initialization is
> customized. It might be the case that we are intentionally ignoring XML
> configurations from the classpath.
> You can configure the filesystem in the flink-conf.yaml file.
> 
> 
> On Sun, Oct 25, 2020 at 7:56 AM Shachar Carmeli <carmeli....@gmail.com>
> wrote:
> 
> > Hi,
> > I'm trying to configure a filesystem for Flink 1.11 using core-site.xml.
> > I tried adding env.hadoop.conf.dir to flink-conf.yaml, and I can see the
> > directory is added to the classpath.
> > Setting the HADOOP_CONF_DIR environment variable didn't help either.
> >
> > Flink 1.11.2 is running in Docker on Kubernetes.
> >
> > I added the Hadoop S3 file system as a plugin, as mentioned in
> > https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/s3.html#hadooppresto-s3-file-systems-plugins
> >
> > When I configure the parameters manually, I can connect to the local s3a
> > server, so it looks like Flink is not reading the core-site.xml file.
> >
> > Please advise.
> >
> > Thanks,
> > Shachar
> >
> 
