Robert,
Thanks for the tip!
Before you replied, I did figure out that putting the keys in flink-conf.yaml
works, and I verified it with btrace: I instrumented
org.apache.hadoop.conf.Configuration.get to see which keys are read, and
org.apache.hadoop.conf.Configuration.substituteVars to see the effective
values. (There is a btr
values. (There is a btr
I validated my assumption. Putting
s3.connection.maximum: 123456
into the flink-conf.yaml file results in the following DEBUG log output:
2020-05-08 16:20:47,461 DEBUG org.apache.flink.fs.s3hadoop.common.HadoopConfigLoader [] - Adding Flink config entry for s3.connection.maximum as fs.s3a.connection.maximum
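A probe along the following lines reproduces that check. This is only a sketch
against the BTrace 1.x API, not the exact script used here; the class name
ConfigProbe is made up, and since flink-s3-fs-hadoop shades its Hadoop classes,
the clazz strings may need the relocated package names:

    import com.sun.btrace.annotations.*;
    import static com.sun.btrace.BTraceUtils.*;

    // Sketch only: traces which config keys Hadoop's Configuration is asked
    // for and what it returns. Adjust the clazz strings if the classes are
    // relocated by shading inside flink-s3-fs-hadoop.
    @BTrace
    public class ConfigProbe {

        // Fires when Configuration.get(String) returns; prints the key and the value.
        @OnMethod(clazz = "org.apache.hadoop.conf.Configuration",
                  method = "get",
                  location = @Location(Kind.RETURN))
        public static void onGet(String name, @Return String value) {
            println(strcat(strcat("get ", name), strcat(" -> ", str(value))));
        }

        // Fires when the (private) substituteVars(String) returns; prints the effective value.
        @OnMethod(clazz = "org.apache.hadoop.conf.Configuration",
                  method = "substituteVars",
                  location = @Location(Kind.RETURN))
        public static void onSubstituteVars(String expr, @Return String result) {
            println(strcat(strcat("substituteVars ", expr), strcat(" -> ", str(result))));
        }
    }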
Hey Jeff,
Which Flink version are you using?
Have you tried configuring the S3 filesystem via Flink's flink-conf.yaml?
AFAIK, all config parameters prefixed with "s3." are mirrored into the
Hadoop file system connector.
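For example, an entry like
    s3.connection.maximum: 200
in flink-conf.yaml should (if I remember the prefix mapping correctly) end up
in the connector's Hadoop configuration as fs.s3a.connection.maximum; the
value 200 is just for illustration.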
On Mon, May 4, 2020 at 8:45 PM Jeff Henrikson wrote:
> 2) How can I tell if flink-s3-fs-hadoop is actually managing to pick up
> the hadoop configuration I have provided, as opposed to some separate
> default configuration?
I'm reading the docs and source of flink-fs-hadoop-shaded. I see that
core-default-shaded.xml has fs.s3a.connection.maximum
Hello Flink users,
I could use help with three related questions:
1) How can I observe retries in the flink-s3-fs-hadoop connector?
2) How can I tell if flink-s3-fs-hadoop is actually managing to pick up
the hadoop configuration I have provided, as opposed to some separate
default configuration?