Hi Anuj,

To the best of my knowledge, Flink does not provide built-in support for encrypting configuration values at the moment. If you are running Flink on Kubernetes, you can achieve encryption of the parameters using an init container; see this SO answer <https://stackoverflow.com/questions/73579176/flink-kubernetes-deployment-how-to-provide-s3-credentials-from-hashicorp-vault> for more detailed instructions. Besides that, it should be possible to override the Configuration object in your job code. Are you using Application mode to run the job?
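To illustrate the second suggestion, here is a minimal sketch of passing the credentials programmatically instead of via flink-conf.yaml. The environment variable names (S3_ACCESS_KEY, S3_SECRET_KEY) are placeholders of my own; the idea is that an init container or secrets manager injects the decrypted values at runtime. Note the caveat from your own experiment: in Session mode the S3 filesystem is typically initialized on the cluster from flink-conf.yaml before the job starts, so a programmatic Configuration is more likely to take effect in Application mode or local execution.

```java
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3CredentialsJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical setup: credentials are injected as environment
        // variables (e.g. by an init container or a secrets manager),
        // so nothing sensitive is written to flink-conf.yaml.
        conf.setString("s3.access.key", System.getenv("S3_ACCESS_KEY"));
        conf.setString("s3.secret.key", System.getenv("S3_SECRET_KEY"));

        // The Configuration passed here only affects components created
        // through this environment; filesystems that the cluster already
        // initialized from flink-conf.yaml will not pick these values up.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment(conf);

        env.fromElements("hello", "flink").print();
        env.execute("s3-credentials-sketch");
    }
}
```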
Best regards,
Biao Geng

Anuj Jain <anuj...@gmail.com> wrote on Mon, 8 May 2023 at 13:55:

> Hi Community,
> I am trying to create an Amazon S3 filesystem distributor using Flink, and
> for this I am using the Hadoop S3A connector with the Flink filesystem sink.
> My Flink application would run in a non-AWS environment, on a native
> cluster, so I need to put my access keys in the Flink configuration.
>
> For connecting to S3 storage, I am configuring flink-conf.yaml with the
> access credentials like:
> s3.access.key: <access key>
> s3.secret.key: <secret key>
> ... and some other parameters required for assuming an AWS IAM role with the
> s3a AssumedRoleCredentialProvider.
>
> Is there a way to encrypt these parameters rather than putting them in
> directly, or is there any other way to supply them programmatically?
>
> I tried to set them programmatically using the Configuration object,
> supplying it via
> StreamExecutionEnvironment.getExecutionEnvironment(Configuration) in my
> job (rather than from flink-conf.yaml), but then the S3 connection failed.
> I think Flink creates the connection pool at startup, even before the job
> is started.
>
> Thanks and Regards
> Anuj Jain