Hi,


I am new to Flink development.

Is there any way to set S3 credentials at runtime?

How can we connect to 3 or more different S3 buckets (each with different credentials)?

Let's say you have 3 CSV files on AWS S3 and you want to join them on their id fields.
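
Roughly, this is the shape of the job I am trying to write. Just a sketch: the bucket and file names are made up, and wiring a different set of credentials into each bucket is exactly the part I cannot figure out.

import org.apache.flink.api.scala._

object JoinSketch {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // (id, value) records coming from three different buckets -- names are made up
    val a = env.readCsvFile[(String, String)]("s3a://bucket-a/a.csv",
      ignoreFirstLine = true, fieldDelimiter = ";")
    val b = env.readCsvFile[(String, String)]("s3a://bucket-b/b.csv",
      ignoreFirstLine = true, fieldDelimiter = ";")
    val c = env.readCsvFile[(String, String)]("s3a://bucket-c/c.csv",
      ignoreFirstLine = true, fieldDelimiter = ";")

    // join all three on the id field (tuple position 0)
    val ab  = a.join(b).where(0).equalTo(0) { (l, r) => (l._1, l._2, r._2) }
    val abc = ab.join(c).where(0).equalTo(0) { (l, r) => (l._1, l._2, l._3, r._2) }

    abc.print()
  }
}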



How can we do this? I don't want to use the flink-conf.yaml file or any other config file, because the sources can change dynamically, so I need to set the credentials dynamically as well.
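
For what it's worth, the Hadoop S3A connector itself has per-bucket configuration keys (fs.s3a.bucket.<bucket>.access.key / .secret.key), so conceptually something like the snippet below is what I would like to set per job. Whether Flink forwards such keys when they are set programmatically on a Configuration is exactly what I am unsure about.

import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.configuration.Configuration

// Sketch only: Hadoop S3A per-bucket keys set at runtime instead of in flink-conf.yaml.
// I do not know whether Flink actually hands these to the S3A file system when they are
// set like this -- that is the question.
val conf = new Configuration()
conf.setString("fs.s3a.bucket.bucket-a.access.key", "***")
conf.setString("fs.s3a.bucket.bucket-a.secret.key", "***")
conf.setString("fs.s3a.bucket.bucket-b.access.key", "***")
conf.setString("fs.s3a.bucket.bucket-b.secret.key", "***")
conf.setString("fs.s3a.bucket.bucket-c.access.key", "***")
conf.setString("fs.s3a.bucket.bucket-c.secret.key", "***")
val env = ExecutionEnvironment.createLocalEnvironment(conf)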



I could not get past the credentials check for even a single CSV file. Here is the code you can try (Scala):



import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.configuration.Configuration

object AwsS3CSVTest {

  def main(args: Array[String]): Unit = {
    // Set the S3A credentials programmatically and hand them to the local environment
    val conf = new Configuration()
    conf.setString("fs.s3a.access.key", "***")
    conf.setString("fs.s3a.secret.key", "***")
    val env = ExecutionEnvironment.createLocalEnvironment(conf)

    // Read a ';'-delimited CSV with a header line from S3
    val datafile = env.readCsvFile("s3a://anybucket/anyfile.csv")
      .ignoreFirstLine()
      .fieldDelimiter(";")
      .types(classOf[String], classOf[String], classOf[String],
             classOf[String], classOf[String], classOf[String])

    datafile.print()
  }
}



I also asked this on Stack Overflow:



https://stackoverflow.com/questions/74482619/apache-flink-s3-file-system-credentials-does-not-work/



I should mention that I know I can do this with Spark: you can access the Hadoop Configuration and set the credentials at runtime:



  // Spark version: the Hadoop configuration is reachable from the SparkContext,
  // so the credentials can be set per job at runtime.
  def getAwsS3DF = {
    val ss = SparkFactory.getSparkSession
    ss.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "xxx")
    ss.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "xxx")

    val df = ss.read.format("csv")
      .option("header", true)
      .option("sep", "\t")
      .load("s3a://anybucket/anyfile.csv")

    df.show
  }
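
The closest Flink-side equivalent I could find in the javadocs is FileSystem.initialize(...), which apparently passes a Configuration on to the registered file system factories. Is something along these lines the intended way? This is just a guess on my part, not something I have verified.

import org.apache.flink.configuration.Configuration
import org.apache.flink.core.fs.FileSystem

// Guess only: hand the keys to the file system factories before any s3a:// path is used.
// flink-s3-fs-hadoop documents the "s3.access-key" / "s3.secret-key" options.
val fsConf = new Configuration()
fsConf.setString("s3.access-key", "***")
fsConf.setString("s3.secret-key", "***")
// Deprecated single-argument variant; there is also an overload taking a PluginManager.
FileSystem.initialize(fsConf)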



So, is there anything I am missing, or is this just not possible?



Thank you.
