Hi team,

I’m using the SnowflakeIO connector to write to Snowflake on the Spark runner,
with an S3 bucket as the staging bucket. The bucket is set up in a different
AWS account, so I want to set the S3 object ACL to bucket-owner-full-control
while writing.

  1.  Do you have a status update on ticket [1]? Is it possible to prioritize
it?
  2.  Is there a way to force SnowflakeIO to use the Hadoop S3 connector
instead of Beam's S3FileSystem? We already have the ACL settings configured in
the Hadoop configs on the Spark cluster.
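
For reference, the ACL setting we have in the cluster's Hadoop configuration
looks roughly like this (fs.s3a.acl.default is a standard hadoop-aws S3A
property; the exact file and values on our cluster may differ):

    <!-- core-site.xml on the Spark cluster -->
    <property>
      <name>fs.s3a.acl.default</name>
      <value>BucketOwnerFullControl</value>
    </property>

As far as I can tell this only takes effect for writes that go through the
s3a:// Hadoop connector, which is why I'm asking about question 2.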

[1]
https://issues.apache.org/jira/browse/BEAM-10850
