Can someone help with this issue? It's a blocker for our adoption of Beam for
Snowflake IO.

Thanks so much!

From: Anuj Gandhi <an...@zillowgroup.com>
Reply-To: "user@beam.apache.org" <user@beam.apache.org>
Date: Friday, July 16, 2021 at 12:07 PM
To: "user@beam.apache.org" <user@beam.apache.org>
Cc: "Tao Li (@taol)" <_git...@zillowgroup.com>
Subject: [Question] Snowflake IO cross account s3 write

Hi team,

I’m using the Snowflake IO connector to write to Snowflake on the Spark runner,
with an S3 bucket as the staging bucket. That bucket lives in a different AWS
account, so I need the staged S3 objects written with the
bucket-owner-full-control ACL. A minimal sketch of the write is below.
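
For reference, here is roughly what our write looks like. This is a hedged
sketch: the server, database, bucket, integration, and table names are
placeholders, not our real configuration.

    import org.apache.beam.sdk.io.snowflake.SnowflakeIO;
    import org.apache.beam.sdk.values.PCollection;

    public class SnowflakeWriteSketch {
      public static void write(PCollection<String[]> rows, String user, String password) {
        SnowflakeIO.DataSourceConfiguration config =
            SnowflakeIO.DataSourceConfiguration.create()
                .withUsernamePasswordAuth(user, password)
                .withServerName("myaccount.snowflakecomputing.com")
                .withDatabase("MY_DB")
                .withSchema("PUBLIC")
                .withWarehouse("MY_WH");

        rows.apply(
            "WriteToSnowflake",
            SnowflakeIO.<String[]>write()
                .withDataSourceConfiguration(config)
                .to("MY_TABLE")
                // Staged files land in this bucket, which is owned by a
                // different AWS account. SnowflakeIO stages them through
                // Beam's S3FileSystem, and we have not found an option
                // there to set an object ACL on what it writes.
                .withStagingBucketName("s3://cross-account-staging-bucket/data/")
                .withStorageIntegrationName("my_storage_integration")
                .withUserDataMapper(row -> row));
      }
    }

Because the staged files are created without bucket-owner-full-control, the
owning account cannot read them, which is what blocks us.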

  1.  Do you have a status update on ticket [1]? Is it possible to prioritize
it?
  2.  Is there a way to force Snowflake IO to use the Hadoop S3 connector
(s3a) instead of Beam's S3FileSystem? We already have the ACL settings
configured in the Hadoop configs on the Spark cluster; see the sketch after
this list.
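
For reference, the ACL setting we already rely on for other Spark jobs goes
through hadoop-aws's s3a connector. The property name is the standard
hadoop-aws one; the helper class is just illustrative.

    import org.apache.hadoop.conf.Configuration;

    public class S3aAclSketch {
      // hadoop-aws applies this canned ACL to every object the s3a connector
      // writes; BucketOwnerFullControl grants the bucket owner in the other
      // account full control over the uploaded files.
      public static Configuration withBucketOwnerAcl(Configuration conf) {
        conf.set("fs.s3a.acl.default", "BucketOwnerFullControl");
        return conf;
      }
    }

On the cluster we set the same property via Spark's spark.hadoop.*
passthrough (spark.hadoop.fs.s3a.acl.default=BucketOwnerFullControl), so
anything written through an s3a:// path picks it up. Files staged by
Snowflake IO go through Beam's S3FileSystem instead and miss it, hence the
question.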

[1] https://issues.apache.org/jira/browse/BEAM-10850
