piyushnarang commented on issue #8117: [FLINK-12115] [filesystems]: Add support 
for AzureFS
URL: https://github.com/apache/flink/pull/8117#issuecomment-487765637
 
 
   Thanks for getting back and taking a look @tillrohrmann and @shuai-xu. To 
answer some of your top-level comments / questions:
   1) The flink-azure-fs-hadoop jars are written out to the opt/ directory in 
the flink-dist (based on comments in the original review). I've tested this in 
local Flink jobs, and I'm still sorting out a few things so that I can test it 
on our internal Hadoop cluster.
   2) I can add some E2E tests along the lines of `test_shaded_presto_s3`. Do we 
have an Azure bucket at the project level that I should use? Or should I add 
the tests similar to the IT test, and the folks who run them can fill in their 
Azure details?
   3) ITCase with HTTP - It looks like they do support retrieving this 
information via their REST API 
(https://docs.microsoft.com/en-us/rest/api/storagerp/storageaccounts/getproperties).
 I can try hooking this up to the IT case so that the HTTP tests only run if 
`supportsHttpsTrafficOnly` = false.
   4) Dependency jars - If I understand correctly, some of these dependent jars 
(like hadoop-azure / azure-storage) should be part of the Hadoop install, right? 
Or do I need to tweak things to package them?
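
   To make point 1 concrete, this is roughly what enabling the filesystem would look like for a user; the `enable_azure_fs` helper and the exact jar name are illustrative placeholders, not anything in the PR:

```shell
# Sketch only: the shaded flink-azure-fs-hadoop jar ships in opt/ of the
# flink-dist, and copying it into lib/ puts it on the classpath of every
# Flink process. The function name and paths are hypothetical.
enable_azure_fs() {
  local flink_home=${1:?usage: enable_azure_fs <FLINK_HOME>}
  cp "$flink_home"/opt/flink-azure-fs-hadoop-*.jar "$flink_home"/lib/
}
```

   e.g. `enable_azure_fs /opt/flink` before starting the cluster.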
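
   For the E2E option in point 2 where runners fill in their own details, the script could simply skip itself when no credentials are present. The `ARTIFACTS_AZURE_*` variable names below are made up for illustration, not existing Flink CI variables:

```shell
# Skip guard for a hypothetical test_azure_fs.sh, mirroring how
# test_shaded_presto_s3-style scripts are driven by environment variables.
azure_credentials_present() {
  [[ -n "${ARTIFACTS_AZURE_ACCOUNT:-}" && -n "${ARTIFACTS_AZURE_ACCESS_KEY:-}" ]]
}

if azure_credentials_present; then
  echo "Running Azure FS e2e test against account $ARTIFACTS_AZURE_ACCOUNT"
  # ... submit a job with a wasb:// output path here ...
else
  echo "Azure credentials not set; skipping Azure FS e2e test."
fi
```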
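
   For point 3, a rough sketch of that REST check: the endpoint is the storage accounts getProperties call from the linked docs, while the subscription / resource-group / token variables are placeholders assumed to come from whoever runs the IT case:

```shell
# Succeeds when the getProperties JSON on stdin reports
# "supportsHttpsTrafficOnly": true, i.e. plain-HTTP tests must be skipped.
https_only_enforced() {
  grep -q '"supportsHttpsTrafficOnly"[[:space:]]*:[[:space:]]*true'
}

# Guarded so this is a no-op without credentials; AZURE_TOKEN and friends
# are assumptions about the test environment, not part of the PR.
if [[ -n "${AZURE_TOKEN:-}" ]]; then
  props=$(curl -s -H "Authorization: Bearer $AZURE_TOKEN" \
    "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.Storage/storageAccounts/$ACCOUNT_NAME?api-version=2018-07-01")
  if echo "$props" | https_only_enforced; then
    echo "supportsHttpsTrafficOnly is true; skipping HTTP tests."
  else
    echo "Running the HTTP tests as well."
  fi
fi
```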
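
   And for point 4, one way to answer the question empirically is to look at what the Hadoop install already puts on the classpath (assuming a `hadoop` CLI on the node; the helper below is just a sketch):

```shell
# Reads a colon-separated classpath on stdin and succeeds when both the
# hadoop-azure and azure-storage jars are already on it.
has_azure_jars() {
  local cp
  cp=$(tr ':' '\n')
  echo "$cp" | grep -q 'hadoop-azure' && echo "$cp" | grep -q 'azure-storage'
}

# Usage on a cluster node (not run here):
#   hadoop classpath | has_azure_jars && echo "already provided by Hadoop"
```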

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
