parthchandra commented on PR #1377:
URL: https://github.com/apache/datafusion-comet/pull/1377#issuecomment-2652559874
> > Doing all this natively is not going to be easy so for the next phase where we are moving more of our parquet code to native, we will have a hook back to the jvm class when we cannot access hdfs or an object store natively.
>
> Yeah that's all I was referring to, was wondering how Hadoop configs would get translated to datafusion scan/object store configs. I thought maybe the whole thing would still go through the JNI for hadoop file system interactions. I guess you could actually use the JNI based object store in this PR for any Hadoop file system.

Right now, Comet is losing the configs passed in to Hadoop, which is something we will address (soon?). Converting those configs to DataFusion configs will have to be done as and when we encounter them. To start with, the configs from hadoop-aws would probably be the first to be handled. I haven't dug any deeper into this yet.
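For hadoop-aws, what I have in mind is roughly the sketch below. This is only illustrative: the `s3a_conf_to_object_store` helper is hypothetical, and it just forwards a few well-known `fs.s3a.*` keys to object_store's `AmazonS3Builder`; the actual set of keys we translate, and what we do with unrecognized ones, still needs to be worked out.

```rust
use std::collections::HashMap;

use object_store::aws::{AmazonS3, AmazonS3Builder};

/// Hypothetical helper: map the hadoop-aws (fs.s3a.*) options handed to us
/// from the JVM side onto the equivalent object_store settings. Only a few
/// common keys are shown; anything unrecognized is skipped for now.
fn s3a_conf_to_object_store(
    bucket: &str,
    hadoop_conf: &HashMap<String, String>,
) -> object_store::Result<AmazonS3> {
    let mut builder = AmazonS3Builder::new().with_bucket_name(bucket);

    for (key, value) in hadoop_conf {
        builder = match key.as_str() {
            "fs.s3a.access.key" => builder.with_access_key_id(value.as_str()),
            "fs.s3a.secret.key" => builder.with_secret_access_key(value.as_str()),
            "fs.s3a.endpoint" => builder.with_endpoint(value.as_str()),
            "fs.s3a.endpoint.region" => builder.with_region(value.as_str()),
            // path-style access in hadoop-aws maps to *disabling*
            // virtual-hosted-style requests in object_store
            "fs.s3a.path.style.access" => {
                builder.with_virtual_hosted_style_request(value != "true")
            }
            // Unrecognized keys are dropped until we decide how to handle them.
            _ => builder,
        };
    }

    builder.build()
}
```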