This is an interesting one.
I have never tried to add --files ...
spark-submit --master yarn --deploy-mode client --files
/etc/hive/conf/hive-site.xml,/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
Rather, under $SPARK_HOME/conf, I create soft links to the needed XML files,
as below:
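For the three XML files above, something like:

cd $SPARK_HOME/conf
ln -s /etc/hive/conf/hive-site.xml hive-site.xml
ln -s /etc/hadoop/conf/core-site.xml core-site.xml
ln -s /etc/hadoop/conf/hdfs-site.xml hdfs-site.xml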
Thanks everyone. I was able to resolve this.
Here is what I did: I just passed the conf file using the --files option.
The mistake I made was reading the json conf file before creating the spark
session. Reading it after creating the spark session fixed it. Thanks once
again for your valuable suggestions.
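Roughly, the working version looks like this (a minimal PySpark sketch; the
app name is made up, and conf.json is the file shipped with --files):

import json
from pyspark import SparkFiles
from pyspark.sql import SparkSession

# Create the session first; only then can the --files be resolved.
spark = SparkSession.builder.appName("my-app").getOrCreate()

# conf.json was shipped with --files, so look up its local copy via SparkFiles.
with open(SparkFiles.get("conf.json")) as f:
    conf = json.load(f)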
If code running on the executors needs some local file, like a config file,
then it does have to be passed this way. That much is normal.
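For example, a task like this (everything here is illustrative: the function,
the rdd, and the ftp_host key) can only open the file on an executor if it was
distributed via --files:

import json
from pyspark import SparkFiles

def enrich_partition(rows):
    # This runs on an executor; SparkFiles.get resolves the local copy
    # that --files shipped to this node.
    with open(SparkFiles.get("conf.json")) as f:
        conf = json.load(f)
    for row in rows:
        yield (row, conf.get("ftp_host"))

# e.g. some_rdd.mapPartitions(enrich_partition)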
On Sat, May 15, 2021 at 1:41 AM Gourav Sengupta wrote:
Hi,
once again, let's start with the requirement: why are you trying to pass xml
and json files to SPARK instead of reading them in SPARK?
Generally, when people pass files this way, they are python or jar files.
Regards,
Gourav
On Sat, May 15, 2021 at 5:03 AM Amit Joshi wrote:
Hi KhajaAsmath,
Client vs Cluster: in client mode the driver runs on the machine from which
you submit your job, whereas in cluster mode the driver runs on one of the
worker nodes.
I think you need to pass the conf file to your driver, as you are using it in
the driver code, which runs on one of the worker nodes in cluster mode.
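For example (your_job.py is just a placeholder):

# client mode: the driver runs on the submitting machine, so it can read
# /appl/common/ftp/conf.json from the local FS directly
spark-submit --master yarn --deploy-mode client your_job.py

# cluster mode: the driver runs on a worker node, so ship the file with it
spark-submit --master yarn --deploy-mode cluster --files /appl/common/ftp/conf.json your_job.py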
Here is my updated spark-submit, still without any luck:
spark-submit --master yarn --deploy-mode cluster --files
/appl/common/ftp/conf.json,/etc/hive/conf/hive-site.xml,/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
--num-executors 6 --executor-cores 3 --driver-cores 3 --driver-memory 7g
Sorry, my bad, it did not resolve the issue; I still have the same problem.
Can anyone please guide me? I was still running as a client instead of a
cluster.
On Fri, May 14, 2021 at 5:05 PM KhajaAsmath Mohammed <
mdkhajaasm...@gmail.com> wrote:
You are right. It worked, but I still don't understand why I need to pass
that to all executors.
On Fri, May 14, 2021 at 5:03 PM KhajaAsmath Mohammed <
mdkhajaasm...@gmail.com> wrote:
I am using the json only to read properties before creating the spark
session. I don't know why we need to pass that to all executors.
On Fri, May 14, 2021 at 5:01 PM Longjiang.Yang wrote:
> Could you check whether this file is accessible in executors? (is it in
> HDFS or in the client local FS)
> /appl