[Spark conf setting] spark.sql.parquet.cacheMetadata = true still invalidates cache in memory.

2021-07-01 Thread Parag Mohanty
Hi Team, I am trying to read a parquet file, cache it, and then do a transformation and overwrite the parquet file in the same session. But the first count action doesn't cache the dataframe; it gets cached while caching the transformed dataframe. Even with spark.sql.parquet.cacheMetadata = true, the write
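
A minimal PySpark sketch of the read-cache-transform-overwrite pattern described above (the paths and the added column are illustrative, not from the original post):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("cache-then-overwrite").getOrCreate()

    df = spark.read.parquet("/data/input")   # illustrative path
    df.cache()
    df.count()    # action intended to materialize the cache

    transformed = df.withColumn("flag", F.lit(1))
    transformed.cache()
    transformed.count()

    # Overwriting the very path a cached dataframe was read from can
    # invalidate the cache or fail; writing to a separate path avoids that.
    transformed.write.mode("overwrite").parquet("/data/output")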

PySpark, setting spark conf values in a function and catching for errors

2021-01-15 Thread Mich Talebzadeh
Hi, I have multiple routines that use Spark for Google BigQuery and set these configuration values. I have decided to put them in a PySpark function, as below, with spark as an input. def setSparkConfSet(spark): try: spark.conf.set("GcpJsonKeyFile", config['GCPVariables']['js
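
A sketch of that pattern: a small function that applies settings to an existing session and surfaces failures rather than swallowing them (the key names and the config dict are assumptions based on the truncated post):

    def set_spark_conf(spark, config):
        # Note: only runtime (non-static) confs can be changed on a live session.
        try:
            spark.conf.set("GcpJsonKeyFile",
                           config["GCPVariables"]["jsonKeyFile"])   # key name assumed
            spark.conf.set("temporaryGcsBucket",
                           config["GCPVariables"]["tmpBucket"])     # key name assumed
        except Exception as e:
            print(f"Failed to set Spark conf: {e}")
            raise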

Re: Spark Conf

2018-03-15 Thread Neil Jonkers
Hi, "In general, configuration values explicitly set on a SparkConf take the highest precedence, then flags passed to spark-submit, then values in the defaults file." https://spark.apache.org/docs/latest/submitting-applications.html Perhaps this will help, Vinyas: look at args.sparkProperties in ht
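
A short PySpark illustration of that documented precedence (assuming the conf is built before the SparkContext starts; the 8g value mirrors the question below):

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    conf = SparkConf().set("spark.executor.memory", "8g")  # explicit set: highest precedence

    spark = SparkSession.builder.config(conf=conf).getOrCreate()

    # Even if spark-submit passed --executor-memory 4g, or spark-defaults.conf
    # says 2g, the effective value here is the explicit 8g.
    print(spark.conf.get("spark.executor.memory"))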

Spark Conf

2018-03-14 Thread Vinyas Shetty
Hi, I am trying to understand the Spark internals, so I was looking at the Spark code flow. Now in a scenario where I do a spark-submit in yarn cluster mode with --executor-memory 8g via the command line, how does Spark know about this executor memory value, since in SparkContext I see: _executorMemo

Spark conf forgets cassandra host in the configuration file

2018-02-08 Thread Ismail Bayraktar
Hello, I am facing an issue with SparkConf while reading the Cassandra host property from the default Spark configuration file. I use Kafka 2.11.0.10, Spark 2.2.1, and Cassandra 3.11. I have a Docker container where the Spark master, a worker, and my app run in standalone cluster mode. I have a
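
With the spark-cassandra-connector, the host normally comes from spark.cassandra.connection.host; a minimal sketch of pinning it explicitly so it cannot be lost between the defaults file and the session (the hostname is illustrative):

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    conf = (SparkConf()
            .set("spark.cassandra.connection.host", "cassandra")   # illustrative host
            .set("spark.cassandra.connection.port", "9042"))

    spark = SparkSession.builder.config(conf=conf).getOrCreate()

    # Equivalent spark-defaults.conf entry:
    #   spark.cassandra.connection.host   cassandra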

How to configure global_temp database via Spark Conf

2017-02-28 Thread SRK
Hi, how can the global_temp database be configured via SparkConf? I know that it's a system-preserved database. Can it still be configured via SparkConf? Thanks, Swetha
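
The name of that database is governed by the static conf spark.sql.globalTempDatabase; it is an internal setting, so treat this sketch as an assumption to verify against your Spark version. Static confs must be set before the session starts:

    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             # internal static conf -- must be set at session creation
             .config("spark.sql.globalTempDatabase", "my_global_temp")
             .getOrCreate())

    spark.range(5).createOrReplaceGlobalTempView("nums")
    spark.sql("SELECT * FROM my_global_temp.nums").show()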

Re: pass custom spark-conf

2015-12-31 Thread KOSTIANTYN Kudriavtsev
I want to add AWS credentials into hdfs-site.xml and pass a different xml for different users. Thank you, Konstantin Kudryavtsev On Thu, Dec 31, 2015 at 2:19 PM, Ted Yu wrote: > Check out --conf option for spark-submit > > bq. to configure different hdfs-site.xml > > What config parameters do you
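
One way to get per-user Hadoop settings without swapping hdfs-site.xml files is Spark's spark.hadoop.* prefix, which forwards the suffixed key into the Hadoop Configuration; a sketch with illustrative s3a credential keys:

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    conf = (SparkConf()
            # spark.hadoop.<key> is copied into the Hadoop Configuration
            .set("spark.hadoop.fs.s3a.access.key", "AKIA...")    # per-user value
            .set("spark.hadoop.fs.s3a.secret.key", "..."))       # per-user value

    spark = SparkSession.builder.config(conf=conf).getOrCreate()

    # Or per user on the command line:
    #   spark-submit --conf spark.hadoop.fs.s3a.access.key=... app.py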

Re: pass custom spark-conf

2015-12-31 Thread Ted Yu
Check out the --conf option for spark-submit. bq. to configure different hdfs-site.xml What config parameters do you plan to change in hdfs-site.xml? If a parameter only affects the HDFS NN / DN, passing hdfs-site.xml wouldn't take effect, right? Cheers On Thu, Dec 31, 2015 at 10:48 AM, KOSTIANTYN K

pass custom spark-conf

2015-12-31 Thread KOSTIANTYN Kudriavtsev
Hi all, I'm trying to use a different spark-defaults.conf per user, i.e. I want to have spark-user1.conf etc. Is there a way to pass a path to the appropriate conf file when I'm using a standalone Spark installation? Also, is it possible to configure a different hdfs-site.xml and pass it as well with spark-
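
spark-submit's --properties-file flag points at an alternate defaults file, which covers the per-user case directly. Below is a sketch of the same idea done programmatically, loading a per-user properties file into SparkConf (assuming the usual whitespace-separated key/value format):

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    def conf_from_file(path):
        """Load 'key value' pairs from a spark-defaults-style file."""
        conf = SparkConf()
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                parts = line.split(None, 1)
                if len(parts) == 2:
                    conf.set(parts[0], parts[1])
        return conf

    spark = (SparkSession.builder
             .config(conf=conf_from_file("/etc/spark/conf/spark-user1.conf"))
             .getOrCreate())

    # Built-in alternative:
    #   spark-submit --properties-file spark-user1.conf ...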

Fwd: Oozie SparkAction not able to use spark conf values

2015-12-07 Thread Rajadayalan Perumalsamy
--conf outputdb="dev.db" --verbose hdfs://nameservice1/user/dev/spark/conf/TrySparkAction.jar >>> Invoking Spark class now >>> Heart beat <<

Oozie SparkAction not able to use spark conf values

2015-12-04 Thread Rajadayalan Perumalsamy
--conf outputdb="dev.db" --verbose hdfs://nameservice1/user/dev/spark/conf/TrySparkAction.jar >>> Invoking Spark class now >>> Heart beat <<

Re: log4j.xml bundled in jar vs log4j.properties in spark/conf

2015-08-07 Thread mlemay

Re: log4j.xml bundled in jar vs log4j.properties in spark/conf

2015-08-06 Thread mlemay
I'm having the same problem here.

Re: log4j.xml bundled in jar vs log4j.properties in spark/conf

2015-07-22 Thread Steve Loughran
> Hi, I have log4j.xml in my jar. From 1.4.1 it seems that log4j.properties in spark/conf is defined first in the classpath, so the spark/conf/log4j.properties "wins"; before that (in v1.3.0) the log4j.xml bundled in the jar defined the configuration. > If I manually add my jar

log4j.xml bundled in jar vs log4j.properties in spark/conf

2015-07-21 Thread igor.berman
Hi, I have log4j.xml in my jar. From 1.4.1 it seems that log4j.properties in spark/conf is defined first in the classpath, so the spark/conf/log4j.properties "wins"; before that (in v1.3.0) the log4j.xml bundled in the jar defined the configuration. If I manually add my jar to be strictly first in
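
A common way to make a specific log4j file win regardless of classpath ordering is to point the JVMs at it explicitly through extraJavaOptions; a sketch (the file path is illustrative, and in client mode the driver option generally has to go on the spark-submit command line rather than be set programmatically):

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    log4j_opt = "-Dlog4j.configuration=file:/path/to/log4j.xml"   # illustrative path

    conf = (SparkConf()
            .set("spark.driver.extraJavaOptions", log4j_opt)
            .set("spark.executor.extraJavaOptions", log4j_opt))

    spark = SparkSession.builder.config(conf=conf).getOrCreate()

    # spark-submit equivalent:
    #   --conf "spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/path/to/log4j.xml"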

Re: Passing Elastic Search Mappings in Spark Conf

2015-04-16 Thread Deepak Subhramanian
On Thu, Apr 16, 2015 at 1:14 AM, Deepak Subhramanian wrote: >> Hi, >> Is there a way to pass the mapping to define a field as not analyzed >> with es-spark settings? >> I am just wondering if I can set t

Re: Passing Elastic Search Mappings in Spark Conf

2015-04-15 Thread Nick Pentreath
alyzed using the set function in spark conf, similar to the other > es settings. > val sconf = new SparkConf() > .setMaster("local[1]") > .setAppName("Load Data To ES") > .set("spark.ui.port", "4141") > .set("es.index.auto.create"

Passing Elastic Search Mappings in Spark Conf

2015-04-15 Thread Deepak Subhramanian
Hi, Is there a way to pass the mapping to define a field as not analyzed with es-spark settings? I am just wondering if I can set the mapping type for a field as not analyzed using the set function in SparkConf, similar to the other es settings. val sconf = new SparkConf() .setMaster
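
As far as I know, es-spark settings such as es.index.auto.create control index creation but not per-field analysis; the mapping itself is normally created through the Elasticsearch API before writing. A PySpark sketch of the conf side, mirroring the settings quoted above (the node address is illustrative):

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    conf = (SparkConf()
            .setMaster("local[1]")
            .setAppName("Load Data To ES")
            .set("es.nodes", "localhost:9200")        # illustrative
            .set("es.index.auto.create", "false"))    # rely on a pre-created index

    spark = SparkSession.builder.config(conf=conf).getOrCreate()

    # The not_analyzed setting lives in the index mapping, created beforehand,
    # e.g. PUT /myindex with a mappings body marking the field "not_analyzed".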

Re: making spark/conf/spark-defaults.conf changes take effect

2014-05-19 Thread Andrew Or
> in an aws ec2 cluster that I launched using the spark-ec2 script that comes with spark, and I use the "-v master" option to run the head version. > If I then log into master and make changes to spark/conf/spark-defaults.conf, how do I make the changes take effe

making spark/conf/spark-defaults.conf changes take effect

2014-05-18 Thread Daniel Mahler
I am running in an aws ec2 cluster that I launched using the spark-ec2 script that comes with spark, and I use the "-v master" option to run the head version. If I then log into master and make changes to spark/conf/spark-defaults.conf, how do I make the changes take effect across the cl