Hi Team
I am trying to read a Parquet file, cache it, do a transformation, and then
overwrite the same Parquet file within one session.
But the first count action doesn't cache the DataFrame.
It only gets cached while caching the transformed DataFrame.
Even with spark.sql.parquet.cacheMetadata = true, the write
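A rough, untested sketch of the read → cache → transform → overwrite pattern being described (the path and column name are made up, not from the original post):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cache-then-overwrite").getOrCreate()

path = "/tmp/events.parquet"  # hypothetical input path

df = spark.read.parquet(path)
df.cache()
df.count()  # action intended to materialize the cache before the source is touched

transformed = df.withColumn("processed", F.lit(True))
transformed.cache()
transformed.count()

# Overwriting the path that is still the lineage source only works when the
# needed data has actually been materialized in the cache; otherwise the write
# can fail or lose data because the input files are deleted first.
transformed.write.mode("overwrite").parquet(path)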
Hi,
I have multiple routines that use Spark with Google BigQuery and set the same
configuration values. I have decided to put them in a PySpark function, as
below, that takes spark as an input.
def setSparkConfSet(spark):
    try:
        spark.conf.set("GcpJsonKeyFile",
                       config['GCPVariables']['js
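A minimal sketch of how such a helper could look. The config dictionary keys below are hypothetical stand-ins for the truncated originals; only "GcpJsonKeyFile" is taken from the snippet above:

def set_spark_conf(spark, config):
    """Apply the shared GCP/BigQuery settings to an existing SparkSession."""
    try:
        # The dictionary keys are illustrative; substitute the real ones.
        spark.conf.set("GcpJsonKeyFile", config['GCPVariables']['jsonKeyFile'])
        spark.conf.set("BigQueryProjectId", config['GCPVariables']['projectId'])
    except Exception as err:
        print(f"Could not set Spark configuration: {err}")
        raise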
Hi
"In general, configuration values explicitly set on a SparkConf take the
highest precedence, then flags passed to spark-submit, then values in the
defaults file."
https://spark.apache.org/docs/latest/submitting-applications.html
Perhaps this will help, Vinyas:
Look at args.sparkProperties in
ht
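To illustrate the precedence rule quoted above, here is a small untested sketch: a value set explicitly on a SparkConf should win over a --conf flag passed to spark-submit, which in turn wins over an entry in conf/spark-defaults.conf.

from pyspark import SparkConf
from pyspark.sql import SparkSession

# Explicit SparkConf value: per the quoted docs, this takes precedence over
# spark-submit flags and over the defaults file.
conf = SparkConf().set("spark.executor.memory", "8g")

spark = SparkSession.builder.config(conf=conf).getOrCreate()
print(spark.conf.get("spark.executor.memory"))  # expected: "8g"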
Hi,
I am trying to understand the Spark internals, so I was looking at the Spark
code flow. In a scenario where I do a spark-submit in YARN cluster mode
with --executor-memory 8g on the command line, how does Spark learn about
this executor memory value? In SparkContext I see:
_executorMemo
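As far as I understand, spark-submit translates --executor-memory into the spark.executor.memory property of the application's SparkConf, which is what SparkContext reads. A quick way to check from a running application (untested sketch):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# After `spark-submit --executor-memory 8g ...`, the flag should show up as a
# plain SparkConf entry.
print(sc.getConf().get("spark.executor.memory", "not set"))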
Hello,
I am facing an issue with SparkConf while reading the Cassandra host property
from the default Spark configuration file.
I use Kafka 2.11.0.10, Spark 2.2.1, and Cassandra 3.11. I have a Docker
container where the Spark master, a worker, and my app run in standalone
cluster mode. I have a
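Assuming the spark-cassandra-connector is in use, its host setting is usually spark.cassandra.connection.host. A small untested check that the value from spark-defaults.conf is actually visible to the driver:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# If conf/spark-defaults.conf contains a line such as
#   spark.cassandra.connection.host   cassandra-host
# the value should be visible here; if not, the driver is probably not
# loading the defaults file you think it is.
print(spark.conf.get("spark.cassandra.connection.host", "not set"))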
Hi,
How do I configure the global_temp database via SparkConf? I know that it is a
system-preserved database. Can it be configured via SparkConf?
Thanks,
Swetha
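If I recall correctly, the name of that database is controlled by a static SQL option, spark.sql.globalTempDatabase, which must be set before the SparkSession starts. It is an internal option, so treat the following as a hedged sketch rather than a supported API:

from pyspark.sql import SparkSession

# Static options must be set before the session is created.
spark = (SparkSession.builder
         .appName("global-temp-demo")
         .config("spark.sql.globalTempDatabase", "my_global_temp")
         .getOrCreate())

spark.range(5).createOrReplaceGlobalTempView("nums")
# If the option took effect, the global temp views live under the new name.
spark.sql("SELECT * FROM my_global_temp.nums").show()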
I want to add AWS credentials into hdfs-site.xml and pass a different XML file
for each user.
Thank you,
Konstantin Kudryavtsev
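One alternative to shipping a per-user hdfs-site.xml: any spark.hadoop.* entry passed to spark-submit is copied into the job's Hadoop Configuration, so per-user credentials can be supplied that way. A sketch assuming the S3A connector (class name, jar, and key values are placeholders):

# Per-user credentials without editing hdfs-site.xml:
spark-submit \
  --conf spark.hadoop.fs.s3a.access.key=<user1-access-key> \
  --conf spark.hadoop.fs.s3a.secret.key=<user1-secret-key> \
  --class com.example.MyApp myapp.jar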
On Thu, Dec 31, 2015 at 2:19 PM, Ted Yu wrote:
> Check out --conf option for spark-submit
>
> bq. to configure different hdfs-site.xml
>
> What config parameters do you
Check out --conf option for spark-submit
bq. to configure different hdfs-site.xml
What config parameters do you plan to change in hdfs-site.xml ?
If a parameter only affects the HDFS NameNode / DataNode, passing hdfs-site.xml
to the Spark app wouldn't take effect, right?
Cheers
On Thu, Dec 31, 2015 at 10:48 AM, KOSTIANTYN K
Hi all,
I'm trying to use a different spark-defaults.conf per user, i.e. I want to
have spark-user1.conf, spark-user2.conf, etc. Is there a way to pass a path to
the appropriate conf file when I'm using a standalone Spark installation?
Also, is it possible to configure a different hdfs-site.xml and pass it as
well with spark-
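For the first part, spark-submit can load an alternative defaults file via --properties-file instead of conf/spark-defaults.conf. A sketch (file names and class are illustrative):

# Point each user's jobs at their own defaults file:
spark-submit \
  --properties-file /etc/spark/conf/spark-user1.conf \
  --class com.example.MyApp myapp.jar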
--conf
outputdb="dev.db"
--verbose
hdfs://nameservice1/user/dev/spark/conf/TrySparkAction.jar
=
>>> Invoking Spark class now >>>
Heart beat
<<
I'm having the same problem here.
> Hi,
> I have log4j.xml in my jar.
> From 1.4.1 it seems that log4j.properties in spark/conf is defined first in
> the classpath, so spark/conf/log4j.properties "wins";
> before that (in v1.3.0) the log4j.xml bundled in the jar defined the configuration.
>
> if I manually add my jar
Hi,
I have log4j.xml in my jar.
From 1.4.1 it seems that log4j.properties in spark/conf is defined first in
the classpath, so spark/conf/log4j.properties "wins";
before that (in v1.3.0) the log4j.xml bundled in the jar defined the configuration.
If I manually add my jar to be strictly first in
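One common workaround, rather than fighting classpath ordering, is to point the driver and executor JVMs at the bundled config explicitly. A hedged sketch, assuming the file sits at the root of the application jar (resource name, class, and jar are illustrative):

# Force a specific log4j configuration instead of conf/log4j.properties:
spark-submit \
  --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=my-log4j.xml \
  --conf spark.executor.extraJavaOptions=-Dlog4j.configuration=my-log4j.xml \
  --class com.example.MyApp myapp.jar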
> Sent from Mailbox
>
>
> On Thu, Apr 16, 2015 at 1:14 AM, Deepak Subhramanian
> wrote:
>>
>> Hi,
>>
>> Is there a way to pass the mapping to define a field as not analyzed
>> with es-spark settings.
>>
>> I am just wondering if I can set t
> analyzed using the set function in SparkConf, similar to the other
> es settings.
> val sconf = new SparkConf()
>   .setMaster("local[1]")
>   .setAppName("Load Data To ES")
>   .set("spark.ui.port", "4141")
>   .set("es.index.auto.create"
Hi,
Is there a way to pass a mapping that defines a field as not analyzed
with the es-spark settings?
I am just wondering if I can set the mapping type for a field as not
analyzed using the set function on SparkConf, similar to the other
es settings.
val sconf = new SparkConf()
  .setMaster
> in an AWS EC2 cluster that I launched using the spark-ec2
> script that comes with spark
> and I use the "-v master" option to run the head version.
>
> If I then log into master and make changes
> to spark/conf/spark-defaults.conf
> How do I make the changes take effe
I am running in an AWS EC2 cluster that I launched using the spark-ec2
script that comes with Spark,
and I use the "-v master" option to run the head version.
If I then log into the master and make changes to spark/conf/spark-defaults.conf,
how do I make the changes take effect across the cl
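If memory serves, the spark-ec2 scripts ship a copy-dir helper that rsyncs a directory from the master to all slaves; combined with a restart of the standalone daemons, something like the following should propagate the new defaults (paths assume the default spark-ec2 layout under the root account, so verify them on your cluster):

# On the master, after editing spark/conf/spark-defaults.conf:
~/spark-ec2/copy-dir ~/spark/conf    # rsync the conf dir to all slaves
~/spark/sbin/stop-all.sh             # restart the standalone daemons so
~/spark/sbin/start-all.sh            # long-running components re-read their settings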