Dear All:
We are trying to deploy (using Jenkins) a spark-python app on an edge
node, however the dilemma is whether to clone the git repo to all the nodes in
the cluster. The reason is, if we choose the deploy mode as cluster
and the master as yarn, then the driver expects the cur
or of Apache Spark
>
> <https://github.com/apache/spark/blob/master/python/pyspark/cloudpickle.py#L241>
> >> RDD contains data but not JVM byte code, i.e. data which is read from
> >> source and transformations have been applied. This is the ideal case to persist
> >> RDDs. As Nirav mentioned, this data will be serialized before persisting t
That’s because of this:
scala> val text =
Array((1,"hNjLJEgjxn"),(2,"lgryHkVlCN"),(3,"ukswqcanVC"),(4,"ZFULVxzAsv"),(5,"LNzOozHZPF"),(6,"KZPYXTqMkY"),(7,"DVjpOvVJTw"),(8,"LKRYrrLrLh"),(9,"acheneIPDM"),(10,"iGZTrKfXNr"))
text: Array[(Int, String)] = Array((1,hNjLJEgjxn), (2,lgryHkVlCN),
(3,ukswqc
On another note, if you have a streaming app, you checkpoint the RDDs so that
they can be recovered in case of a failure. And yes, RDDs are persisted to DISK.
You can access Spark’s UI and see them listed under the Storage tab.
If RDDs are persisted in memory, you avoid any disk I/O, so that any look
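For what it’s worth, a rough, minimal sketch of that checkpoint-plus-persist pattern
(not taken from this thread; the source, intervals and checkpoint directory are
placeholders, and an existing SparkContext sc is assumed):

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}

val ssc = new StreamingContext(sc, Seconds(10))
// Checkpointed DStream/RDD state is written to this directory on disk (HDFS here).
ssc.checkpoint("hdfs:///tmp/streaming-checkpoints")

val lines = ssc.socketTextStream("localhost", 9999)
val words = lines.flatMap(_.split(" "))

// Persist serialized (memory first, spill to disk) and checkpoint periodically
// so the data can be recovered after a failure.
words.persist(StorageLevel.MEMORY_AND_DISK_SER)
words.checkpoint(Seconds(60))
words.print()

ssc.start()
ssc.awaitTermination()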
All,
Ran into one strange issue. If I initialize an H2O context and start it (NOT
using it anywhere), the count action on a Spark data frame results in an
error. The same count action on the Spark data frame works fine when the
H2O context is not initialized.
hc = H2OContext(sc).start()
Rather, this is a fundamental question:
Was it an architectural constraint that the collect action always returns the
results to the driver? It is gobbling up all the driver’s memory (in case of
cache), so why can’t we have an exclusive executor that shares the load and
“somehow” merges the results?
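One common workaround, for what it’s worth (a sketch only, assuming an existing
SparkContext sc), is to stream partitions to the driver one at a time with
toLocalIterator instead of materializing everything at once with collect:

val big = sc.parallelize(1 to 1000000)

// collect() brings every partition to the driver at the same time.
// toLocalIterator() fetches one partition at a time, so the driver only needs
// enough memory for the largest single partition.
big.toLocalIterator.foreach { x =>
  // process each element locally without holding the whole RDD on the driver
}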
You can have a temporary file to capture the data that you would like to
overwrite, and swap that with the existing partition whose data you want to
wipe away. Swapping can be done by a simple rename of the partition, and then
just repair the table to pick up the new partition.
Am not sure if that
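A sketch of that swap, for illustration only (the table name, paths and partition
column are placeholders; sc and a HiveContext named hiveContext are assumed):

import org.apache.hadoop.fs.{FileSystem, Path}

val fs = FileSystem.get(sc.hadoopConfiguration)

val stagingDir   = new Path("/tmp/events_dt=2016-07-20_new")     // rewritten data staged here
val partitionDir = new Path("/warehouse/events/dt=2016-07-20")   // partition to be replaced

// Swap: drop the old partition directory and rename the staged one into place.
fs.delete(partitionDir, true)
fs.rename(stagingDir, partitionDir)

// Repair the table so the metastore picks up the swapped partition.
hiveContext.sql("MSCK REPAIR TABLE events")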
I can see a large number of collections happening on the driver and eventually
the driver is running out of memory. (Am not sure whether you have persisted any
RDD or data frame.) Maybe you would want to avoid doing so many collections or
persisting unwanted data in memory.
To begin with, you may want to
Thanks for the idea, Maciej. The data is roughly 10 gigs.
I’m wondering if there is any way to avoid the collect for each unit operation and
somehow capture all such resultant arrays and collect them at once.
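One shape that might work (just a sketch; the unit operation and the inputs are
made up, and an existing SparkContext sc is assumed): keep each unit operation’s
result as an RDD, union them all, and collect a single time at the end.

import org.apache.spark.rdd.RDD

def unitOperation(rdd: RDD[Int]): RDD[String] = rdd.map(x => s"result-$x")

val inputs: Seq[RDD[Int]] =
  Seq(sc.parallelize(1 to 100), sc.parallelize(101 to 200), sc.parallelize(201 to 300))

// Apply each unit operation lazily, union the results, and collect once,
// instead of calling collect() for every unit.
val perUnit: Seq[RDD[String]] = inputs.map(unitOperation)
val allResults: Array[String] = sc.union(perUnit).collect()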
> On Jul 20, 2016, at 2:52 PM, Maciej Bryński wrote:
>
> RK Aduri,
> Anothe
That -1 is coming from here:
PythonRDD.writeIteratorToStream(inputIterator, dataOut)
dataOut.writeInt(SpecialLengths.END_OF_DATA_SECTION)  // val END_OF_DATA_SECTION = -1
dataOut.writeInt(SpecialLengths.END_OF_STREAM)
dataOut.flush()
> On Jul 21, 2016, at 12:24 PM, Jacek Laskowski wrote:
>
>
This has worked for me:
--conf "spark.driver.extraJavaOptions
-Dlog4j.configuration=file:/some/path/search-spark-service-log4j-Driver.properties"
\
you may want to try it.
If that doesn't work, then you may use --properties-file
Spark version: 1.6.0
So, here is the background:
I have a data frame (Large_Row_DataFrame) which I have created from an
array of Row objects, and I also have another array of unique ids (U_ID) which
I’m going to use to look up into the Large_Row_DataFrame (which is cached)
to do a customized
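For illustration only, a sketch of one way such a lookup could be wired up. The
column name, the sample data, and the names largeRowDF, uIds and sqlContext are
assumptions, not taken from the original message:

import org.apache.spark.sql.functions.col

// Stand-in for the cached Large_Row_DataFrame; the real schema is unknown here.
val largeRowDF = sqlContext.createDataFrame(Seq((1, "a"), (5, "b"), (7, "c"))).toDF("id", "value").cache()
val uIds: Array[Int] = Array(1, 5, 42)

// Small id list: push it into a filter on the cached DataFrame.
val matched = largeRowDF.filter(col("id").isin(uIds: _*))

// Larger id list: turn the ids into a DataFrame and join, instead of looping with collect.
val idsDF  = sqlContext.createDataFrame(uIds.map(Tuple1.apply)).toDF("id")
val joined = largeRowDF.join(idsDF, Seq("id"))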
e on RDD. Is this the reason for high RAM utilization?
>
> Thanks,
> Saurav Sinha
>
> On Tue, Jul 19, 2016 at 10:14 PM, RK Aduri
> wrote:
>
>> Just want to see if this helps.
>>
>> Are you doing heavy collects and persisting that? If that is so, you might
>> w
Did you check this:
case class Example(name : String, age ; Int)
There is a semicolon; it should have been (age : Int).
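For reference, the corrected declaration would be:

case class Example(name: String, age: Int)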
Just want to see if this helps.
Are you doing heavy collects and persisting that? If that is so, you might
want to parallelize that collection by converting it to an RDD.
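A minimal illustration of that idea (names and sizes are made up; an existing
SparkContext sc is assumed):

val someRdd = sc.parallelize(Seq((1, "a"), (2, "b"), (3, "c")))

// Heavy step: collect() lands the whole dataset on the driver.
val collected: Array[(Int, String)] = someRdd.collect()

// Hand the collected data back to the cluster instead of processing it on the driver.
val redistributed = sc.parallelize(collected.toSeq, sc.defaultParallelism)
val processed = redistributed.mapValues(_.toUpperCase)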
Thanks,
RK
On Tue, Jul 19, 2016 at 12:09 AM, Saurav Sinha
wrote:
> Hi Mich,
>
> 1. In what mode are you running the spark stan
You can probably define sliding windows and set larger batch intervals.
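A quick sketch of what that could look like (the source, durations, and reduce
function are placeholders; an existing SparkContext sc is assumed):

import org.apache.spark.streaming.{Seconds, StreamingContext}

// Larger batch interval (30s) plus a 5-minute sliding window recomputed every 30s,
// so datapoints arriving at different times still land in the same window.
val ssc = new StreamingContext(sc, Seconds(30))

val lines  = ssc.socketTextStream("localhost", 9999)
val counts = lines.flatMap(_.split(" ")).map(w => (w, 1L))

val windowed = counts.reduceByKeyAndWindow(_ + _, Seconds(300), Seconds(30))
windowed.print()

ssc.start()
ssc.awaitTermination()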
Did you try with a different driver memory setting? Increasing the driver's memory can be
one option. Can you print the GC details and post the GC times?
DataFrames use RDDs as the internal implementation of their structure. A DataFrame doesn't
convert to an RDD but uses RDD partitions to produce the logical plan.
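A quick way to poke at that relationship from the shell (a Spark 1.6 sqlContext is
assumed):

val df = sqlContext.range(0, 1000000)   // a DataFrame, backed by a logical plan

df.explain(true)                        // shows the logical and physical plans
val asRdd = df.rdd                      // an RDD[Row] view over the same partitioned data
println(asRdd.partitions.length)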