Hi,
On the wiki homepage (https://cwiki.apache.org/confluence/display/SPARK/Wiki+Homepage), the
current release window still shows 1.5. Can anybody give an
idea of the expected dates for the 1.6 release?
Regards,
Meethu Mathew
Senior Engineer
Flytxt
Why change the number of partitions of RDDs, especially since you
can't generally do that without a shuffle? If you just mean to ramp
resource usage up and down, dynamic allocation (of executors) already
does that.
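For anyone unfamiliar with the feature, a minimal sketch of enabling dynamic executor allocation (property names are from the standard Spark configuration; the executor counts are placeholder values):

```properties
# spark-defaults.conf -- let Spark grow and shrink the executor pool on demand
spark.dynamicAllocation.enabled          true
spark.dynamicAllocation.minExecutors     1
spark.dynamicAllocation.maxExecutors     20
# the external shuffle service is required so executors can be removed
# without losing the shuffle files they wrote
spark.shuffle.service.enabled            true
```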
On Wed, Sep 30, 2015 at 10:49 PM, Muhammed Uluyol wrote:
> Hello,
>
> How feasible
Hello,
How feasible would it be to have Spark speculatively increase the number of
partitions when there is spare capacity in the system? We want to do this
to decrease application runtime. Initially, we will assume that
function calls of the same type will have the same runtime (e.g.
Dear Spark developers,
I would like to understand GraphX caching behavior with regard to PageRank in
Spark, in particular the following implementation of PageRank:
https://github.com/apache/spark/blob/master/graphx/src/main/scala/org/apache/spark/graphx/lib/PageRank.scala
On each iteration the
+user list
On Tue, Sep 29, 2015 at 3:43 PM, Pala M Muthaia wrote:
> Hi,
>
> I am trying to use internal UDFs that we have added as permanent functions
> to Hive, from within a Spark SQL query (using HiveContext), but I encounter
> NoSuchObjectException, i.e. the function could not be found.
>
> Ho
Hi Sukesh,
To unsubscribe from the dev list, please send a message to
dev-unsubscr...@spark.apache.org. To unsubscribe from the user list, please
send a message to user-unsubscr...@spark.apache.org. Please see:
http://spark.apache.org/community.html#mailing-lists.
Thanks,
-Rick
sukesh kumar wrote
Thanks a lot, it works now after I set %HADOOP_HOME%.
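For anyone hitting the same issue, a sketch of the fix on Windows (the install path is a placeholder; local Spark jobs on Windows also expect winutils.exe under %HADOOP_HOME%\bin):

```
:: Windows cmd -- point HADOOP_HOME at a local Hadoop layout (example path)
set HADOOP_HOME=C:\hadoop
:: winutils.exe must exist at %HADOOP_HOME%\bin\winutils.exe
```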
On Tue, Sep 29, 2015 at 1:22 PM, saurfang wrote:
> See
>
> http://stackoverflow.com/questions/26516865/is-it-possible-to-run-hadoop-jobs-like-the-wordcount-sample-in-the-local-mode
> ,
> https://issues.apache.org/jira/browse/SPARK-6961 and fin
Concerning task execution: does a worker execute its assigned tasks in parallel
or sequentially?
---
Hi,
We intend to run ad hoc windowed continuous queries on Spark Streaming data.
The queries could be registered/deregistered dynamically or can be
submitted through the command line. Currently, Spark Streaming doesn't allow
adding any new inputs, transformations, or output operations after
starting a