Could you rebuild the whole project? I changed the Python function
serialization format in https://github.com/apache/spark/pull/11535 to fix a
bug. This exception looks like some place is still using the old code.
On Sun, Mar 6, 2016 at 6:24 PM, Hyukjin Kwon wrote:
> Just in case, my Python version is 2.7.10.
Hi,
The mapReduceTriplets API you mentioned has been removed in master; you
need to use the newer API, aggregateMessages, instead (see SPARK-3936 and
SPARK-12995 for details).
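A minimal sketch of the migration, assuming a toy Graph[Int, Int] where
each vertex sums the values sent by its in-neighbors (the graph type and
function name are illustrative, not from your code):

  import org.apache.spark.graphx._

  // sendMsg plays the role of mapReduceTriplets' map function, and
  // mergeMsg the role of its reduce function.
  def sumNeighborValues(graph: Graph[Int, Int]): VertexRDD[Int] =
    graph.aggregateMessages[Int](
      ctx => ctx.sendToDst(ctx.srcAttr), // message from each edge's source
      (a, b) => a + b                    // combine messages per vertex
    )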
The memory-based shuffle optimization is a topic not only for GraphX but
for Spark itself; you can see SPARK-3376 for the related discussion.
Just in case, my Python version is 2.7.10.
2016-03-07 11:19 GMT+09:00 Hyukjin Kwon :
> Hi all,
>
> While testing some code in PySpark, I ran into a weird issue.
>
> This works fine on Spark 1.6.0, but it looks like it does not on Spark 2.0.0.
>
> When I simply run *logData = sc.textFile(path).coalesce(1)* with some
> big files in standalone local mode without HDFS, it simply throws the
> exception
Hi all,
While testing some code in PySpark, I ran into a weird issue.
This works fine on Spark 1.6.0, but it looks like it does not on Spark 2.0.0.
When I simply run *logData = sc.textFile(path).coalesce(1)* with some big
files in standalone local mode without HDFS, it simply throws the
exception
Hi All,
When I submit a Spark job on YARN with a custom partitioner, it is not
picked up by the executors. The executors still use the default
HashPartitioner. I added logs to both HashPartitioner
(org/apache/spark/Partitioner.scala) and the custom partitioner, and the
completed executor logs show HashPartitioner
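For reference, this is roughly how the partitioner is defined and wired
in (a simplified sketch, not my exact code; the class name and partition
count are made up):

  import org.apache.spark.Partitioner

  class CustomPartitioner(parts: Int) extends Partitioner {
    override def numPartitions: Int = parts
    override def getPartition(key: Any): Int = {
      val h = key.hashCode % parts
      if (h < 0) h + parts else h // keep the result non-negative
    }
  }

  // Passed explicitly at the shuffle, e.g.:
  //   pairRdd.partitionBy(new CustomPartitioner(8))

My understanding is that a partitioner only takes effect on operations
where it is explicitly passed (partitionBy, reduceByKey, etc.); otherwise
Spark falls back to HashPartitioner.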
+1
Spark ODBC server is fine, SQL is fine.
2016-03-03 12:09 GMT-08:00 Yin Yang :
> Skipping docker tests, the rest are green:
>
> [INFO] Spark Project External Kafka ... SUCCESS [01:28 min]
> [INFO] Spark Project Examples ... SUCCESS [02:59 min]
I wonder if anyone got any feedback on it. I can look into implementing it,
but I would like to know whether such functionality could be merged back
into master. If yes, please let me know and point me in the direction to
get started.
Regards,
Gurvinder
On 03/04/2016 09:25 AM, Gurvinder Singh wrote:
> Fo