I think you should check the RPC target; maybe the NodeManager has a memory
issue such as GC pressure or something similar. Check that out first.
Also, I wonder why you assigned --executor-cores 8?
2017-07-29 7:40 GMT+08:00 jeff saremi :
> asking this on a tangent:
>
> Is there any way for the shuffle data to be replicated to m
Hi John,
The reason you don't see the second sysout line is that it is executed in a
different JVM (i.e., driver vs. executor). The second sysout line should be
available through the executor logs; check the Executors tab.
There are alternative approaches to managing log centralization, however it
real
This code is not working:
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.recommendation import ALS, ALSModel
from pyspark.sql import Row
als = ALS(maxIter=10, regParam=0.01, userCol="user_id", itemCol="movie_id",
          ratingCol="rating")
model = als.fit(training)
Hi, All,
Although there are lots of discussions related to logging in this newsgroup,
I did not find an answer to my specific question, so I am posting mine in the
hope that it does not duplicate an earlier one.
Here is my simplified Java testing Spark app:
public class SparkJobEntry {
Hi Gourav,
Today I tried to reproduce your case but failed.
Can you post your full code, please?
If possible, give us the table schema as well; I can produce the data from the schema.
BTW, my Spark is 2.1.0.
I am very interested in this case.
---Original---
Date: 2017/7/28 17:25:03