You can run Spark in local mode and not require any standalone master or
worker.
Are you sure you're not using local mode? Are you sure the daemons aren't
running?
What is the Spark master you pass?
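For reference, a minimal sketch of a job that runs entirely in local mode,
with no daemons at all (the app name is an arbitrary assumption):

    from pyspark.sql import SparkSession

    # "local[*]" runs the driver and executors inside one JVM;
    # no standalone master or worker daemons are needed
    spark = (SparkSession.builder
             .master("local[*]")
             .appName("local-mode-demo")  # hypothetical name
             .getOrCreate())
    print(spark.range(10).count())  # works with nothing else running
    spark.stop()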
On Wed, Mar 9, 2022 at 7:35 PM wrote:
> What I tried to say is, I didn't start spark master/worke
What I tried to say is, I didn't start the Spark master/worker at all for a
standalone deployment.
But I can still log in to pyspark and run the job. I don't know why.
$ ps -efw|grep spark
$ netstat -ntlp
Neither of the commands above shows any Spark-related output.
And this machine is managed by myself, I k
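A quick way to confirm what pyspark actually connected to is to inspect the
context from inside the shell (sc is created automatically by pyspark):

    # inside the pyspark shell
    sc.master     # likely 'local[*]' when no standalone master is configured
    sc.uiWebUrl   # the driver UI address, if one was started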
To be specific:
1. Check the log files on both the master and the worker for any errors.
2. If your browser is not running on the same machine as the Spark
cluster, use the host's external IP instead of the localhost IP when
launching the worker (see the sketch below).
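A hedged sketch of what item 2 can look like in practice (192.168.1.10
stands in for the host's external IP, an assumption):

    # on the master host: bind to the external address, not localhost
    ./sbin/start-master.sh --host 192.168.1.10
    # on the worker host: point at the master's external address
    ./sbin/start-worker.sh spark://192.168.1.10:7077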
Hope this helps...
-- ND
On 3/9/22
Did it start successfully? What do you mean ports were not opened?
On Wed, Mar 9, 2022 at 3:02 AM wrote:
> Hello
>
> I have Spark 3.2.0 deployed on localhost in standalone mode.
> I didn't even run the start-master and start-worker commands:
>
> start-master.sh
> start-worker.sh spark://1
Yes, agreed. It seems to be an issue with mapping the text file contents to
case classes, though I'm not sure.
On Thu, Aug 4, 2016 at 8:17 PM, $iddhe$h Divekar wrote:
> Hi Deepak,
>
> My files are always > 50MB.
> I would think there would be a small config to overcome this.
> Tried almost everything i c
Hi Deepak,
My files are always > 50MB.
I would think there would be a small config to overcome this.
Tried almost everything I could after searching online.
Any help from the mailing list would be appreciated.
On Thu, Aug 4, 2016 at 7:43 AM, Deepak Sharma wrote:
> I am facing the same issue wi
I am facing the same issue with Spark 1.5.2.
If the file being processed by Spark is 10-12 MB in size, it throws an
out-of-memory error.
But if the same file is within a 5 MB limit, it runs fine.
I am using a Spark configuration with 7 GB of memory and 3 cores for executors
in the cluster of 8 ex
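Not a definitive fix, but one mitigation commonly tried for this pattern is
reading the file with more, smaller partitions so no single task buffers too
much at once; a hedged PySpark sketch, given an existing SparkContext sc
(the path and partition count are assumptions):

    # ask for more input partitions than the default
    rdd = sc.textFile("hdfs:///data/input.txt", minPartitions=64)
    print(rdd.getNumPartitions())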
An error occurred while calling o29.load:
java.lang.ClassNotFoundException: Failed to find data source:
org.apache.spark.sql.cassandra. Please find packages at
http://spark-packages.org
Is there a way to load those jar files into the script?
Jo
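In case it helps: one hedged way to pull the connector in from the script
itself, rather than on the command line, is the spark.jars.packages setting.
The sketch below assumes Spark 2.x+, and the connector coordinates, keyspace,
and table names are placeholders to adjust:

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    # resolve the Cassandra connector from Maven/spark-packages at startup
    conf = SparkConf().set(
        "spark.jars.packages",
        "com.datastax.spark:spark-cassandra-connector_2.11:2.0.0")  # assumed version

    spark = SparkSession.builder.config(conf=conf).getOrCreate()
    df = (spark.read
          .format("org.apache.spark.sql.cassandra")
          .options(keyspace="ks", table="tbl")  # placeholders
          .load())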
From: sujeet jog [mailto:sujeet@gmail.com]
Check if this helps:

from multiprocessing import Process
import os

def training():
    # launch the PySpark job as a separate OS process via spark-submit
    print("Training Workflow")
    cmd = "spark/bin/spark-submit ./ml.py &"
    os.system(cmd)

w_training = Process(target=training)
w_training.start()
On Wed, Jun 29, 2016 at 6:28 PM, Joaquin Alzola
wrote:
> Hi,
>
Can you describe the container in more detail?
Please show the complete stack trace for the exception.
Thanks
On Thu, Jun 16, 2016 at 1:32 PM, jay vyas
wrote:
> Hi spark:
>
> Is it possible to avoid reliance on a login user when running a spark job?
>
> I'm running out of a container that doesn't supply
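For what it's worth, one workaround sometimes used when the container
supplies no resolvable login user is to set SPARK_USER before the context
starts; a minimal sketch (the value "spark" is an arbitrary assumption):

    import os

    # Spark consults the SPARK_USER env var before falling back to the OS login user
    os.environ["SPARK_USER"] = "spark"  # assumed value; set before creating the context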
Your question lacks sufficient information for us to actually provide help.
Have you looked at the Spark UI to see which part of the graph is taking the
longest? Have you tried logging your methods?
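On the logging point, even coarse driver-side timing can narrow down the
slow stage. A minimal sketch with plain Python logging (the wrapper and
logger name are assumptions, not a Spark API):

    import logging
    import time

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("my-job")  # hypothetical name

    def timed(label, action):
        # wrap any Spark action so its wall-clock time lands in the driver log
        start = time.time()
        result = action()
        log.info("%s took %.1fs", label, time.time() - start)
        return result

    # usage, given some RDD or DataFrame `data` (placeholder):
    # n = timed("count", data.count)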