1. When you are running locally, make sure the "master" in the SparkConf
reflects that and is not somehow set to "yarn-client".
2. You may not be getting any resources from YARN at all, so there are no
executors, and therefore no receiver running. That is why I asked the most
basic question: is it receiving data?
I do see this message:
15/08/10 19:19:12 WARN YarnScheduler: Initial job has not accepted any
resources; check your cluster UI to ensure that workers are registered and
have sufficient resources
On Mon, Aug 10, 2015 at 4:15 PM, Mohit Anchlia wrote:
I am using the same exact code:
https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaRecoverableNetworkWordCount.java
Submitting like this:
yarn:
/opt/cloudera/parcels/CDH-5.4.0-1.cdh5.4.0.p0.27/bin/spark-submit --class
org.sony.spark.stream
Is it receiving any data? If so, then it must be listening.
Alternatively, to test these theories, you can run a Spark standalone
cluster locally (a one-node standalone cluster on your local machine) and
submit your app in client mode on that, to see whether the process is
listening on 999
I've verified all the executors and I don't see a process listening on the
port. However, the application seems to show as running in the YARN UI.
On Mon, Aug 10, 2015 at 11:56 AM, Tathagata Das wrote:
In yarn-client mode, the driver is on the machine where you ran the
spark-submit. The executors are running in the YARN cluster nodes, and the
socket receiver listening on port is running in one of the executors.
On Mon, Aug 10, 2015 at 11:43 AM, Mohit Anchlia wrote:
I am running in yarn-client mode, which probably means that the program that
submitted the job is also where the listening occurs? I thought that
YARN is only used to negotiate resources in yarn-client master mode.
On Mon, Aug 10, 2015 at 11:34 AM, Tathagata Das wrote:
If you are running on a cluster, the listening is occurring on one of the
executors, not in the driver.
On Mon, Aug 10, 2015 at 10:29 AM, Mohit Anchlia wrote:
I am trying to run this program as a yarn-client. The job seems to submit
successfully; however, I don't see any process listening on this host on
port
https://github.com/apache/spark/blob/master/examples/src/main/java/org/apache/spark/examples/streaming/JavaRecoverableNetworkWordCount.j
There are no workers registered with the Spark Standalone master! That is
the crux of the problem. :)
Follow the instructions properly -
https://spark.apache.org/docs/latest/spark-standalone.html#cluster-launch-scripts
In particular, make sure the conf/slaves file has the intended workers listed.
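For reference, a minimal sketch of the launch steps from that page (hostnames are placeholders; paths are relative to the Spark install directory):

```shell
# conf/slaves: one worker hostname per line (placeholder hosts)
#   worker-host-1
#   worker-host-2

./sbin/start-master.sh    # starts the standalone master
./sbin/start-slaves.sh    # starts a worker on every host in conf/slaves
# Then check http://<master-host>:8080 -- the workers and their cores
# should appear there before any app is submitted.
```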
TD
Interesting, I see 0 cores in the UI?
- *Cores:* 0 Total, 0 Used
On Fri, Apr 3, 2015 at 2:55 PM, Tathagata Das wrote:
What does the Spark Standalone UI at port 8080 say about number of cores?
On Fri, Apr 3, 2015 at 2:53 PM, Mohit Anchlia wrote:
[ec2-user@ip-10-241-251-232 s_lib]$ cat /proc/cpuinfo |grep process
processor : 0
processor : 1
processor : 2
processor : 3
processor : 4
processor : 5
processor : 6
processor : 7
On Fri, Apr 3, 2015 at 2:33 PM, Tathagata Das wrote:
How many cores are present in the workers allocated to the standalone cluster
spark://ip-10-241-251-232:7077 ?
On Fri, Apr 3, 2015 at 2:18 PM, Mohit Anchlia wrote:
If I use local[2] instead of *URL:* spark://ip-10-241-251-232:7077, this
seems to work. I don't understand why, though, because when I
give spark://ip-10-241-251-232:7077 the application seems to bootstrap
successfully; it just doesn't create a socket on the port?
I tried to file a bug in the git repo, however I don't see a link to "open
issues".
On Fri, Mar 27, 2015 at 10:55 AM, Mohit Anchlia wrote:
I checked the ports using netstat and don't see any connections established
on that port. Logs show only this:
15/03/27 13:50:48 INFO Master: Registering app NetworkWordCount
15/03/27 13:50:48 INFO Master: Registered app NetworkWordCount with ID
app-20150327135048-0002
Spark UI shows:
Running Applications
Hi,
Did you run the word count example in Spark local mode or another mode? In
local mode you have to set local[n], where n >= 2. For the other modes, make
sure the number of available cores is larger than 1, because the receiver
inside Spark Streaming runs as a long-running task, which will occupy at
least one core.
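The rule above reduces to simple arithmetic: each receiver pins one core, and at least one more core must remain free to process the received batches. A tiny illustrative helper (plain Python, not a Spark API):

```python
def enough_cores(total_cores: int, num_receivers: int = 1) -> bool:
    """Each streaming receiver occupies one core as a long-running task;
    at least one additional core is needed to process the batches."""
    return total_cores >= num_receivers + 1

# local[1] with one receiver: the receiver takes the only core,
# so no batch processing ever happens.
# enough_cores(1, 1) -> False; enough_cores(2, 1) -> True
```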
What's the best way to troubleshoot inside Spark to see why it is not
connecting to nc on the port? I don't see any errors either.
On Thu, Mar 26, 2015 at 2:38 PM, Mohit Anchlia wrote:
I am trying to run the word count example but for some reason it's not
working as expected. I start the "nc" server on the port and then submit the
Spark job to the cluster. The Spark job gets submitted successfully, but I
never see any connection from Spark getting established. I also tried to
type words
@eric-
i saw this exact issue recently while working on the KinesisWordCount.
are you passing "local[2]" to your example as the MASTER arg versus just
"local" or "local[1]"?
you need at least 2. it's documented as "n>1" in the scala source docs -
which is easy to mistake for n>=1.
i just ran t
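the MASTER-arg pitfall above is easy to check mechanically. a small hedged sketch (plain python, not a Spark API) that tests whether a local master string leaves a core free for the streaming receiver:

```python
import re

def streaming_safe_master(master: str) -> bool:
    """True if a local[] master string provides at least 2 threads,
    since the streaming receiver permanently occupies one of them.
    Non-local masters (yarn-client, spark://...) are not checked here:
    their core counts come from the cluster, so they return True."""
    m = re.fullmatch(r"local(?:\[(\d+|\*)\])?", master)
    if m is None:
        return True        # cluster master: core count decided elsewhere
    n = m.group(1)
    if n is None:          # bare "local" means a single thread
        return False
    if n == "*":           # all machine cores; assume more than one
        return True
    return int(n) >= 2

# "local" and "local[1]" starve the receiver; "local[2]" is the minimum.
```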
Not sure what data you are sending in. You could try calling
"lines.print()" instead which should just output everything that comes in
on the stream. Just to test that your socket is receiving what you think
you are sending.
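If the sending side itself is in doubt, a minimal stand-in for the nc server can be sketched with just the Python standard library (the port and line contents are placeholders); it accepts one connection and writes newline-terminated lines, which is all the socket word-count example reads:

```python
import socket
import threading

def serve_lines(port, lines):
    """Accept one TCP connection on 127.0.0.1:port and send each
    string as a newline-terminated line, then close."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()          # blocks until the streaming app connects
    for line in lines:
        conn.sendall((line + "\n").encode())
    conn.close()
    srv.close()

# Run in the background, then point the streaming example at 127.0.0.1:<port>:
# threading.Thread(target=serve_lines,
#                  args=(9999, ["hello world"]), daemon=True).start()
```

Seeing the expected words arrive through this server but not through nc would point at the nc invocation rather than at Spark.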
On Mon, Mar 31, 2014 at 12:18 PM, eric perler wrote:
Hello
i just started working with spark today... and i am trying to run the wordcount
network example
i created a socket server and client.. and i am sending data to the server in
an infinite loop
when i run the spark class.. i see this output in the console...
---