Hey,
I did streamingContext.addStreamingListener(streamingListener object) in my
code, and I am able to see some stats in the logs but can't see anything in the web UI.
How do I add the Streaming tab to the web UI?
I need to get queuing delays and related information.
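As far as I know, the Streaming tab is created automatically when the StreamingContext starts (Spark 1.0+); there is no separate call to add it. For the queueing-delay numbers specifically, one option that does not depend on the UI is a custom StreamingListener. A minimal sketch (class and variable names are my own; it assumes a running StreamingContext `ssc` and needs Spark on the classpath):

```scala
import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

// Logs the per-batch delays reported by the streaming scheduler.
class DelayLogger extends StreamingListener {
  override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit = {
    val info = batchCompleted.batchInfo
    // schedulingDelay = how long the batch waited in the queue before processing began
    println(s"batch ${info.batchTime}: " +
      s"schedulingDelay=${info.schedulingDelay.getOrElse(-1L)} ms, " +
      s"processingDelay=${info.processingDelay.getOrElse(-1L)} ms")
  }
}

// ssc.addStreamingListener(new DelayLogger())  // ssc is your StreamingContext
```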
Hey,
I am facing a weird issue.
My Spark workers keep dying every now and then, and in the master logs I keep
seeing the following messages:
14/05/14 10:09:24 WARN Master: Removing worker-20140514080546-x.x.x.x-50737
because we got no heartbeat in 60 seconds
14/05/14 14:18:41 WARN Master: Removi
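One thing worth checking while you dig into the root cause: in standalone mode the master removes a worker after `spark.worker.timeout` seconds without a heartbeat (default 60, which matches your log). If the workers are actually alive but pausing (e.g. long GC) or the network is flaky, raising the timeout on the master is a quick mitigation. A sketch, assuming the standalone deploy scripts:

```shell
# conf/spark-env.sh on the master (standalone mode).
# Raise the no-heartbeat cutoff from the 60 s default to 180 s.
export SPARK_MASTER_OPTS="-Dspark.worker.timeout=180"
```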
Can you share your working metrics.properties?
I want remote JMX enabled, so I need to use the JmxSink and monitor my
Spark master and workers.
But which parameters need to be defined, such as host and port?
So your config can help.
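Not a battle-tested config, but for what it's worth: the JMX sink itself takes no host/port options in metrics.properties; the remote-access host and port come from standard JVM flags on each daemon. A sketch (the port number is just an example):

```properties
# conf/metrics.properties - register the JMX sink for all instances
*.sink.jmx.class=org.apache.spark.metrics.sink.JmxSink
```

and in conf/spark-env.sh, the usual remote-JMX JVM flags (insecure settings shown for brevity; lock this down outside a test cluster):

```shell
export SPARK_DAEMON_JAVA_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=8090 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false"
```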
I'll start with the Kafka implementation.
Thanks for all the help.
On Mar 21, 2014 7:00 PM, "anoldbrain [via Apache Spark User List]" <
ml-node+s1001560n2994...@n3.nabble.com> wrote:
> It is my understanding that there is no way to make FlumeInputDStream work
> in a cluster environment with the curre
On 03/21/2014 06:17 PM, anoldbrain [via Apache Spark User List] wrote:
> the actual <host>, which in turn causes the 'Fail to bind to ...'
> error. This comes naturally because the slave that is running the code
> to bind to <host>:<port> has a different ip.
I ran sudo ./run-example
org.apache.spark.streaming.exa
On 03/21/2014 06:17 PM, anoldbrain [via Apache Spark User List] wrote:
> the actual <host>, which in turn causes the 'Fail to bind to ...'
> error. This comes naturally because the slave that is running the code
> to bind to <host>:<port> has a different ip.
So if we run the code on the slave where we are sending
Hey,
I am getting the same error as well.
I am running
sudo ./run-example org.apache.spark.streaming.examples.FlumeEventCount
spark://:7077 7781
but am getting no events in Spark Streaming.
-------------------------------------------
Time: 1395395676000 ms
-------------------------------------------
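For what it's worth, the empty host in `spark://:7077` plus the bind discussion above suggest double-checking the argument list. FlumeEventCount takes `<master> <host> <port>`, where `<host>` must be an address the receiving worker can actually bind, i.e. the same host your Flume Avro sink points at. A sketch with placeholder hostnames (not a runnable command as-is):

```
sudo ./run-example org.apache.spark.streaming.examples.FlumeEventCount \
  spark://master-host:7077 worker-host 7781
```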
Hey,
I am using the following Flume flow:
Flume agent 1: RabbitMQ source, file channel, Avro sink, sending data to a
slave node of the Spark cluster.
Flume agent 2, on a slave node of the Spark cluster: Avro source, file
channel; for the sink I tried avro, hdfs, file
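For reference, the agent 2 side of that flow written out as a Flume properties sketch, using only stock component types; all names, ports, and paths here are made-up examples, and the sink target would be wherever your downstream consumer listens:

```properties
# agent2.conf - avro source -> file channel -> avro sink (names/ports are examples)
agent2.sources  = avro-in
agent2.channels = file-ch
agent2.sinks    = avro-out

agent2.sources.avro-in.type = avro
agent2.sources.avro-in.bind = 0.0.0.0
agent2.sources.avro-in.port = 4141
agent2.sources.avro-in.channels = file-ch

agent2.channels.file-ch.type = file
agent2.channels.file-ch.checkpointDir = /var/flume/checkpoint
agent2.channels.file-ch.dataDirs = /var/flume/data

agent2.sinks.avro-out.type = avro
agent2.sinks.avro-out.hostname = downstream-host
agent2.sinks.avro-out.port = 7781
agent2.sinks.avro-out.channel = file-ch
```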