Hi Hari,

Yes, I started a Flume agent to push data to the relevant port. The Flume configuration files are given below.

Test21.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = localhost
a1.sources.r1.port = 2323

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

The command used is:
bin/flume-ng agent -n a1 -c conf -f conf/test21.conf -Dflume.root.logger=INFO,console
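
As a quick sanity check (assuming a Linux host), the following confirms that the avro source actually bound its port once the agent is up:

# The flume-ng Java process should appear as the listener on port 2323
netstat -tlnp | grep 2323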

Test12.conf

agent1.sources = seqGenSrc
agent1.sinks = avrosink
agent1.channels = memoryChannel

agent1.sources.seqGenSrc.type = exec
agent1.sources.seqGenSrc.command = tail -f /home/huser/access.log
agent1.sources.seqGenSrc.batchSize = 1

agent1.sinks.avrosink.type = avro
agent1.sinks.avrosink.hostname = localhost
agent1.sinks.avrosink.port = 2323
agent1.sinks.avrosink.batch-size = 100
agent1.sinks.avrosink.connect-timeout = 60000
agent1.sinks.avrosink.request-timeout = 60000

agent1.channels.memoryChannel.type = memory
agent1.channels.memoryChannel.capacity = 1000
agent1.channels.memoryChannel.transactionCapacity = 100

agent1.sources.seqGenSrc.channels = memoryChannel
agent1.sinks.avrosink.channel = memoryChannel


The command used is:
bin/flume-ng agent -n agent1 -c conf -f conf/test12.conf -Dflume.root.logger=DEBUG,console
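
To rule out the exec source, a test event can also be pushed straight to the avro source with the bundled avro-client (the file used here is just an example):

# Sends the lines of the given file as Avro events to localhost:2323
bin/flume-ng avro-client -H localhost -p 2323 -F /home/huser/access.log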

Even after changing the port several times, I am still facing the same issue. Kindly look into my conf files and let me know the correct steps.


Regards,
Jeniba Johnson
From: Hari Shreedharan [mailto:hshreedha...@cloudera.com]
Sent: Tuesday, November 11, 2014 1:06 PM
To: Jeniba Johnson
Cc: dev@spark.apache.org
Subject: Re: Bind exception while running FlumeEventCount

Did you start a Flume agent to push data to the relevant port?

Thanks,
Hari


On Fri, Nov 7, 2014 at 2:05 PM, Jeniba Johnson <jeniba.john...@lntinfotech.com> wrote:

Hi,

I have installed Spark 1.1.0 and Apache Flume 1.4 to run the streaming example FlumeEventCount. Previously the code was working fine; now I am facing the issues mentioned below. My Flume agent is running properly and is able to write the file.

The command I use is:

bin/run-example org.apache.spark.examples.streaming.FlumeEventCount 172.29.17.178 65001
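
Since FlumeEventCount itself starts an Avro server that must bind this host and port (see the FlumeReceiver.initServer frames below), it is worth checking whether something is already listening there (assuming a Linux host):

# An existing listener on 65001 would explain the BindException below
netstat -tlnp | grep 65001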


14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Stopping receiver with message: Error starting receiver 0: org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
14/11/07 23:19:23 INFO flume.FlumeReceiver: Flume receiver stopped
14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Called receiver onStop
14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Deregistering receiver 0
14/11/07 23:19:23 ERROR scheduler.ReceiverTracker: Deregistered receiver for stream 0: Error starting receiver 0 - org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
at org.apache.spark.scheduler.Task.run(Task.scala:54)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.Net.bind(Net.java:336)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
... 3 more

14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Stopped receiver 0
14/11/07 23:19:23 INFO receiver.BlockGenerator: Stopping BlockGenerator
14/11/07 23:19:23 INFO util.RecurringTimer: Stopped timer for BlockGenerator after time 1415382563200
14/11/07 23:19:23 INFO receiver.BlockGenerator: Waiting for block pushing thread
14/11/07 23:19:23 INFO receiver.BlockGenerator: Pushing out the last 0 blocks
14/11/07 23:19:23 INFO receiver.BlockGenerator: Stopped block pushing thread
14/11/07 23:19:23 INFO receiver.BlockGenerator: Stopped BlockGenerator
14/11/07 23:19:23 INFO receiver.ReceiverSupervisorImpl: Waiting for executor stop is over
14/11/07 23:19:23 ERROR receiver.ReceiverSupervisorImpl: Stopped executor with error: org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
14/11/07 23:19:23 ERROR executor.Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
at org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
at org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
at org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
at org.apache.spark.scheduler.Task.run(Task.scala:54)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.BindException: Address already in use
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:344)
at sun.nio.ch.Net.bind(Net.java:336)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:199)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at org.jboss.netty.channel.socket.nio.NioServerBoss$RegisterTask.run(NioServerBoss.java:193)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.processTaskQueue(AbstractNioSelector.java:366)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:290)
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42)
... 3 more
14/11/07 23:19:23 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:722)
14/11/07 23:19:23 ERROR scheduler.TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
14/11/07 23:19:23 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/11/07 23:19:23 INFO scheduler.TaskSchedulerImpl: Cancelling stage 0
14/11/07 23:19:23 INFO scheduler.DAGScheduler: Failed to run runJob at ReceiverTracker.scala:275
Exception in thread "Thread-28" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.jboss.netty.channel.ChannelException: Failed to bind to: /172.29.17.178:65001
org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:106)
org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:119)
org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:74)
org.apache.avro.ipc.NettyServer.<init>(NettyServer.java:68)
org.apache.spark.streaming.flume.FlumeReceiver.initServer(FlumeInputDStream.scala:164)
org.apache.spark.streaming.flume.FlumeReceiver.onStart(FlumeInputDStream.scala:171)
org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:121)
org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:106)
org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:264)
org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$9.apply(ReceiverTracker.scala:257)
org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1121)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
org.apache.spark.scheduler.Task.run(Task.scala:54)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:722)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Regards,
Jeniba Johnson

