Can you run the following commands and tell us if the NameNode is up?

  jps
  netstat -plan | grep 50030

On Jan 14, 2013 12:13 PM, "Vikram Kulkarni" <vikulka...@expedia.com> wrote:
> I am trying to set up an hdfs sink for HTTPSource, but I get the
> following exception when I try to send a simple JSON event. I am also
> using a logger sink and I can clearly see the event output to the console
> window, but it fails to write to hdfs. In a separate conf file I have
> successfully written to an hdfs sink.
>
> Thanks,
> Vikram
>
> *Exception:*
> [WARN -
> org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:456)]
> HDFS IO error
> java.io.IOException: Call to localhost/127.0.0.1:50030 failed on local
> exception: java.io.EOFException
>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:1144)
>
> *My conf file is as follows:*
> # flume-httphdfs.conf: A single-node Flume with Http Source and hdfs sink
> # configuration
>
> # Name the components on this agent
> agent1.sources = r1
> agent1.channels = c1
>
> # Describe/configure the source
> agent1.sources.r1.type = org.apache.flume.source.http.HTTPSource
> agent1.sources.r1.port = 5140
> agent1.sources.r1.handler = org.apache.flume.source.http.JSONHandler
> agent1.sources.r1.handler.nickname = random props
>
> # Describe the sink
> agent1.sinks = logsink hdfssink
> agent1.sinks.logsink.type = logger
>
> agent1.sinks.hdfssink.type = hdfs
> agent1.sinks.hdfssink.hdfs.path = hdfs://localhost:50030/flume/events
> agent1.sinks.hdfssink.hdfs.file.Type = DataStream
>
> # Use a channel which buffers events in memory
> agent1.channels.c1.type = memory
> agent1.channels.c1.capacity = 1000
> agent1.channels.c1.transactionCapacity = 100
>
> # Bind the source and sink to the channel
> agent1.sources.r1.channels = c1
> agent1.sinks.logsink.channel = c1
> agent1.sinks.hdfssink.channel = c1
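One likely cause of the EOFException: port 50030 is the default JobTracker web UI port in Hadoop 1.x, not the NameNode RPC port, so the IPC client is talking to the wrong service. A sketch of a corrected sink stanza, assuming the NameNode RPC address is hdfs://localhost:8020 (substitute whatever fs.default.name says in your core-site.xml); note the property is spelled hdfs.fileType, not hdfs.file.Type:

```properties
agent1.sinks.hdfssink.type = hdfs
# Point at the NameNode RPC port (default 8020 in Hadoop 1.x; check
# fs.default.name in core-site.xml), not the JobTracker port 50030.
agent1.sinks.hdfssink.hdfs.path = hdfs://localhost:8020/flume/events
# The Flume HDFS sink property is hdfs.fileType, not hdfs.file.Type.
agent1.sinks.hdfssink.hdfs.fileType = DataStream
```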
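For testing the source side independently of the sink: Flume's JSONHandler expects a JSON array of events, each with string-valued "headers" and a string "body". A minimal sketch of building such a payload, assuming the agent above is listening on port 5140 (the header values here are made up for illustration):

```python
import json

# One event in the list-of-events shape JSONHandler parses:
# string-to-string headers plus a string body.
events = [{
    "headers": {"host": "test-host"},  # hypothetical header
    "body": "hello flume",
}]
payload = json.dumps(events)
print(payload)

# To actually send it to the HTTPSource from the conf above:
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:5140",
#       data=payload.encode("utf-8"),
#       headers={"Content-Type": "application/json"},
#   )
#   urllib.request.urlopen(req)
```

If the logger sink prints the event but the HDFS sink still fails, the problem is on the HDFS connection, not in the event format.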