Assuming each line in the logfile is treated as an event by Flume:

1. Is there a maximum event size defined for the memory/file channel, such as
a maximum number of characters in a line? (See the sketch after this list.)
2. Does Flume support all data formats as events, or are there limitations?
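
From what I can see in the user guide, there is no per-line character limit in
the channels themselves, but the memory channel does expose byte-based limits.
A sketch of what I believe the relevant settings are (a1/c1 are my agent and
channel names; the values are just examples):

a1.channels.c1.type = memory
# Maximum number of events held in the channel
a1.channels.c1.capacity = 10000
# Maximum total bytes of event bodies; defaults to 80% of the JVM heap
a1.channels.c1.byteCapacity = 800000
# Share of byteCapacity set aside for event headers (default 20)
a1.channels.c1.byteCapacityBufferPercentage = 20

My understanding is that byteCapacity counts only event bodies, with the
buffer percentage reserved for headers, but I may be reading the docs wrong.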

I am still trying to understand why Flume stops processing events after some
time.
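
One thing I did find in the user guide is that the exec source makes no
delivery guarantees: if the tail -F process dies, the source can stop
producing events without any error being logged. A sketch of the source
settings I understand are meant to mitigate this (r1 is my assumed source
name; the log path is just an example):

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app/app.log
# Restart the command if it exits (off by default)
a1.sources.r1.restart = true
# Wait 10 seconds between restart attempts (the default)
a1.sources.r1.restartThrottle = 10000
# Send the command's stderr to the Flume log so failures are visible
a1.sources.r1.logStdErr = true

I will try enabling restart and logStdErr, but I would like to confirm
whether a dying tail process is the usual cause of this behaviour.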

Can someone please help me out here?

Thanks,
Saravana


On 11 July 2014 17:49, SaravanaKumar TR <saran0081...@gmail.com> wrote:

> Hi,
>
> I am new to Flume and am using Apache Flume 1.5.0. A quick explanation of my
> setup:
>
> Source: exec, a tail -F command on a logfile.
>
> Channel: tried with both memory and file channels
>
> Sink: HDFS
>
> When Flume starts, events are processed properly and moved to HDFS without
> any issues.
>
> But after some time, Flume suddenly stops sending events to HDFS.
>
>
>
> I am not seeing any errors in the logfile flume.log either. Please let me
> know if I am missing any configuration here.
>
>
> Below is the channel configuration I defined; I left the remaining settings
> at their default values.
>
>
> a1.channels.c1.type = FILE
>
> a1.channels.c1.transactionCapacity = 100000
>
> a1.channels.c1.capacity = 10000000
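>
> For reference, my understanding is that the properties I left unset resolve
> to defaults like these (a sketch; ~ is the Flume user's home directory):
>
> a1.channels.c1.checkpointDir = ~/.flume/file-channel/checkpoint
>
> a1.channels.c1.dataDirs = ~/.flume/file-channel/data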
>
> Thanks,
> Saravana
>
