Hi Natty,

This is my entire config file.

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /data/logs/test_log
a1.sources.r1.restart = true
a1.sources.r1.logStdErr = true

#a1.sources.r1.batchSize = 2

a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = regex_filter
a1.sources.r1.interceptors.i1.regex = resuming normal operations|Received|Response
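# Note: with the default excludeEvents = false, regex_filter keeps only
# events whose body matches the regex above and drops everything else.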

#a1.sources.r1.interceptors = i2
#a1.sources.r1.interceptors.i2.type = timestamp
#a1.sources.r1.interceptors.i2.preserveExisting = true
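# Note: if the timestamp interceptor above is re-enabled, a second
# "interceptors =" line would replace i1 rather than add to it; both
# must be declared together on one line, e.g.:
#a1.sources.r1.interceptors = i1 i2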

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://testing.sck.com:9000/running/test.sck/date=%Y-%m-%d
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.filePrefix = events-
a1.sinks.k1.hdfs.rollInterval = 600
## need to run hive queries ad hoc to check the long-running process, so we need to commit events to hdfs files regularly
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.batchSize = 10
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.useLocalTimeStamp = true
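# With rollCount and rollSize both 0, count- and size-based rolling are
# disabled, so files roll purely on the 600-second rollInterval, and
# batchSize = 10 flushes events to HDFS every 10 events.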

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 10000
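# transactionCapacity bounds the events moved per channel transaction;
# it comfortably covers the sink's batchSize of 10 here.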

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
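
For reference, I start the agent with a command along these lines (the
config file name here is a placeholder for my actual one):

bin/flume-ng agent --conf conf --conf-file flume.conf --name a1 -Dflume.root.logger=INFO,console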


On 14 July 2014 22:54, Jonathan Natkins <na...@streamsets.com> wrote:

> Hi Saravana,
>
> What does your sink configuration look like?
>
> Thanks,
> Natty
>
>
> On Fri, Jul 11, 2014 at 11:05 PM, SaravanaKumar TR
> <saran0081...@gmail.com> wrote:
>
>> Assuming each line in the logfile is considered an event for flume:
>>
>> 1. Is there a maximum event size defined for the memory/file channel,
>> i.e. a maximum number of characters per line?
>> 2. Does flume support all formats of data to be processed as events, or
>> are there limitations?
>>
>> I am still trying to understand why flume stops processing events after
>> some time.
>>
>> Can someone please help me out here.
>>
>> Thanks,
>> saravana
>>
>>
>> On 11 July 2014 17:49, SaravanaKumar TR <saran0081...@gmail.com> wrote:
>>
>>> Hi ,
>>>
>>> I am new to flume and am using Apache Flume 1.5.0. A quick explanation
>>> of my setup:
>>>
>>> Source: exec, a tail -F command on a logfile.
>>>
>>> Channel: tried with both memory & file channels
>>>
>>> Sink: HDFS
>>>
>>> When flume starts, events are processed properly and moved to hdfs
>>> without any issues.
>>>
>>> But after some time flume suddenly stops sending events to HDFS.
>>>
>>>
>>>
>>> I am not seeing any errors in the logfile flume.log either. Please let
>>> me know if I am missing any configuration here.
>>>
>>>
>>> Below is the channel configuration I defined; I left the remaining
>>> settings at their default values.
>>>
>>>
>>> a1.channels.c1.type = FILE
>>>
>>> a1.channels.c1.transactionCapacity = 100000
>>>
>>> a1.channels.c1.capacity = 10000000
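>>>
>>> Among the defaults are the checkpoint and data directories; setting
>>> them explicitly would look roughly like this (paths are illustrative):
>>>
>>> a1.channels.c1.checkpointDir = /data/flume/file-channel/checkpoint
>>>
>>> a1.channels.c1.dataDirs = /data/flume/file-channel/data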
>>>
>>> Thanks,
>>> Saravana
>>>
>>>
>>>
>>>
>>>
>>
>
