Hi,
I am new to Flume and am using Apache Flume 1.5.0. A quick explanation of my
setup:
Source: exec, running a tail -F command on a logfile.
Channel: tried with both the memory and file channels.
Sink: HDFS.
When Flume starts, events are processed properly and moved to HDFS without
any issues.
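For clarity, a minimal agent configuration matching the setup described above might look like the following sketch; the agent/component names (a1, r1, c1, k1), the log path, and the HDFS path are placeholders, not values from the original message:

```properties
# Hedged sketch of an exec -> memory channel -> HDFS pipeline (Flume 1.5).
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# exec source tailing the logfile
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app/app.log
a1.sources.r1.channels = c1

# memory channel (a file channel could be substituted here)
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000

# HDFS sink writing plain text events
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events/%Y-%m-%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.useLocalTimeStamp = true
```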
But after
Hi all,
Has there been any talk of having the file channel skip over bad data
directories when multiple are configured and a drive goes bad? I ran into
this problem yesterday and had to put a separate configuration on this agent
until the drive is replaced.
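For context, a file channel spread across multiple drives is configured roughly like the sketch below (paths and names are placeholders); the scenario described is one of these dataDirs landing on a failed drive, which currently fails the channel rather than being skipped:

```properties
# Hedged example: file channel with data directories on several drives.
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /mnt/disk1/flume/checkpoint
a1.channels.c1.dataDirs = /mnt/disk1/flume/data,/mnt/disk2/flume/data,/mnt/disk3/flume/data
```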
thanks
dave
Hi All,
If anyone's interested in consuming messages from Amazon's Simple Queue
Service (SQS), I've open sourced a plugin for it available at
https://github.com/plumbee/flume-sqs-source
Feel free to check it out, comment, or suggest RFEs (no guarantees they'll
be implemented).
Cheers,
Dennis
Hi,
I think that'd be a great feature, and one that's needed in the File
Channel. Can you create a JIRA to track this?
Cheers,
Brock
On Fri, Jul 11, 2014 at 8:13 AM, David Sinclair <
dsincl...@chariotsolutions.com> wrote:
> Hi all,
>
> Has there been any talk about having the file channel skip over bad d
Assuming each line in the logfile is considered an event by Flume:
1. Is there a maximum event size defined for the memory/file channel, e.g. a
maximum number of characters in a line?
2. Does Flume support all formats of data to be processed as events, or are
there limitations?
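Regarding question 1, the memory channel does not bound the length of an individual line/event directly, but it does expose knobs that cap total memory use. A hedged sketch (the values below are purely illustrative, not defaults, and the names a1/c1 are placeholders):

```properties
# Hedged sketch of memory channel sizing properties (Flume 1.5).
a1.channels.c1.type = memory
# maximum number of events held in the channel
a1.channels.c1.capacity = 10000
# approximate cap on total bytes of event bodies in the channel
a1.channels.c1.byteCapacity = 800000
# percent of byteCapacity reserved as a buffer for event headers
a1.channels.c1.byteCapacityBufferPercentage = 20
```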
I am just st