Hi Saravana,
What does your sink configuration look like?
Thanks,
Natty
On Fri, Jul 11, 2014 at 11:05 PM, SaravanaKumar TR wrote:
> Assuming each line in the logfile is considered as an event for Flume,
>
> 1. Do we have any maximum event size defined for the memory/file
> channel, like any maxi
a1.sinks.k1.hdfs.useLocalTimeStamp = true
>
> # Use a channel which buffers events in memory
> a1.channels.c1.type = memory
> a1.channels.c1.capacity = 1
> a1.channels.c1.transactionCapacity = 1
>
> # Bind the source and sink to the channel
> a1.sources.r1.channels = c
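For comparison, the memory channel example in the Flume user guide uses
much larger values than capacity = 1; a hedged sketch (the numbers are
illustrative, not a tuning recommendation):

a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000

Note that a transactionCapacity of 1 forces one event per transaction,
and it generally needs to be at least as large as the sink's batch size.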
Hi Venkatesh,
Does it reliably stop processing events after about 7 minutes, or does it
happen randomly and just quickly? Does the program immediately start up
the Flume agent?
Have you looked at a thread dump from the program at all? You can use
`jstack -F <pid>` to produce a stacktrace of all the
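For example (assuming jps and jstack come from the same JDK the agent
runs on; the pid is whatever jps reports):

jps -l | grep -i flume            # find the agent's JVM pid
jstack <pid> > flume-threads.txt  # add -F if the JVM is unresponsive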
ove to flume.log, I don't see any exception.
>
> cat flume.log | grep "Exception" doesn't show any.
>
>
> On 15 July 2014 22:24, Jonathan Natkins wrote:
>
>> Hi Saravana,
>>
>> Our best bet for figuring out what's going on here may be to turn on
Hi Sanjay,
Is this just a single-node test cluster? Playing with replication configs
is probably a little bit dangerous, since it means that your blocks will
have no replicas, and if you lose a disk, you're going to end up with no
way to recover the blocks. If this is a cluster you actually care a
rt the "tail -F" if its not running
>>> in the background.
>>>
>>> 3.Does flume supports all formats of data in logfile or it has any
>>> predefined data formats..
>>>
>>> Please help me with these to understand better..
>>>
va -Xmx20m
> -Dflume.root.logger=DEBUG,LOGFILE.."
>
> So I guess it takes 20 MB as the Flume agent's heap memory.
> My RAM is 128 GB, so please suggest how much I can assign as heap
> memory and where to define it.
>
>
> On 16 July 2014 15:05, Jonathan Natkins wrote:
>
>>
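For reference, the agent heap is usually set via JAVA_OPTS in
conf/flume-env.sh; a minimal sketch (the 1 GB figure is illustrative,
not a sizing recommendation):

export JAVA_OPTS="-Xms512m -Xmx1g -Dflume.root.logger=INFO,LOGFILE"

A sensible heap depends mostly on your channel: a memory channel's
capacity times the typical event size is a reasonable starting estimate.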
Hi Anand,
What you're doing is a slightly odd way to use Flume. With the exec source,
Flume will execute that command, and consume the output as events. Often
the exec source is used to tail -F a file, which allows you to pipe more
data to the file and ingest additional events. By using cat, Flume
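A minimal sketch of the tail -F pattern (agent/channel names and the log
path are placeholders):

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app.log
a1.sources.r1.channels = c1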
I haven't tested this myself, but a quick look at the code suggests that
your column name specification may be configured incorrectly. It looks like
it should be:
agent.sinks.hbaseSink.serializer.colNames = column1,column2
I'm trying this out myself, though, so if I find something definitive, I'l
ose to the list of column names defined by the
colNames config parameter. If you want to toss any data away, just make
sure it's not within a set of parentheses.
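Putting that together, a hedged sketch with an illustrative regex (two
comma-separated fields mapped to two columns):

agent.sinks.hbaseSink.serializer = org.apache.flume.sink.hbase.RegexHbaseEventSerializer
agent.sinks.hbaseSink.serializer.regex = ([^,]+),([^,]+)
agent.sinks.hbaseSink.serializer.colNames = column1,column2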
Let me know if you have any more questions, or if you have trouble getting
this to work.
Thanks!
Natty
On Mon, Jul 28, 2014 a
Hi Guillermo,
It might actually be easier to do the special transformation in a custom
interceptor that's attached to Source1. It depends a little bit on what
your transformation actually is, but generally, I'd say that it's going to
be *much* easier to implement a custom interceptor than it is to
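For reference, the Interceptor interface itself is small; a skeleton
(package, class name, and the transform body are placeholders):

package com.example.flume;

import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class TransformInterceptor implements Interceptor {

  @Override
  public void initialize() {
    // one-time setup, if any
  }

  @Override
  public Event intercept(Event event) {
    // apply the special transformation to each event body
    event.setBody(transform(event.getBody()));
    return event;
  }

  @Override
  public List<Event> intercept(List<Event> events) {
    for (Event e : events) {
      intercept(e);
    }
    return events;
  }

  @Override
  public void close() {
    // cleanup, if any
  }

  private byte[] transform(byte[] body) {
    return body; // placeholder: the actual transformation goes here
  }

  public static class Builder implements Interceptor.Builder {
    @Override
    public Interceptor build() {
      return new TransformInterceptor();
    }

    @Override
    public void configure(Context context) {
      // read interceptor config params here, if any
    }
  }
}

It gets wired to the source with something like
a1.sources.source1.interceptors = i1 and
a1.sources.source1.interceptors.i1.type = com.example.flume.TransformInterceptor$Builder.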
d the
> client system with those transformations.
>
> How about the connection between Sink1 and Source2? Should it be an
> Avro type, or is it not necessary? Anyway, I'm going to think about
> doing the transformations in the Source, although I think it's not possibl
it declaring two columnFamily
> and one value in the subsequent colNames parameter, but it didn’t work.
>
> Is it possible inserting these values into different columns?
>
>
>
> Thanks again
>
>
>
>
>
> *From:* Jonathan Natkins [mailto:na...@streamsets.com]
> *
:
>
>
> http://localhost:8080/flumeEvent/rest/data/inject?colval11=1&colval2=005&colval3=test
>
>
> With the following content: “This is a test for different columns”
>
>
>
> Thanks again
>
>
>
>
>
> *From:* Jonathan Natkins [mailto:na...@stre
Hey all,
I created a JIRA for this: https://issues.apache.org/jira/browse/FLUME-2437
I thought I'd start working on one myself, which can hopefully be
contributed back. I'm curious: do you have particular requirements? Based
on the emails in this thread, it sounds like the original goal was to ha
* build S3 source
> * make flume configurable dynamically
>
> --
> Paweł
>
>
> 2014-08-01 9:51 GMT+02:00 Otis Gospodnetic :
>
> Hi,
>>
>> On Fri, Aug 1, 2014 at 4:52 AM, Jonathan Natkins
>> wrote:
>>
>>> Hey all,
>>>
>>
; column:
>
> *column3*
>
> col1val: firstPart
> col2val: This is the first part of the result
>
>
>
>
>
> *From:* Jonathan Natkins [mailto:na...@streamsets.com]
> *Sent:* Thursday, July 31, 2014 7:02 PM
>
> *To:* user@flume.apache.org
> *Subject:* Re: Flume to
do the same.
> What do you think?
>
> --
> Paweł Róg
>
> 2014-08-01 20:19 GMT+02:00 Hari Shreedharan :
>
> +1 on an S3 Source. I would gladly review.
>>
>> Jonathan Natkins wrote:
>>
>>
>> Hey Pawel,
>>
>> My intention is to start worki
gz/avro/others.
>
> Best is to start with something that works and then start adding more
> features to it.
>
>
> On Wed, Aug 6, 2014 at 2:27 AM, Jonathan Natkins wrote:
>
>> Hi all,
>>
>> I started trying to write some code on this, and realized
Adding the dev list to the discussion
On Wed, Aug 6, 2014 at 9:37 AM, Jonathan Natkins
wrote:
> Ashish, I've put some comments inline.
>
>
> On Tuesday, August 5, 2014, Ashish wrote:
>
>> Sharing some random thoughts
>>
>> 1. Download the file u
If you have sudo access, you can run a command as a particular user using
sudo -u.
`sudo -u flume flume-ng &`
Also, if you installed Flume via RPM or Deb package, there should be an
init.d script, though I'm not positive what user that script runs as.
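A fuller invocation sketch (agent name and paths are assumptions; adjust
for your install):

sudo -u flume flume-ng agent -n a1 \
    -c /etc/flume-ng/conf -f /etc/flume-ng/conf/flume.conf &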
On Fri, Aug 8, 2014 at 9:08 AM, Babu, Pras
>>>
>>> Would be great to reuse an existing implementation which is based on
>>> InputStream and feed it with S3 object input stream, concern of metadata
>>> storage still remains. Most often S3 objects are stored in compressed form,
>>> so this source would n
to write/expose API to store meta-data info in Zk (Flume-1491
> doesn't bring that in).
>
> HTH !
>
>
> On Mon, Aug 11, 2014 at 11:39 AM, Jonathan Natkins
> wrote:
>
>> Given that FLUME-1491 hasn't been committed yet, and may still be a ways
>> a
Hi everybody,
I wanted to let you all know that we've gone and scheduled the next Flume
meetup, which will be happening on Thursday, September 16 at the Cloudera
San Francisco office. We'll be starting the meetup off with a talk from
Hari Shreedharan, who is a member of the Flume PMC. We'll contin
, Aug 11, 2014 at 3:25 PM, Jonathan Natkins
> wrote:
>
>> Hi everybody,
>>
>> I wanted to let you all know that we've gone and scheduled the next Flume
>> meetup, which will be happening on Thursday, September 16 at the Cloudera
>> San Francisco office. We'll
Hey Gary,
From the information I've got here, this looks like more of an HBase
problem than a Flume problem. My recommendation would be to first double
check that you can run commands against the HBase instance from your Flume
agent node. Try running `hbase shell` and executing a list command. If y
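Something along these lines (the table name is a placeholder):

$ hbase shell
hbase> list
hbase> scan 'your_table', {LIMIT => 1}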
double check the
> configuration again just to make sure.
>
> thanks
> On Aug 13, 2014, at 4:56 PM, Hari Shreedharan wrote:
>
> Actually, what version of Flume are you using? ROOT was removed in
> HBase 0.96, I think; you need to use Flume 1.5.0 or higher for asynchbas