> Using HTTPSource, I was not
> sure how to check the heartbeat without sending an event.
>
> --
> ___
> Sanjath Shringeri | VP, Engineering | claritics | User. Intelligence. Now.
>
> 408.796.1287 |
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
talk to another node that acts as
> http monitor, right?
>
>
> On Mon, Jun 17, 2013 at 10:36 PM, Ashish wrote:
>
>> AFAIK, there is no direct way. However, you can enable HTTP monitoring and
>> have your ELB point to the URL.
>> HTTP 200 Ok can be used to keep the HTTP so
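As a sketch of that idea: the monitor just issues a lightweight HTTP request
against the source and treats a 200 response as the heartbeat. The host, the
port, and the exact status a bare GET returns all depend on your HTTPSource
configuration and handler, so everything below is an assumption:

import java.net.HttpURLConnection;
import java.net.URL;

public class HttpSourceHealthCheck {
    public static void main(String[] args) throws Exception {
        // Assumed HTTPSource bind address; adjust to your agent's configuration.
        URL url = new URL("http://flume-host:5140/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setConnectTimeout(2000);
        conn.setReadTimeout(2000);
        // A 200 here can serve as the heartbeat without enqueuing any event.
        System.out.println("HTTPSource responded with: " + conn.getResponseCode());
        conn.disconnect();
    }
}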
Hi,
I was trying to understand PollableSource and EventDrivenSource, but got
confused. Is the difference based on how Events are consumed by the Channel, or
something else?
Also, how does one decide which to use when implementing a custom Source?
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
and ExecSource
> for an example of an event driven source.
>
>
> On 06/19/2013 08:16 PM, Ashish wrote:
>
>> Hi,
>>
>> I was trying to understand PollableSource and EventDrivenSource, but got
>> confused. Is the difference based on how Events are consumed by Ch
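The distinction, roughly, is who drives the event flow: a PollableSource is
called repeatedly by a Flume-owned runner thread, while an EventDrivenSource
brings its own threads or callbacks and pushes events itself. A minimal sketch
of both contracts as they appeared in the Flume 1.x SDK of that era (the class
names are real; fetchNextEvent() is a hypothetical helper):

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.EventDrivenSource;
import org.apache.flume.PollableSource;
import org.apache.flume.source.AbstractSource;

// Flume schedules a runner thread that invokes process() in a loop.
class MyPollingSource extends AbstractSource implements PollableSource {
    @Override
    public Status process() throws EventDeliveryException {
        Event event = fetchNextEvent(); // hypothetical: pull from the external system
        if (event == null) {
            return Status.BACKOFF;      // nothing available; the runner backs off
        }
        getChannelProcessor().processEvent(event); // hand the event to the channel
        return Status.READY;
    }

    private Event fetchNextEvent() {
        return null; // placeholder for real polling logic
    }
}

// The marker interface tells Flume NOT to schedule a runner; the source
// pushes events from its own threads, the way ExecSource does.
class MyEventDrivenSource extends AbstractSource implements EventDrivenSource {
    @Override
    public void start() {
        super.start();
        // Register a listener here that calls
        // getChannelProcessor().processEvent(event) whenever data arrives.
    }
}

The rule of thumb: if your source must actively poll an external system,
implement PollableSource; if data is delivered to you (a server socket, a
callback API), implement EventDrivenSource.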
.net.ConnectException: Connection refused: no further
>> information
>> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> at
>> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
>> at
>> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:396)
>> at
>> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:358)
>> at
>> org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:274)
>> at
>> org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
>> at java.lang.Thread.run(Thread.java:662)
>> 2013-06-20 17:55:59,369 (main) [DEBUG -
>> org.apache.flume.client.avro.AvroCLIClient.main(AvroCLIClient.java:84)]
>> Exiting
>>
>>
>> My question is what I am doing wrong and what I need to test in order to
>> fix this situation.
>>
>> Thanks in advance.
>>
>> best regards,
>>
>>
>>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
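On the connection-refused trace above: that exception almost always means
nothing is listening at the host:port the avro-client targets (agent not yet
started, wrong bind address, or a firewall in between). A minimal sketch to
verify reachability from Java, with an assumed host and port:

import java.nio.charset.StandardCharsets;

import org.apache.flume.Event;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;

public class AvroSourceSmokeTest {
    public static void main(String[] args) throws Exception {
        // Assumed host/port; must match the agent's Avro source bind address.
        RpcClient client = RpcClientFactory.getDefaultInstance("flume-host", 41414);
        try {
            Event event = EventBuilder.withBody("ping".getBytes(StandardCharsets.UTF_8));
            client.append(event); // throws EventDeliveryException if unreachable
            System.out.println("Avro source reachable");
        } finally {
            client.close();
        }
    }
}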
013 at 12:33 PM, Nickolay Kolev wrote:
> Hi Ashish,
>
> Thanks for pointing out that error. I am trying to read the code, and this
> is the correct full class name. (The last time I wrote Java code was in 1998,
> and my knowledge is rather out of date.)
>
> Unfortunately the result is the sa
> org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:118)]
> Sinks k1
>
> 2013-06-21 11:01:24,471 (conf-file-poller-0) [DEBUG -
> org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:119)]
> Sources null
>
> best regards,
> nic
s = ch1
> agent1.sources.r1.handler = org.apache.flume.source.http.JSONHandler
> Thanks.
>
> Shushuai
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
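For reference, JSONHandler expects the POST body to be a JSON array of events,
each carrying "headers" and "body" fields. A sketch of posting one event (the
host and port are assumptions):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HttpSourcePost {
    public static void main(String[] args) throws Exception {
        // JSONHandler format: an array of events, each with "headers" and "body".
        String payload = "[{\"headers\":{\"facility\":\"test\"},\"body\":\"hello flume\"}]";
        URL url = new URL("http://flume-host:5140/"); // assumed HTTPSource address
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP response: " + conn.getResponseCode()); // 200 on success
        conn.disconnect();
    }
}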
!
On Sat, Jun 22, 2013 at 3:07 AM, shushuai zhu wrote:
> Ashish,
>
> Thanks for the pointer. So I will create my own source, say HTTPSSource,
> which extends HTTPSource to add the HTTPS connection, then use the
> custom HTTPSSource in flume.conf.
>
> Is this the right sou
put up an example by this weekend, if time permits.
HTH !
On Mon, Jun 24, 2013 at 8:24 PM, shushuai zhu wrote:
> Ashish, thanks again. Could you elaborate a little more what I should do?
> I am relatively new to Flume (just started using it a couple of weeks ago)
> and also new to op
new ServletHolder(new FlumeHTTPServlet()), "/");
srv.start();
To submit a patch, I would need to refine the code a bit and add test cases.
It shall take a while. HTH!
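To make the approach concrete, here is a rough sketch of the SSL-enabled
embedded Jetty server such a source would start, assuming the Jetty 6
(org.mortbay) API that HTTPSource used at the time. The keystore path and
passwords are placeholders, and the stub servlet stands in for HTTPSource's
FlumeHTTPServlet, which is private to that class:

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.mortbay.jetty.Server;
import org.mortbay.jetty.security.SslSocketConnector;
import org.mortbay.jetty.servlet.Context;
import org.mortbay.jetty.servlet.ServletHolder;

public class HttpsSourceSketch {
    // Stand-in for HTTPSource's private FlumeHTTPServlet.
    static class StubServlet extends HttpServlet {
        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
            resp.setStatus(HttpServletResponse.SC_OK);
        }
    }

    public static void main(String[] args) throws Exception {
        Server srv = new Server();

        SslSocketConnector connector = new SslSocketConnector();
        connector.setPort(4443);                        // assumed HTTPS port
        connector.setKeystore("/path/to/keystore.jks"); // placeholder keystore
        connector.setPassword("keystore-password");     // placeholder
        connector.setKeyPassword("key-password");       // placeholder
        srv.addConnector(connector);

        Context root = new Context(srv, "/", Context.SESSIONS);
        root.addServlet(new ServletHolder(new StubServlet()), "/");
        srv.start();
    }
}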
On Mon, Jun 24, 2013 at 8:24 PM, shushuai zhu wrote:
> Ashish, thanks again. Could you elaborate a li
Sure, I am working on the test cases, and HttpClient is giving me a tough time
with SSL. If it doesn't work, I shall write a simple SSL client to test it.
On Wed, Jun 26, 2013 at 8:13 PM, shushuai zhu wrote:
> Ashish, thx. Will try your solution. Please also kindly send a
> notice after
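A "simple SSL client" of the kind mentioned, for testing against a self-signed
HTTPS source, could look roughly like this (the URL is an assumption, and the
trust-all manager is acceptable only in test code):

import java.net.URL;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

public class NaiveSslTestClient {
    public static void main(String[] args) throws Exception {
        // Trust-all manager: fine against a self-signed test cert, never in production.
        TrustManager[] trustAll = new TrustManager[] {
            new X509TrustManager() {
                public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                public void checkClientTrusted(X509Certificate[] certs, String authType) {}
                public void checkServerTrusted(X509Certificate[] certs, String authType) {}
            }
        };
        SSLContext sc = SSLContext.getInstance("TLS");
        sc.init(null, trustAll, new SecureRandom());

        HttpsURLConnection conn =
            (HttpsURLConnection) new URL("https://flume-host:4443/").openConnection();
        conn.setSSLSocketFactory(sc.getSocketFactory());
        conn.setHostnameVerifier((hostname, session) -> true); // skip hostname checks in tests
        System.out.println("HTTPS response: " + conn.getResponseCode());
    }
}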
the HDFS sink file of Flume NG will never be closed.
> Is this a bug, or is there another way to stop Flume NG?
>
> thanks a lot!
>
> 2013-06-18
> --
> cherubimsun
> **
>
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
h each new line. I just want my Java app to get each new
> > line from tailing and process it in some custom way. In other words,
> > I don't really need Flume to be involved in anything beyond tailing
> > files.
> >
> > Is that doable?
> >
> > Thanks,
>
Have added the patch to JIRA. Let's wait for the review.
On Thu, Jun 27, 2013 at 7:20 AM, Ashish wrote:
> Sure, I am working on the test cases, and HttpClient is giving me a tough
> time with SSL. If it doesn't work, I shall write a simple SSL client to test it.
>
>
> On Wed
html#is-flume-a-good-fit-for-your-problem
https://cwiki.apache.org/confluence/display/FLUME/Articles%2C+Blog+Posts%2C+HOWTOs
>
> "You only live once, but if you do it right, once is enough."
>
>
> Regards,
>
> Maheedhar Reddy K V
>
>
> http://about.me/maheed
er using a simple
> cron job to do the task. I can manually write statements like "hadoop fs
> -put " in the cron job
> instead.
>
The ML thread pointed to relates to RollingFileSink, not the HDFS sink, so it's
not valid in the context of the HDFS sink.
HTH !
>
> Appreciate yo
way to identify such events, you may be able to use
>>>>>> the Regex interceptor to toss them out before they get into the channel.
>>>>>>
>>>>>>
>>>>>> On Wed, Jul 24, 2013 at 2:52 PM, Jeremy Karlson <
>>>>
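The built-in regex_filter interceptor with excludeEvents = true does exactly
this from configuration alone. As an illustration of what it does under the
hood, a hand-rolled equivalent might look like the sketch below (a deployable
interceptor would also need the nested Builder that Flume instantiates):

import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class DropMatchingInterceptor implements Interceptor {
    private final Pattern pattern;

    DropMatchingInterceptor(Pattern pattern) {
        this.pattern = pattern;
    }

    @Override
    public void initialize() {}

    @Override
    public Event intercept(Event event) {
        String body = new String(event.getBody(), StandardCharsets.UTF_8);
        return pattern.matcher(body).find() ? null : event; // null drops the event
    }

    @Override
    public List<Event> intercept(List<Event> events) {
        List<Event> kept = new ArrayList<Event>();
        for (Event e : events) {
            Event result = intercept(e);
            if (result != null) {
                kept.add(result);
            }
        }
        return kept;
    }

    @Override
    public void close() {}
}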
up in forums, I think it
> may be caused by an empty header. If so, how is a timestamp header added?
> If not, what causes the events to go undelivered?
>
> Thank you,
>
> George
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Hari
>>>>>>>
>>>>>>> On Tuesday, October 29, 2013 at 12:48 PM, George Pang wrote:
>>>>>>>
>>>>>>> Hi Hari,
>>>>>>
> do? I don't see in flume.conf example a place for remote Hbase address.
>
> Thank you,
>
> George
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
sbin/gmetad
> # netstat -utlpn | grep 23108
> tcp    0    0    0.0.0.0:8651    0.0.0.0:*    LISTEN    23108/gmetad
> tcp    0    0    0.0.0.0:8652    0.0.0.0:*    LISTEN    23108/gmetad
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
On Fri, Nov 1, 2013 at 2:22 PM, ch huang wrote:
> I am not very clear; I have only one Ganglia gmetad deployed on 11.142.
> Should I still add a comma after 192.168.11.142:8651?
No, it's only needed if you have multiple servers.
>
>
> On Fri, Nov 1, 2013 at 3:00 PM, Ash
Can you please elaborate more on what you want to achieve? It's not very clear
from the description. Do you need to invoke the script during Agent
initialization, for each event processed, or something else?
thanks
ashish
On Fri, Nov 1, 2013 at 8:51 AM, Chhaya Vishwakarma <
chhaya.vishw
in SyslogUDPSource that Flume can receive a
> large message?
>
> ---
> Kaka
>
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
I am not aware of any options out of the box. Maybe someone else can help.
An alternative is to write a custom source.
On Mon, Dec 30, 2013 at 3:56 PM, Chhaya Vishwakarma <
chhaya.vishwaka...@lntinfotech.com> wrote:
> Hi
>
> Exec as source and tail command
>
>
pache.org
> *Subject:* Re: Event breaking in flume
>
>
>
> Maybe you can set up some morphlines and do some ETL on your events.
>
>
>
> I hope this helps you.
>
>
>
>
> http://blog.cloudera.com/blog/2013/07/morphlines-the-easy-way-to-build-and-integrat
tries to reconnect a few
> times, but then my Java application shuts down. Is there any way my
> Java application can keep running normally despite the log4j exception?
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
fs.
>>>>>>>>> 3. Having a master node managing nodes in 1,2.
>>>>>>>>>
>>>>>>>>> But it seems to be overkill in my case: in 1, I can already
>>>>>>>>> sink to HDFS. Since data arrives at the socket servers much faster
>>>>>>>>> than the translation part can process it, I want to be able to later
>>>>>>>>> add more nodes to do the translation job. So what is the correct setup?
>>>>>>>>> Thanks,
>>>>>>>>> Chen
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Jan 9, 2014 at 2:38 PM, Chen Wang <
>>>>>>>>> chen.apache.s...@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Guys,
>>>>>>>>>> In my environment, the client is 5 socket servers. Thus I wrote a
>>>>>>>>>> custom source spawning 5 threads, each reading from one of them
>>>>>>>>>> indefinitely, and the sink is HDFS (a Hive table). They work fine when
>>>>>>>>>> running the flume-ng agent.
>>>>>>>>>>
>>>>>>>>>> But how can I deploy this in distributed mode (cluster)? I am
>>>>>>>>>> confused about the 3 tiers (agent, collector, storage) mentioned in
>>>>>>>>>> the doc. Does it apply to my case? How can I separate my
>>>>>>>>>> agent/collector/storage?
>>>>>>>>>> Apparently I can only have one agent running: multiple agents will
>>>>>>>>>> result in getting duplicates from the socket server. But I want that
>>>>>>>>>> if one agent dies, another agent can take it up. I would also like to
>>>>>>>>>> be able to add horizontal scalability for writing to HDFS. How can I
>>>>>>>>>> achieve all this?
>>>>>>>>>>
>>>>>>>>>> thank you very much for your advice.
>>>>>>>>>> Chen
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
e.org/jira/browse/FLUME-1286. There is some good info
in the JIRA.
thanks
ashish
On Fri, Jan 10, 2014 at 11:08 AM, Chen Wang wrote:
> Ashish,
> Since we already use storm for other real time processing, i thus want to
> re utilize it. The biggest advantage for me of using storm in this cas
Guava should already be present in the Flume lib directory; I downloaded and
verified it. You should have guava-10.0.1.jar in the Flume lib directory.
Can you try with a fresh Flume download? IMHO it should work; then try to
debug the broken env.
HTH!
ashish
On Fri, Jan 10, 2014 at 4:10 PM, Chhaya
n point me in the right direction.
>
> -Mayur
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
You would have to work at the Avro-Netty level. My knowledge is
a bit rusty at the moment, but SSL ends at the first layer in the channel
pipeline, so you may need to hack Avro's usage of Netty to get this
working, passing the required information from the SSL layer to the codecs
higher in the chain.
I am a bit tied up for the next few weeks, but ready to hack on it after that.
HTH!
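To sketch where that hacking would happen: in the Netty 3 pipeline, the
SslHandler sits first, and a handler placed right after it can read the
negotiated session and pass peer identity upstream. This is a sketch of the
approach only, not Avro's actual NettyServer code:

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLSession;

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelStateEvent;
import org.jboss.netty.channel.Channels;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;
import org.jboss.netty.handler.ssl.SslHandler;

public class SslAwarePipelineSketch {
    public ChannelPipeline build(SSLContext sslContext) {
        ChannelPipeline pipeline = Channels.pipeline();

        SSLEngine engine = sslContext.createSSLEngine();
        engine.setUseClientMode(false);
        pipeline.addLast("ssl", new SslHandler(engine)); // SSL terminates here, at layer one

        pipeline.addLast("sslInfo", new SimpleChannelUpstreamHandler() {
            @Override
            public void channelConnected(ChannelHandlerContext ctx, ChannelStateEvent e)
                    throws Exception {
                SslHandler ssl = ctx.getPipeline().get(SslHandler.class);
                // Peer identity lives in the session; a real implementation would
                // wait on ssl.handshake() before trusting it, then attach the
                // identity for the codecs higher in the chain.
                SSLSession session = ssl.getEngine().getSession();
                System.out.println("Negotiated cipher: " + session.getCipherSuite());
                ctx.sendUpstream(e);
            }
        });

        // ... Avro's frame decoder and RPC handlers would follow here.
        return pipeline;
    }
}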
>
> I appreciate you taking the time to talk through this with me.
>
> -Charles
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
> Simple question: is there a JMX source in development? Did someone
> develop it?
> If it exists: where can we find it?
> If it doesn't exist: I need it, so I will develop it.
>
>
> Thanks for your answer,
>
> Sylvain
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
IMHO, check out the trunk, build it, and then just use the agent config from
the old setup. Keep the old setup as is.
thanks
ashish
On Wed, Apr 9, 2014 at 6:17 PM, Deepak Subhramanian <
deepak.subhraman...@gmail.com> wrote:
> Thanks Otis. I will give it a try. Do I have to replace the flum
agent listening on
> http://hostname1:80/dev
> We have to go through a firewall request every time we need to add an
> additional port to a Flume agent.
> --
> Deepak Subhramanian
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
call)
Can someone familiar with this part look further into this? I shall debug
further as soon as I have free cycles.
thanks
ashish
On Fri, Apr 11, 2014 at 5:24 PM, Deepak Subhramanian <
deepak.subhraman...@gmail.com> wrote:
> Thanks Simon. I am also struggling with no luck. I t
list :)
http://elasticsearch-users.115913.n3.nabble.com/Issue-with-posting-json-data-to-elastic-search-via-Flume-td4054017.html
Can ES experts comment on the best way forward?
On Sun, Apr 13, 2014 at 8:10 PM, Ashish wrote:
> Have been able to reproduce the problem locally using the exist
check org.apache.avro.ipc.NettyServer
Line#97 (for Avro 1.7.3)
thanks
ashish
On Tue, Apr 22, 2014 at 2:59 PM, Himanshu Patidar <
himanshu.pati...@hotmail.com> wrote:
> Hi,
>
> I have a flume agent with Avro Source, memory channel and a custom sink. I
> am trying to send a single event with the
Not sure if this would fit, but have a look at
http://flume.apache.org/FlumeDeveloperGuide.html#embedded-agent
thanks
ashish
On Tue, May 20, 2014 at 7:16 PM, Jay Vyas wrote:
> Hi flume !
>
> I'd like to implement a simple flume sink:
>
> agent.channels.memory-c
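For reference, the embedded agent boils down to configuring a channel and
sinks programmatically and pushing events with put(). A minimal sketch based
on that guide (the collector hostname and port are assumptions):

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.apache.flume.agent.embedded.EmbeddedAgent;
import org.apache.flume.event.EventBuilder;

public class EmbeddedAgentExample {
    public static void main(String[] args) throws Exception {
        Map<String, String> properties = new HashMap<String, String>();
        properties.put("channel.type", "memory");
        properties.put("channel.capacity", "200");
        properties.put("sinks", "sink1");
        properties.put("sink1.type", "avro");
        properties.put("sink1.hostname", "collector.example.com"); // assumed collector
        properties.put("sink1.port", "41414");
        properties.put("processor.type", "default");

        EmbeddedAgent agent = new EmbeddedAgent("myagent");
        agent.configure(properties);
        agent.start();

        // The application hands events straight to the agent; no external source needed.
        agent.put(EventBuilder.withBody("hello".getBytes(StandardCharsets.UTF_8)));

        agent.stop();
    }
}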
Is there a way to read text files from a local directory and write into HDFS
> in Avro format by using Flume?
>
> Sent from my iPhone
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
Nope, see the serializer field description
On Thu, Jun 19, 2014 at 11:42 AM, kishore alajangi <
alajangikish...@gmail.com> wrote:
> Hi Ashish,
>
> Do i need to use both avrosink and hdfssink to write the text file in avro
> format into hdfs?
>
>
> On Wed, Jun 18, 2014
>>>>>>>>>>> a1.sources.r1.interceptors = i1
>>>>>>>>>>> a1.sources.r1.interceptors.i1.type = regex_filter
>>>>>>>>>>> a1.sources.r1.interceptors.i1.regex = resuming normal
>>>>>>>>>>> operations|Received|Response
>>>>>>>>>>>
>>>>>>>>>>> #a1.sources.r1.interceptors = i2
>>>>>>>>>>> #a1.sources.r1.interceptors.i2.type = timestamp
>>>>>>>>>>> #a1.sources.r1.interceptors.i2.preserveExisting = true
>>>>>>>>>>>
>>>>>>>>>>> # Describe the sink
>>>>>>>>>>> a1.sinks.k1.type = hdfs
>>>>>>>>>>> a1.sinks.k1.hdfs.path = hdfs://
>>>>>>>>>>> testing.sck.com:9000/running/test.sck/date=%Y-%m-%d
>>>>>>>>>>> a1.sinks.k1.hdfs.writeFormat = Text
>>>>>>>>>>> a1.sinks.k1.hdfs.fileType = DataStream
>>>>>>>>>>> a1.sinks.k1.hdfs.filePrefix = events-
>>>>>>>>>>> a1.sinks.k1.hdfs.rollInterval = 600
>>>>>>>>>>> ## need to run a hive query randomly to check the long-running
>>>>>>>>>>> process, so we need to commit events to HDFS files regularly
>>>>>>>>>>> a1.sinks.k1.hdfs.rollCount = 0
>>>>>>>>>>> a1.sinks.k1.hdfs.batchSize = 10
>>>>>>>>>>> a1.sinks.k1.hdfs.rollSize = 0
>>>>>>>>>>> a1.sinks.k1.hdfs.useLocalTimeStamp = true
>>>>>>>>>>>
>>>>>>>>>>> # Use a channel which buffers events in memory
>>>>>>>>>>> a1.channels.c1.type = memory
>>>>>>>>>>> a1.channels.c1.capacity = 1
>>>>>>>>>>> a1.channels.c1.transactionCapacity = 1
>>>>>>>>>>>
>>>>>>>>>>> # Bind the source and sink to the channel
>>>>>>>>>>> a1.sources.r1.channels = c1
>>>>>>>>>>> a1.sinks.k1.channel = c1
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On 14 July 2014 22:54, Jonathan Natkins
>>>>>>>>>>> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Hi Saravana,
>>>>>>>>>>>>
>>>>>>>>>>>> What does your sink configuration look like?
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks,
>>>>>>>>>>>> Natty
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, Jul 11, 2014 at 11:05 PM, SaravanaKumar TR <
>>>>>>>>>>>> saran0081...@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Assuming each line in the logfile is considered as a event for
>>>>>>>>>>>>> flume ,
>>>>>>>>>>>>>
>>>>>>>>>>>>> 1.Do we have any maximum size of event defined for memory/file
>>>>>>>>>>>>> channel.like any maximum no of characters in a line.
>>>>>>>>>>>>> 2.Does flume supports all formats of data to be processed as
>>>>>>>>>>>>> events or do we have any limitation.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I am just still trying to understanding why the flume stops
>>>>>>>>>>>>> processing events after sometime.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Can someone please help me out here.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> saravana
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 11 July 2014 17:49, SaravanaKumar TR <
>>>>>>>>>>>>> saran0081...@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hi ,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I am new to flume and using Apache Flume 1.5.0. Quick setup
>>>>>>>>>>>>>> explanation here.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Source:exec , tail –F command for a logfile.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Channel: tried with both Memory & file channel
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Sink: HDFS
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> When flume starts , processing events happens properly and
>>>>>>>>>>>>>> its moved to hdfs without any issues.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> But after sometime flume suddenly stops sending events to
>>>>>>>>>>>>>> HDFS.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I am not seeing any errors in logfile flume.log as
>>>>>>>>>>>>>> well.Please let me know if I am missing any configuration here.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Below is the channel configuration defined and I left the
>>>>>>>>>>>>>> remaining to be default values.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> a1.channels.c1.type = FILE
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> a1.channels.c1.transactionCapacity = 10
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> a1.channels.c1.capacity = 1000
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Saravana
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
Use these: JAVA_OPTS="-Xms1g -Xmx1g -Dcom.sun.management.jmxremote -XX:+HeapDumpOnOutOfMemoryError"
On Thu, Jul 17, 2014 at 11:55 AM, SaravanaKumar TR
wrote:
> Thanks Ashish. So I will go ahead and update the flume-env.sh file with
>
> JAVA_OPTS
ate as 1 GB.
>
> But for an out of memory error, do we get notified in the Flume logs? I haven't
> seen any exception till now.
>
>
> On 17 July 2014 11:55, SaravanaKumar TR wrote:
>
>> Thanks Ashish. So I will go ahead and update the flume-env.sh file with
whole or partial, with its move still in progress.
>>>
>>> Suppose a file is large and we start moving it into the spooling
>>> directory; how does Flume identify whether the complete file has been
>>> transferred or the move is still in progress?
>>>
>>> Please help me out here.
>>>
>>> Thanks,
>>> saravana
>>>
>>
>>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
alRMIServerSocketFactory$1.accept() @bci=1,
> line=52 (Interpreted frame)
> - sun.rmi.transport.tcp.TCPTransport$AcceptLoop.executeAcceptLoop()
> @bci=55, line=388 (Interpreted frame)
> - sun.rmi.transport.tcp.TCPTransport$AcceptLoop.run() @bci=1, line=360
> (Interpreted frame)
an intrusive way,
>> which means we would not collect data on servers.
>>
>> Is it possible to use libpcap/winpcap to tap into the TCP stream, convert
>> it to Avro/Thrift, and then send it to a Flume source?
>>
>> We very much appreciate your suggestions. Please indicate if there
s://issues.apache.org/jira/browse/FLUME-1491
> 3) How complicated is it to make Flume configurable? Does it mean tons of
> coding and months to implement, or is it not so hard?
>
>
> --
> Paweł
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
for the time needed to close the issue or to have any stable version (not needed
> to be pushed into master)?
>
The patch is attached; it needs to be battle tested.
>
> --
> Paweł
>
>
> 2014-08-01 12:59 GMT+02:00 Ashish :
>
> If you change the configuration file, Flume reloads it
>>>> The scenario is that we want to collect data over a TCP connection which is
>>>> sent to a backend database server. But it is not possible to use an intrusive
>>>> way, which means we would not collect data on the servers.
>>>>
>>>> Is it possible to use libpcap/winpcap to tap into the TCP stream, convert
>>>> it to Avro/Thrift, and then send it to a Flume source?
>>>>
>>>> We very much appreciate your suggestions. Please indicate if there are better
>>>> options.
>>>>
>>>> Cheers,
>>>> Blade
>>>>
>>>>
>>>
>>>
>>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
der doesn't
> listen on ZooKeeper and doesn't emit events when something changes in
> ZooKeeper. Did I miss something?
>
> --
> Paweł
>
> 2014-08-01 13:16 GMT+02:00 Ashish :
>
>
>>
>>
>> On Fri, Aug 1, 2014 at 4:40 PM, Paweł wrote:
>>
>
Please send mailto:user-unsubscr...@flume.apache.org
On Wed, Aug 6, 2014 at 8:14 AM, Xiaobo Liu wrote:
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
documentation has an outline
>>> that you can use. You can also look at the
>>> existing ExecSource and
>>> work your way up.
>>>
>>> As far as I know, t
configuration a parameter with the class name of an "InputStream
>> processor". This processor would be able to, e.g., unzip, deserialize Avro, or
>> read JSON and convert it into log events. What do you think?
>>
>> --
>> Paweł Róg
>>
>> 2014-08-06 5:12 GMT+0
quirement just to use a particular source. FLUME-1491 would make Flume
> generally dependent upon ZooKeeper, which is a good transition point to
> start using ZK for other state that would be necessary for Flume
> components. Would you agree?
>
>
> On Sun, Aug 10, 2014 at 11:35 PM, Ash
On Mon, Aug 11, 2014 at 4:04 PM, Otis Gospodnetic <
otis.gospodne...@gmail.com> wrote:
> Hi,
>
> On Wed, Aug 6, 2014 at 5:04 AM, Ashish wrote:
>
>> Sharing some random thoughts
>>
>> 1. Download the file using S3 SDK and let the SpoolDirectory
>>
rg/apache/flume/sink/AvroSink.java
>
> https://flume.apache.org/FlumeDeveloperGuide.html#transaction-interface
>
> -Jeff
>
>
>
>
>
>
> On Tue, Sep 2, 2014 at 6:36 PM, Ed Judge wrote:
>
>> Does anyone know of any good documentation that talks about the
>> protocol/
control
>> of a subset of sources?
>>
>> In that case, a conf mgmt server (such as Puppet) would be responsible
>> for editing flume.conf with parameters 'agent.sources' from source1 to
>> source3000 (assuming we have 3000 sources machines).
>>
>>
se lookup and generates CSV files to be put into S3.
>
> The question is, is it the right place for the code, or should the code be
> running in the channel, as the ACID guarantees are present in the Channel?
> Please advise.
>
> -Kev
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
a better person to comment.
>
> You cannot install Flume agents on the SNMP managed devices, and you
> cannot modify any software on the SNMP managed device to use the Flume
> client SDK (if I understand your idea correctly, Ashish). There are two
> ways for SNMP data collection from
On Fri, Sep 5, 2014 at 4:01 PM, JuanFra Rodriguez Cardoso <
juanfra.rodriguez.card...@gmail.com> wrote:
> Thanks, both of you!
>
> @Ashish, Javi's thoughts are right. My use case is focused on sources for
> consuming SNMP traps. I came here from the already open discussion
On Sat, Sep 6, 2014 at 4:42 AM, Kevin Warner
wrote:
> Thanks Andrew, Ashish and Sharinder for your response.
>
> I have a large number of JSON files which are 2K size each on Tomcat
> servers. We are using rsync to get the files from the Tomcat servers to the
> EC2 compute instanc
unmodified? Is there a better
> way to accomplish what I want to do? Just looking for some guidance.
>
> Thanks,
> Ed
>
> On Sep 4, 2014, at 4:44 AM, Ashish wrote:
>
> Avro records shall have the schema embedded with them. Have a look at
> the source; that shall help a bit.
the load balancing option doesn't do it.
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
you're right, is there more documentation about it somewhere?
>>> How does it work? I mean, if I choose random, does it choose a sink at
>>> random and, when it finishes generating a file, choose another sink? Or
>>> does it send each event to a different sink, and really there ar
Debugging Sink; comment out AvroSink if you use this one
> # http://flume.apache.org/FlumeUserGuide.html#file-roll-sink
> client.sinks.k1.type = file_roll
> client.sinks.k1.sink.directory = /opt/app/solr/flume/sinkOut
> client.sinks.k1.sink.rollInterval = 0
>
> # Conne
ype = DataStream
> collector.sinks.k1.hdfs.rollInterval = 86400
> collector.sinks.k1.hdfs.rollSize = 0
> collector.sinks.k1.hdfs.rollCount = 0
> collector.sinks.k1.hdfs.serializer = HEADER_AND_TEXT
>
>
>
> On 9/16/14 11:10 AM, Ashish wrote:
>
> Try using HEADER_
ut down all the channels and eventually the JVM
>> shuts down.
>>
>> I am running the agent in debug mode and I can see my data coming in
>> correctly for a couple of hours and the snippet shows, flume enters the
>> “LEAVING DECODE” section of the flume debug eng
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
avro source.
>
> My question is: can Flume keep the header properties when sinking to the remote
> Avro source?
>
>
> --
> Company: TestBird
> QQ: 2741334465
> Email: wan...@testbird.com
> Address: C8-3#, Tianfu Software Park, High-tech Zone, Chengdu; postal code 610041
> Website: http://www.testbird.com
>
--
thanks
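On the header question above: Avro source/sink hops serialize the event
headers along with the body, so headers set at the producer survive the trip
to the remote agent. A small sketch of attaching headers when building an
event (the header names and values are illustrative):

import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

public class EventWithHeaders {
    public static Event build(String body) {
        Map<String, String> headers = new HashMap<String, String>();
        headers.put("host", "web-01");     // illustrative header
        headers.put("type", "access-log"); // illustrative header
        // Both body and headers travel through Avro source/sink hops.
        return EventBuilder.withBody(body.getBytes(StandardCharsets.UTF_8), headers);
    }
}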
Hi,
Can you elaborate a bit more on what you want to do with SSH?
thanks
ashish
On Thu, Oct 16, 2014 at 4:33 AM, terreyshih wrote:
> Hi,
>
> I understand Flume Avro sources and sinks can use SSL as documented. How
> about SSH, though? Can I instantiate an SSH connection?
I would start by trying to find which thread is consuming the most CPU. The
stack trace shall give you a good hint on the direction to proceed.
I blogged about the process here:
http://www.ashishpaliwal.com/blog/2011/08/finding-java-thread-consuming-high-cpu/
Hope it helps
ashish
On Wed, Oct 15, 2014
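The post walks through matching OS thread IDs against a jstack dump. As an
in-process sketch of the same idea, the JDK's ThreadMXBean can report
per-thread CPU time directly (support depends on the JVM):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadCpuReport {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (long id : mx.getAllThreadIds()) {
            long cpuNanos = mx.getThreadCpuTime(id); // -1 if unsupported/disabled
            ThreadInfo info = mx.getThreadInfo(id);
            if (info != null && cpuNanos > 0) {
                System.out.printf("%-40s %10.1f ms CPU%n",
                        info.getThreadName(), cpuNanos / 1000000.0);
            }
        }
    }
}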
Well, I could be wrong :) The whole process takes hardly 2 minutes, and from my
personal experience I prefer to gather data and work by a process of elimination.
I'll leave it to Mike to decide how he wants to proceed further.
thanks
ashish
On Thu, Oct 16, 2014 at 2:47 PM, Ahmed Vila wrote:
> Hi Ashish,
>
public void notify(final Activity activity, final GnipStream stream) {
    // Create an Event out of the Activity: first serialize it to a byte array.
    // toBytes() is a hypothetical helper that renders the Activity, e.g. as JSON.
    byte[] bytes = toBytes(activity);
    Event event = EventBuilder.withBody(bytes);
    getChannelProcessor().processEvent(event);
}
};
HTH !
ashish
On Fri, Oct 17
> ?
>
> thanks,
> -Gary
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
ram blocked, my program threw a
> disconnection exception, then reconnected to Flume and continued to send
> events.
> I am confused by this. Is this a bug in Flume? If not, what should I do to
> fix this problem?
>
> Thanks,
> Wang Ke
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
> Again, I know this would not happen if the downstream agent is never brought
> down. However, I am just wondering if it is possible for this to happen if
> the downstream is brought down and then up again?
>
> thanks,
> -Gary
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
ters, not what you think or say or plan.” )
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
ll now. But when the HDFS
> service is stopped, the Flume agent itself gets stopped. Is this the default
> behavior, or did anything go wrong?
>
> Regards,
> Mahendran
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
>
>
> but it is not working; it is not moving any files from the LogFiles directory.
> How can I achieve my use case?
>
>
> Thanks,
>
> Mahendran
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
33/what-is-the-minimal-setup-needed-to-write-to-hdfs-gs-on-google-cloud-storage-wit
>>>
>>> Thanks.
>>>
>>> --
>>> Jean-Philippe Caruana
>>> http://www.barreverte.fr
>>
>>
>>
>> --
>> Jean-Philippe Caruana - j...@target2sell.com
>> Target2sell, le turbo du e-commerce
>> 43 rue de Turbigo - 75003 Paris
>> +33 (0) 9 51 92 63 20 | +33 (0) 1 44 54 94 55
>>
>> http://www.target2sell.com
>> http://www.barreverte.fr
>
>
>
> --
> Jean-Philippe Caruana - j...@target2sell.com
> Target2sell, le turbo du e-commerce
> 43 rue de Turbigo - 75003 Paris
> +33 (0) 9 51 92 63 20 | +33 (0) 1 44 54 94 55
>
> http://www.target2sell.com
> http://www.barreverte.fr
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
SSL certificate for SSL
> configuration ?
>
> Thanks
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
.
>>
>>
>>
>> I need information on how to accomplish this via a java API.
>>
>>
>>
>> Any help would be appreciated.
>>
>>
>>
>> -CM
>>
>>
>
>
>
> --
> Joey Echeverria
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
ction to
> HDFS fails, log4j logs this exception as WARN, so I cannot see this
> exception when the log4j threshold is set to ERROR. At some point it leads
> to a channel-full exception.
>
> How can I find out whether the connection to HDFS succeeded or not?
>
> Thanks.
>
--
thanks
ashish
orward to the kafka sink plugin that I can't get to compile
>>> independently. :-/
>>>
>>> Thanks!
>>>
>>
>>
>>
>> --
>>
>> Santiago M. Mola
>>
>>
>> <http://www.stratio.com/>
>> Vía de las dos
nnel data out of the way and re-started the
>> Flume agent. I'd like to pop the bad message from the queue data on disk...
>> are there any tools or recommended ways to do this?
>>
>> Thanks,
>> Charles
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
fic event, and that event will be dropped from the channel and
> transaction.
>
> I don’t see how we can do it outside the sink for this reason.
>
> Thanks,
> Hari
>
>
> On Wed, Feb 4, 2015 at 5:32 AM, Ashish wrote:
>>
>> Is it possible to extend File Channel
I did try it, but was a bit lost on getting an Event from a
TransactionalEvent. I shall work out the details; meanwhile you can
help me out with the Event part.
thanks
ashish
On Tue, Feb 10, 2015 at 5:56 AM, Hari Shreedharan
wrote:
> Correct - that would be pretty tricky. We could indeed mod
ke to know how many files have been processed in parallel by
SpoolDir?
Nope.
Q2. Is it possible to know how much of a file (by size) has been moved by SpoolDir
to the sink at any point using a built-in API?
Nope.
If you can share what you want to achieve, we may be able to provide some
pointers.
thanks
ashish
t of my knowledge)
>
> Q1. I would like to know how many files have been processed in parallel by
> SpoolDir?
> Nope.
>
> Q2. Is it possible to know how much of a file (by size) has been moved by SpoolDir
> to the sink at any point using a built-in API?
> Nope.
> If you can share wha
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
e Sistemas
>
> Distributed Systems: BPM / Tibco
>
> Parque Empresarial La Finca - Building 16, Floor 1
>
> Paseo del Club Deportivo s/n - 28223 Pozuelo de Alarcón (Madrid)
>
> Phone: +34 91 289 88 43 – Mobile: +34 615 90 92 01
>
> Email: juftav...@produban.com
>
he same folder name lakhs of times, which will lead to very high
> performance degradation.
>
> Is there any way to handle my case without processing the same file header
> lakhs of times?
>
> thanks.
>
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
> I started Flume with the -z flag pointing to a ZooKeeper instance, and I see
> that it created a flume
>
> znode, but I am not sure how to actually put the configuration in ZooKeeper.
>
> Thanks,
> Simeon
>
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
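One way to load the agent's properties into that znode is with a ZooKeeper
client such as Curator. The sketch below is assumption-laden: the quorum
address is a placeholder, and it assumes the layout where an agent named a1
reads its configuration from /flume/a1:

import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class UploadFlumeConfig {
    public static void main(String[] args) throws Exception {
        byte[] conf = Files.readAllBytes(Paths.get("flume.conf")); // local agent config
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zkhost:2181", new ExponentialBackoffRetry(1000, 3)); // assumed quorum
        client.start();
        // Assumed layout: the agent reads its properties from /flume/<agent-name>.
        client.create().creatingParentsIfNeeded().forPath("/flume/a1", conf);
        client.close();
    }
}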
.checkpointDir=/data/2/flumechannel/checkpoint
>> tier1.channels.c1.dataDirs=/data/2/flumechannel/data
>> tier1.channels.c1.transactionCapacity = 1
>> tier1.channels.c1.maxFileSize = 5
>>
>>
>>
>> #sink
>>
>> tier1.sinks.k1.type = hdfs
>
sure if Flume could consume the local file while the application is
> still writing the log file? Thanks.
>
> regards,
> Lin
--
thanks
ashish
Blog: http://www.ashishpaliwal.com/blog
My Photo Galleries: http://www.pbase.com/ashishpaliwal
Your understanding is correct :)
On Mon, Mar 9, 2015 at 6:54 AM, Lin Ma wrote:
> Thanks Ashish,
>
> Followed your guidance, and found the instructions below, about which I have
> further questions to confirm with you; it seems we need to close the files and
> never touch them for Flume to pr