--
Nitin Pawar
I have not come across any.
It will be good if you can write one and submit it to the codebase.
I will try to write this over the weekend if not done by you :)
On Thu, Nov 29, 2012 at 12:08 PM, Mohit Anchlia wrote:
> Is there a flume ng agent startup script that I can place in /etc/init.d?
--
Ni
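For reference, an init-style wrapper can look like the sketch below. This is only a sketch under assumptions: FLUME_HOME, the agent name, the config file, and the log/pid locations are placeholders for a typical install and should be adjusted.

```shell
#!/bin/sh
# Minimal /etc/init.d-style wrapper for a Flume NG agent -- a sketch only;
# FLUME_HOME, the agent name, and file locations are assumptions.
FLUME_HOME=${FLUME_HOME:-/usr/lib/flume-ng}
AGENT_NAME=a1
CONF_FILE="$FLUME_HOME/conf/flume.conf"
PID_FILE=/tmp/flume-ng-agent.pid

start() {
    # detach from the terminal so the agent keeps running after logout
    nohup "$FLUME_HOME/bin/flume-ng" agent \
        --conf "$FLUME_HOME/conf" -f "$CONF_FILE" -n "$AGENT_NAME" \
        > /tmp/flume-ng-agent.out 2>&1 &
    echo $! > "$PID_FILE"
}

stop() {
    # kill the recorded pid if we have one
    [ -f "$PID_FILE" ] && kill "$(cat "$PID_FILE")" 2>/dev/null
    rm -f "$PID_FILE"
}

case "${1:-}" in
    start) start ;;
    stop)  stop ;;
    *)     echo "Usage: $0 {start|stop}" ;;
esac
```

Distribution packages (e.g. CDH) may ship their own service scripts, so check for one before rolling your own.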
customer would like to keep the log files in their original state
> (file name, size, etc.). Is it practicable using Flume?
>
> 3. Is there a better way to collect the files without using "Exec source"
> and "tail -F" command?
>
> Many Thanks and Cheers,
> Emile
>
--
Nitin Pawar
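One common alternative to Exec source with tail -F is the Spooling Directory Source, which ingests whole, completed files from a directory. A minimal sketch follows; the agent, source, and channel names and the path are assumptions:

```properties
# Spooling Directory Source: ingest whole, completed files instead of tailing
a1.sources.src1.type = spooldir
a1.sources.src1.spoolDir = /var/log/incoming
# keep the original file name in an event header
a1.sources.src1.fileHeader = true
a1.sources.src1.channels = c1
```

Note that this source renames files once consumed (by default adding a .COMPLETED suffix), so it does not leave them fully untouched on disk.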
--
Nitin Pawar
cotta, Flume Node status is “OPENING”.
>
> But when I start terracotta, I got the error “INFO
> com.cloudera.flume.watchdog.Watchdog: Subprocess exited with value 1”
>
> Please help!
>
> *Thanks and Regards,*
>
> *Shouvanik Hald
Hello,
>
> I am sure the problem is due to Terracotta. Can you please help me?
>
> *Thanks and Regards,*
>
> *Shouvanik Haldar | Cloud SME Pool | Mobile:+91-9830017568 *
>
> *From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
>
> *Thanks and Regards,*
>
> *Shouvanik Haldar | Cloud SME Pool | Mobile:+91-9830017568 *
>
> *From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
> *Sent:* Monday, December 10, 2012 3:34 PM
>
> *To:* Haldar, Shouvanik
> *Cc:* user@flume.apac
--
Nitin Pawar
>
> I want to execute them on flume web console. And make flume node “active”
> how to do that?
>
> *Thanks and Regards,*
>
> *Shouvanik Haldar | Cloud SME Pool | Mobile:+91-9830017568 *
>
> *From:* Nitin Pawar [mailto:nitinpawar...@gma
--
Nitin Pawar
data will not be written."
On Tue, Dec 18, 2012 at 3:59 PM, wrote:
> payloadColumn
--
Nitin Pawar
> surprise, because in the flume hbase-sink doc I cannot find anything about
> payloadColumn; then it must be null, so that I cannot write data to hbase.
> Why is it?
>
>
--
Nitin Pawar
Try putting the machine's external IP or internal IP from the AWS console.
The name you are giving is an invalid hostname and is not routable.
On Dec 18, 2012 6:42 PM, wrote:
> I am always getting this annoying error
>
> Unable to map logical node 'test_agnt_nd1' to physical node
> 'ip-10-40-222
1.1.0 to 1.3.0 should be a fairly easy job
On Dec 19, 2012 4:24 PM, "Abhijeet Pathak"
wrote:
> It's still not working.
>
> I read somewhere that failover support was implemented after Jan 2012.
> I've flume 1.1.0+121-1.cdh4.0.1.p0.1~precise-cdh4.0.1 installed.
>
> Can that be a reason for it not wo
ava
>
> 15453 flume 20 0 1487m44 S 0.3 0.0 8:07.55 java
> 15964 flume 20 0 1487m 27m 164 S 0.3 1.4 0:04.09 java
> 16098 flume 20 0 1487m00 S 0.3 0.0 8:12.67 java
> ..
>
> Why are so many processes generated here, and why don't they go away even
> when flume is stopped?
>
> --
> All the best,
> Shengjie Min
>
>
--
Nitin Pawar
og,
> secure log etc.
>
> I have following questions:
> 1. Can Flume solve this requirement?
> 2. Who is going to feed the log files to Flume agent? Do I need some other
> tool to feed my logs to Flume?
>
> --
> Regards,
> Varun Shankar
>
--
Nitin Pawar
t work for me as it works only for immutable
> files.
>
> Reliable delivery is very important for me.
>
> Can you suggest Flume Source which will work for me?
>
> On Wed, Dec 26, 2012 at 6:58 PM, Nitin Pawar wrote:
>
>> yes flume will definitely solve this problem
>
java:194)
>
> at
> org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>
> at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>
> at java.lang.Thread.run(Thread.java:722)
>
> Caused by: java.lang.IllegalArgumentException: Missing flume header
> attribute, 'key' - cannot process this event
>
> at
> com.btoddb.flume.sinks.cassandra.CassandraSinkRepository.saveToCassandra(CassandraSinkRepository.java:125)
>
> at
> com.btoddb.flume.sinks.cassandra.CassandraSink.process(CassandraSink.java:166)
>
> ... 3 more
>
> I got one solution as Key is Src+Key, but I am not getting how to configure
> it.
> So can anyone please help me out to solve this problem?
>
>
--
Nitin Pawar
as in the README, but I am not getting where I
> can set that key.
> Can you please give me an idea about where I can configure it, or where
> it gets generated?
>
>
> On Wed, Dec 26, 2012 at 11:22 PM, Nitin Pawar wrote:
>
>> from the README
>> you need to have fo
--
Nitin Pawar
in Flume-NG? How?
>
> ** **
>
> Thanks,
>
> Abhijeet
>
> ** **
>
> *From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
> *Sent:* Friday, December 28, 2012 1:18 PM
> *To:* user@flume.apache.org
> *Subject:* Re: Source and Sink on different machine
Can you also put a code review on codereviewer?
This will get into trunk soon.
On Sat, Jan 5, 2013 at 6:35 AM, Azuryy Yu wrote:
> Hi All,
>
> I submitted a patch for exec source,
> https://issues.apache.org/jira/browse/FLUME-1819
>
> please take a look.
>
--
Nitin Pawar
https://cwiki.apache.org/FLUME/how-to-contribute.html#HowtoContribute-ReviewingCode
On Sun, Jan 6, 2013 at 3:09 PM, Azuryy Yu wrote:
> where is codereviewer and how to submit a code review? Thanks
>
>
> On Sat, Jan 5, 2013 at 1:39 PM, Nitin Pawar wrote:
>
>> coderevie
Can you run the following commands and tell us if the namenode is up:
jps
netstat -plan | grep 50030
On Jan 14, 2013 12:13 PM, "Vikram Kulkarni" wrote:
> I am trying to setup a sink for hdfs for HTTPSource . But I get the
> following exception when I try to send a simple Json event. I am also using
> a lo
It's a jobtracker URI.
There should be a conf in your hdfs-site.xml and core-site.xml which looks like
hdfs://localhost:9100/
You need to use that value.
On Jan 14, 2013 12:34 PM, "Vikram Kulkarni" wrote:
> I was able to write using the same hdfs conf from a different sink.
> Also, I can open the MapRe
The correct value maps to fs.default.name in your core-site.xml,
so whatever value you have there, you will need to use the same for the flume
hdfs sink.
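For example, if core-site.xml sets fs.default.name to hdfs://localhost:9100/, the sink path should use the same URI authority. A sketch (the agent/sink names and target directory are assumptions):

```properties
# HDFS sink pointing at the same URI as fs.default.name in core-site.xml
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:9100/flume/events
a1.sinks.k1.channel = c1
```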
On Mon, Jan 14, 2013 at 12:37 PM, Nitin Pawar wrote:
> Its a jobtracker uri
>
> There shd be a conf in ur hdfs-site.xml and core-site.
Then when I actually go to the dfs file system I do find
> the FlumeData.1358148499961 file as expected.
>
> -Vikram
>
> From: Nitin Pawar
> Reply-To: "user@flume.apache.org"
> Date: Sunday, January 13, 2013 11:07 PM
> To: "user@flume.apache.org"
>
>> >> Could someone help me understand capacity attribute of memoryChannel?
>> Does
>> >> it mean that memoryChannel flushes to sink only when this capacity is
>> >> reached or does it mean that it's the max events stored in memory and
>> call
>> >> blocks until everything else gets freed?
>> >>
>> >>
>> >> http://flume.apache.org/FlumeUserGuide.html#memory-channel
>> >>
>> >>
>> >>
>> >
>>
>>
>>
>> --
>> Apache MRUnit - Unit testing MapReduce -
>> http://incubator.apache.org/mrunit/
>>
>
>
--
Nitin Pawar
S %CPU %MEM TIME+ COMMAND
> 8571 root 21 0 1209m 424m 11m S 2.0 5.3 53:05.95 java
> -Dflume.log.dir=/var/log/flume 4957 root
>
> Regards,
> Deepak
>
--
Nitin Pawar
into a Flume event; and in Sink, we must write each event
> to a single file.
>
> Is it practicable? Thanks!
>
> --
> Best Regards,
> Henry Ma
>
--
Nitin Pawar
> line as an event, and File Roll Sink will receive these lines and roll up
> to a big file by a fixed interval. Is it right, and can we config it to
> send the whole file as an event?
>
>
> On Tue, Jan 22, 2013 at 1:22 PM, Nitin Pawar wrote:
>
>> why don't you use d
jing, PR China
>>
>> Email: wei@wbkit.com
>>
>> Tel: +86 25 8528 4900 (Operator)
>> Mobile: +86 138 1589 8257
>> Fax: +86 25 8528 4980
>>
>> Weibo: http://weibo.com/guowee
>> Web: http://www.wbkit.com
>> -
>> WesternBridge Tech: Professional software service provider. Professional
>> is MANNER as well CAPABILITY.
>>
>>
>
--
Nitin Pawar
yes that works as well
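The nohup approach being discussed can be sketched as below; FLUME_HOME, the config file, and the agent name are assumptions for your install:

```shell
# Start the agent under nohup so it survives the shell exiting;
# paths and the agent name are assumptions, adjust for your setup.
FLUME_HOME=${FLUME_HOME:-/usr/lib/flume-ng}
nohup "$FLUME_HOME/bin/flume-ng" agent \
    --conf "$FLUME_HOME/conf" -f "$FLUME_HOME/conf/flume.conf" -n a1 \
    > /tmp/flume-agent.out 2>&1 &
AGENT_PID=$!
echo "started flume agent with pid $AGENT_PID"
```

Redirecting stdout/stderr matters here: without it, nohup's default nohup.out can silently fill up the working directory.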
On Tue, Jan 22, 2013 at 2:18 PM, GuoWei wrote:
> Thanks a lot
>
> I tried using nohup at the beginning of the flume-ng command. It seems to work as well.
>
> Did you try that?
>
> Thanks
>
> On 2013-1-22, at 4:34 PM, Nitin Pawar wrote:
>
> also try e
> Hi,
>
> I've a folder in HDFS where a bunch of files gets created periodically.
> I know that currently Flume does not support reading from HDFS folder.
>
> What is the best way to transfer this data from HDFS to Hbase (with or
> without using Flume)?
>
>
> Reg
It just supports collection of data;
it does not understand anything about the content of your data.
On Thu, Feb 7, 2013 at 3:22 PM, Surindhar wrote:
> Hi,
>
> Does Flume supports Analysis of Data?
>
> Br,
>
>
>
--
Nitin Pawar
>>> Does Flume supports Analysis of Data?
>>>
>>> Br,
>>>
>>>
>>>
>>
>>
>> --
>> - Inder
>> "You are average of the 5 people you spend the most time with"
>>
>
>
--
Nitin Pawar
ut sink. If online machine learning (e.g.
> stochastic gradient descent or something else online) was what you were
> thinking, I wonder if there are any folks on this list who might have an
> interest in helping to work on putting such a thing together.
>
> In any case, I'd like to hear more about specific use cases for streaming
> analytics. :)
>
> Regards,
> Mike
>
>
--
Nitin Pawar
ood to hear more of your thoughts. Please see inline.
>
> On Thu, Feb 7, 2013 at 8:55 PM, Nitin Pawar wrote:
>
> I can understand the idea of having data processed inside flume by
>> streaming it to another flume agent. But do we really need to re-engineer
>> something insi
> Thanks in advance,
> Priyanka
>
--
Nitin Pawar
Alex's blog has detailed info on this
http://mapredit.blogspot.in/2012/03/flumeng-evolution.html
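In outline, the source side of such a flow looks like the sketch below. The names are assumptions, and a Cassandra sink is not built in — it requires a third-party sink plugin such as the one described in the linked post:

```properties
# syslog TCP source; the listening port is an illustrative assumption
a1.sources.s1.type = syslogtcp
a1.sources.s1.host = 0.0.0.0
a1.sources.s1.port = 5140
a1.sources.s1.channels = c1
```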
On Mon, Feb 11, 2013 at 11:40 PM, Sri Ramya wrote:
> How to configure flume-ng so that it takes that syslog message
> and sends it to cassandra?
>
--
Nitin Pawar
--
Nitin Pawar
ow to stop a running Flume Agent. Is there any command or option in
> 'flume-ng'?
>
> Is there a way we can figure out whether a running agent is stopped
> or crashed?
>
>
> Thanks
> Venkat
>
--
Nitin Pawar
ker.run(ThreadPoolExecutor.java:908)
>>> at java.lang.Thread.run(Thread.java:662)
>>> 11 Apr 2013 15:11:48,919 ERROR [pool-6-thread-1]
>>> (org.apache.flume.client.avro.ReliableSpoolingFileEventReader.getNextFile:442)
>>> - Exception opening file:
>>> c:\flume_data\spool\web\u_ex130411.log-201304111500.log
>>> java.io.IOException: Unable to delete existing meta file
>>> c:\flume_data\spool\web\.flumespool\.flumespool-main.meta
>>> at
>>> org.apache.flume.serialization.DurablePositionTracker.getInstance(DurablePositionTracker.java:96)
>>> at
>>> org.apache.flume.client.avro.ReliableSpoolingFileEventReader.getNextFile(ReliableSpoolingFileEventReader.java:417)
>>> at
>>> org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:212)
>>> at
>>> org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:154)
>>> at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:439)
>>> at
>>> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
>>> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
>>> at
>>> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>> at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>> at java.lang.Thread.run(Thread.java:662)
>>>
>>
>>
>
--
Nitin Pawar
{
> name:null counters:{} } } - Exception follows.
> 463 org.apache.flume.FlumeException: Could not start sink. Table or column
> family does not exist in Hbase.
>
> And HBase is running normally and also has the table and column family.
>
>
> Please help me.
>
> thanks
> Brad
--
Nitin Pawar
t clearly:
>
> >> Cannot connect to ZooKeeper, is the quorum specification valid?
> webtech, wbtest01, wbtest02
>
> Check your Hbase configuration.
>
> - Alex
>
> On Apr 16, 2013, at 10:15 AM, GuoWei wrote:
>
> > I Use flume version: 1.3.0
> >
> > On 2013-4
flume can hold, and I need to print the capacity. But I cannot
> find a proper way to do this other than changing the source code. Any ideas?
>
>
> thanks
>
--
Nitin Pawar
On Wed, May 15, 2013 at 1:50 PM, Nitin Pawar wrote:
> For maximum performance on your data flow, the two things which will matter
> most are the channel and the transaction batch size.
> When you say losing data, are you using a memory channel or a file channel?
>
> Flume can batch events. T
Here is one example for the capacity-defining flow:
https://cwiki.apache.org/FLUME/flume-ng-performance-measurements.html
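As a concrete illustration, capacity and transactionCapacity are set on the channel. The numbers below are illustrative assumptions, not recommendations:

```properties
# capacity = max events the channel buffers;
# transactionCapacity = max events per put/take transaction
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000
```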
On Wed, May 15, 2013 at 2:16 PM, Nitin Pawar wrote:
> sorry pressed enter too soon
>
> as for your question: how many events a flume agent can hold?
> sorry but I
he input and
> output
>
> *From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
> *Sent:* May 15, 2013 16:49
> *To:* user@flume.apache.org
> *Subject:* Re: how to print the channel capacity
>
> here is one example for the capacity defining flow
May 22, 2013 at 12:54 AM, Pranav Sharma
>> wrote:
>>
>>> Is there a way to check the size of a channel either programmatically or
>>> using a command line? I'm using a memory based channel and have enabled
>>> ganglia based monitoring. Thanks.
>>>
>>> Regards,
>>> -Pranav.
>>>
>>
>>
>
--
Nitin Pawar
tion_name//mm/dd/hh/[filename]-[timestamp].gz
>
> Is there any way to configure the spooling directory source in flume
> with time variables such that it can find these files? Or is there a better
> way to do this?
>
> Thanks
> --
> Frank Maritato
>
>
>
>
>
--
Nitin Pawar
> Shouvanik
>
> *From:* Nitin Pawar [mailto:nitinpawar...@gmail.com]
> *Sent:* Monday, June 24, 2013 4:19 PM
> *To:* Haldar, Shouvanik
> *Subject:* Re: How to extract data from MySQL using Flume
>
> Flume is not the tool to extract data fro
'
> > TBLPROPERTIES
> ('avro.schema.literal'='{"type":"record","name":"Event","fields":[{"name":"headers","type":{"type":"map","values":"string"}},{"name":"body","type":"bytes"}]}');
>
>
> describe flume_avro_test
> > ;
> OK
> headers  map<string,string>  from deserializer
> body     array<tinyint>      from deserializer
>
> Thanks,
> Deepak Subhramanian
>
--
Nitin Pawar
Sorry, hit send too soon ..
Correction: rather than just changing your table definition.
On Wed, Nov 13, 2013 at 6:45 PM, Nitin Pawar wrote:
> I am not really sure there is a direct way to concat anything other than
> strings in hive, unless you typecast them to strings.
>
> So you may want
is a good solution. I was wondering if there was a
> builtin support for hive since it is the default flume format for flume
> avro sink.
>
> Thanks, Deepak
>
>
> On Wed, Nov 13, 2013 at 1:15 PM, Nitin Pawar wrote:
>
>> Sorry, hit send too soon ..
>>
>> correct
ransactionCapacity = 100
>
> # Bind the source and sink to the channel
> a1.sources.r1.channels = c1
> a1.sinks.k1.channel = c1
> a1.sinks.k2.channel = c1
>
>
>
> the logger sink is working fine but for the hdfs sink, it gives the
> following error
>
> process failed
>> >> Hi!
>> >>
>> >> I would like to use flume to aggregate and send logs to an S3 bucket.
>> >> I did some research, but the last article I found on the topic was
>> >> more then a year old and the author abandoned Flume for Kafka. My
>> >> other concern is that most of the articles were written for Flume OG,
>> >> not NG.
>> >> Is there any reason why I should not use flume to sink messages to S3?
>> >>
>> >>
>> >> Thanks in advance.
>> >>
>> >> Mate Gulyas
>> >> Lead Developer at Dmlab
>> >
>> >
>>
>
>
--
Nitin Pawar
doubt is, why does flume create one hdfs file per event? I
want it to write a single hdfs file per day for the log.
Can someone please help me find out what I have done wrong?
Thanks
--
Nitin Pawar
I got this working by setting all the properties:
rollInterval
rollCount
rollSize
Also realized that rollSize is measured on the data before compression.
Thanks,
Nitin
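The roll settings in question can be sketched as below. The values are illustrative assumptions only; setting a property to 0 disables that particular roll trigger:

```properties
# roll every hour (value in seconds)
a1.sinks.k1.hdfs.rollInterval = 3600
# do not roll based on event count
a1.sinks.k1.hdfs.rollCount = 0
# roll at ~128 MB, measured on the uncompressed data
a1.sinks.k1.hdfs.rollSize = 134217728
```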
On Tue, Sep 18, 2012 at 2:02 PM, Nitin Pawar wrote:
> hello,
>
> I have a working setup of flume which writes into hdfs.
> I am using flum
annels
>> > 12/09/17 15:40:05 INFO properties.PropertiesFileConfigurationProvider:
>> > created channel MemoryChannel-2
>> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting
>> > new
>> > configuration:{ sourceRunners:{tail=EventDrivenSourceRunner: {
>> > source:org.apache.flume.source.ExecSource@c24c0 }} sinkRunners:{}
>> >
>> > channels:{MemoryChannel-2=org.apache.flume.channel.MemoryChannel@140c281} }
>> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting
>> > Channel MemoryChannel-2
>> > 12/09/17 15:40:05 INFO nodemanager.DefaultLogicalNodeManager: Starting
>> > Source tail
>> > 12/09/17 15:40:05 INFO source.ExecSource: Exec source starting with
>> > command:tail -F
>> >
>> > /usr/local/flume_dir/flume/flume-1.2.0-incubating-SNAPSHOT/flume_test.txt
>> >
>> > Please suggest and help me on this issue.
>>
>>
>>
>> --
>> Apache MRUnit - Unit testing MapReduce -
>> http://incubator.apache.org/mrunit/
>
>
--
Nitin Pawar
2:51 PM, prabhu k wrote:
> Hi Nitin,
>
> While executing flume-ng, I have updated the flume_test.txt file, but am
> still unable to do the HDFS sink.
>
> Thanks,
> Prabhu.
>
> On Tue, Sep 18, 2012 at 2:35 PM, Nitin Pawar
> wrote:
>>
>> Hi Prabhu,
>>
>>