I have a requirement where I need to push traffic (comma-separated
logs) to Flume at a very high rate.
I have three concerns:
1. I am using PHP to send events to Flume through rsyslog. The code I am
using is:
openlog("mylogs", LOG_NDELAY, LOG_LOCAL2);
syslog(LOG_INFO, "aaid,bid,ci
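A minimal complete sketch of that call sequence, assuming the comma-separated fields (aaid, bid, cid) are just placeholders for the real columns:

<?php
// Tag messages as "mylogs" on facility LOCAL2 so rsyslog can match
// them and forward them to the Flume syslog source.
openlog("mylogs", LOG_NDELAY, LOG_LOCAL2);

// One event per syslog call, as a single comma-separated line;
// the values here are placeholders.
$aaid = "a123";
$bid  = "b456";
$cid  = "c789";
syslog(LOG_INFO, "$aaid,$bid,$cid");

// Release the connection to the syslog daemon when done.
closelog();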
I know!
Thank you, Jeff Lord and Hari Shreedharan!
-- Original Message --
From: "Jeff Lord";
Sent: Thursday, August 14, 2014 12:16 PM
To: "user@flume.apache.org";
Subject: Re: flume failover only support two nodes?
Also all of your sinks are pointing to the same host for the next hop.
Hi,
Actually, the interesting thing is that if I have HBase installed locally on
the same flume server machine but have a hbase-site.xml connecting to a remote
hbase server, the connection is fine.
It seems that if I don’t have a local HBase installation, I get this error.
Am I missing something?
To add headers to the events, you can either send properly Avro-formatted
packets (which carry headers) to an Avro source, or implement a custom
interceptor to add headers after they're received by the syslog source.
There is a static interceptor bundled with flume that you can use. The
problem with
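A minimal sketch of the bundled static interceptor on a syslog source (agent name, port, and the header key/value are illustrative):

# Hypothetical agent "a1": a syslog TCP source with a static interceptor
# that stamps a fixed header onto every event it receives.
a1.sources = r1
a1.channels = c1
a1.sources.r1.type = syslogtcp
a1.sources.r1.host = 0.0.0.0
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1

a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = datacenter
a1.sources.r1.interceptors.i1.value = DC1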
Hi Sharninder,
Thanks for the response. The load balancing is not based on headers. To
simplify, let's say I have one web server generating logs and three flume
nodes receiving those logs. I want the load to be balanced across those three
flume nodes based on CPU utilization and load.
On Thu, Aug 1
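For reference, the closest Flume-native option is a first-tier agent with a load-balancing sink processor fanning events out to the three nodes; a minimal sketch, with agent, sink, and host names as placeholders (selection is round_robin or random, not CPU-based):

# Hypothetical tier-1 agent "a1" spreading events over three Avro sinks.
a1.sinks = k1 k2 k3
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2 k3
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin

a1.sinks.k1.type = avro
a1.sinks.k1.hostname = flume-node1
a1.sinks.k1.port = 4545
a1.sinks.k1.channel = c1
# k2 and k3 are configured the same way against flume-node2 and flume-node3.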
I'm not sure without looking at the exact use case, but maybe you can use
something like haproxy?
--
Sharninder
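A rough sketch of that haproxy approach, with three Flume agents listening for syslog over TCP (hostnames and ports are made up, and note haproxy balances on connections rather than CPU):

# Hypothetical haproxy config: one TCP front end, three Flume agents behind it.
frontend flume_in
    bind *:5140
    mode tcp
    default_backend flume_agents

backend flume_agents
    mode tcp
    balance leastconn
    server flume1 flume-node1:5140 check
    server flume2 flume-node2:5140 check
    server flume3 flume-node3:5140 check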
On Thu, Aug 14, 2014 at 4:08 PM, Mohit Durgapal
wrote:
> Hi Sharninder,
>
> Thanks for the response. The load balancing is not based on header. To
> simplify, lets say I have one we
Yeah, it likely means that your HBase configuration is incorrect for
using the remote cluster. When you run the Flume agent, can you do a `ps
aux | grep flume` and find where the hbase-site.xml is coming from? It's
probably /etc/hbase/conf. Look in the file, and can you tell me what the
value of
I have a tough time seeing why one would need a failover sink where both
sinks write to the same HDFS cluster. Can someone describe to me why this is
a good idea?
You'd usually not need two sinks pointing to the same HDFS cluster. I'd use it
for Avro sinks, for agent-to-agent communication, to make sure your
pipeline does not get clogged due to a single agent failing. You could
fail over from one HDFS cluster to another if you want.
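A minimal sketch of that agent-to-agent setup with Flume's failover sink processor over Avro sinks (agent names, hosts, ports, and priorities are illustrative, and more than two sinks are allowed):

# Hypothetical agent "a1" with three Avro sinks in a failover group;
# the highest-priority live sink gets the events, the others take over on failure.
a1.sinks = k1 k2 k3
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2 k3
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 10
a1.sinkgroups.g1.processor.priority.k2 = 5
a1.sinkgroups.g1.processor.priority.k3 = 1
a1.sinkgroups.g1.processor.maxpenalty = 10000

a1.sinks.k1.type = avro
a1.sinks.k1.hostname = collector1
a1.sinks.k1.port = 4545
a1.sinks.k1.channel = c1
# k2 and k3 point at collector2 and collector3 the same way.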
Gary Malouf wrote:
I have a