Hi,
I'm attempting to use Flume to send data between two servers using Avro. The
process was running perfectly fine until I started using SSL encryption. I'm
sending data from server A to server B.
Say on server B I make a keystore using:
keytool -genkey -alias serverB -keyalg RSA -keystore ke
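A typical non-interactive version of this setup (file names, passwords, and the
dname below are illustrative, not from the original message) creates the
keystore on server B, then exports its certificate into a truststore that the
Avro sink on server A can trust:

```shell
# On server B (Avro source): generate a key pair in a JKS keystore.
keytool -genkey -alias serverB -keyalg RSA -validity 365 \
  -keystore keystore.jks -storepass changeit -keypass changeit \
  -dname "CN=serverB.example.com"

# Export server B's certificate...
keytool -export -alias serverB -keystore keystore.jks \
  -storepass changeit -file serverB.cer

# ...and import it into a truststore for server A (the Avro sink side).
keytool -import -noprompt -alias serverB -file serverB.cer \
  -keystore truststore.jks -storepass changeit
```

The matching Flume properties (agent and component names assumed here) would
then look something like:

```properties
# Server B: Avro source with SSL
b1.sources.src1.ssl = true
b1.sources.src1.keystore = /path/to/keystore.jks
b1.sources.src1.keystore-password = changeit
b1.sources.src1.keystore-type = JKS

# Server A: Avro sink with SSL
a1.sinks.sk1.ssl = true
a1.sinks.sk1.truststore = /path/to/truststore.jks
a1.sinks.sk1.truststore-password = changeit
a1.sinks.sk1.truststore-type = JKS
```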
Hi Kishore,
I think the issue is this combination of settings:
tier1.sinks.sink1.hdfs.rollInterval=0
tier1.sinks.sink1.hdfs.rollSize = 12000
# seconds to wait before closing the file.
tier1.sinks.sink1.hdfs.idleTimeout = 60
Can you try getting rid of the idleTimeout and changing rollInterval to 30, and
see if that helps?
R
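For anyone reading this later, my reading of the suggestion above (the
tier1/sink1 names are taken from the quoted config) is:

```properties
# Roll every 30 seconds or at ~12 KB, whichever comes first;
# the idleTimeout line is removed entirely.
tier1.sinks.sink1.hdfs.rollInterval = 30
tier1.sinks.sink1.hdfs.rollSize = 12000
```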
Hi Saravana,
Flume checks the size and last-modified time of the file both when it starts
reading it and when it finishes. If the two sets of values differ between the
start and end of the read, Flume will fail
noisily. This means that you must move a fully wri
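The usual way to guarantee this is to write the file somewhere the source does
not watch and then move it into the spool directory in a single step. The paths
below are stand-ins, and the move is only atomic when both directories sit on
the same filesystem:

```shell
# Stand-ins for a staging dir and the spoolDir the source watches.
STAGING=$(mktemp -d)
SPOOL=$(mktemp -d)

# 1. Write the file completely outside the spool directory.
printf 'event one\nevent two\n' > "$STAGING/events.log"

# 2. Move it in one step; on the same filesystem, mv is a rename,
#    so the source can never observe a partially written file.
mv "$STAGING/events.log" "$SPOOL/events.log"
```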
Hi all,
I have a configuration with a file channel configured such that:
a1.channels.ch1.type = file
a1.channels.ch1.checkpointDir = /hadoop/user/flume/channels/checkpoint
a1.channels.ch1.dataDirs = /hadoop/user/flume/channels/data
a1.channels.ch1.capacity = 10
a1.channels.ch1.transactionCapa
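A side note for archive readers: the file channel requires transactionCapacity
to be no larger than capacity, and a capacity of 10 is far below the default of
1,000,000, so the channel will fill almost immediately under any real load. A
more conventional sketch (same paths as above; the sizes are mine):

```properties
a1.channels.ch1.type = file
a1.channels.ch1.checkpointDir = /hadoop/user/flume/channels/checkpoint
a1.channels.ch1.dataDirs = /hadoop/user/flume/channels/data
a1.channels.ch1.capacity = 1000000
a1.channels.ch1.transactionCapacity = 10000
```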
Hi Mahendran,
yes, that is expected behaviour - I suspect that if you look in the logs for
this agent, you will find it threw an exception when you shut down the HDFS, as
it depends on a compatible HDFS being available.
Regards,
Guy Needham | Data Discovery
Virgin Media | Enterprise Data, De
14 at 10:25 AM, Jeff Lord <jl...@cloudera.com> wrote:
Guy,
What version of flume is this?
-Jeff
On Fri, Nov 7, 2014 at 1:19 AM, Needham, Guy <guy.need...@virginmedia.co.uk> wrote:
Hi all,
I have a configuration with a file channel configured such that:
a1.c
s in bytes. At 500k, you will likely end up with too many files.
You should set it as high as you can.
Thanks, Hari
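To put numbers on that advice: rollSize is measured in bytes, so a value of
12000 rolls a new file at roughly 12 KB. To aim for files near a 128 MB HDFS
block, the settings would look something like this (the values are mine, not
Hari's):

```properties
# 128 * 1024 * 1024 = 134217728 bytes
tier1.sinks.sink1.hdfs.rollSize = 134217728
# Disable the other roll triggers so size alone decides.
tier1.sinks.sink1.hdfs.rollInterval = 0
tier1.sinks.sink1.hdfs.rollCount = 0
```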
On Mon, Nov 10, 2014 at 1:05 AM, Needham, Guy <guy.need...@virginmedia.co.uk> wrote:
Hari, Jeff,
thanks for your replies. It's Flume 1.5.0, I'll use the
I'm running Flume 1.5.0 with this configuration:
flume_test.sources = sr1
flume_test.channels = ch1
flume_test.sinks = sk1
#avro source
flume_test.sources.sr1.type = avro
flume_test.sources.sr1.channels = ch1
flume_test.sources.sr1.bind = 10.92.211.22
flume_test.sources.sr1.port = 55000
flume_tes
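One quick way to exercise an Avro source like this one (host and port taken
from the config above; the data file name is made up) is the avro-client mode
of the flume-ng launcher, which sends the contents of a file to the source as
events:

```shell
flume-ng avro-client -H 10.92.211.22 -p 55000 -F /tmp/test-events.txt
```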
On Tue, Jan 13, 2015 at 7:32 AM, Needham, Guy <guy.need...@virginmedia.co.uk> wrote:
I'm running Flume 1.5.0 with this configuration:
flume_test.sources = sr1
flume_test.channels = ch1
flume_test.sinks = sk1
#avro source
flume_test.sources.sr1.type = avro
flume_test.sources.sr
I've also found that when running from elsewhere, the dir should contain a
flume-env.sh file that references FLUME_HOME - that way the log4j.properties
file will be included. This approach also allows each flume agent to have the
heap size and JAVA_HOME variables configured independently.
Rega
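To make that concrete, a minimal flume-env.sh along those lines might look like
this (all paths and sizes are examples, not values from the thread):

```shell
# Per-agent environment; values here are illustrative.
export JAVA_HOME=/usr/java/default
export FLUME_HOME=/opt/flume
# Independent heap size for this agent.
export JAVA_OPTS="-Xms512m -Xmx1024m"
```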
+1
Regards,
Guy Needham | Data Discovery
Virgin Media | Technology and Transformation | Data
Bartley Wood Business Park, Hook, Hampshire RG27 9UP
D 01256 75 3362
I welcome VSRE emails. Learn more at http://vsre.info/
-----Original Message-----
From: Jarek Jarcec Cecho [mailto:jar...@gmail.c
With multiple sinks reading from one channel, will each sink read each event,
or will the events be distributed between the sinks?
Regards,
Guy Needham | Data Discovery
Virgin Media | Technology and Transformation | Data
Bartley Wood Business Park, Hook, Hampshire RG27 9UP
D 01256 75 3362
I wel
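For later readers of the archive: as I understand the Flume model (worth
checking against the user guide), sinks attached to one channel compete for
events, so each event is delivered to exactly one of them. To give every sink a
copy of every event, you fan out at the source with one channel per sink, for
example (names illustrative):

```properties
# The default replicating selector copies each event to both channels,
# so each sink sees every event.
a1.sources.src1.channels = ch1 ch2
a1.sources.src1.selector.type = replicating
a1.sinks.sk1.channel = ch1
a1.sinks.sk2.channel = ch2
```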