Roshan, Hari,
This explains a lot of the problems we have been experiencing.
Thank you very much!
All the best,
Chris Shannon.
On Monday, August 4, 2014, Hari Shreedharan wrote:
This was fixed in FLUME-2416. Codecs using direct memory were leaking
direct buffers if full GC did not happen (since direct buffers are actually
cleaned up only on a full GC).
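For anyone who wants to see the underlying JVM behavior outside of Flume,
here is a minimal Java sketch (purely illustrative; the class name and
buffer sizes are made up): the native memory behind a direct ByteBuffer is
reserved immediately, but it is only released once the buffer object itself
is garbage collected, which in a quiet heap may not be until a full GC.

    import java.nio.ByteBuffer;

    // Illustrative only: shows off-heap memory piling up while the
    // unreachable buffer objects wait to be garbage collected.
    public class DirectBufferGrowth {
        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) {
                // Reserves 1 MB of native (off-heap) memory per iteration.
                ByteBuffer buf = ByteBuffer.allocateDirect(1024 * 1024);
                buf.put((byte) 1);
                // buf is unreachable after this iteration, but the native
                // memory is freed only when the object is collected, so
                // off-heap usage can keep growing until a (full) GC runs.
            }
        }
    }

Running it with a deliberately small cap, e.g.
java -XX:MaxDirectMemorySize=64m DirectBufferGrowth, and watching the
direct buffer pool (for example in jconsole) shows the off-heap footprint
growing until a GC reclaims the unreachable buffers; if they cannot be
reclaimed in time, the JVM can fail with
java.lang.OutOfMemoryError: Direct buffer memory.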
On Mon, Aug 4, 2014 at 2:57 PM, Christopher Shannon wrote:
The one we are using is the bzip2 codec. That is something we could test.
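For reference, the test would just be a codec swap in the agent's
properties file. A rough sketch, assuming an agent named a1 with an HDFS
sink named k1 (the names are placeholders, not our actual config):

    # current setting, with bzip2 under suspicion
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.hdfs.fileType = CompressedStream
    a1.sinks.k1.hdfs.codeC = bzip2

    # candidate for the test: switch to another supported codec, e.g. gzip,
    # and watch whether the off-heap growth goes away
    # a1.sinks.k1.hdfs.codeC = gzip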
On Monday, August 4, 2014, Roshan Naik wrote:
One of the HDFS compression codecs had a memory leak; I don't recall which
one right now. If you are using one, try changing to a different codec and
see if the leak goes away.
-roshan
On Sat, Aug 2, 2014 at 6:19 AM, Christopher Shannon wrote:
I do want to add that the Windows agents in our configuration were upstream
from the agent using the HDFS sink, and the upstream agents would not
recover gracefully when the downstream agent disconnected them on
encountering an out of memory error.
On Sat, Aug 2, 2014 at 8:18 AM, Christopher Shannon wrote:
Roshan,
After more testing, it became plain that the Windows environment was not
the problem; the simultaneous JVMs were behaving well there. There does
appear to be a documented memory leak problem with the HDFS sink in
version 1.3.0. Our distro comes from IBM BigInsights, a
Is that a custom Flume build or from some distro? Hard to say without
additional info. Anything interesting in the logs when it crashes?
On Tue, Jul 29, 2014 at 10:15 AM, Christopher Shannon wrote:
> For development and testing, I sometimes have to run multiple agents on
> the same server /