Hello Edward,
Please refer to http://kafka.apache.org/contact.html
Guozhang
On Thu, Aug 15, 2013 at 9:18 PM, Edward Kwok wrote:
> Hi there,
>
> I would like to subscribe for the Kafka Dev and users mailing list.
>
>
> Regards,
>
> Edward
>
--
-- Guozhang
> you see that with no compression 80% of the time goes to FileChannel.write,
> but with snappy enabled only 5% goes to writing data, 50% of the time goes
> to byte copying and allocation, and only about 22% goes to actual
I had a similar problem with MapDB; it was solved by using memory-mapped files.
The more accurate formula is the following, since fetch size is per
partition:

fetch.size * #threads * #partitions
Thanks,
Jun
On Thu, Aug 15, 2013 at 9:40 PM, Drew Daugherty <
drew.daughe...@returnpath.com> wrote:
> Thank you Jun. It turned out an OOME was thrown in one of the consumer
> fetcher threads. S
Thanks Jun, that would explain why I was running out of memory.
-drew
From: Jun Rao [jun...@gmail.com]
Sent: Friday, August 16, 2013 8:37 AM
To: users@kafka.apache.org
Subject: Re: Kafka Consumer Threads Stalled
The more accurate formula is the following s
Hi - I am making a few assumptions about the 0.8 high-level consumer API that I
am looking to confirm:
-Is it OK to have multiple ConsumerConnector objects in the same process? To be
sure, they are all operating independently. I could probably shove everything
into one ConsumerConnector if I ha
Hello Paul,
1. Yes it is OK. Actually each MirrorMaker process may use multiple
ConsumerConnectors.
2. Yes it is OK.
Guozhang
On Fri, Aug 16, 2013 at 8:29 AM, Paul Mackles wrote:
> Hi - I am making a few assumptions about the 0.8 high-level consumer API
> that I am looking to confirm:
>
> -i
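A minimal sketch of point 1 (two independent ConsumerConnectors in one
process) using the 0.8 high-level consumer API. The ZooKeeper address,
group ids, topic names, and stream counts below are hypothetical; this is
an illustration, not a tested program:

```java
import java.util.Collections;
import java.util.Properties;

import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class TwoConnectors {
    // Build a connector with its own config; each connector manages its
    // own fetcher threads and ZooKeeper session independently.
    static ConsumerConnector connect(String groupId) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // hypothetical
        props.put("group.id", groupId);                   // hypothetical
        return Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
    }

    public static void main(String[] args) {
        ConsumerConnector a = connect("group-a");
        ConsumerConnector b = connect("group-b");
        // One stream per topic here; the map value is the thread count
        // passed into createMessageStreams.
        a.createMessageStreams(Collections.singletonMap("topicA", 1));
        b.createMessageStreams(Collections.singletonMap("topicB", 1));
        // ... consume from the streams, then shut both down:
        a.shutdown();
        b.shutdown();
    }
}
```

As Guozhang notes, MirrorMaker itself runs multiple ConsumerConnectors in
one process, so this pattern is well trodden.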
Just to clarify, are the consumer threads you are referring to the number
passed into the map along with the topic when instantiating the connector, or
is it the fetcher thread count? This formula must specify a maximum memory
usage and not a working usage, or we would still be getting OOMEs. Un
Ok,
I didn't realize the write to disk was immediate (is that new in 0.8, with
requested acks enabled?).
I do think the OS will indeed reserve space in advance for data not yet
flushed to disk. This seems to be true, at least, for xfs, with which I have
had more experience lately.
Jason
On Thu, Aug 15
According to the Kafka 0.8 documentation under broker configuration, there
are these parameters and their definitions:

log.retention.bytes (default -1): The maximum size of the log before deleting it
log.retention.bytes.per.topic (default ""): The maximum size of the log for some
specific topic before deleting it
I'm cu
It should be the # fetcher threads. Yes, this is the max memory usage. You
will only hit it if all partitions have fetch.size bytes to give. This
typically only happens when the consumer was stopped and restarted after
some time.
Thanks,
Jun
On Fri, Aug 16, 2013 at 9:36 AM, Drew Daugherty <
dre
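Plugging hypothetical numbers into the formula discussed above (fetch.size
* #fetcher threads * #partitions) shows how quickly the worst-case bound
grows; the fetch size, thread count, and partition count here are made up
for illustration:

```java
public class ConsumerMemoryEstimate {
    // Worst-case consumer fetch memory: every partition hands back a full
    // fetch.size chunk on every fetcher thread at once. Per Jun's note,
    // this is a maximum, typically hit only after a stopped consumer
    // restarts with a large backlog on all partitions.
    static long maxMemory(long fetchSize, int fetcherThreads, int partitions) {
        return fetchSize * fetcherThreads * partitions;
    }

    public static void main(String[] args) {
        // Hypothetical: 1 MB fetch.size, 4 fetcher threads, 50 partitions
        long bytes = maxMemory(1024 * 1024, 4, 50);
        System.out.println(bytes + " bytes = " + bytes / (1024 * 1024) + " MB");
        // prints "209715200 bytes = 200 MB"
    }
}
```

Sizing the consumer heap below this bound is what produces the OOMEs Drew
observed after a restart.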
log.retention.bytes is for all topics that are not included in
log.retention.bytes.per.topic
(which defines a map of topic -> size).
Currently, we don't have a total size limit across all topics.
Thanks,
Jun
On Fri, Aug 16, 2013 at 2:00 PM, Paul Christian
wrote:
> According to the Kafka 8 doc
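Concretely, the two settings Jun describes might look like this in a
broker's server.properties; the topic names and sizes are hypothetical, and
the per-topic map syntax should be checked against the 0.8 docs:

```properties
# Default cap for any topic not listed in the per-topic map (-1 = no limit)
log.retention.bytes=-1
# Per-topic overrides: a map of topic -> size in bytes (hypothetical topics)
log.retention.bytes.per.topic=clicks:1073741824,audit:536870912
```

Note there is no setting that caps total size across all topics; each limit
applies per topic.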