Sent it too early..
We have around 5K topics and 75K partitions, with a replication factor of
3. One of our brokers went down due to disk space issues for logs. After
fixing that, the broker would not come up, throwing Out Of Memory errors
while reading index files. It turned out that we had to tu
We had a situation where the Kafka 0.8.2 broker would not come up
(inline)
On Mon, Dec 15, 2014 at 11:45:07AM -0800, Rajiv Kurian wrote:
> I currently have a topic with 1024 partitions. I know it's kind of going
> past the recommended limits, but I kept it like that because I am moving a
> legacy system to Kafka and it has 1024 parallel partitions. I wanted to
I currently have a topic with 1024 partitions. I know it's kind of going
past the recommended limits, but I kept it like that because I am moving a
legacy system to Kafka and it has 1024 parallel partitions. I wanted to
understand the costs of having so many partitions a little bit more though.
I
Thanks Jun, looks like I am on the right track.
On Wed, Oct 29, 2014 at 6:51 PM, Jun Rao wrote:
> 1), 2), 3) Yes. If you use SimpleConsumer, you have to figure out the
> leader of each partition and connect to the right broker. Each fetch
> request can send multiple partitions (if with same lead
1), 2), 3) Yes. If you use SimpleConsumer, you have to figure out the
leader of each partition and connect to the right broker. Each fetch
request can include multiple partitions (if they have the same leader), and
you need to examine the error code per partition.
Not all those error codes are applicable to th
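For reference, a minimal sketch of that flow against the 0.8 SimpleConsumer
API might look like the following. The host, port, topic, partition numbers
and buffer sizes are placeholders, and the partition leader is assumed to
have already been found (e.g. via a TopicMetadataRequest):

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;

// Sketch only: assumes broker "leader-host:9092" leads partitions 0 and 1
// of "myTopic"; all names and sizes below are illustrative.
public class SimpleConsumerSketch {
    public static void main(String[] args) {
        SimpleConsumer consumer = new SimpleConsumer(
                "leader-host", 9092, 100000 /* soTimeout ms */,
                64 * 1024 /* bufferSize */, "myClient");
        try {
            // One fetch request may carry several partitions, provided this
            // broker is the leader for all of them.
            FetchRequest req = new FetchRequestBuilder()
                    .clientId("myClient")
                    .addFetch("myTopic", 0, 0L /* offset */, 100000 /* maxBytes */)
                    .addFetch("myTopic", 1, 0L, 100000)
                    .build();
            FetchResponse resp = consumer.fetch(req);

            // The error code comes back per partition, so each one is examined.
            for (int partition : new int[] {0, 1}) {
                short code = resp.errorCode("myTopic", partition);
                if (code != 0) {
                    // Non-zero, e.g. not-leader-for-partition: re-discover the
                    // leader and retry against the new broker.
                    System.err.println("partition " + partition + " error " + code);
                }
            }
        } finally {
            consumer.close();
        }
    }
}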
I am planning to use the current Java API and have the following use case:
i) A single topic with about 1024 partitions.
ii) A number of processes that want to consume these partitions in a
deterministic way. The machine -> partitions assignment is done outside of
Kafka. During the lifetime of a p
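Since the machine -> partitions assignment lives outside Kafka, one purely
illustrative way to keep it deterministic (the process index and process
count below are hypothetical inputs, not something from this thread) is to
give each process the partitions whose id equals its index modulo the number
of processes:

import java.util.ArrayList;
import java.util.List;

// Illustrative only: a static, deterministic machine -> partitions mapping
// computed outside Kafka.
public class StaticPartitionAssignment {

    // Assigns partition p to the process with index p % numProcesses.
    static List<Integer> partitionsFor(int processIndex, int numProcesses,
                                       int numPartitions) {
        List<Integer> assigned = new ArrayList<Integer>();
        for (int p = 0; p < numPartitions; p++) {
            if (p % numProcesses == processIndex) {
                assigned.add(p);
            }
        }
        return assigned;
    }

    public static void main(String[] args) {
        // Example: process 3 of 16 over a 1024-partition topic gets
        // partitions 3, 19, 35, ... (64 partitions in total).
        System.out.println(partitionsFor(3, 16, 1024));
    }
}

Every process computes the same mapping locally, so no coordination through
Kafka (or ZooKeeper) is needed as long as the number of processes stays fixed.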
This should be fine. For details, see
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-HowdoIchoosethenumberofpartitionsforatopic
Thanks,
Jun
On Tue, Dec 31, 2013 at 1:43 PM, Tom Amon wrote:
> I'm looking to create a topic with about 1400 partitions to allow a high
> degree of para
I'm looking to create a topic with about 1400 partitions to allow a high
degree of parallel processing. We have 5 brokers so that would be 280
partitions per box. Has anyone done something with this number of
partitions before?
Thanks.
> many consumers you will have is to simply have a large number of
> partitions so that each consumer takes a significant portion of them.
>
> This line of thought has led me to the following question: What is the
> potential overhead from creating a large number of partitions on a t
you will have is to
simply have a large number of partitions so that each consumer takes a
significant portion of them.
This line of thought has led me to the following question: What is the
potential overhead from creating a large number of partitions on a topic?
We would probably have at