Did you stop Mirror Maker?
On Thu, Mar 12, 2015 at 8:27 AM, Saladi Naidu wrote:
> We have 3 DCs and created a 5-node Kafka cluster in each DC, then
> connected these 3 DCs using Mirror Maker for replication. We were
> conducting performance testing using the Kafka Producer Performance
> tool to load 100
>
See the description of log.retention.bytes here:
https://kafka.apache.org/08/configuration.html
You can set a value per log partition, but you'll need to do some
math to work out an appropriate value based on the following (a rough
worked example is sketched after the list):
1. The number of partitions per topic
2. The number of topics
3. The capacity of the disks on each broker
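For example, here is a rough sizing sketch. The topic, partition, replication,
and disk numbers below are made up for illustration and are not from this thread:

  # Worst-case disk per broker ~= log.retention.bytes x partitions hosted on that broker.
  # Say 10 topics x 5 partitions x replication factor 2, spread over 5 brokers:
  #   (10 * 5 * 2) / 5 = 20 partitions per broker
  # With a 500 GB data volume, keep ~20% headroom (retention is enforced at
  # segment boundaries, so each partition can overshoot by roughly one segment):
  #   log.retention.bytes ~= (500 GB * 0.8) / 20 ~= 20 GB per partition
  log.retention.bytes=21474836480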
Actually, space usage is determined by the total number of partitions
on a broker, independent of the number of consumers.
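For example, plugging in the numbers from this thread (and assuming, purely
for illustration, that the broker hosts two partitions):

  worst-case retained data ~= log.retention.bytes x partitions on the broker
                           ~= 60 GB x 2 = 120 GB, which already exceeds a 100 GB volume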
Thanks,
Jun
On Thu, Jun 20, 2013 at 8:22 AM, Yu, Libo wrote:
> Thanks for your answer, Jun. That explains what I found.
> I thought it was a per-machine limit. If there are many
> consumers (as in our
Thanks for your answer, Jun. That explains what I found.
I thought it was a per-machine limit. If there are many
consumers (as in our case), that number is determined
by the most productive consumer. I would prefer a limit
per machine.
Regards,
Libo
From: Yu, Libo [ICG-IT]
Sent: Monday, June 10, 2013 11:24 AM
log.retention.bytes is per partition. Do you just have a single
topic/partition in the cluster?
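If you are not sure how many partitions the brokers host, you can list the
topics from the command line. The exact tool depends on which 0.8 build you
are running, and the ZooKeeper address below is just a placeholder:

  bin/kafka-topics.sh --describe --zookeeper localhost:2181   # later 0.8.x builds
  bin/kafka-list-topic.sh --zookeeper localhost:2181          # earlier 0.8 builds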
Thanks,
Jun
On Mon, Jun 10, 2013 at 8:24 AM, Yu, Libo wrote:
> Hi,
>
> The volume I used for Kafka has about 100 GB of space.
> I set log.retention.bytes to about 60 GB. But at some
> point, the disk was
Forgot to mention that log.cleanup.interval.mins has been set to 1 in my case.
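For reference, the two broker settings discussed in this thread would look
roughly like this in server.properties (the byte value is only an
approximation of "about 60G"):

  log.retention.bytes=64424509440   # ~60 GB retained per partition
  log.cleanup.interval.mins=1       # check logs for deletion every minute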
Regards,
Libo
From: Yu, Libo [ICG-IT]
Sent: Monday, June 10, 2013 11:24 AM
To: 'users@kafka.apache.org'
Subject: out of disk space
Hi,
The volume I used for Kafka has about 100 GB of space.
I set log.retention.bytes to about 60 GB. But at some point, the disk was full.