log.retention.bytes is per partition. Do you just have a single
topic/partition in the cluster?
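As a rough illustration (the partition count here is assumed purely for the math):
with two partitions capped at 60G each, retained data can reach

    2 partitions x 60G per partition = 120G

plus whatever sits in the active segments that have not rolled yet, which already
overruns a 100G volume.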
Thanks,
Jun
On Mon, Jun 10, 2013 at 8:24 AM, Yu, Libo wrote:
> Hi,
>
> The volume I use for Kafka has about 100G of space.
> I set log.retention.bytes to about 60G. But at some
> point, the disk was
How many total topics/partitions are you mirroring? If that number is
large, you will need to increase the number of retries and the backoff time in
the MirrorMaker consumer.
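For example, something along these lines in the consumer config that MirrorMaker is
started with (the property names are the 0.8-style consumer settings and the values
are only illustrative, so check the docs for the version you are running):

    # allow more rebalance attempts with a longer pause between them
    rebalance.max.retries=10
    rebalance.backoff.ms=5000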
Thanks,
Jun
On Mon, Jun 10, 2013 at 8:21 AM, Yu, Libo wrote:
> Hi,
>
> I came across several critical issues when benchmark
We often replay data that is days old and have never seen any issues like
this. We are running 0.7.2.
Philip
On Mon, Jun 10, 2013 at 11:17 AM, Todd Bilsborrow
wrote:
> We've been running Kafka 0.7.0 in production for several months and have been
> quite happy. Our use case to date has been to pull from
Hello,
Any updates on the 0.8 beta release?
Soby Chacko
On Tue, Jun 4, 2013 at 12:24 PM, Neha Narkhede wrote:
> I was just about to send an update. We can release the beta right away.
>
> Joe,
>
> I remember you were interested in helping out. Let me know if you are still
> up for managing the rel
We've been running Kafka 0.7.0 in production for several months and have been
quite happy. Our use case to date has been to pull from the head of our topics,
so we're normally consuming within seconds of message production using the
high-level consumer, which is working great as far as I can tell
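For anyone curious, the consumer loop is roughly the shape below. This is only a
sketch using the 0.8-style Java API with made-up topic, group, and ZooKeeper names;
the 0.7.0 class and property names differ slightly.

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class TailConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181"); // assumed ZooKeeper address
            props.put("group.id", "tail-consumer");     // assumed consumer group

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // One stream on the topic; messages show up within moments of production.
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("mytopic", 1));
            ConsumerIterator<byte[], byte[]> it = streams.get("mytopic").get(0).iterator();

            while (it.hasNext()) {
                byte[] payload = it.next().message();
                System.out.println("consumed " + payload.length + " bytes");
            }
        }
    }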
Forgot to mention that log.cleanup.interval.mins has been set to 1 in my case.
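The relevant part of my broker config looks roughly like this (sizes approximate):

    # roughly 60G; note this limit applies per partition
    log.retention.bytes=64424509440
    # check for old log segments to delete every minute
    log.cleanup.interval.mins=1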
Regards,
Libo
From: Yu, Libo [ICG-IT]
Sent: Monday, June 10, 2013 11:24 AM
To: 'users@kafka.apache.org'
Subject: out of disk space
Hi,
The volume I use for Kafka has about 100G of space.
I set log.retention.bytes to a
Hi,
The volume I use for Kafka has about 100G of space.
I set log.retention.bytes to about 60G, but at some
point the disk filled up and the processes crashed.
I remember other people reporting the same issue.
Has this been fixed?
Regards,
Libo
Hi,
I came across several critical issues when benchmarking MirrorMaker.
1. If a topic has N partitions on the source cluster, then after mirroring it has
only one partition on the destination cluster.
2. To solve the issue in 1), a topic is created on source and destination at the same
time with the s
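One possible workaround sketch for issue 1, assuming 0.8-era brokers: the single
destination partition usually just means the topic was auto-created there with the
broker default, so either pre-create the topic on the destination with the source's
partition count, or raise the default on the destination brokers, e.g. in
server.properties (the value is only an example):

    # default partition count for topics auto-created on the destination cluster
    num.partitions=8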
> Actually you don't need 100s of GBs to reap the benefits of Kafka over Rabbit.
> Because Kafka doesn't centrally maintain state, it can always manage higher
> message throughput more efficiently than Rabbit, even when there are no
> messages persisted to disk.
>
Just out of curiosity, how does Kafka k
Hi Jonathan,
Cheers,
Tim
On 10 Jun 2013, at 13:12, Jonathan Hodges wrote:
> Actually you don't need 100s of GBs to reap the benefits of Kafka over Rabbit.
> Because Kafka doesn't centrally maintain state, it can always manage higher
> message throughput more efficiently than Rabbit, even when there i
Kafka guarantees order per topic partition per source client.
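For example, with the 0.8-style producer API (the broker address, topic, and key
below are made up for illustration), everything one producer sends with the same key
goes to one partition and is consumed in exactly the order it was sent:

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class OrderedSends {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092");               // assumed broker
            props.put("serializer.class", "kafka.serializer.StringEncoder");

            Producer<String, String> producer =
                new Producer<String, String>(new ProducerConfig(props));

            // Same key => same partition, so a consumer sees msg-0, msg-1, msg-2 in order.
            for (int i = 0; i < 3; i++) {
                producer.send(new KeyedMessage<String, String>("events", "source-42", "msg-" + i));
            }
            producer.close();
        }
    }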
Thanks,
Neha
On Jun 9, 2013 5:33 PM, "S Ahmed" wrote:
> I understand that there are no guarantees per se that a message won't be a
> duplicate (it's the consumer's job to guard against that), but when it comes to
> message order, is Kafka
Actually you don't need 100s of GBs to reap the benefits of Kafka over Rabbit.
Because Kafka doesn't centrally maintain state, it can always manage higher
message throughput more efficiently than Rabbit, even when there are no
messages persisted to disk.
However, Kafka’s throughput advantage increases d