Retention is based on a combination of the retention and segment size settings (as a side note, it's recommended to use log.retention.ms and log.roll.ms, not the hours configs; those are there for legacy reasons, and the ms configs are more consistent). As messages are received by Kafka they are appended to the currently active segment, and retention only ever removes whole segments that have already been rolled.
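For example, a minimal server.properties sketch of the ms-based settings (the values here are illustrative, not recommendations):

    # keep closed segments for 7 days
    log.retention.ms=604800000
    # roll a new segment at least once a day...
    log.roll.ms=86400000
    # ...or when the active segment reaches 1 GiB, whichever comes first
    log.segment.bytes=1073741824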
I guess that kind of makes sense.
The following section in the config is what confused me:
*"# The following configurations control the disposal of log segments. The
policy can*
*# be set to delete segments after a period of time, or after a given size
has accumulated.*
*# A segment will be deleted
"minimum age of a log file to be eligible for deletion" Key word is
minimum. If you only have 1k logs, Kafka doesn't need to delete anything.
Try to push more data through and when it needs to, it will start deleting
old logs.
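One way to actually watch retention kick in on a test cluster is to shrink the knobs involved. A sketch with deliberately tiny, made-up values (not production settings), assuming the log.retention.check.interval.ms setting available in this era of Kafka:

    # roll segments after ~1 MiB so old ones close quickly
    log.segment.bytes=1048576
    # closed segments older than 1 minute become eligible for deletion
    log.retention.ms=60000
    # how often the broker checks for eligible segments
    log.retention.check.interval.ms=10000

With these values, pushing a few MiB through a topic should produce visible segment deletions within a couple of minutes.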
On Mon, Sep 21, 2015 at 8:58 PM allen chan wrote:
> Hi,
>
> Just brought up new kafka cluster for testing. […]
Just an update. We moved all the partitions of the one topic that generated most of the 545 thousand messages/second onto their own set of brokers. The old set of 9 brokers now only gets 135 thousand messages/second, or 15 thousand messages/sec/broker. We are still seeing the same log flush time issue.
Hi,
Just brought up a new kafka cluster for testing.
I was able to use the console producer to send 1k of logs and receive them on
the console consumer side.
The one issue I have right now is that the retention period does not seem to
be working:

# The minimum age of a log file to be eligible for deletion […]
Are you using the old or new producer? That sounds like the behavior the old producer had: it would stick to the same partition for a while (10 minutes if I remember correctly, controlled by topic.metadata.refresh.interval.ms). The new producer does not have this behavior, preferring to round-robin the *available* brokers. Note that since it round-robins only the brokers that are currently available, the distribution can look uneven while some of them are down.
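For reference, here's a minimal sketch of a keyless send with the new (Java) producer; the broker address and topic name are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KeylessSendExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092"); // placeholder
            props.put("key.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                    "org.apache.kafka.common.serialization.StringSerializer");

            Producer<String, String> producer =
                    new KafkaProducer<String, String>(props);
            // No key and no explicit partition: the default partitioner
            // spreads these records round-robin over available partitions.
            for (int i = 0; i < 10; i++) {
                producer.send(new ProducerRecord<String, String>("topic1", "msg-" + i));
            }
            producer.close();
        }
    }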
Upvoting this problem: we are using the same version, 0.8.2.0, and see a
similar issue.
The stack trace we have seen is:
[2015-09-18 08:57:47,147] ERROR [Replica Manager on Broker 58]: Error when
processing fetch request for partition [topic1,22] offset 19068459 from
follower with correlation id 234437
Hi,
I am using kafka_2.10-0.8.2.0. Per the documentation, I just need to invoke the
producer API's send without a key and that should result in round-robin
partitioning, but I only see one particular partition getting all the data.
Bijay
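For context, this is roughly what that keyless send looks like with the old (Scala) producer's Java API; the broker list, topic, and message are placeholders. As described above, without a key the old producer picks a partition and sticks with it until the next metadata refresh (topic.metadata.refresh.interval.ms, 10 minutes by default):

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class OldProducerKeylessSend {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092"); // placeholder
            props.put("serializer.class", "kafka.serializer.StringEncoder");

            Producer<String, String> producer =
                    new Producer<String, String>(new ProducerConfig(props));
            // No key: a random partition is chosen and then reused until the
            // topic metadata is refreshed, so one partition gets all the
            // traffic in between.
            producer.send(new KeyedMessage<String, String>("topic1", "a message"));
            producer.close();
        }
    }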
We are using Kafka 0.8.2.1 and the old producer. When there is an issue
with a broker machine (we are not completely sure what the issue is, though
it generally looks like the host being down), it sometimes takes some of the
producers about 15 minutes to time out. Our producer settings include
request.required.acks […]
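For anyone comparing notes, a sketch of the old-producer settings that govern ack/retry/timeout behavior; the values shown are illustrative, not the poster's actual configuration:

    import java.util.Properties;
    import kafka.producer.ProducerConfig;

    public class OldProducerTimeoutSettings {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // placeholders
            props.put("request.required.acks", "1");  // 0, 1, or -1 (all ISR)
            props.put("request.timeout.ms", "10000"); // wait this long for an ack
            props.put("message.send.max.retries", "3");
            props.put("retry.backoff.ms", "100");
            // A dead broker only drops out of the producer's view once the
            // topic metadata is refreshed, so this interval also affects how
            // long sends can keep targeting it.
            props.put("topic.metadata.refresh.interval.ms", "600000");
            ProducerConfig config = new ProducerConfig(props);
            // ... construct a kafka.javaapi.producer.Producer with config ...
        }
    }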
Hello,
We've done a variety of different mirror configurations in the past (we
mirror from AWS into several different data centers), including the first
one you describe. In fact, for some high-volume / large-message topics we
actually found that splitting it up and placing a dedicated mirror on the
larger topics worked better; see the sketch below.
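As an illustration of that split (the config file names and topic name are made up), you can run one MirrorMaker per topic group, each with its own group.id in its consumer config:

    # dedicated mirror for the high-volume topic
    bin/kafka-run-class.sh kafka.tools.MirrorMaker \
        --consumer.config aws.consumer.properties \
        --producer.config dc1.producer.properties \
        --whitelist="bigtopic"

    # second mirror for everything else
    bin/kafka-run-class.sh kafka.tools.MirrorMaker \
        --consumer.config aws.consumer.properties \
        --producer.config dc1.producer.properties \
        --blacklist="bigtopic"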
So, when using the high-level consumer, only offsets.storage needs to be
set, and the high-level API will take care of committing the offsets
automatically every so often.
If using the simple consumer API, call commitOffsets and set versionId on
the OffsetCommitRequest's constructor to 1 to commit the offsets to Kafka
rather than to ZooKeeper.
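A minimal sketch of the high-level consumer side of that; the ZooKeeper address and group name are placeholders:

    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class KafkaOffsetStorageExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181"); // placeholder
            props.put("group.id", "example-group");     // placeholder
            // Store offsets in Kafka instead of ZooKeeper; the high-level
            // consumer then auto-commits them on the interval below.
            props.put("offsets.storage", "kafka");
            props.put("dual.commit.enabled", "false");
            props.put("auto.commit.enable", "true");
            props.put("auto.commit.interval.ms", "60000");

            ConsumerConnector consumer =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            // ... create message streams and consume ...
            consumer.shutdown();
        }
    }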
Hi,
Attaching a screenshot of bytes/sec from Ganglia.
As you can see, the graph in red belongs to the third replica, for which
bytes/sec is around 10 times lower than for its two peers (in green and
blue).
Earlier, I was thinking that it could be related to that one system only, but
when I create […]
Hi,
Can I have 2 separate mirror maker processes in this format:
Process 1 - source: cluster 2, target: cluster 1
Process 2 - source: cluster 3, target: cluster 1
If this is not supported, then will this circular kind of setup work?
Process 1 (on cluster 1) - source: cluster 2, target: cluster 1
Process 1 (on cluster 2) - source: cluster 1, target: […]
Hello Folks,
Requesting your expertise on this.
Thanks,
Prabhjot
On Fri, Sep 18, 2015 at 6:18 PM, Prabhjot Bharaj wrote:
> Hi,
>
> I've noticed that 1 follower replica node out of my kafka cluster catches
> up to the data from the leader pretty slowly.
> My topic has just 1 partition with 3 replicas. […]