we need to think through what happens when every log has only 1 segment left and yet the total size still exceeds the limit. Do we roll log segments early?

Thanks,
Jun
On Sun, May 4, 2014 at 4:31 AM, vinh wrote:
Thanks Jun. So if I understand this correctly, there really is no master property to control the total aggregate size of all Kafka data files on a broker.

log.retention.size and log.file.size are great for managing data at the application level. In our case, application needs change frequently, and performance itself is an [...]

[...] high that the size threshold is reached before the time threshold. But I may be ok with that, because if Kafka goes down, it can cause upstream applications to fail. This can result in higher [...]
[...] some of the core ones, like Kafka. This is much easier said than done though.

On May 5, 2014, at 9:16 AM, Jun Rao wrote:

Yes, your understanding is correct. A global knob that control[...] means that some of the logs may not be retained as long as you want. Also, we need to think through what happens when every log has only 1 segment left and yet the total size still exceeds the limit. Do we roll log segments early?

Thanks,
Jun
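The edge case Jun raises can be made concrete with a toy sketch (hypothetical code, not a real Kafka feature; it assumes a broker-wide cap that may delete any segment except a log's last, active one):

```python
# Hypothetical broker-wide size cap, illustrating the edge case from the
# thread: once every log is down to a single segment, nothing more can be
# deleted without rolling segments early. All names here are illustrative.

def enforce_global_cap(logs, cap):
    """logs: dict of log name -> list of segment sizes, oldest first.
    Deletes oldest eligible segments until the broker-wide total fits
    under `cap`, never touching a log's last segment.
    Returns the total size after the pass."""
    def total():
        return sum(sum(segs) for segs in logs.values())

    changed = True
    while total() > cap and changed:
        changed = False
        for segs in logs.values():
            if len(segs) > 1:          # last segment is never deleted
                segs.pop(0)            # drop that log's oldest segment
                changed = True
                break

    return total()

# Every log already has only one segment: the cap cannot be enforced.
logs = {"topicA-0": [600], "topicB-0": [600]}
print(enforce_global_cap(logs, cap=1000))  # -> 1200, still over the cap
```

This is exactly the open question in the thread: a global limit either stalls in this state or has to start rolling the active segments early.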
log.retention.size controls the total size in a log dir (per partition). log.file.size controls the size of each log segment in the log dir.

Thanks,
Jun
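Jun's distinction between the two settings can be sketched in a toy model (this is illustrative Python, not Kafka source; the rolling and retention logic is a simplified assumption):

```python
# Toy model of one partition's log dir under the two Kafka 0.7 settings:
#   log.file.size      -> max size of a single segment file before rolling
#   log.retention.size -> max total size of all segments in that log dir

LOG_FILE_SIZE = 536_870_912           # 512 MB per segment
LOG_RETENTION_SIZE = 107_374_182_400  # 100 GB per partition log

def append(segments, n_bytes):
    """Append n_bytes to the active (last) segment, rolling when full,
    then apply size-based retention to the oldest segments."""
    if not segments or segments[-1] >= LOG_FILE_SIZE:
        segments.append(0)            # roll a new segment
    segments[-1] += n_bytes
    # Drop oldest segments while over the limit, never the active one.
    while len(segments) > 1 and sum(segments) > LOG_RETENTION_SIZE:
        segments.pop(0)
    return segments
```

Because log.retention.size applies to each partition's log dir separately, a broker's worst-case footprint scales as roughly the number of partition logs times log.retention.size — which is why no single broker-wide knob exists here.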
On Thu, May 1, 2014 at 9:31 PM, vinh wrote:

In the 0.7 docs, the descriptions for log.retention.size and log.file.size sound very much the same. In particular, that they apply to a single log file (or log segment file).

http://kafka.apache.org/07/configuration.html

I'm beginning to think there is no setting to control the max aggregate [...]
Using Kafka 0.7.2, I have the following in server.properties:

log.retention.hours=48
log.retention.size=107374182400
log.file.size=536870912

My interpretation of this is:
a) a single log segment file over 48hrs old will be deleted
b) the total combined size of *all* logs is 100GB
c) a single log [...]
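For what it's worth, the byte values in that config work out as follows (a quick arithmetic check, assuming base-2 units):

```python
# Sanity check of the configured byte values from server.properties.
retention_size = 107_374_182_400   # log.retention.size
file_size = 536_870_912            # log.file.size

GB = 1024 ** 3
MB = 1024 ** 2

assert retention_size == 100 * GB  # 100 GB per partition log
assert file_size == 512 * MB       # 512 MB per log segment

# A fully retained partition log therefore holds up to 200 segments.
print(retention_size // file_size)  # -> 200
```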