Hello Gowtham,
You need to include the size of the offset and time index files in your
calculations, and potentially the transaction indexes as well.
With default values, that means 10 MB for each index file of every log
segment, and the default segment size is itself 1 GB.
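As a rough illustration, here is a sketch following the formula quoted
below (only the segment and index sizes are broker defaults; the traffic,
retention and replication figures are hypothetical):

// Hypothetical sizing sketch; only segmentBytes and indexBytes reflect broker defaults.
public class BrokerSizing {
    public static void main(String[] args) {
        long segmentBytes = 1L << 30;            // log.segment.bytes default: 1 GiB
        long indexBytes = 10L << 20;             // log.index.size.max.bytes default: 10 MiB
        int indexesPerSegment = 2;               // offset index + time index (plus .txnindex if transactions are used)

        long bytesPerSecond = 5L << 20;          // hypothetical: 5 MiB/s of producer traffic
        long retentionSeconds = 7L * 24 * 3600;  // hypothetical: 7 days of retention
        int replicationFactor = 3;               // hypothetical

        long logBytes = bytesPerSecond * retentionSeconds * replicationFactor;
        double indexOverhead = (double) (indexesPerSegment * indexBytes) / segmentBytes;  // ~2% with defaults
        double totalTiB = logBytes * (1 + indexOverhead) / Math.pow(1024, 4);
        System.out.printf("log data + indexes: ~%.2f TiB%n", totalTiB);
    }
}

With the defaults, the two index files add roughly 2% on top of the log
data for each replica.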
Alexandre
On Mon 27 Jan 2020 at 08:51, Gowtham S wrote:
planning 6 machines, rather look at using 5 ZK's.
> > >
> > > G
> > >
> > > > > Total Broker machine size = Message size per second * Retention period * Replication Factor
> > > > > =
Hello all,
I recently used the kafka-consumer-groups.sh script from Kafka 2.x
against a cluster of the same version which was serving consumers on
0.8 (or earlier) client libraries, and which therefore stored the
consumer groups and offsets in ZooKeeper.
On --list of the consumer groups,
Hi Eugen,
The first line of config, log.flush.interval.messages=1, will make Kafka
force an fsync(2) for every produce request.
The second line of config is not sufficient for periodic flushing; you
also need to update log.flush.scheduler.interval.ms, which is Long.MaxValue
by default (in which case the periodic flush effectively never runs).
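For illustration only (these values are not a recommendation), a periodic
flush of roughly once per second would need both properties set, e.g.:

# illustrative values only
log.flush.interval.ms=1000
log.flush.scheduler.interval.ms=1000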
not. Without this guarantee, it would have to be called ACI.
>
> Eugen
>
>
> From: Alexandre Dupriez
> Sent: 8 March 2020, 0:10
> To: users@kafka.apache.org
> Subject: Re: synchronously flushing messages to disk
>
> Hi Eugen,
>
> The first lin
Hi Soumyajit,
It is possible that, due to the broker restart, you benefit from fewer
I/O merges than under steady state. Intuitively, that would come from
a shift from a sequential workload to one more dispersed in nature. It
is likely your broker generates more disk reads than before the
restart, es
Hi Vitalii,
The timestamps provided by your producers are in microseconds, whereas
Kafka expects millisecond epochs. This could be the reason for the
over-frequent segment rolls. When you had the default roll time of one
week, did you experience segment rolls every 15 minutes or so?
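A quick back-of-the-envelope check of that hypothesis (assuming the
default seven-day roll interval):

// Sketch: why microsecond epochs shrink the effective roll interval.
public class RollInterval {
    public static void main(String[] args) {
        long segmentMs = 7L * 24 * 3600 * 1000;   // default roll interval: 7 days = 604,800,000 ms
        long apparentMsPerSecond = 1_000_000L;    // microsecond epochs advance 1,000,000 "ms" per real second
        double minutesUntilRoll = (double) segmentMs / apparentMsPerSecond / 60;
        System.out.printf("segments would roll roughly every %.1f minutes%n", minutesUntilRoll);  // ~10 minutes
    }
}

That would be consistent with rolls every 10 to 15 minutes instead of weekly.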
Thanks,
Alexandre
On Thu.
Hi Satendra,
The JVM core dump indicates you are running out of system memory.
Increasing the heap size of your JVM will not help; if anything, it
will make things worse.
You need to check which processes occupy system memory (look at their
resident set size) and work on reducing memory consumption accordingly.
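For example, the following sketch (hypothetical, Linux only) prints each
process's resident set size, similar to what ps reports as RSS:

// Hypothetical sketch: print resident set sizes (RSS) of local processes on Linux.
import java.nio.file.*;
import java.util.stream.Stream;

public class TopRss {
    public static void main(String[] args) throws Exception {
        try (Stream<Path> procs = Files.list(Paths.get("/proc"))) {
            procs.filter(p -> p.getFileName().toString().matches("\\d+"))
                 .forEach(p -> {
                     try {
                         String status = Files.readString(p.resolve("status"));
                         String name = status.lines().filter(l -> l.startsWith("Name:"))
                                             .findFirst().orElse("Name:\t?");
                         status.lines().filter(l -> l.startsWith("VmRSS:")).findFirst()
                               .ifPresent(rss -> System.out.println(rss + "   " + name));
                     } catch (Exception ignored) {
                         // the process may have exited or may not be readable
                     }
                 });
        }
    }
}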
Hi Liam,
The property you referred to relates to partition leadership, not
partition ownership by consumers. See
https://issues.apache.org/jira/browse/KAFKA-4084 for a discussion
about why post-incident leader rebalance can sometimes impact
foreground traffic.
Thanks,
Alexandre
On Mon 12 Apr 2021
Hi Pieter,
FWIW, you may have encountered the following bug:
https://issues.apache.org/jira/browse/KAFKA-12671 .
Thanks,
Alexandre
On Fri 12 Jun 2020 at 00:43, D C wrote:
>
> Hey peeps,
>
> Anyone else encountered this and got to the bottom of it?
>
> I'm facing a similar issue, having LSO
Hi Pushkar,
If you are using Linux and Kafka 2.6.0+, the closest metric to what
you are looking for is TotalDiskReadBytes [1], which measures data
transfer at the block layer.
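If you want to cross-check it outside of JMX, the Linux kernel exposes
per-process counters in /proc/<pid>/io; here is a minimal sketch (the
PID handling is hypothetical, adapt it to your broker process):

// Hypothetical sketch: read the cumulative bytes a process fetched from the storage layer (Linux).
import java.nio.file.*;

public class DiskReadBytes {
    public static void main(String[] args) throws Exception {
        String pid = args.length > 0 ? args[0] : "self";  // pass the broker PID, e.g. "12345"
        for (String line : Files.readAllLines(Paths.get("/proc/" + pid + "/io"))) {
            if (line.startsWith("read_bytes:")) {
                // sample this twice and take the difference to derive a read rate
                System.out.println(line);
            }
        }
    }
}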
Assuming your consumers are doing tail reads and there is no other
activity which requires loading pages from the disk on
> metric that can be used from 2.5.0?
>
> On Sun, May 16, 2021 at 6:02 PM Alexandre Dupriez <
> alexandre.dupr...@gmail.com> wrote:
>
> > Hi Pushkar,
> >
> > If you are using Linux and Kafka 2.6.0+, the closest metric to what
> > you are looking for is Total
Hi Lee,
Would you be able to see which Kafka API is generating the traffic?
This is provided by the MBean
kafka.network:type=RequestMetrics,name=RequestsPerSec,request=*,version=([0-9]+)
[1].
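If you do not already have a dashboard wired to those MBeans, a minimal
way to sample them over JMX is sketched below (the endpoint is only an
example; it assumes the broker was started with JMX enabled, e.g.
JMX_PORT=9999):

// Sketch: dump per-API request rates from the RequestsPerSec MBeans.
import javax.management.*;
import javax.management.remote.*;

public class RequestRates {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName pattern = new ObjectName("kafka.network:type=RequestMetrics,name=RequestsPerSec,*");
            for (ObjectName name : mbs.queryNames(pattern, null)) {
                Object rate = mbs.getAttribute(name, "OneMinuteRate");
                System.out.println(name.getKeyProperty("request") + " -> " + rate + " req/s");
            }
        }
    }
}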
Thanks,
Alexandre
[1] https://kafka.apache.org/documentation/#monitoring
On Sun 16 Apr 2023 at 18:22,
Hi Fares,
What is the rate of offset commits for the group?
How often do you need to commit offsets for consumers in this group?
Thanks,
Alexandre
On Tue 9 May 2023 at 18:34, Fares Oueslati wrote:
>
> Hello Richard,
>
> Thank you for your answer.
>
> Upon examining the `__consumer_offsets` t