I was under the impression that these settings
log.flush.interval.messages=1
log.flush.interval.ms=0
guarantee a synchronous fsync for every message, i.e. when the producer receives
an ack for a message, it is guaranteed to have been persisted to as many disks
as min.insync.replicas requires.
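For reference, a hedged sketch of the settings being discussed; the
min.insync.replicas value of 2 and the producer line are assumptions added for
context, not something stated in this thread:

# broker (server.properties) - per-message flush as described above
log.flush.interval.messages=1
log.flush.interval.ms=0
min.insync.replicas=2

# producer - wait for acknowledgement from all in-sync replicas
acks=all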
As I found out from the logs, for those applications whose Processors *are
initialized*, the following is logged:
INFO AbstractCoordinator:677 - [Consumer
clientId=sk5-client-StreamThread-1-consumer, groupId=sk5-appid]
Discovered group coordinator myserver:6667 (id: 2147482646 rack: null)
>
> INFO Consu
Hi Eugen,
The first line of config, log.flush.interval.messages=1, will make Kafka
force an fsync(2) for every produce request.
The second line of config is not sufficient for periodic flush; you
also need to update log.flush.scheduler.interval.ms, which is Long.MAX_VALUE
by default (in which case period-
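A hedged sketch of the broker config for periodic flushing; the 5000 ms values
are purely illustrative, not from this thread:

# server.properties - flush dirty log segments roughly every 5 seconds
log.flush.interval.ms=5000
# the background flusher must also run more often than its Long.MAX_VALUE default
log.flush.scheduler.interval.ms=5000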
This change will require a brief interruption for services depending on the
current zookeeper — but only for the amount of time it takes the service on the
original zookeeper to restart. Here’s the basic process:
1. Provision two new zookeeper hosts, but don’t start the service on the new
hosts (a sketch of the resulting ensemble config follows below).
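For illustration only, once both new hosts are provisioned the ensemble config
(zoo.cfg) on all three machines might end up looking like this; host names and
ids are placeholders, not from this thread:

# zoo.cfg - the expanded three-node ensemble
server.1=zk-original:2888:3888
server.2=zk-new-1:2888:3888
server.3=zk-new-2:2888:3888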
Hi All,
Is there a recommended way of passing state stores around between different
classes? The problem is that a state store can be fetched only if you have
access to the context, and in most scenarios the lookup of the state store
happens somewhere inside another class. I can think of two options. Either
add st
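One common pattern, sketched below with hypothetical names (store "my-store",
class CountingService), is to fetch the store once in init() and hand only the
store to the collaborating class, rather than passing the context around:

import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

// Hypothetical collaborator that only sees the store, never the context.
class CountingService {
    private final KeyValueStore<String, Long> store;
    CountingService(KeyValueStore<String, Long> store) { this.store = store; }
    void increment(String key) {
        Long current = store.get(key);
        store.put(key, current == null ? 1L : current + 1);
    }
}

class MyProcessor extends AbstractProcessor<String, String> {
    private CountingService service;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        super.init(context);
        // Look the store up once here, then pass only the store onward.
        KeyValueStore<String, Long> store =
                (KeyValueStore<String, Long>) context.getStateStore("my-store");
        this.service = new CountingService(store);
    }

    @Override
    public void process(String key, String value) {
        service.increment(key);
    }
}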
Hello,
I am trying out Mirror Maker 2 on my Kafka 2.4 cluster for DR purposes. I have
created a dedicated cluster for the DR. MM2 seems to be working fine, but I am
not sure how I would be able to produce to a topic in a DR scenario.
Current scenario: let's say I have a topic called "mytopic"
Hello, take a look at RemoteClusterUtils.translateOffsets, which will give
you the correct offsets _and topics_ to subscribe to. The method
automatically renames the topics according to the ReplicationPolicy. You
can leverage this method in your consumer itself or in external tooling.
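A rough sketch of how that could look on the consumer side; the cluster alias
"primary", the group id, and the bootstrap servers are assumptions, not
something this thread specifies:

import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.mirror.RemoteClusterUtils;

public class DrFailoverSketch {
    public static void main(String[] args) throws Exception {
        // Connection info for the DR (target) cluster; host name is a placeholder.
        Map<String, Object> mm2Props = new HashMap<>();
        mm2Props.put("bootstrap.servers", "dr-broker:9092");

        // Translate the committed offsets of group "my-group" from the source
        // cluster (MM2 alias "primary") into the renamed topic-partitions on
        // the DR cluster, e.g. "primary.mytopic".
        Map<TopicPartition, OffsetAndMetadata> translated =
                RemoteClusterUtils.translateOffsets(
                        mm2Props, "primary", "my-group", Duration.ofSeconds(30));

        Map<String, Object> consumerProps = new HashMap<>(mm2Props);
        consumerProps.put("group.id", "my-group");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.assign(translated.keySet());
            translated.forEach((tp, offset) -> consumer.seek(tp, offset.offset()));
            // ... poll() from the translated positions ...
        }
    }
}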
Ryanne
On S
Thanks Alexandre
So if I understand you correctly, the following configuration guarantees that
when the producer receives an ack, an fsync() for the message in question has
successfully completed. Is that correct?
log.flush.interval.messages=1
log.flush.interval.ms=0
log.
Hi Peter,
That was a great explanation.
However, I have a question about the last stage, where you mentioned updating
the zookeeper server in the services where a single zookeeper is used.
Why do I need to do that?
Is it because only a single zookeeper is used and you want to ensure high
availability of
With a single zk in your zookeeper connect string, broker restarts are
vulnerable to a single point of failure. If that zookeeper is offline, the
broker will not start. You want at least two zookeepers in the connect string —
it’s the same reason you should put more than one kafka broker in clie
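For example (host names are placeholders), the broker side would look like:

# server.properties
zookeeper.connect=zk-1:2181,zk-2:2181,zk-3:2181

and, by the same logic, clients should list more than one broker:

# producer/consumer config
bootstrap.servers=broker-1:9092,broker-2:9092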