Hey,
I think this is a known issue in Kafka 2.1.0. Check this out:
https://issues.apache.org/jira/browse/KAFKA-7697
It has been fixed in 2.1.1.
On Wed, Mar 13, 2019 at 12:25 AM Joe Ammann wrote:
> Yes, to the best of our knowledge. We had the option to take the cluster
> down on Sat, so we stopp
Yes, to the best of our knowledge. We had the option to take the cluster down
on Sat, so we stopped everything, upgraded the software, set the inter-broker
protocol and message format versions to 2.1, and restarted. On Sun we did
another rolling restart of all 4 brokers.
All looked good after that (and we had done th
When you upgraded, did you follow the rolling upgrade instructions from the
Kafka site? And change the message version etc. as described on the site?
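(For reference, a minimal sketch of the two broker settings the rolling-upgrade procedure on the Kafka site revolves around, using the version numbers mentioned in this thread; consult the official upgrade notes for the authoritative steps:)

```properties
# Step 1: before touching the binaries, pin both versions to the old release
inter.broker.protocol.version=0.10.2
log.message.format.version=0.10.2

# Step 2: after all brokers run the new 2.1 binaries, bump the protocol
# version and do a second rolling restart
inter.broker.protocol.version=2.1

# Step 3: once all clients are upgraded, bump the message format as well
log.message.format.version=2.1
```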
Thanks,
On Tue, 12 Mar 2019 at 23:20, Joe Ammann wrote:
> Hi all
>
> last weekend we have upgraded a small cluster (4 brokers) from 0.10.2.1
> to 2.1.
Hi all
last weekend we have upgraded a small cluster (4 brokers) from 0.10.2.1
to 2.1.0. Everything seemed to go well on the weekend.
For 2 nights in a row we now had a strange behaviour on one of the 4
nodes. At almost exactly 00:08 on both nights, 1 out of the 4 brokers
stopped writing anything
Hi ManiKumar
Can you suggest any Java API snippet that does this? I tried
something like this but could not get it to work. I am seeking help only
after trying everything that I could. Breaking backward compatibility is
tough for many customers like us, unless there is a better and a new wa
Hi Franz,
The MirrorMaker instances are colocated with the brokers, yes. These are beefy,
dedicated hosts that are handling the loads admirably.
The core cluster receives about 400k msg/sec, 1GB/sec across 20 topics at peak
times. CPU usage occasionally crosses 50% during peak times. If I find
Hi Peter,
these are remarkable numbers, but to be honest I do not quite see where you
run the MirrorMaker processes.
Do you run them near the remote clusters or near the target (core?) datacenter
cluster?
As I understand it, you run 30 MirrorMaker instances (one for each remote cluster)
on each of the
Hi Ryanne,
thanks. The remark about the ACK is a good and useful hint.
Do you also know why the Mirror Maker uses only one Producer and not one
Producer per Consumer?
Kind regards,
Franz
Sent: Tuesday, March 12, 2019 at 14:42
From: "Ryanne Dolan"
To: "Kafka Users"
Subject: Re: Ka
I have a setup with about 30 remote kafka clusters and one cluster in a core
datacenter where I aggregate data from all the remote clusters. The remote
clusters have 30 nodes each with moderate specs. The core cluster has 100 nodes
with lots of cpu, ram, and ssd storage per node.
I run MirrorMa
Franz, you can run MM on or near either the source or target cluster, but it's
more efficient near the target because this minimizes producer latency. If
latency is high, producers will block waiting on ACKs for in-flight records,
which reduces throughput.
I recommend running MM near the target cluster
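Ryanne's point about ACKs and latency can be illustrated with the standard Kafka producer configs that matter most over a high-latency link. This is only a sketch (the class name and broker address are made up, and these are generic producer settings, not the actual MirrorMaker configuration discussed here):

```java
import java.util.Properties;

public class MirrorProducerConfig {

    // Hypothetical helper: producer settings that soften the impact of a
    // high round-trip time between MirrorMaker and the target cluster.
    public static Properties highLatencyTuning() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "target-cluster:9092"); // assumed address
        props.put("acks", "all");          // durable, but each batch waits a full RTT
        props.put("max.in.flight.requests.per.connection", "5"); // keep the pipe full
        props.put("linger.ms", "100");     // batch longer to amortize the RTT
        props.put("batch.size", "262144"); // larger batches per request
        props.put("compression.type", "lz4"); // fewer bytes across the WAN
        return props;
    }

    public static void main(String[] args) {
        System.out.println(highLatencyTuning().getProperty("acks"));
    }
}
```

The trade-off is the one Ryanne describes: with `acks=all`, every in-flight batch is held until the target cluster acknowledges it, so the higher the producer-to-target latency, the more throughput you lose, which is why placing MM near the target side of the link helps.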
Hi all,
there are best practices out there which recommend running the MirrorMaker on
the target cluster.
https://community.hortonworks.com/articles/79891/kafka-mirror-maker-best-practices.html
I wonder why this recommendation exists because ultimately all data must cross
the border between the
There's already a pretty active Kafka Meetup group, 'Kafka Utrecht', which
has held meetups in Amsterdam and Rotterdam in the past.
On Tue, Mar 12, 2019 at 06:37, Antoine Laffez <
antoine.laf...@lunatech.nl> wrote:
> Hi!
> The workshop is an internal workshop given by a certified trainer on our staff.
> It is 2