Hi,
We have a Kafka cluster with three brokers (0.10.0.1). We access this
cluster using a domain name, say common.test. An L3 load balancer has
been configured for this domain name so that requests are passed to the
brokers in a round-robin fashion.
The producer has been implemented with the Java client (KafkaProducer)…
Apologies for the rather long set-up...
I've been using Kafka as a client for a few months now. The setup I've
been using has three brokers on separate servers, all listening on
port 9092. My consumers have always connected to server1:9092; I've
ignored server2 and server3.
Now I'm starting to mess around…
You are connecting to a single seed node; your Kafka library will then,
under the hood, connect to the partition leaders for each partition you
subscribe or post to.
The load is no different than if you gave all of the nodes as the
connect parameter. However, if your seed node crashes, then your client
will be unable to bootstrap new connections…
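For resilience it is common to list several brokers in
bootstrap.servers; the client only needs one of them to be reachable in
order to discover the rest of the cluster. A minimal sketch, assuming
the server1/server2/server3 hosts from this thread and a made-up topic
name:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class BootstrapExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Several seed nodes: any one of them being up at startup is
        // enough for the client to fetch full cluster metadata.
        props.put("bootstrap.servers",
                "server1:9092,server2:9092,server3:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records are sent directly to the leader of their partition,
            // regardless of which seed node answered the metadata request.
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}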
You should also be able to manage this with a compacted topic. If you
give each message a unique key, you'd then be able to delete or
overwrite specific records; Kafka will delete them from disk when
compaction runs. If you need to partition for ordering purposes, you'd
need to use a custom partitioner…
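To make the delete/overwrite idea concrete: on a topic configured with
cleanup.policy=compact, producing a record with a null value (a
tombstone) marks that key for removal. A minimal sketch; the topic and
key names are illustrative, not from this thread:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class TombstoneExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Overwrite: compaction keeps only the latest value per key.
            producer.send(new ProducerRecord<>("users", "user-42", "{\"name\":\"new\"}"));
            // Delete: a null value is a tombstone; the key disappears
            // from disk once the log cleaner has run past it.
            producer.send(new ProducerRecord<>("users", "user-42", null));
        }
    }
}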
Hi Kafka users and devs,
I have a Kafka cluster on AWS EMR instances, using the ZooKeeper that
comes with EMR.
The problem is that after some days of running and working well, the
cluster shuts down; every node shuts down, showing log messages like
the one below.
This has happened 3 times in the last two months…
If you create a partitioned topic with at least 3 partitions, then you
will see your client connect to all of the brokers. The client decides
which partition a message should go to and then sends it directly to
the broker that is the leader for that partition. If you have
replicated topics, the…
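To see this spreading in practice, create a topic whose partition
leaders land on different brokers. A hedged sketch using the Java
AdminClient (available from 0.11; the topic name and replication factor
are assumptions for illustration):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreatePartitionedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "server1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 2: with three brokers the
            // partition leaders spread out, so a producer ends up opening
            // connections to all of them.
            NewTopic topic = new NewTopic("orders", 3, (short) 2);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}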
On 11/27/17, 8:36 PM, "Matthias J. Sax" wrote:
Not sure where exactly you copied this from. However, the second
paragraph here https://kafka.apache.org/documentation/#upgrade_10_2_0
explains:
> Starting with version 0.10.2, Java clients (producer and consumer)
have acquired the ability to communicate with older brokers…
Hi there,
I would like to know if it is a bad idea to use Apache Kafka, ZooKeeper,
and NiFi on small hardware like a box PC with:
CPU: Celeron J1900 or Atom E3845
RAM: 4 GB
SSD: 32 GB
How often will Apache Kafka write to disk, since the write cycles are
limited on an SSD?
If there is no problem…
Hello!
We tried to migrate data from a 0.10.2.1 cluster to 0.11.0.2. First, we
spread topics across both clusters. There were lots of problems and
restarts of some nodes in both clusters (we probably shouldn't have
done that). All this ended up in a state where we had lots of
exceptions from 2 nodes of…
Upgrading brokers without upgrading clients was always supported :)
Since 0.10.2, it also works the other way round.
-Matthias
On 11/28/17 7:33 AM, Brian Cottingham wrote:
> On 11/27/17, 8:36 PM, "Matthias J. Sax" wrote:
>
> Not sure where exactly you copied this from. However, the second paragraph here
> https://kafka.apache.org/documentation/#upgrade_10_2_0 explains: …
Excellent, thanks very much for the help!
On 11/28/17, 11:43 AM, "Matthias J. Sax" wrote:
Upgrading brokers without upgrading clients was always supported :)
Since 0.10.2, it also works the other way round.
-Matthias
On 11/28/17 7:33 AM, Brian Cottingham wrote:
> On…
Sameer,
Thanks for reporting; it looks similar to a ticket we resolved some
time ago (https://issues.apache.org/jira/browse/KAFKA-5154). Note that
we added a check to avoid the NPE and instead throw a more meaningful
exception message, but the root cause may be the same.
If your Kafka Streams…
Sorry for being late to the party, but congratulations Onur!
On Wed, Nov 8, 2017 at 1:47 AM, Sandeep Nemuri wrote:
> Congratulations Onur!!
>
> On Wed, Nov 8, 2017 at 9:19 AM, UMESH CHAUDHARY
> wrote:
>
> > Congratulations Onur!
> >
> > On Tue, 7 Nov 2017 at 21:44 Jun Rao wrote:
> >
> > > Af…
Hi,
We've recently hit an issue that is marked as "resolved" in the 0.10
branch but has never been released [1]. There is no known workaround
for the problem.
Upgrading our cluster to a 0.11 version is certainly an option, but a
risky one, given that it could introduce more bugs (especially since…
Hey all,
So, I'm curious to hear how others have solved this problem.
We've got quite a few brokers, and rolling all of them to pick up new
configuration (which consists of triggering a clean shutdown, then
restarting the service and waiting for replication to catch up before
moving on) ultimately takes a long time…
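One way people automate the "wait for replication to catch up" step is
to poll for under-replicated partitions between restarts and only move
on once every ISR is back to full size. A rough sketch with the Java
AdminClient; the broker address and polling interval are assumptions,
not anything from this thread:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class WaitForIsr {
    // Blocks until every partition's ISR matches its replica list,
    // i.e. it should be safe to restart the next broker.
    static void waitUntilFullyReplicated(AdminClient admin) throws Exception {
        while (true) {
            boolean underReplicated = false;
            for (String topic : admin.listTopics().names().get()) {
                TopicDescription d = admin
                        .describeTopics(Collections.singleton(topic))
                        .all().get().get(topic);
                for (TopicPartitionInfo p : d.partitions()) {
                    if (p.isr().size() < p.replicas().size()) {
                        underReplicated = true;
                    }
                }
            }
            if (!underReplicated) {
                return;
            }
            Thread.sleep(5_000); // poll every 5 seconds; tune to taste
        }
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            waitUntilFullyReplicated(admin);
            System.out.println("Fully replicated; safe to roll the next broker.");
        }
    }
}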
How does kafka log compaction work?
Does it compact all of the log files periodically against new changes?
There is quite a nice section on this in the documentation -
http://kafka.apache.org/documentation/#compaction ... I think it should
answer your questions.
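In short: the log cleaner periodically recopies older log segments,
keeping only the most recent record for each key (and dropping keys
whose latest record is a null tombstone). A hedged sketch of creating a
topic that opts into compaction; the topic name and configs are
illustrative:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            Map<String, String> configs = new HashMap<>();
            // Keep the newest value per key instead of deleting by age.
            configs.put("cleanup.policy", "compact");
            // Start cleaning once 50% of a log is "dirty" (the default).
            configs.put("min.cleanable.dirty.ratio", "0.5");

            NewTopic topic = new NewTopic("user-profiles", 1, (short) 1)
                    .configs(configs);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}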
On Wed, Nov 29, 2017 at 7:19 AM, Kane Kim wrote:
> How does kafka log compaction work?
> Does it compact all of the log files periodically a…