Hi,
I have 2 Kafka Connect instances running on 2 boxes, which together form a Kafka
Connect cluster. One of the instances seems to be rebalancing repeatedly in an
endless loop without ever running the actual sink task; the other works fine.
The following is the output message from the console.
May I ask if you have
Hi ,
I am running into an issue while producing and consuming messages. I have just one
Kafka broker and one ZooKeeper node, so I have set the replication factor of all
topics to 1.
I don't know where it is getting a replication factor of 3 from.
In my server.properties file I have already defined -:
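In case it helps, a hedged example of the single-broker server.properties values
that most often cause a replication factor of 3 to be requested for internal
topics (the exact keys involved depend on the Kafka version):
  offsets.topic.replication.factor=1
  transaction.state.log.replication.factor=1
  transaction.state.log.min.isr=1
  default.replication.factor=1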
I have two related topics, a and b, and want to listen to them, but
commit the appropriate offset in each only when I've processed a pair
of corresponding messages from each. I see that the commit() method (I
use the Python API, FWIW) takes an optional dict, but it's not clear
to this simple guy ex
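A dict-based commit() matches the kafka-python client; below is a minimal sketch
under that assumption. The topic names, partitions and the commit_pair helper are
illustrative, and newer kafka-python versions may also expect a leader_epoch
field in OffsetAndMetadata.
  from kafka import KafkaConsumer, TopicPartition
  from kafka.structs import OffsetAndMetadata

  consumer = KafkaConsumer(bootstrap_servers="localhost:9092",
                           group_id="pair-processor",
                           enable_auto_commit=False)
  consumer.assign([TopicPartition("a", 0), TopicPartition("b", 0)])

  def commit_pair(msg_a, msg_b):
      # Commit both topics in one call, only after a corresponding pair
      # has been processed. The committed offset is the *next* offset to
      # read, hence the +1.
      consumer.commit({
          TopicPartition(msg_a.topic, msg_a.partition):
              OffsetAndMetadata(msg_a.offset + 1, None),
          TopicPartition(msg_b.topic, msg_b.partition):
              OffsetAndMetadata(msg_b.offset + 1, None),
      })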
I think the first solution is what I need. Your second proposal also looks fine,
but I don't like the idea of keeping an additional ledger.
Thanks, Svante! I appreciate your help.
Sincerely,
Tomasz Kopacki
DevOps Engineer @ Nokia
-Original Message-
From: Svante Karlsson [mailto:svante.karls...@csi.
Hi,
Closing this vote thread, as there will be another RC.
Thanks,
Damian
On Mon, 19 Mar 2018 at 23:47 Ismael Juma wrote:
> Vahid,
>
> The Java 9 Connect issue is similar to the one being fixed for Trogdor in
> the following PR:
>
> https://github.com/apache/kafka/pull/4725
>
> We need to do
alt1)
If you can store a generation counter in the value of the "latest value"
topic, you could do the following:
topic latest_value, key = [id]
topic full_history, key = [id, generation]
On delete, read latest_value.generation_counter and issue deletes (tombstones)
on full_history for key = [id, 0..generation_counter].
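A minimal sketch of alt1's delete step, assuming the kafka-python client and
JSON-encoded composite keys; the topic names and the delete_resource helper are
illustrative only.
  import json
  from kafka import KafkaProducer

  producer = KafkaProducer(bootstrap_servers="localhost:9092")

  def delete_resource(resource_id, latest_generation):
      # Tombstone the current-state entry in the compacted latest_value topic.
      producer.send("latest_value",
                    key=json.dumps([resource_id]).encode(),
                    value=None)
      # Tombstone every historical entry [id, 0..generation] in full_history.
      for gen in range(latest_generation + 1):
          producer.send("full_history",
                        key=json.dumps([resource_id, gen]).encode(),
                        value=None)
      producer.flush()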
alt2
Sorry, I was wrong. For the history topic, we can use a regular topic with a
sufficient retention period.
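For example, a hedged sketch of giving such a history topic a long retention
period (the topic name and retention value are illustrative; newer Kafka
versions take --bootstrap-server instead of --zookeeper):
  bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type topics --entity-name full_history \
    --add-config retention.ms=31536000000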
Maybe others can give more ideas.
On Wed, Mar 21, 2018 at 3:34 PM, Kopacki, Tomasz (Nokia - PL/Wroclaw) <
tomasz.kopa...@nokia.com> wrote:
> Do you mean I can use tombstone if my clean policy is 'del
Do you mean I can use a tombstone if my cleanup policy is 'delete' and it will
still work?
Sincerely,
Tomasz Kopacki
DevOps Engineer @ Nokia
-Original Message-
From: Manikumar [mailto:manikumar.re...@gmail.com]
Sent: Wednesday, March 21, 2018 11:03 AM
To: users@kafka.apache.org
Subject: Re: lo
Not sure if I understood the requirement correctly. One option is to use two
compacted topics: one for the current state of the resource
and one for the history, and use tombstones whenever you want to clear them.
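A hedged sketch of creating that pair of compacted topics (the topic names,
partition counts and ZooKeeper address are illustrative; newer Kafka versions
take --bootstrap-server):
  bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic resource_state \
    --partitions 3 --replication-factor 1 --config cleanup.policy=compact
  bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic resource_history \
    --partitions 3 --replication-factor 1 --config cleanup.policy=compact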
On Wed, Mar 21, 2018 at 2:53 PM, Kopacki, Tomasz (Nokia - PL/Wroclaw) <
tomasz.kopa..
Almost,
Consider this example:
|R1|R2|R1|R3|R2|R4|R4|R1|R2| <- this is an example of a stream where RX
represents updates of a particular resource. I need to keep the history of
changes for all the resources, but only for as long as a resource is alive. If a
resource expires/dies, I'd like to remove i
We can enable both compaction and retention for a topic by
setting cleanup.policy="delete,compact"
http://kafka.apache.org/documentation/#topicconfigs
Does this handle your requirement?
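As a hedged example, applying both policies to an existing topic might look like
this (the topic name and retention value are illustrative; newer Kafka versions
take --bootstrap-server instead of --zookeeper):
  bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
    --entity-type topics --entity-name resource_events \
    --add-config cleanup.policy=[delete,compact],retention.ms=604800000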
On Wed, Mar 21, 2018 at 2:36 PM, Kopacki, Tomasz (Nokia - PL/Wroclaw) <
tomasz.kopa...@nokia.com> wrote:
> Hi,
Hi,
I've recently been exploring log handling in Kafka and I wonder if/how I can
mix log compaction with log rotation.
A little background first:
I have an application that uses Kafka topics as a backend for event sourcing.
Messages represent changes of state of my 'resources'. Each resource ha