Hi everyone,
I am using Camel Kafka inside Apache Karaf and producing
messages to the Kafka server. I have multiple interfaces configured in my
system, and sending (producing) messages always happens over the first
interface, *managmentserver1_local_interface*. Same thing i t
Bruno,
So, essentially, we are just waiting on the processing of the first event
that got an error before moving on to the next one.
Second, if the application handles storing the events in a state store for
retry, Kafka Streams would essentially commit the offsets of those events,
so the next event will be
Oleg, yes you can run multiple MM2s for multiple DCs, and generally that's
what you want to do. Are you using Connect to run MM2, or the
connect-mirror-maker.sh driver?
Ryanne
On Mon, Sep 21, 2020, 3:38 PM Oleg Osipov
wrote:
> I use this configuration for MM2 for both datacentres'
> clusters:
>
I use this configuration for MM2 for both datacentres'
clusters:
- {"name": "dc1", "bootstrapServers": ip1}
- {"name": "dc2", "bootstrapServers": ip2}
Do you mean I need to use additional names besides 'dc1' and 'dc2'?
On 2020/09/21 17:27:50, nitin agarwal wrote:
> Did you keep the cluste
Did you keep the cluster name the same? If yes, then it will cause a
conflict in the metadata stored in MM2's internal topics.
Thanks,
Nitin
On Mon, Sep 21, 2020 at 10:36 PM Oleg Osipov
wrote:
> Hello!
>
> I have two datacenters, DC1 and DC2. When I deploy MM2 in DC1 or DC2, all
> things look correct. I
Hello!
I have two datacenters, DC1 and DC2. When I deploy MM2 in DC1 or DC2, all
things look correct. I can create a topic, and this topic will be
synchronized with the other datacenter. In this case, I have only one
mirror maker. But I want to deploy MM2 in each DC. So after I've done this,
newly c
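For reference, this is a minimal sketch of the two-cluster, active/active
configuration the connect-mirror-maker.sh driver takes as a properties
file; the host names here are placeholders, not taken from this thread:

    # one alias per cluster; a deployment's aliases must stay consistent
    clusters = dc1, dc2
    dc1.bootstrap.servers = dc1-broker:9092
    dc2.bootstrap.servers = dc2-broker:9092
    # replicate topics in both directions
    dc1->dc2.enabled = true
    dc2->dc1.enabled = true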
Pushkar,
I don't know if this meets your needs, but I recently implemented
something similar in Samza (which I would classify as a hack, but it
works); my solution included the following (rough sketch after the list):
- check for the pause condition on each message
- if the condition is met, go into a while-true-sleep loop
- alternate
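A minimal Java sketch of that pause loop; pauseConditionMet() and
process() are hypothetical placeholders, not Samza or Kafka APIs:

    // Block on the current event until the pause condition clears.
    while (pauseConditionMet(event)) {
        try {
            Thread.sleep(1000L); // back off before re-checking
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore interrupt status
            return;
        }
    }
    process(event); // condition cleared, handle the event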
Hi Pushkar,
If you want to keep the order, you could still use the state store I
suggested in my previous e-mail and implement a queue on top of it. For
that you need to put the events into the store with a key that
represents the arrival order of the events. Each time a record is
received fr
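A hedged Java sketch of that queue-on-a-state-store idea; the Event type,
the constructor wiring, and the sequence-counter handling are my own
assumptions, not from this thread:

    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.KeyValueStore;

    // FIFO queue on a KeyValueStore, keyed by a monotonically increasing
    // sequence number (arrival order). With the default Long serde,
    // non-negative keys iterate in numeric order on a persistent store.
    class EventQueue {
        private final KeyValueStore<Long, Event> store; // from ProcessorContext
        private long nextSeq; // NOTE: recover this from the store on restart

        EventQueue(KeyValueStore<Long, Event> store) {
            this.store = store;
        }

        void enqueue(Event event) {
            store.put(nextSeq++, event); // append at the tail
        }

        Event peekOldest() {
            try (KeyValueIterator<Long, Event> it = store.all()) {
                return it.hasNext() ? it.next().value : null; // queue head
            }
        }
    }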
Iftach, I think I've found a bug. I tested this fix in my infra and it
worked fine. Putting a PR upstream for review:
https://github.com/apache/kafka/pull/9313.
On Wed, Sep 16, 2020 at 1:26 PM Iftach Ben-Yosef
wrote:
> Hey Samuel,
>
> I am facing a similar issue with the partitions.assignment.strat
Hi Pushkar,
This question is better suited for
https://groups.google.com/g/confluent-platform since the Schema Registry
is part of the Confluent Platform but not of Apache Kafka.
Best,
Bruno
On 21.09.20 16:58, Pushkar Deole wrote:
Hi All,
Wanted to understand a bit more on the schema regis
Hi All,
Wanted to understand a bit more on the schema registry provided by
Confluent.
Following are the queries:
1. Is the schema registry provided by Confluent built on top of Apache
Kafka?
2. If a managed Kafka service is used in the cloud, e.g. Aiven Kafka, then
does the schema registry implemen
Bruno,
1. the loading of the topic mapped to the GlobalKTable is done by some
other service/application, so when my application starts up, it will just
sync the GlobalKTable against that topic; if that other service/application
is still starting up, then it may not have loaded that data onto the topic
and t
Thank you for clarifying! Now, I think I understand.
You could put events for which the required data in the global table is
not yet available into a state store, and each time an event from the input
topic is processed, you could look up all events in your state store and
see if the required data is now av
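A rough Java sketch of that pattern; buffer (the state store),
requiredDataFor() (the global-table lookup), and forward() (downstream
processing) are illustrative stand-ins, not from the thread:

    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.state.KeyValueIterator;

    // Called for each input event: first retry everything buffered
    // earlier, then process (or buffer) the current event.
    void process(String key, Event value) {
        try (KeyValueIterator<String, Event> it = buffer.all()) {
            while (it.hasNext()) {
                KeyValue<String, Event> kv = it.next();
                if (requiredDataFor(kv.value) != null) { // data has arrived
                    forward(kv.value);
                    buffer.delete(kv.key);
                }
            }
        }
        if (requiredDataFor(value) != null) {
            forward(value); // required data present, process immediately
        } else {
            buffer.put(key, value); // park until the global table catches up
        }
    }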
Say the application-level exception is named
MeasureDefinitionNotAvaialbleException.
What I am trying to achieve is: in the above case, when event processing
fails because required data is not available, the streams application should
not proceed to the next event; it should wait for the processing of c
It is not a Kafka Streams error, it is an application-level error, e.g.,
some data required for processing an input event is not available in the
GlobalKTable since it is not yet synced with the global topic.
On Mon, Sep 21, 2020 at 4:54 PM Bruno Cadonna wrote:
> Hi Pushkar,
>
> Is the error y
Hi Pushkar,
Is the error you are talking about, one that is thrown by Kafka Streams
or by your application? If it is thrown by Kafka Streams, could you
please post the error?
I do not completely understand what you are trying to achieve, but maybe
max.task.idle.ms [1] is the configuration yo
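In case it helps, setting that config in Java looks roughly like this; the
application id and bootstrap servers are placeholders:

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");         // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
    // How long a task stays idle when some of its input partitions have
    // no buffered records, to reduce out-of-order processing.
    props.put(StreamsConfig.MAX_TASK_IDLE_MS_CONFIG, 2000L);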
Hi,
I would like to know how to handle the following scenarios while processing
events in a Kafka Streams application:
1. the streams application needs data from a GlobalKTable, which loads it
from a topic that is populated by some other service/application. So, if
the streams application starts getti