Hey Ryanna,
Could the `max.in.flight.requests.per.connection=1` parameter help to
prevent the "slightly out-of-order records"?
Or is there any workaround for that? Duplicates are fine for me, but I'd
like to have the same order of messages too.
Can you please add some more detail about why those "slightly out-of-order
records" can occur?
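
For reference, a minimal sketch of the producer settings in question; the
broker address is a placeholder, and how these reach MM2's internal producer
depends on your worker configuration:

import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

Properties producerProps = new Properties();
producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
// Idempotence keeps per-partition order across retries without limiting
// in-flight requests to 1.
producerProps.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
// The stricter setting asked about above:
producerProps.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");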
Hi Liam,
Thank you very much for the suggestion.
Thanks
C Suresh
On Thursday, April 23, 2020, Liam Clarke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:
> Hi Suresh,
>
> A Topology can contain several processing workflows. So just create two
> workflows in the topology builder.
>
> StreamsBuilder sb = new StreamsBuilder();
Looks like MM2 ships with Kafka 2.4. If our brokers are still running an older
Kafka version (2.3), can MM2 running the 2.4 code work with brokers running the
2.3 code?
Thanks.
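
For reference, a minimal mm2.properties sketch; the cluster aliases, broker
addresses, and topic pattern are placeholders, and it only illustrates pointing
a standalone MM2 (started with ./bin/connect-mirror-maker.sh mm2.properties) at
existing clusters:

clusters = source, target
source.bootstrap.servers = source-broker:9092
target.bootstrap.servers = target-broker:9092
source->target.enabled = true
source->target.topics = .*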
Hi Suresh,
A Topology can contain several processing workflows. So just create two
workflows in the topology builder.
StreamsBuilder sb = new StreamsBuilder();
sb.stream("sourceA")..to("sinkA");
sb.stream("sourceB")..to("sinkB");
Topology topology = sb.build();
KafkaStreams streams = KafkaStreams
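
Fleshing that out into a fuller sketch; the application id, bootstrap servers,
serdes, and the pass-through mapValues calls are placeholders for your real
processing:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "two-workflows-app"); // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

StreamsBuilder sb = new StreamsBuilder();
// Workflow A: source topic A -> process A -> target topic A
sb.<String, String>stream("sourceA").mapValues(v -> v /* process A */).to("sinkA");
// Workflow B: source topic B -> process B -> target topic B
sb.<String, String>stream("sourceB").mapValues(v -> v /* process B */).to("sinkB");

KafkaStreams streams = new KafkaStreams(sb.build(), props);
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));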
Hey Peter, Connect will need to support transactions before we can
guarantee the order of records in remote topics. We can guarantee that no
records are dropped or skipped, even during consumer failover/migration
etc, but we can still have duplicates and slightly out-of-order records in
the downstream topics.
Hi Team,
Greetings.
I have a use-case wherein I have to consume messages from multiple topics
using Kafka and process it using Kafka Streams, then publish the message
to multiple target topics.
The example is below.
Source topic A - process A - target topic A
Source topic B - process B - target topic B
Hello there,
I'm facing a challenge at the moment where I need to somehow extend the
brokers so I can make decisions on the messages (whether they should be
discarded or not) on the server side. I've read the documentation but I
haven't seen a way of doing this, so I'm starting to look into the code.
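
Nothing in this thread points to a supported broker-side hook for dropping
records, so one client-side workaround, sketched here with hypothetical topic
names and a hypothetical predicate, is a small Streams app that copies the raw
topic to a filtered topic and has consumers read only the filtered one:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "filtering-copy"); // placeholder
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

StreamsBuilder sb = new StreamsBuilder();
// Records failing the predicate are dropped; everything else is copied on.
sb.<String, String>stream("events.raw")
  .filter((key, value) -> value != null && !value.isEmpty())
  .to("events.filtered");

KafkaStreams streams = new KafkaStreams(sb.build(), props);
streams.start();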
Ah, with you now. You'll also need to use the results of
AdminClient.listOffsets which takes TopicPartition objects as an argument.
On Wed, Apr 22, 2020 at 7:43 PM 一直以来 <279377...@qq.com> wrote:
> i use :
> private static void printConsumerGroupOffsets() throws
> InterruptedException, ExecutionException {
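
Putting the listConsumerGroupOffsets and listOffsets suggestions together, a
sketch of printing committed offset, log-end offset, and lag per partition; the
method name and bootstrap address are placeholders, "test" is the group from
further down the thread, and Admin.listOffsets requires a client new enough to
have it:

import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.stream.Collectors;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

private static void printConsumerGroupLag() throws InterruptedException, ExecutionException {
    Properties props = new Properties();
    props.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
    try (Admin admin = Admin.create(props)) {
        // Committed offsets for the group.
        Map<TopicPartition, OffsetAndMetadata> committed =
                admin.listConsumerGroupOffsets("test").partitionsToOffsetAndMetadata().get();

        // Log-end offsets for the same partitions.
        Map<TopicPartition, OffsetSpec> latest = committed.keySet().stream()
                .collect(Collectors.toMap(tp -> tp, tp -> OffsetSpec.latest()));
        Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> ends =
                admin.listOffsets(latest).all().get();

        committed.forEach((tp, om) -> {
            long end = ends.get(tp).offset();
            System.out.printf("%s committed=%d end=%d lag=%d%n",
                    tp, om.offset(), end, end - om.offset());
        });
    }
}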
Hey,
so KIP-382 mentions that:
"Partitioning and order of records is preserved between source and remote
topics."
is the ordering of messages (I guess only in a partition) something that is
actually implemented in 2.4 (or in 2.5)?
Or do I need to set `max.in.flight.requests.per.connection=1` ?
T
Yep, 0 is a non-null value, that is, the consumer group has a committed
offset, which in this case happens to be 0. The dash "-" implies an
unknown/null value.
In your example with your test topics - the earliest offset in the topic is
still 0, and there's not been much data through it, so maybe i
But the output shows both "0" and "-"; why the difference between the two values?
Thank you!
------
From: "Liam Clarke-Hutchinson"
I use:
private static void printConsumerGroupOffsets() throws
InterruptedException, ExecutionException {
Properties props = new Properties();
props.setProperty("bootstrap.servers",
"192.168.1.100:9081,192.168.1.100:9082,192.
Yep, looking at the source code of our app we use to track lag, we're using
that method.
On Wed, Apr 22, 2020 at 7:35 PM Liam Clarke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:
> Looking at the source code, try listConsumerGroupOffsets(String
> groupId, ListConsumerGroupOffsetsOptions options) instead?
Looking at the source code, try listConsumerGroupOffsets(String
groupId, ListConsumerGroupOffsetsOptions options) instead?
On Wed, Apr 22, 2020 at 6:40 PM 一直以来 <279377...@qq.com> wrote:
> ./kafka-consumer-groups.sh --bootstrap-server localhost:9081 --describe
> --group test
>
>
> use describeConsumerGroups