Hi Jonathan,
Just a quick update: I have not been able to reproduce the duplicates issue
with the 2.2 RC, even with a topology very similar to the one you included
in your Stack Overflow post.
I think we should treat this as a new bug. Would you mind opening a new
Jira bug ticket with some steps t
Ryanne, thank you!
Now it is clear why offsets on rc behave like this.
Tolya
On 5 March 2019, at 20:05, Ryanne Dolan <ryannedo...@gmail.com> wrote:
Tolya,
You mentioned that you are replicating "with internal topics", so I'd
expect the __consumer_offsets topic in the target cluste
Thanks Bill,
I have written up a ticket here:
https://issues.apache.org/jira/browse/KAFKA-8042
Adrian
On 05/03/2019, 15:44, "Bill Bejeck" wrote:
Hi Adrian,
No, it's not an expected outcome.
Could you file a Jira ticket and include the information requested by
Guozha
I’m not sure what your intention is here. Are you trying to do the quickstart, an
upgrade, or something else?
On Tue, 5 Mar 2019 at 18:02, luke_...@11h5.com wrote:
> 1. The ZooKeeper configuration file looks like this:
>
> tickTime=2000
> dataDir=/root/zookeeper
> clientPort=2181
>
> I
1. The ZooKeeper configuration file looks like this:
tickTime=2000
dataDir=/root/zookeeper
clientPort=2181
I started ZooKeeper in standalone mode, listening on the default port 2181.
Then I entered the command "cd /opt/kafka-2.1.1" in a shell window.
Next, I issued the command "cp -r
Hi,
I am trying to create "n" partitions for a single broker, and those should be
keyed partitions. I am able to push a message to my first keyed partition by
taking ((partition-size) - 1) in a custom partitioner class, so in this case
the first keyed partition will be the last parti
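As a rough illustration of that idea, here is a minimal custom partitioner sketch
(the class name and the "priority-key" value are hypothetical, not from the
original message): it routes a designated key to the last partition,
numPartitions - 1, and hashes every other key much like the default partitioner.

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Hypothetical sketch: send a designated key to the last partition
// (numPartitions - 1); hash all other keys across the partitions.
public class LastPartitionKeyPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (key != null && "priority-key".equals(key.toString())) { // assumed key
            return numPartitions - 1;                               // the "last" keyed partition
        }
        if (keyBytes == null) {
            return 0; // simplification for keyless records
        }
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}

The class would then be registered on the producer via the partitioner.class property.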
Tolya,
You mentioned that you are replicating "with internal topics", so I'd
expect the __consumer_offsets topic in the target cluster to include (at
least) the same records as the source cluster. MirrorMaker does not
translate offsets, so the downstream commits will be wrong if you try to
replica
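To make that mismatch visible, a rough sketch like the following (bootstrap
addresses and the group id are placeholders, not from the thread) reads a
group's committed offsets from both clusters with AdminClient so the source
and target commits can be compared side by side.

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CompareCommittedOffsets {

    // Fetch the committed offsets for a consumer group from one cluster.
    static Map<TopicPartition, OffsetAndMetadata> committed(String bootstrap, String group)
            throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        try (AdminClient admin = AdminClient.create(props)) {
            return admin.listConsumerGroupOffsets(group)
                        .partitionsToOffsetAndMetadata()
                        .get();
        }
    }

    public static void main(String[] args) throws Exception {
        Map<TopicPartition, OffsetAndMetadata> source = committed("source-cluster:9092", "my-group");
        Map<TopicPartition, OffsetAndMetadata> target = committed("target-cluster:9092", "my-group");
        source.forEach((tp, offset) -> {
            OffsetAndMetadata mirrored = target.get(tp);
            String targetOffset = mirrored == null ? "n/a" : String.valueOf(mirrored.offset());
            System.out.println(tp + " source=" + offset.offset() + " target=" + targetOffset);
        });
    }
}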
Hello, Ryanne, and thank you for your answer!
I am using idempotent producers. And you are right: I started replication after a
few days, and some of the source data had already been deleted (because of
retention) by that moment.
Still, I can't understand the logic behind kafka-consumer-groups. With console
Hi Adrian,
No, it's not an expected outcome.
Could you file a Jira ticket and include the information requested by
Guozhang (code and configs) and we can try to reproduce the error?
Thanks,
Bill
On Tue, Mar 5, 2019 at 10:14 AM Adrian McCague
wrote:
> Drilling down further:
>
> bash-4.2# pwd
>
Tolya,
That is the expected behavior. Offsets are not consistent between mirrored
clusters.
Kafka allows duplicate records ("at least once"), which means the
downstream offsets will tend to creep higher than those in the source
partitions. For example, if a producer sends a record but doesn't rec
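The retry scenario described here is the one that enable.idempotence guards
against within a single cluster; a minimal producer setup (addresses and topic
name are placeholders) might look like the sketch below. Note that idempotence
only deduplicates retries inside one cluster, so it does not make mirrored
offsets line up.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Broker-side deduplication of producer retries; implies acks=all.
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("example-topic", "key", "value")); // placeholder topic
        }
    }
}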
Drilling down further:
bash-4.2# pwd
/data/fooapp/0_7
bash-4.2# for dir in $(find . -maxdepth 1 -type d); do echo "${dir}: $(find
${dir} -type f -name 'MANIFEST-*' -printf x | wc -c)"; done
.: 8058
./KSTREAM-JOINOTHER-25-store: 851
./KSTREAM-JOINOTHER-40-store: 819
./KSTREAM-JOINT
Hello, guys!
I am not sure about the offsets replicated by MirrorMaker.
I am replicating data from one Kafka cluster (let's say cluster A, Confluent
Kafka 2.0) to another (cluster B, Confluent Kafka 2.1) with internal topics.
MirrorMaker lag is somewhere between 1k and 2k events.
I started replication a