Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-20 Thread Antony Stubbs
(Just my honest opinion.) I strongly oppose the suggested logos. I completely agree with Michael's analysis. The design appears to me to be quite random (regardless of the association of streams with otters) and clashes terribly with the embedded Kafka logo, making it appear quite unprofessional. I

Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-20 Thread Ben Stopford
Just adding my 2c. Whether "cute" is a good route to take for logos is pretty subjective, but I do think the approach can work. However, a logo being simple seems essential. This was echoed earlier in Robin's 'can it be shrunk' comment. Visually there's a lot going on in both of those images. I think simplif

RE: Kafka Streams Key-value store question

2020-08-20 Thread Pirow Engelbrecht
Hi Bill, Yes, that seems to be exactly what I need. I’ve instantiated this global store with: topology.addGlobalStore(storeBuilder, "KVstoreSource", Serdes.String().deserializer(), Serdes.String().deserializer(), this.config_kvStoreTopic, "KVprocessor", KeyValueProcessor::new); I’ve added a K
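A minimal self-contained sketch of that wiring, assuming Kafka Streams 2.x; the topic name and the KeyValueProcessor body are assumptions standing in for the poster's code. Note that a global store's builder must have logging disabled:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.processor.Processor;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    public class GlobalStoreSketch {
        // Hypothetical processor that copies every record from the topic into the store.
        static class KeyValueProcessor implements Processor<String, String> {
            private KeyValueStore<String, String> store;

            @Override
            @SuppressWarnings("unchecked")
            public void init(ProcessorContext context) {
                store = (KeyValueStore<String, String>) context.getStateStore("KvStore");
            }

            @Override
            public void process(String key, String value) {
                store.put(key, value);
            }

            @Override
            public void close() {}
        }

        public static void main(String[] args) {
            // Global stores must not have changelogging enabled.
            StoreBuilder<KeyValueStore<String, String>> storeBuilder =
                Stores.keyValueStoreBuilder(
                        Stores.inMemoryKeyValueStore("KvStore"),
                        Serdes.String(), Serdes.String())
                    .withLoggingDisabled();

            Topology topology = new Topology();
            topology.addGlobalStore(
                storeBuilder,
                "KVstoreSource",                // source node name
                Serdes.String().deserializer(), // key deserializer
                Serdes.String().deserializer(), // value deserializer
                "kv-store-topic",               // assumed topic (config_kvStoreTopic above)
                "KVprocessor",                  // processor node name
                KeyValueProcessor::new);        // keeps the store in sync with the topic

            System.out.println(topology.describe());
        }
    }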

Re: Kafka Streams Key-value store question

2020-08-20 Thread Nicolas Carlot
You need to set the auto offset reset to earliest; it uses latest by default: StreamsConfig.consumerPrefix(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG), "earliest" On Thu, 20 Aug 2020 at 13:58, Pirow Engelbrecht < pirow.engelbre...@etion.co.za> wrote: > Hi Bill, > > > > Yes, that seems to be exac
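For reference, a sketch of where that line goes, assuming a standard Properties-based Streams setup (the application id and bootstrap servers are placeholders); note Matthias's later reply about this not applying to global-store topics:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kv-store-app");       // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
    // Prefixes the setting so it reaches the consumers Streams creates internally:
    props.put(StreamsConfig.consumerPrefix(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG),
              "earliest");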

Re: Kafka Streams Key-value store question

2020-08-20 Thread Liam Clarke-Hutchinson
Hi Pirow, You can configure the auto offset reset for your stream source's consumer to "earliest" if you want to consume all available data when no committed offset exists. This will populate the state store on first run. Cheers, Liam Clarke-Hutchinson On Thu, 20 Aug. 2020, 11:58 pm Pirow Engelb

Kafka streams sink outputs weird records

2020-08-20 Thread Pirow Engelbrecht
Hello, I've got Kafka Streams up and running with the following topology:

    Sub-topology: 0
      Source: TopicInput (topics: [inputTopic])
        --> InputProcessor
      Processor: InputProcessor (stores: [KvStore])
        --> TopicOutput
        <-- TopicInput
      Source: KvInput (topics: [kvStoreTopic])
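A hedged reconstruction of a topology that would print roughly that description; the processor logic, output topic name, and store contents are assumptions, and the KvInput source (which likely feeds the global store from the other thread) is omitted:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.processor.AbstractProcessor;
    import org.apache.kafka.streams.state.Stores;

    public class TopologySketch {
        public static void main(String[] args) {
            Topology topology = new Topology();

            topology.addSource("TopicInput",
                Serdes.String().deserializer(), Serdes.String().deserializer(),
                "inputTopic");

            // Pass-through stand-in for the poster's InputProcessor.
            topology.addProcessor("InputProcessor",
                () -> new AbstractProcessor<String, String>() {
                    @Override
                    public void process(String key, String value) {
                        context().forward(key, value);
                    }
                },
                "TopicInput");

            // Attach the state store named in the description to the processor.
            topology.addStateStore(
                Stores.keyValueStoreBuilder(
                    Stores.inMemoryKeyValueStore("KvStore"),
                    Serdes.String(), Serdes.String()),
                "InputProcessor");

            topology.addSink("TopicOutput", "outputTopic",  // output topic assumed
                Serdes.String().serializer(), Serdes.String().serializer(),
                "InputProcessor");

            System.out.println(topology.describe());
        }
    }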

Re: Mirror Maker 2.0 Queries

2020-08-20 Thread Ryanne Dolan
Ananya, see responses below. > Can this number of workers be configured? The number of workers is not exactly configurable, but you can control it by spinning up drivers and using the '--clusters' flag. A driver instance without '--clusters' will run one worker for each A->B replication flow. So
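For context, a sketch of what that looks like on the command line (the cluster alias 'B' is an assumption); a driver launched without the flag runs workers for every replication flow:

    # Hypothetical: run a dedicated MM2 driver whose workers only target cluster B
    $ ./bin/connect-mirror-maker.sh connect-mirror-maker.properties --clusters B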

Re: Kafka streams sink outputs weird records

2020-08-20 Thread Bruno Cadonna
Hi Pirow, it's hard to have an idea without seeing the code that is executed in the processors. Could you please post a minimal example that reproduces the issue? Best, Bruno On 20.08.20 14:53, Pirow Engelbrecht wrote: Hello, I’ve got Kafka Streams up and running with the following topology:

Re: MirrorMaker 2.0 - Translating offsets for remote topics and consumer groups

2020-08-20 Thread Josh C
Thanks again Ryanne, I didn't realize that MM2 would handle that. However, I'm unable to mirror the remote topic back to the source cluster by adding it to the topic whitelist. I've also tried to update the topic blacklist and remove ".*\.replica" (since the blacklists take precedence over the whi
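A hypothetical connect-mirror-maker.properties excerpt for what is being attempted here; the cluster aliases are assumptions, and the last line shows the default blacklist with ".*\.replica" removed so remote topics can match the whitelist:

    clusters = A, B
    B->A.enabled = true
    # Whitelist the remote topics (the default replication policy names them "A.<topic>"):
    B->A.topics = A\..*
    # Default blacklist is ".*[\-\.]internal, .*\.replica, __.*":
    B->A.topics.blacklist = .*[\-\.]internal, __.*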

Re: Kafka Streams Key-value store question

2020-08-20 Thread Matthias J. Sax
`auto.offset.reset` does not apply for global-store-topics. At startup, the app will always "seek-to-beginning" for a global-store-topic, bootstrap the global store, and afterwards start the actual processing. However, no offsets are committed for global-store-topics. Maybe this is the reason w

Re: Mirror Maker 2.0 Queries

2020-08-20 Thread Ananya Sen
Thanks, Ryanne. That answers my questions. I was actually missing the "tasks.max" property. Thanks for pointing that out. Furthermore, as per the Mirror Maker 2.0 KIP, there are 3 types of connectors in a Mirror Maker cluster: 1. MirrorSourceConnector - focuses on replicating topic partitions

Re: Mirror Maker 2.0 Queries

2020-08-20 Thread Ryanne Dolan
> Can we configure tasks.max for each of these connectors separately? I don't believe that's currently possible. If you need fine-grained control over each Connector like that, you might consider running MM2's Connectors manually on a bunch of Connect clusters. This requires more effort to set up,
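In the dedicated driver this ends up as one top-level setting (a sketch; the value is arbitrary):

    # Shared by the MM2 connectors the driver creates; no per-connector override here
    tasks.max = 8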

Re: Mirror Maker 2.0 Queries

2020-08-20 Thread Ananya Sen
Thanks a lot, Ryanne. That was really very helpful. On Thu, Aug 20, 2020, 11:49 PM Ryanne Dolan wrote: > > Can we configure tasks.max for each of these connectors separately? > > I don't believe that's currently possible. If you need fine-grained control > over each Connector like that, you might

How to update configurations in MM 2.0

2020-08-20 Thread nitin agarwal
Hi All, If we run MM 2.0 in a dedicated MirrorMaker cluster, how do we update a few properties, like increasing the number of tasks or adding a topic to the whitelist or blacklist? What I have observed is that if we update the properties in connect-mirror-maker.properties and restart the MM 2.0 node
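For illustration, the kind of connect-mirror-maker.properties edits being described (topic names and values are made up):

    tasks.max = 4
    A->B.topics = orders.*, payments
    A->B.topics.blacklist = debug.*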

MirrorMaker2 for active-active setup

2020-08-20 Thread Anup Shirolkar
Hi Kafka users, I am trying to set up an active-active Kafka deployment in Docker with 2 MM2 instances. The setup logically looks like the diagram below. [image: Screen Shot 2020-08-20 at 2.39.13 pm.png] Each MM2 instance just copies data from the other Kafka cluster and so is unidirectional. Strangely, whe

Headers are used in producer records: how are a higher-version broker and lower-version clients compatible?

2020-08-20 Thread Yuguang Zhao
Hi all, I would like to ask some questions. Our company environment: broker version 1.0.1; producer kafka-client version 1.0.1; consumer kafka-client version 0.10.2.2. The producer sends records with headers; the consumer cannot consume these records and outputs a WARN log at `org.apac
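For reference, attaching a header on the producer side looks roughly like this (the topic, key, and header name are made up); record headers arrived with message format v2 in broker/client 0.11, which postdates the 0.10.2.2 consumer in this setup:

    import java.nio.charset.StandardCharsets;
    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");
    props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");

    KafkaProducer<String, String> producer = new KafkaProducer<>(props);
    ProducerRecord<String, String> record =
            new ProducerRecord<>("demo-topic", "key", "value");
    // Headers are byte arrays keyed by a string name:
    record.headers().add("trace-id", "abc123".getBytes(StandardCharsets.UTF_8));
    producer.send(record);
    producer.close();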