(just my honest opinion)
I strongly oppose the suggested logos. I completely agree with Michael's
analysis.
The design appears quite random to me (regardless of the association
of streams with otters) and clashes terribly with the embedded Kafka logo,
making it appear quite unprofessional. I
Just adding my 2c.
Whether "cute" is a good route to take for logos is pretty subjective, but
I do think the approach can work. However, a logo being simple seems. This
was echoed earlier in Robins 'can it be shrunk' comment. Visually there's a
lot going on in both of those images. I think simplif
Hi Bill,
Yes, that seems to be exactly what I need. I’ve instantiated this global store
with:
// Global store fed from this.config_kvStoreTopic, updated by KeyValueProcessor
topology.addGlobalStore(storeBuilder, "KVstoreSource",
    Serdes.String().deserializer(),   // key deserializer
    Serdes.String().deserializer(),   // value deserializer
    this.config_kvStoreTopic, "KVprocessor", KeyValueProcessor::new);
I’ve added a K
You need to set the auto offset reset to earliest; it uses latest as the
default.
StreamsConfig.consumerPrefix(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG),
"earliest"
On Thu, 20 Aug 2020 at 13:58, Pirow Engelbrecht <
pirow.engelbre...@etion.co.za> wrote:
> Hi Bill,
>
>
>
> Yes, that seems to be exac
Hi Pirow,
You can configure the auto offset reset for your stream source's consumer
to "earliest" if you want to consume all available data when no committed
offset exists. This will populate the state store on first run.
Cheers,
Liam Clarke-Hutchinson
On Thu, 20 Aug. 2020, 11:58 pm Pirow Engelb
Hello,
I've got Kafka Streams up and running with the following topology:
Sub-topology: 0
  Source: TopicInput (topics: [inputTopic])
    --> InputProcessor
  Processor: InputProcessor (stores: [KvStore])
    --> TopicOutput
    <-- TopicInput
  Source: KvInput (topics: [kvStoreTopic])
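For readers following along, a rough sketch of the Processor API wiring that
would print a fragment like the one above (the processor supplier and the
output topic name are assumptions on my part; the KvStore itself comes from
the addGlobalStore call quoted later in the thread):

// imports from org.apache.kafka.streams (Topology) and org.apache.kafka.common.serialization (Serdes)
Topology topology = new Topology();
topology.addSource("TopicInput",
    Serdes.String().deserializer(), Serdes.String().deserializer(), "inputTopic");
// InputProcessor is assumed to implement org.apache.kafka.streams.processor.Processor
topology.addProcessor("InputProcessor", InputProcessor::new, "TopicInput");
topology.addSink("TopicOutput", "outputTopic",   // output topic name assumed
    Serdes.String().serializer(), Serdes.String().serializer(), "InputProcessor");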
Ananya, see responses below.
> Can this number of workers be configured?
The number of workers is not exactly configurable, but you can control it
by spinning up drivers and using the '--clusters' flag. A driver instance
without '--clusters' will run one worker for each A->B replication flow. So
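To sketch that concretely (the script ships with the Kafka distribution; the
cluster alias 'B' is an assumption):

# One driver per node; runs workers for every flow defined in the config:
./bin/connect-mirror-maker.sh mm2.properties

# Restrict this node to only the workers that target cluster B:
./bin/connect-mirror-maker.sh mm2.properties --clusters B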
Hi Pirow,
it's hard to say without seeing the code that is executed in the
processors.
Could you please post a minimal example that reproduces the issue?
Best,
Bruno
On 20.08.20 14:53, Pirow Engelbrecht wrote:
Hello,
I’ve got Kafka Streams up and running with the following topology:
Thanks again Ryanne, I didn't realize that MM2 would handle that.
However, I'm unable to mirror the remote topic back to the source cluster
by adding it to the topic whitelist. I've also tried to update the topic
blacklist and remove ".*\.replica" (since the blacklists take precedence
over the whi
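For reference, a hedged sketch of the properties under discussion (the
cluster aliases A and B and the topic name are assumptions):

clusters = A, B
B->A.enabled = true
B->A.topics = A\.myTopic                          # whitelist the remote topic explicitly
B->A.topics.blacklist = .*[\-\.]internal, __.*    # default list also includes .*\.replica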
`auto.offset.reset` does not apply for global-store-topics.
At startup, the app would always "seek-to-beginning" for a
global-store-topic, bootstrap the global store, and afterwards start the
actual processing.
However, no offsets are committed for global-store-topics. Maybe this is
the reason w
Thanks, Ryanne. That answers my questions. I was actually missing this
"tasks.max" property. Thanks for pointing that out.
Furthermore, as per the KIP for MirrorMaker 2.0, there are three types of
connectors in a MirrorMaker cluster:
1. MirrorSourceConnector - focused on replicating topic partitions
> Can we configure tasks.max for each of these connectors separately?
I don't believe that's currently possible. If you need fine-grained control
over each Connector like that, you might consider running MM2's Connectors
manually on a bunch of Connect clusters. This requires more effort to set
up,
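To illustrate what that might look like (the connector name, hosts, and
topics pattern here are all placeholders), each MM2 connector can be
submitted to a plain Connect cluster with its own tasks.max via the Connect
REST API:

PUT /connectors/mm2-source/config
{
  "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
  "source.cluster.alias": "A",
  "target.cluster.alias": "B",
  "source.cluster.bootstrap.servers": "a-broker:9092",
  "target.cluster.bootstrap.servers": "b-broker:9092",
  "topics": ".*",
  "tasks.max": "8"
}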
Thanks a lot Ryanne. That was really very helpful.
On Thu, Aug 20, 2020, 11:49 PM Ryanne Dolan wrote:
> > Can we configure tasks.max for each of these connectors separately?
>
> I don't believe that's currently possible. If you need fine-grained control
> over each Connector like that, you might
Hi All,
If we run MM 2.0 in a dedicated MirrorMaker cluster, how do we update a few
properties, such as increasing the number of tasks or adding a topic to the
whitelist or blacklist?
What I have observed is that if we update the properties in
connect-mirror-maker.properties and restart the MM 2.0 node
Hi Kafka users,
I am trying to set up an active-active Kafka setup in Docker with 2 MM2
instances.
The setup logically looks like the diagram below.
[image: Screen Shot 2020-08-20 at 2.39.13 pm.png]
Each MM2 is just copying data from the other Kafka cluster and so is
unidirectional.
Strangely, whe
Hi all, I would like to ask some questions.
Our company environment:
Broker version: 1.0.1
Producer kafka-client version: 1.0.1
Consumer kafka-client version: 0.10.2.2
The producer sends records with headers, but the consumer cannot consume
the records and logs a WARN at
`org.apac