Hi folks,
As far as I know, Kafka Streams runs as a separate process that reads data from
a topic, transforms it, and writes to another topic if needed. In this case, how
does this process support high-throughput streams, as well as load balancing in
terms of message traffic and computing resources for stream processing?
Hi All,
What would be the best topic-definition strategy to support use cases
where we would like to avoid starvation among different instances of
events of the same type?
For example, let's say we have a fraud-detection entity that issues a new
event upon each failed sign-in, and an action is t
Hi John,
Thanks a lot for your explanation. It does make much more sense now.
The Jira issue is, I think, pretty well explained (with a reference to
this thread), and I've left my 2 cents in the pull request.
You are right, I didn't notice that the repartition topic contains the same
message effectively
Hi,
I have created ZooKeeper and three brokers as Docker containers on a physical
host as shown below
[image: image.png]
The following commands are used to create the ZooKeeper and Kafka containers
docker run -d --name zookeeper -p 2181:2181 -p 2888:2888 -p 3888:3888
jplock/zookeeper
docker run -d --name kafka_broke
I believe you need to use -e KAFKA_ADVERTISED_PORT=909..
On Mon, Jul 16, 2018 at 7:41 AM, Mich Talebzadeh
wrote:
> Hi,
>
> I have created a zookeeper and three brokers as dockers in a physical host
> as shown below
>
> [image: image.png]
>
> The followings are used to create Zookeeper and Kafka
Thanks Chris,
I am afraid the issue is still there!
docker run -d --name kafka_broker0 -p 9092:9092 -e
KAFKA_ADVERTISED_HOST_NAME=50.140.197.220 -e ZOOKEEPER_IP=50.140.197.220 -e
KAFKA_BROKER_ID=0 -e KAFKA_BROKER_PORT=9092 -e KAFKA_ADVERTISED_PORT=9092
ches/kafka
${KAFKA_HOME}/bin/kafka-topi
Also, I noticed that, bar broker ID 0, connections to broker ID 1 (node
1) and broker ID 2 (node 2) could not be established
[2018-07-16 18:41:10,419] WARN [Producer clientId=console-producer]
Connection to node 1 could not be established. Broker may not be available.
(org.apache.kafka.clients.N
Vasily,
yes, it can happen. As you noticed, both messages might be processed on
different machines. Thus, Kafka Streams provides 'eventual consistency'
guarantees.
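To make this concrete, here is a toy Python sketch (assumed semantics, not
actual Kafka Streams code; the group names and counts are made up): an upstream
update is split into a subtract record for the old value and an add record for
the new value, and until both have been applied, the downstream aggregate shows
an intermediate state:

```python
# Toy sketch: a row moves from group "A" to group "B". The update is
# split into a subtraction for the old group and an addition for the
# new group; the two records can land on different partitions and be
# processed by different instances at different times.

counts = {"A": 1, "B": 0}      # downstream aggregate: rows per group

subtract = ("A", -1)           # old-value record
add = ("B", +1)                # new-value record

counts[subtract[0]] += subtract[1]
# At this instant a reader observes {"A": 0, "B": 0}: the row is
# counted nowhere. This is the transient inconsistency.
intermediate = dict(counts)

counts[add[0]] += add[1]
# Once both records are applied, state converges to {"A": 0, "B": 1}.
```

Readers that tolerate the transient window see correct results eventually,
which is exactly the "eventual consistency" guarantee described above.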
-Matthias
On 7/16/18 6:51 AM, Vasily Sulatskov wrote:
> Hi John,
>
> Thanks a lot for you explanation. It does make much more sens
I restarted all Kafka containers corresponding to brokers 0-2, and now broker
ID 0 is selected as leader and is working
${KAFKA_HOME}/bin/kafka-topics.sh --describe --zookeeper rhes75:2181
--topic final
Topic:final    PartitionCount:3    ReplicationFactor:3    Configs:
Topic: final
Could it be that you changed the KAFKA_ADVERTISED_PORT and restarted those
brokers but didn't restart the rest (until now)?
I wouldn't be surprised if the other brokers continued to use the incorrect
advertised port.
On Mon, Jul 16, 2018 at 1:40 PM, Mich Talebzadeh
wrote:
> I restarted all Kafka
Thanks Chris,
This is the way I have defined the Kafka brokers
docker run -d --name kafka_broker0 -p 9092:9092 -e
KAFKA_ADVERTISED_HOST_NAME=50.140.197.220 -e
ZOOKEEPER_IP=50.140.197.220 -e KAFKA_BROKER_ID=0
-e KAFKA_BROKER_PORT=9092 -e KAFKA_ADVERTISED_PORT=9092 ches/kafka
docker run -d
Hello Will,
Your question is very high-level, and hence I felt less guilty giving you a
general answer :)
You can read the web docs on how Kafka Streams achieves high throughput via
data parallelism here:
https://kafka.apache.org/11/documentation/streams/architecture
Regarding KSQL, you can look
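A toy sketch of the partition-to-task parallelism those docs describe (plain
Python, not Kafka code; the instance names and the round-robin assignor are
illustrative stand-ins for the real, more sophisticated assignment logic):

```python
# Toy model of Kafka Streams data parallelism: one stream task per
# input partition, tasks distributed round-robin over app instances.

def assign_tasks(partitions, instances):
    """Spread partition ids over instance names, round-robin."""
    assignment = {name: [] for name in instances}
    for i, p in enumerate(partitions):
        assignment[instances[i % len(instances)]].append(p)
    return assignment

# A topic with 6 partitions yields 6 tasks; adding instances (up to
# the partition count) spreads the load, which is how both throughput
# and load balancing scale.
print(assign_tasks(list(range(6)), ["app-1", "app-2", "app-3"]))
```

In real Kafka Streams, starting a fourth instance triggers a rebalance that
migrates tasks automatically; beyond six instances, the extras would sit idle,
since the input partition count caps the parallelism.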
Hmm... this seems new to me. I checked the source code and it looks right to me.
Could you try out the latest trunk (built from source) and see if you hit the
same issue?
> In addition to that, though, I also see state store metrics for tasks
that have been migrated to another instance, an
Hi,
It seems that it wouldn't be that difficult to address: just don't
break Change(newVal, oldVal) into Change(newVal, null) /
Change(oldVal, null), and instead update the aggregator value in one
.process() call.
Would this change make sense?
On Mon, Jul 16, 2018 at 10:34 PM Matthias J. Sax wrote:
>
> Vasil
It is not possible to use a single message, because the two messages may go
to different partitions and may be processed by different application
instances.
Note that the overall KTable state is sharded. Updating a single
upstream shard might require updating two different downstream shards.
-
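A toy Python sketch of the sharding argument (the partitioner below is a
simplified stand-in for Kafka's real key-hashing partitioner, and the key names
are made up): a record is written to exactly one partition, but the old key and
the new key may own different shards, so one combined message cannot reach
both:

```python
NUM_PARTITIONS = 4

def shard_of(key):
    # Stand-in for Kafka's default partitioner (which hashes the key
    # bytes); only the "one key -> one partition" property matters.
    return sum(key.encode()) % NUM_PARTITIONS

# A row is re-keyed from old_key to new_key: the subtraction must
# reach old_key's shard while the addition must reach new_key's shard.
old_key, new_key = "orders-eu", "orders-us"
print(shard_of(old_key), shard_of(new_key))
# Different shards -> different partitions -> potentially different
# application instances, so two separate messages are unavoidable.
```

Since each shard lives on whichever instance owns that partition, no single
record delivery can update both shards atomically, which is why the split into
two Change records exists at all.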
+1 (binding)
- validated signatures
- quickstart on binary distributions
- unit-tests and packaging on src distribution
Looking awesome! Excited for this release and especially the new connect
features :)
On Tue, Jul 10, 2018 at 10:17 AM, Rajini Sivaram
wrote:
> Hello Kafka users, developers a