Thanks! I wonder if this is a bit far-fetched, since no one seems to be doing
this at the moment.
On Fri, May 10, 2019 at 12:50 AM Guozhang Wang wrote:
> Hello Emmanuel,
>
> Yes, I think it is doable technically. Note that it means the offsets of
> cluster A would be stored on cluster B an
Hello,
I would like to know if there is a Java client that would allow me to
consume from topics on a cluster A and produce to topics on a cluster B
with exactly-once semantics. My understanding of Kafka transactions is
that on paper it could work, but the Kafka Java client assumes both are on
the same cluster.
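For concreteness, the pattern would look roughly like the sketch below (not
working code; bootstrap addresses, group ids, and topic names are
placeholders). The catch is visible in sendOffsetsToTransaction: it writes
the cluster-A offsets into cluster B's __consumer_offsets, so on restart the
consumer could not rely on cluster A for its position; it would have to look
the offsets up on B and seek() manually, which is the part the stock client
does not do for you.

  // Sketch: consume from cluster A, produce transactionally to cluster B.
  // All addresses, ids, and topic names are placeholders.
  import java.time.Duration;
  import java.util.Collections;
  import java.util.HashMap;
  import java.util.Map;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.*;
  import org.apache.kafka.clients.producer.*;
  import org.apache.kafka.common.TopicPartition;

  public class CrossClusterCopy {
    public static void main(String[] args) {
      Properties cp = new Properties();
      cp.put("bootstrap.servers", "cluster-a:9092");  // source cluster A
      cp.put("group.id", "copier");
      cp.put("enable.auto.commit", "false");  // offsets go via the transaction
      cp.put("key.deserializer",
          "org.apache.kafka.common.serialization.ByteArrayDeserializer");
      cp.put("value.deserializer",
          "org.apache.kafka.common.serialization.ByteArrayDeserializer");
      KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(cp);
      consumer.subscribe(Collections.singletonList("source-topic"));

      Properties pp = new Properties();
      pp.put("bootstrap.servers", "cluster-b:9092");  // destination cluster B
      pp.put("transactional.id", "copier-1");
      pp.put("key.serializer",
          "org.apache.kafka.common.serialization.ByteArraySerializer");
      pp.put("value.serializer",
          "org.apache.kafka.common.serialization.ByteArraySerializer");
      KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(pp);
      producer.initTransactions();

      while (true) {
        ConsumerRecords<byte[], byte[]> records =
            consumer.poll(Duration.ofMillis(500));
        if (records.isEmpty()) continue;
        producer.beginTransaction();
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<byte[], byte[]> r : records) {
          producer.send(new ProducerRecord<>("dest-topic", r.key(), r.value()));
          offsets.put(new TopicPartition(r.topic(), r.partition()),
                      new OffsetAndMetadata(r.offset() + 1));
        }
        // This stores the cluster-A offsets on cluster B, not on cluster A:
        producer.sendOffsetsToTransaction(offsets, "copier");
        producer.commitTransaction();
      }
    }
  }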
Hello,
Considering the following setup:
3x ZooKeeper nodes running 3.5.0-alpha (for the ability to reconfigure
without shutting down)
2x Kafka nodes, 0.8.2.1
each topic has 10 partitions, replication factor = 2
Deploying on GKE, so to automate broker ID definition, I used the IP without
the dots as a unique ID.
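In case it helps anyone, the derivation is just string surgery on the pod
IP; a minimal sketch (POD_IP is an assumed environment variable, and
broker.id must fit in a signed 32-bit int, so long addresses can overflow):

  // Sketch: derive broker.id from the pod IP by dropping the dots.
  // POD_IP is an assumed env var; adjust to how the address is exposed.
  public class BrokerIdFromIp {
    public static void main(String[] args) {
      String ip = System.getenv("POD_IP");  // e.g. "10.244.1.23"
      long candidate = Long.parseLong(ip.replace(".", ""));  // 10244123
      // broker.id is an int, so guard against overflow before using it.
      if (candidate > Integer.MAX_VALUE) {
        throw new IllegalStateException("broker.id overflows int32: " + candidate);
      }
      System.out.println("broker.id=" + candidate);
    }
  }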
For a Kafka web console, I've been using this one and it worked well for me.
Just make sure to install the right version of the Play framework (see
ReadMe.md):
https://github.com/claudemamo/kafka-web-console
> Date: Fri, 27 Mar 2015 15:28:09 -0400
> Subject: Re: A kafka web monitor
> From: yuheng.du.h..
(see https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines).
-Jon
On Mar 20, 2015, at 3:03 PM, Emmanuel wrote:
> 800B messages / day = 9.26M messages / sec over 1100 brokers
> = ~8400 messages / broker / sec
> Do I get this right?
> Trying to benchmark my own test cluster and that's what I see with 2
> brokers...
800B messages / day = 9.26M messages / sec over 1100 brokers
= ~8400 messages / broker / sec
Do I get this right?
Trying to benchmark my own test cluster and that's what I see with 2
brokers... Just wondering if my numbers are good or bad...
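Here's the quick arithmetic I'm using, in case I've slipped a digit
somewhere (assumes 86,400 seconds per day):

  // Quick sanity check of the numbers above.
  public class Throughput {
    public static void main(String[] args) {
      double perSec = 800_000_000_000L / 86_400.0;  // ~9.26M messages/sec
      double perBroker = perSec / 1100;             // ~8,418 messages/broker/sec
      System.out.printf("%.2fM msg/s total, %.0f msg/s per broker%n",
                        perSec / 1e6, perBroker);
    }
  }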
> Subject: Re: Post on running Kafka at LinkedIn
> From
Kafka on test cluster: 2 Kafka nodes, 2GB, 2 CPUs
3 ZooKeeper nodes, 2GB, 2 CPUs
Storm: 3 nodes, 3 CPUs each, on the same ZooKeeper cluster as Kafka.
1 topic, 5 partitions, replication x2
Whether I use 1 slot for the Kafka Spout or 5 slots (=#partitions), the
throughput seems about the same.
I can't se
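For reference, this is roughly how the spout is wired up (a sketch against
the old storm-kafka API; the ZooKeeper address, topic, and ids are
placeholders, and I'm assuming "slots" means the spout's parallelism hint).
One thing to keep in mind: with a hint of 1, a single spout task consumes
all five partitions, so equal throughput at 1 and 5 suggests the bottleneck
is downstream of the spout or in the brokers rather than in spout
parallelism.

  // Sketch: wiring a KafkaSpout with parallelism = number of partitions.
  import backtype.storm.Config;
  import backtype.storm.StormSubmitter;
  import backtype.storm.spout.SchemeAsMultiScheme;
  import backtype.storm.topology.TopologyBuilder;
  import storm.kafka.KafkaSpout;
  import storm.kafka.SpoutConfig;
  import storm.kafka.StringScheme;
  import storm.kafka.ZkHosts;

  public class SpoutTopology {
    public static void main(String[] args) throws Exception {
      // ZooKeeper address, topic, zkRoot, and spout id are placeholders.
      SpoutConfig cfg = new SpoutConfig(
          new ZkHosts("zk1:2181"), "my-topic", "/kafka-spout", "spout-id");
      cfg.scheme = new SchemeAsMultiScheme(new StringScheme());

      TopologyBuilder builder = new TopologyBuilder();
      // Parallelism hint 5 = one spout executor per partition;
      // with 1, a single executor reads all five partitions.
      builder.setSpout("kafka-spout", new KafkaSpout(cfg), 5);
      // ... bolts attached here ...

      StormSubmitter.submitTopology("bench", new Config(),
                                    builder.createTopology());
    }
  }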