Hi,
I've been looking at the SimpleConsumer example, and I've noticed that it
always reads from the leader, and reacts to leader changes by reconnecting to
the new leader. Is it possible to read from a replica in ISR that's not the
leader? If so, how does the consumer get notified the repl
I assume that you are using a ZK-based producer. If brokers don't change in
the window where the VPN connection is down, this may not matter. When the
VPN connection is back, the ZK session will expire and the producer will
establish a new ZK session and new connections to the brokers.
Currently, regular consumers can only fetch from the leader replica.
Otherwise, they will get an error in response. We allow some special
consumers to read from follower replicas, but this is really for testing.
Are you thinking of load balancing? Currently, we do load balancing across
partitions.
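As a rough sketch of what that looks like with the 0.8 SimpleConsumer API (the broker host/port, topic, and partition below are just placeholders): the consumer asks any broker for topic metadata, fetches from the partition's leader, and when a fetch comes back with ErrorMapping.NotLeaderForPartitionCode it simply repeats the lookup and reconnects.

    import java.util.Collections;
    import kafka.javaapi.PartitionMetadata;
    import kafka.javaapi.TopicMetadata;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    // Ask one broker (any broker in the cluster will do) for the topic's
    // metadata and return the PartitionMetadata for the partition we care
    // about. partitionMetadata.leader() is the broker to fetch from; it can
    // be null while a new leader is still being elected.
    public static PartitionMetadata findLeader(String brokerHost, int brokerPort,
                                               String topic, int partition) {
        SimpleConsumer metadataConsumer =
            new SimpleConsumer(brokerHost, brokerPort, 100000, 64 * 1024, "leaderLookup");
        try {
            TopicMetadataRequest request =
                new TopicMetadataRequest(Collections.singletonList(topic));
            TopicMetadataResponse response = metadataConsumer.send(request);
            for (TopicMetadata topicMeta : response.topicsMetadata()) {
                for (PartitionMetadata partMeta : topicMeta.partitionsMetadata()) {
                    if (partMeta.partitionId() == partition) {
                        return partMeta;
                    }
                }
            }
            return null;   // topic/partition not known to this broker
        } finally {
            metadataConsumer.close();
        }
    }

The fetch loop then connects a SimpleConsumer to leader().host() and leader().port(); on a NotLeaderForPartitionCode error it closes that connection, calls findLeader() again against any live broker, and resumes from the last consumed offset.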
Actually, we are using custom producers, which recover fully after any
disconnect-and-reconnect from Kafka. It is the High-Level consumers and
Kafka itself that concerned me.
I have seen good behaviour from the system in these conditions before, but
wanted to confirm.
Philip
On Wed, Nov 27, 2
Hi all!
Wikimedia is close to using Kafka to collect webrequest access logs from
multiple data centers. I know that MirrorMaker is the recommended way to do
cross-DC Kafka, but this is a lot of overhead for our remote DCs. To set up a
highly available Kafka Cluster, we need to add a few more
What I did for my project is have a thread send a metadata request to a
random broker and monitor metadata changes periodically. The good thing
is, to my knowledge, any broker in the cluster knows the metadata for all
the topics served in that cluster. Another option is you can always query
zook
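A minimal sketch of that monitoring thread, assuming the 0.8 javaapi (the seed broker list, topic name, and poll interval are placeholders):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;
    import kafka.javaapi.PartitionMetadata;
    import kafka.javaapi.TopicMetadata;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    // Periodically picks a random seed broker, asks it for the topic's
    // metadata, and logs whenever a partition's leader differs from the
    // last poll.
    public class MetadataWatcher implements Runnable {
        private final List<String> seedBrokers;                 // e.g. "kafka1:9092"
        private final String topic;
        private final Map<Integer, String> lastLeaders = new HashMap<Integer, String>();
        private final Random random = new Random();

        public MetadataWatcher(List<String> seedBrokers, String topic) {
            this.seedBrokers = seedBrokers;
            this.topic = topic;
        }

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                String[] seed = seedBrokers.get(random.nextInt(seedBrokers.size())).split(":");
                SimpleConsumer consumer = new SimpleConsumer(
                    seed[0], Integer.parseInt(seed[1]), 100000, 64 * 1024, "metadataWatcher");
                try {
                    TopicMetadataResponse response = consumer.send(
                        new TopicMetadataRequest(Collections.singletonList(topic)));
                    for (TopicMetadata topicMeta : response.topicsMetadata()) {
                        for (PartitionMetadata partMeta : topicMeta.partitionsMetadata()) {
                            String leader = partMeta.leader() == null ? "none"
                                : partMeta.leader().host() + ":" + partMeta.leader().port();
                            String previous = lastLeaders.put(partMeta.partitionId(), leader);
                            if (previous != null && !previous.equals(leader)) {
                                System.out.println("Partition " + partMeta.partitionId()
                                    + " leader moved from " + previous + " to " + leader);
                            }
                        }
                    }
                } finally {
                    consumer.close();
                }
                try { Thread.sleep(30000); } catch (InterruptedException e) { return; }   // poll every 30s
            }
        }
    }

Consumers can then react to a reported leader change by reconnecting right away, instead of waiting to hit a fetch error against the old leader.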
Hi Andrew,
> We could use LVS or some other load balancer/proxy for the Kafka
> connections, and automatically switch between clusters based on
> availability. But, what would this do to live producers and their
> metadata? Would they be able to handle a total switch of cluster
> metadata?
This
> by non-HA broker do you mean non-HA
> by virtue of it being a single broker with no replication? You would
> still need to get it registered in a ZooKeeper cluster, right?
Right, we're considering using only a single server for Kafka in each remote
DC. We'd run a standalone zookeeper instance
I'm running the Kafka 0.8 version downloaded from the downloads page. I'm
getting lots of issues with socket timeouts from producer and consumer. I'm
also getting errors where brokers that are shut down in a controlled manner
do not get removed from the metadata in other brokers. For instance, I
h
Both the broker and the consumer have the logic for handling ZK session
expirations. So, they should recover automatically. The issue is that if
there is a real failure in the broker/consumer while the VPN is down, the
failure may not be detected.
Thanks,
Jun
On Wed, Nov 27, 2013 at 8:02 AM, Ph
The 0.8 final release is being voted right now. If the vote passes, it
should be available next week. However, what you described is a bit weird.
Do you see any error in the controller and state-change logs?
Thanks,
Jun
On Wed, Nov 27, 2013 at 3:53 PM, Tom Amon wrote:
> I'm running the Kafka
Great Jun, thanks. As always, we've found Kafka 0.72 to be rock solid under
lots of different conditions and failures -- except when the disks fill up,
but I can't blame it for that. :-)
Philip
On Wed, Nov 27, 2013 at 8:44 PM, Jun Rao wrote:
> Both the broker and the consumer have the logic fo