Thanks. Totally missed that.
> From: b...@b3k.us
> Date: Mon, 10 Mar 2014 19:18:50 -0700
> Subject: Re: Remote Zookeeper
> To: users@kafka.apache.org
>
> zookeeper.connect
>
> https://kafka.apache.org/08/configuration.html
>
>
> On Mon, Mar 10, 2014 at 7:17 PM, A A wrote:
zookeeper.connect
https://kafka.apache.org/08/configuration.html
On Mon, Mar 10, 2014 at 7:17 PM, A A wrote:
Hi
Pretty new to Kafka. I have successfully installed Kafka 0.8.0.
I am just wondering: how should I make my Kafka cluster (2 brokers) connect to a
single remote ZooKeeper server?
I am using $KAFKA/kafka-server-start.sh $KAFKA_CONFIG/server.properties on
both brokers to start them up.
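For reference, a minimal server.properties sketch for this setup; the
hostname, port, and broker ids are placeholders, not values from this thread:

  # server.properties on each broker
  # broker.id must differ between the two brokers, e.g. 1 and 2
  broker.id=1
  zookeeper.connect=zk.example.com:2181

With only zookeeper.connect pointed at the remote server, kafka-server-start.sh
can be run exactly as above on both brokers.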
I would second that. If you are a little bit risk tolerant, though, we
would certainly appreciate the additional usage, and since we are actively
doing QA on it we would want to fix any issues you might find.
-Jay
On Mon, Mar 10, 2014 at 3:23 PM, Neha Narkhede wrote:
The new producer is running on our mirror makers serving production load.
It is relatively stable but there are a few corner case bugs that need to
be fixed. Also, we haven't run performance tests yet to come up with
throughput/latency numbers.
I think you can reasonably use it in beta, but I would hold off on relying on
it for production until those bugs are fixed and the performance numbers are in.
Hello
I am having producer throughput issues, so I am seriously considering using the
shiny new KafkaProducer. Before proceeding, I want to confirm with the Kafka
developers that it is fully stable for production.
Thank you
Best, Jae
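For anyone evaluating the new producer, here is a minimal Java sketch against
the client API as it stabilized in later releases (0.8.2+); the broker
addresses and topic name are placeholders, and config keys may differ in the
beta discussed above:

  import java.util.Properties;
  import org.apache.kafka.clients.producer.KafkaProducer;
  import org.apache.kafka.clients.producer.ProducerRecord;

  public class NewProducerSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholder brokers
          props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
          props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

          KafkaProducer<String, String> producer = new KafkaProducer<>(props);
          // send() is asynchronous and batches records, which is where the
          // throughput improvement over the old producer comes from
          producer.send(new ProducerRecord<>("test-topic", "key", "value"));
          producer.close();
      }
  }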
Currently, the only way to restart from the beginning of the queue is by
deleting the previous checkpoint for the group. The reason is that in real
production deployments, a consumer application can go through process
restarts or other interruptions, but the expectation is that it can start
reading from where it left off, i.e. the last checkpointed offset.
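As a concrete illustration of starting over without a saved checkpoint, a
hedged sketch against the 0.8 high-level Java consumer; the ZooKeeper address
and group name are placeholders:

  import java.util.Properties;
  import kafka.consumer.Consumer;
  import kafka.consumer.ConsumerConfig;
  import kafka.javaapi.consumer.ConsumerConnector;

  public class FromBeginningSketch {
      public static void main(String[] args) {
          Properties props = new Properties();
          props.put("zookeeper.connect", "zkhost:2181");                    // placeholder ZooKeeper address
          props.put("group.id", "reprocess-" + System.currentTimeMillis()); // fresh group => no saved checkpoint
          props.put("auto.offset.reset", "smallest");                       // no checkpoint => start at the earliest offset
          ConsumerConnector consumer =
              Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
          // createMessageStreams(...) and stream iteration would follow as usual
      }
  }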
Other than constantly using a new group id (and making a mess of
ZooKeeper) or deleting the info for the group from ZooKeeper, is there
any way to start from the beginning of the queue? It looks like this
can be done from the underlying Scala code, but I can't find anything in
the Java API.
- Ada
Interesting. We were able to get this from JMX for Kafka 0.7.2 - here's a
snapshot for one of our Kafka clusters:
https://apps.sematext.com/spm-reports/s/93EtvhnOz0
Is getting this from ZK instead of JMX better?
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support
I put this up over the weekend, thought it might be useful to folks:
https://github.com/b/kafka-websocket
Hi Yonghui,
In 0.8 the load balance logic in consumers is based on range partitioning
with consumer-id as {consumer-1-stream-1, consumer-1-stream-2, ...
consumer-1-stream-10, consumer-2-stream-1, ...} and partitions are assigned
to this list in round robin. So yes, this behavior is expected. If you want
all four consumers to receive data, you could create fewer streams per
consumer so that the total number of streams does not exceed the number of
partitions.
In my environment, I have 2 brokers and only 1 topic; each broker has 10
partitions, so there are 20 partitions in total.
I have 4 consumers in one consumer group, and each consumer uses
createMessageStreams to create 10 streams, 40 streams in total.
Since a partition cannot be split across streams, only 20 of the 40 streams
can get data; in my case all the partitions go to the first two consumers and
the other two get nothing. Is this expected?
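To make the arithmetic concrete, a small standalone sketch (not Kafka's actual
rebalancing code; real stream ids also carry host and uuid parts) of why 40
sorted streams and 20 partitions leave two consumers idle:

  import java.util.*;

  public class StreamAssignmentSketch {
      public static void main(String[] args) {
          List<String> streams = new ArrayList<>();
          for (int c = 1; c <= 4; c++)
              for (int s = 1; s <= 10; s++)
                  streams.add(String.format("consumer-%d-stream-%02d", c, s));
          Collections.sort(streams); // all consumer-1 streams sort before consumer-2's, and so on

          Map<String, List<Integer>> assignment = new LinkedHashMap<>();
          for (String stream : streams) assignment.put(stream, new ArrayList<>());
          for (int partition = 0; partition < 20; partition++)
              assignment.get(streams.get(partition)).add(partition); // one partition per stream, from the top of the sorted list

          assignment.forEach((stream, parts) -> System.out.println(stream + " -> " + parts));
          // All 20 partitions land on consumer-1-* and consumer-2-* streams;
          // consumer-3-* and consumer-4-* streams get nothing.
      }
  }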
Session termination can happen either when the client or ZooKeeper process
pauses (e.g., due to GC) or when the client process terminates. A sustainable
solution is to tune the GC settings; for now, you can try increasing
zookeeper.session.timeout.ms.
On Sun, Mar 9, 2014 at 3:44 PM, Ameya Bhagat wrote:
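For reference, a hedged consumer.properties sketch of the timeout workaround
suggested above; the value is just an example, tune it to cover observed GC
pauses:

  # consumer.properties
  zookeeper.session.timeout.ms=30000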