I have a 3-node Kafka cluster running 0.8.1.1, recently upgraded from 0.8.1,
and I'm now noticing that producing from Ruby/Poseidon is having trouble. If
I'm reading correctly, it appears that Poseidon is attempting to
produce to partition 1 on kafka1, but partition 1's leader is not kafka1.
Does this look l
Hi all,
I'm running Kafka 0.8.1.1 and encountering a weird problem.
One day the leader for all my partitions became -1 and the ISR became empty,
for example:
Topic:infinity PartitionCount:12 ReplicationFactor:2 Configs:
Topic: infinity Partition: 0 Leader: -1 Replicas: 0,1 Isr:
I believe a simpler solution would be to create multiple
ConsumerConnectors, each with one thread (a single ConsumerStream), and use
the commitOffsets API to commit all partitions managed by each
ConsumerConnector after the thread has finished processing the messages.
Does that solve the problem, Bhavesh?
Gwen
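A rough sketch of that layout, using stand-in stub objects rather than the real Kafka client (`StubConnector` and `commit_offsets` here are hypothetical; in the real 0.8 Java API this would be `kafka.javaapi.consumer.ConsumerConnector` and its `commitOffsets()` method):

```python
# One ConsumerConnector per thread, each with a single stream: calling
# commitOffsets() on a connector only commits the partitions it owns.

class StubConnector:
    """Hypothetical stand-in for kafka.javaapi.consumer.ConsumerConnector."""
    def __init__(self, partitions):
        self.partitions = partitions   # partitions this connector owns
        self.committed = False

    def commit_offsets(self):          # models ConsumerConnector.commitOffsets()
        self.committed = True
        return list(self.partitions)

# One connector per "thread"; each commits independently after its
# thread has finished processing the messages it consumed.
connectors = [StubConnector([0, 1]), StubConnector([2, 3])]
committed = []
for conn in connectors:
    # ... process this connector's messages here ...
    committed.extend(conn.commit_offsets())

print(committed)  # → [0, 1, 2, 3]
```

Because each connector owns a disjoint set of partitions, committing on one connector cannot accidentally commit offsets for partitions another thread is still processing.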
Yeah, from reading that I suspect you need the SimpleConsumer. Try it out and
see.
Philip
-
http://www.philipotoole.com
On Tuesday, September 2, 2014 5:43 PM, Bhavesh Mistry
wrote:
Hi Philip,
Yes, we have disabled auto-commit, but we need to be able to read from a
particular offset if we manage the offsets ourselves in some storage (DB).
The High Level Consumer does not allow per-partition management pluggability.
I would like to have the High Level Consumer's failover and auto-rebalancing.
No, you'll need to write your own failover.
I'm not sure I follow your second question, but the high-level Consumer should
be able to do what you want if you disable auto-commit. I'm not sure what else
you're asking.
Philip
-
http://www.philipotoole.com
Hi Philip,
Thanks for the update. With the SimpleConsumer I will not get the failover and
rebalancing that are provided out of the box. What other option is there to
avoid blocking reads, keep processing, and commit only when a batch is done?
Thanks,
Bhavesh
On Tue, Sep 2, 2014 at 4:43 PM, Philip O'Toole <
philip.
Either use the SimpleConsumer, which gives you much finer-grained control, or
(this worked with 0.7) spin up a ConsumerConnector (this is a High Level
consumer concept) per partition, and turn off auto-commit.
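In the 0.8 high-level consumer, turning off auto-commit is a single consumer property; a minimal fragment (the `group.id` and `zookeeper.connect` values are hypothetical placeholders):

```properties
# 0.8 high-level consumer: commit only when commitOffsets() is called
auto.commit.enable=false
# hypothetical placeholder values below
group.id=my-consumer-group
zookeeper.connect=localhost:2181
```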
Philip
-
http://www.philipotoole.com
On Tuesd
Hi Kafka Group,
I have to pull data from a topic and index it into Elasticsearch with the
Bulk API, and I want to commit only a batch that has been indexed while
still continuing to read further from the same topic. I have auto-commit
turned off.
List<MessageAndMetadata<byte[], byte[]>> batch = new ArrayList<>();
while (iterator.hasNext()) {
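The loop being described might be sketched like this, with the Kafka stream iterator and the Elasticsearch Bulk call replaced by simple stand-ins (`bulk_index` and `commit_offsets` are hypothetical stubs, not real client calls):

```python
# Batch-then-commit control flow: accumulate messages, index the batch,
# and only commit offsets after the bulk index has succeeded.

BATCH_SIZE = 3
indexed, commits = [], 0

def bulk_index(batch):
    indexed.extend(batch)   # stand-in for an Elasticsearch Bulk API call

def commit_offsets():
    global commits
    commits += 1            # stand-in for ConsumerConnector.commitOffsets()

messages = iter(range(7))   # stand-in for the consumer stream iterator
batch = []
for msg in messages:        # while (iterator.hasNext()) { ... }
    batch.append(msg)
    if len(batch) == BATCH_SIZE:
        bulk_index(batch)   # index the full batch first ...
        commit_offsets()    # ... and only then commit its offsets
        batch = []
if batch:                   # flush the final partial batch
    bulk_index(batch)
    commit_offsets()

print(len(indexed), commits)  # → 7 3
```

Committing only after the bulk call succeeds means a crash replays at most one uncommitted batch, i.e. at-least-once delivery into Elasticsearch.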
Hi Theo,
You can try setting replica.fetch.min.bytes to some large number (it defaults
to 1) and increasing replica.fetch.wait.max.ms (default 500) and see if that
helps. In general, with 4 fetchers and min.bytes at 1, the replicas
effectively exchange many small packets over the wire.
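Expressed as broker-side settings, the suggestion would look like the fragment below; the concrete values are illustrative assumptions, not recommendations:

```properties
# server.properties (broker side) - illustrative values only
replica.fetch.min.bytes=65536   # default 1: how many bytes a replica fetch waits for
replica.fetch.wait.max.ms=1000  # default 500: max wait before the leader responds anyway
num.replica.fetchers=4          # the fetcher count mentioned in the thread
```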
Guozhang
To give some more details:
* I used the src tarball of Kafka 0.7.2.
* Ran sbt idea. This generated a bunch of *.iml files under various folders
of Kafka.
* I tried to open the project by pointing to the Kafka project directory.
This opened an "Import Project from SBT project" dialog, and when I
se
Hi,
I am trying to set up a Kafka 0.7.2 project in IntelliJ IDEA 13 CE. The wiki
instructions for developers seem to be pointing to trunk. Since the place
I'm working at is using the older version, I was thinking of setting that
version up in IntelliJ.
Are there instructions on how to do that?
T