Hi there,
I am a newbie to Kafka. I am trying to use the
https://github.com/endgameinc/elasticsearch-river-kafka plugin to pull
messages from Kafka.
When I start Elasticsearch, the 1st message gets pulled into the
cluster. After that, no messages are pulled even though there are enough messages
Hi,
We are evaluating kafka-0.8 for our product. We will start a consumer for
each partition. When I try to consume using the High-Level API, I am able to
consume from Kafka. But when I try to consume from Kafka using the Low-Level API, I
am getting a message size of 0. Am I missing some configuration?
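For anyone hitting the same thing, here is a minimal sketch of a 0.8 low-level fetch, loosely following the SimpleConsumer example on the Kafka wiki (broker host, topic, partition, offset and fetch size are placeholders, not values from this thread). One thing worth checking is that the fetch size passed to addFetch() is at least as large as the largest message, otherwise the returned message set can come back without any complete messages:

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.message.MessageAndOffset;

public class LowLevelFetchSketch {
    public static void main(String[] args) {
        String topic = "my-topic";        // placeholder
        int partition = 0;                // placeholder
        long offset = 0L;                 // start at the beginning of the partition
        int fetchSize = 1024 * 1024;      // must cover the largest message

        // host, port, socket timeout (ms), buffer size, client id
        SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "low-level-test");

        FetchRequest req = new FetchRequestBuilder()
                .clientId("low-level-test")
                .addFetch(topic, partition, offset, fetchSize)
                .build();
        FetchResponse response = consumer.fetch(req);

        if (response.hasError()) {
            System.err.println("Fetch error code: " + response.errorCode(topic, partition));
        }
        for (MessageAndOffset messageAndOffset : response.messageSet(topic, partition)) {
            System.out.println("offset " + messageAndOffset.offset()
                    + ", payload bytes " + messageAndOffset.message().payloadSize());
        }
        consumer.close();
    }
}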
The following log says there are only 2 messages in the log.
kafka.common.OffsetOutOfRangeException: Request for offset 1318 but we only
have log segments in the range 0 to 2.
If you run the console consumer on that topic (using --from-beginning), how
many messages do you see?
Thanks,
Jun
On Mon,
Did you make sure the fetch size is larger than the largest message (
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-WhydoesmyconsumergetInvalidMessageSizeException?
)?
Thanks,
Jun
On Mon, Feb 24, 2014 at 2:23 AM, Ranjith Venkatesan
wrote:
> Hi,
>
> We are evaluating kafka-0.8
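For the 0.8 high-level consumer, the fetch size that FAQ talks about is controlled by the consumer property fetch.message.max.bytes. A rough sketch (ZooKeeper address, group id and sizes below are placeholders, not values from this thread):

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class FetchSizeConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // placeholder
        props.put("group.id", "fetch-size-test");           // placeholder
        // Must be at least as large as the largest message the broker may return.
        props.put("fetch.message.max.bytes", "2097152");    // 2 MB
        props.put("auto.offset.reset", "smallest");         // start from the earliest offset

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create message streams and consume as usual, then:
        connector.shutdown();
    }
}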
Jun,
Are you saying it is possible to get events from the high-level consumer
regarding various state machine changes? For instance, can we get a
notification when a rebalance starts and ends, when a partition is
assigned/unassigned, when an offset is committed on a partition, when a leader
changes
Hi Robert,
Yes, you can check out the callback functions in the new API
onPartitionDesigned
onPartitionAssigned
and see if they meet your needs.
Guozhang
On Mon, Feb 24, 2014 at 8:18 AM, Withers, Robert wrote:
> Jun,
>
> Are you saying it is possible to get events from the high-level consume
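For readers coming to this later: the new consumer API discussed here eventually shipped with a ConsumerRebalanceListener whose callbacks are named onPartitionsRevoked and onPartitionsAssigned (slightly different from the draft names quoted above). A minimal sketch against that released API, with broker address, group id and topic as placeholders:

import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceCallbackSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "rebalance-demo");            // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("my-topic"), new ConsumerRebalanceListener() {
            @Override
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                // Called before a rebalance takes partitions away; commit offsets here.
                System.out.println("Revoked: " + partitions);
            }

            @Override
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                // Called after a rebalance hands partitions to this consumer.
                System.out.println("Assigned: " + partitions);
            }
        });

        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(record.offset() + ": " + record.value());
        }
        consumer.close();
    }
}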
Hi Jun,
I have exactly 2 messages when I run the console consumer. I am totally
confused about why my API consumer looks for offset 1318.
Any help is greatly appreciated !
Thanks,
KR
On Mon, Feb 24, 2014 at 7:37 AM, Jun Rao wrote:
> The following log says there are only 2 messages in the log.
>
>
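One way to check the valid offset range directly from a SimpleConsumer is the offset API; a sketch loosely based on the 0.8 SimpleConsumer example on the wiki (broker host, topic and partition are placeholders):

import java.util.HashMap;
import java.util.Map;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetRequest;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetRangeSketch {
    public static void main(String[] args) {
        String topic = "my-topic";   // placeholder
        int partition = 0;           // placeholder
        SimpleConsumer consumer =
                new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "offset-check");

        System.out.println("earliest = " + fetchOffset(consumer, topic, partition,
                kafka.api.OffsetRequest.EarliestTime()));
        System.out.println("latest   = " + fetchOffset(consumer, topic, partition,
                kafka.api.OffsetRequest.LatestTime()));
        consumer.close();
    }

    private static long fetchOffset(SimpleConsumer consumer, String topic,
                                    int partition, long whichTime) {
        TopicAndPartition tp = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo = new HashMap<>();
        requestInfo.put(tp, new PartitionOffsetRequestInfo(whichTime, 1));
        OffsetRequest request = new OffsetRequest(
                requestInfo, kafka.api.OffsetRequest.CurrentVersion(), "offset-check");
        OffsetResponse response = consumer.getOffsetsBefore(request);
        return response.offsets(topic, partition)[0];
    }
}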
Hi,
It seems that my consumers cannot be shut down properly.
I can still see many unused consumers on the portal. Is there a way to get
rid of all these consumers? I tried to call shutdown explicitly, but
without any luck.
Any help appreciated.
Chen
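In case it helps with debugging, here is a rough sketch of the shutdown sequence for a 0.8 high-level consumer (group id, topic and timings are placeholders): call shutdown() on the ConsumerConnector first so the stream iterators stop blocking, then stop the threads that drain the streams. Even after a clean shutdown, the group's offset nodes remain under /consumers in ZooKeeper, which is what the rmr suggestion further down is about.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ShutdownSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");   // placeholder
        props.put("group.id", "shutdown-demo");              // placeholder

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put("my-topic", 1);                     // one stream for this topic
        List<KafkaStream<byte[], byte[]>> streams =
                connector.createMessageStreams(topicCountMap).get("my-topic");

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> {
            ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();
            while (it.hasNext()) {                            // exits once shutdown() is called
                System.out.println(new String(it.next().message()));
            }
        });

        Thread.sleep(10000);                                  // ... consume for a while ...

        connector.shutdown();                                 // stop fetchers, release partition ownership
        executor.shutdown();                                  // then stop the worker thread
        executor.awaitTermination(5, TimeUnit.SECONDS);
    }
}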
I am looking into how to get Kafka to work with Python for sending messages.
I am using kafka-python to produce and consume messages:
https://github.com/mumrah/kafka-python
The messages that I am sending are JSON strings in Python.
kafka = KafkaClient(kafka_domain, 9092)
producer = SimpleProdu
That’s wonderful. Thanks for kafka.
Rob
On Feb 24, 2014, at 9:58 AM, Guozhang Wang <wangg...@gmail.com> wrote:
Hi Robert,
Yes, you can check out the callback functions in the new API
onPartitionDesigned
onPartitionAssigned
and see if they meet your needs.
Guozhang
On Mon, Feb 24,
Cliff,
At this time, I'm not planning any further development. But if someone submits
a pull request, I'll be happy to merge that in.
Pascal.
On Feb 22, 2014, at 11:03 PM, Cliff Resnick wrote:
> It appears compression (gzip, snappy) is not supported in the librdkafka
> version (0.8.0) used
I am not sure how your plugin works. Does it use SimpleConsumer or the high
level consumer? Which version of Kafka is it on?
Thanks,
Jun
On Mon, Feb 24, 2014 at 12:28 PM, Krishna Raj wrote:
> Hi Jun,
>
> I have exactly 2 messages when I run the console consumer. I am totally
> confused why my
Producer side compression is available in librdkafka 0.8.0.
The consumer side compression is available in librdkafka 0.8.2 and later.
All librdkafka 0.8 releases are API and ABI safe and backwards compatible,
so plugging in a new librdkafka should be fairly straightforward.
2014-02-25 4:32 GMT+
Do you just want to remove those unused consumer groups in ZK? If so, just run rmr
in a ZK shell.
Thanks,
Jun
On Mon, Feb 24, 2014 at 1:04 PM, Chen Wang wrote:
> Hi,
> It seems that my consumers cannot be shut down properly.
>
> I can still see many unused consumers on the portal. Is there a way to ge
Hi all,
I am referring to this example:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example.
What is the consumer group ID being referred to here?
Thanks
Binita
Hi Binita,
When you use a group id with a high level consumer, the messages will be
distributed among all consumers sharing the same group. So if you have 3
consumers sharing the same group, each will process 1/3 of the messages
within the group. By contrast, if each of the same 3 consumers used a
different group id, each consumer would receive all of the messages.
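To make the group id concrete, here is a minimal sketch in the spirit of the Consumer Group Example linked above (0.8 high-level consumer; ZooKeeper address, group id and topic are placeholders). Every consumer started with the same group.id splits the topic's partitions with its peers; a consumer started with a different group.id gets its own full copy of the messages.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class GroupIdSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181");    // placeholder
        // Consumers sharing this id divide the topic's partitions among themselves.
        props.put("group.id", "my-consumer-group");           // placeholder

        ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCountMap = new HashMap<>();
        topicCountMap.put("my-topic", 1);                      // one stream for this topic
        List<KafkaStream<byte[], byte[]>> streams =
                connector.createMessageStreams(topicCountMap).get("my-topic");

        ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();
        while (it.hasNext()) {
            System.out.println(new String(it.next().message()));
        }
    }
}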