Thanks for the reply.
Before I saw that log, I had produced a lot of events for a performance test
(approximately 3G/min), and I saw that log within an hour or two. I've also
been getting ERROR messages frequently, like the one below.
[2014-08-04 11:27:54,547] ERROR [ReplicaFetcherThread-0-6], Error in fetch
Name: Fet
Hi,
I just started with Apache Kafka and wrote a high level consumer program
following the example given here
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example.
Though I was able to run the program and consume the messages, I have one
doubt regarding *consumer.shutdown()*.
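For reference, a minimal sketch of the high-level consumer lifecycle from
that wiki example, showing where consumer.shutdown() fits; the group/topic
names and the timeout value are placeholders, not from the original post:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.ConsumerTimeoutException;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class ShutdownSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("consumer.timeout.ms", "10000");        // lets hasNext() give up
        ConsumerConnector consumer =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("my-topic", 1);                 // one stream for the topic
        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
            consumer.createMessageStreams(topicCountMap);
        ConsumerIterator<byte[], byte[]> it =
            streams.get("my-topic").get(0).iterator();
        try {
            while (it.hasNext()) {
                System.out.println(new String(it.next().message()));
            }
        } catch (ConsumerTimeoutException e) {
            // no messages for 10s; fall through and shut down
        }
        // shutdown() stops the fetcher threads, commits final offsets
        // (when auto-commit is enabled) and disconnects from ZooKeeper.
        consumer.shutdown();
    }
}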
Hi Guozhang,
By explicitly stopping the consumer, do you mean *System.exit(-1)*? Also, as
per the solution you provided, the code loops over the ack variable. Is that
correct? Because even if the ack received from writeToDB is true, there is a
possibility that the next message may have arrived, making the ack vari
I have seen an issue similar to this but with the /controller node.
I am going to update https://issues.apache.org/jira/browse/KAFKA-1387 with
the steps to reproduce the issue I ran into right now.
I don't know what steps caused what you ran into; it is very odd, and that
shouldn't happen.
Were you do
Hi, everyone.
I ran into a strange case where my consumer, using the high-level API, worked
fine at first but a couple of days later blocked in ConsumerIterator.hasNext(),
while there were pending messages on the topic: with
kafka-console-consumer.sh I can see continuous messages.
Then I connect to consumer
Hi, everyone.
I'm using 0.8.1.1, and I have 8 brokers and 3 topics, each with 16
partitions and 3 replicas.
I'm getting logs I haven't seen before, like the one below; they occur every 5 seconds.
[2014-08-05 11:11:32,478] INFO conflict in /brokers/ids/2 data:
{"jmx_port":9992,"timestamp":"1407204339990","host":"172.25.63.
Is it possible there is another solution to the problem? I think if you
could better describe the problem(s) you are facing and how your system is
architected, then you may get responses from others who have perhaps
faced the same problem with similar architectures ... or maybe folks can
chime in wit
Bhavesh, take a look at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyisdatanotevenlydistributedamongpartitionswhenapartitioningkeyisnotspecified
?
Maybe the root cause is something else? Even if producers produce
more or less than what they are producing now, you should be able to
How to achieve uniform distribution of non-keyed messages per topic across
all partitions?
We have tried to do this uniform distribution across partitions using custom
partitioning from each producer instance using round robin (
count(messages) % number of partitions for the topic). This strategy resul
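For what it's worth, the round-robin scheme described above can be sketched
against the 0.8.x partitioner API roughly like this (the class name and
counter are illustrative; note that the partitioner is only consulted for
messages that carry a key, which is why null-keyed messages fall back to the
sticky behavior the FAQ entry above describes):

import java.util.concurrent.atomic.AtomicInteger;
import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

// Illustrative round-robin partitioner: count(messages) % number of partitions.
public class RoundRobinPartitioner implements Partitioner {
    private final AtomicInteger counter = new AtomicInteger(0);

    // 0.8.x instantiates partitioners reflectively and passes the producer config.
    public RoundRobinPartitioner(VerifiableProperties props) {
    }

    public int partition(Object key, int numPartitions) {
        // Ignore the key entirely and deal messages out in order;
        // the mask keeps the counter non-negative after int overflow.
        return (counter.getAndIncrement() & Integer.MAX_VALUE) % numPartitions;
    }
}

It would be enabled with partitioner.class in the producer config. One caveat:
each producer instance keeps its own counter, so the distribution is uniform
per producer rather than globally across all producers.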
Kafka Version: 0.8.x
1) Ability to define which messages get dropped (least recent instead of
most recent in the queue)
2) Try an unbounded queue to find out the upper limit without dropping any
messages for the application (use case: stress test)
3) Priority Blocking Queue (meaning a single Producer can
Hi everyone,
The maintainers of logstash are considering adding my Kafka plugin
(https://github.com/joekiller/logstash-kafka) to the core of logstash. They
are asking at the following link for some +1s from users. Please feel free to
chime in. https://groups.google.com/forum/m/#!topic/logstash
Weide, 0.8.1.1 does not support offset storage in Kafka. The brokers
do support offset commit requests/fetches but simply forward them to
ZooKeeper - you can issue the offset commit and fetch requests to any
broker. Kafka-backed consumer offsets are currently in trunk and will
be released in 0.8.2.
Tha
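For anyone managing offsets in ZooKeeper manually on 0.8.1.1, the high-level
consumer's layout is /consumers/<group>/offsets/<topic>/<partition>. A rough
sketch of reading and rewriting one offset directly (the group, topic, and
offset values are placeholders):

import org.apache.zookeeper.ZooKeeper;

public class ZkOffsetSketch {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> { });
        String path = "/consumers/my-group/offsets/my-topic/0";
        byte[] data = zk.getData(path, false, null);  // node must already exist
        System.out.println("current offset: " + new String(data));
        zk.setData(path, "42".getBytes(), -1);        // -1 ignores the node version
        zk.close();
    }
}

The node is created by the high-level consumer on its first commit, and the
consumers in that group should be stopped while you rewrite it by hand.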
Hi
It seems to me that 0.8.1.1 doesn't have the ConsumerMetadata API. So which
broker should I choose in order to commit and fetch offset information?
Shall I use ZooKeeper to manage offsets manually instead?
Thanks,
Weide
On Sun, Aug 3, 2014 at 4:34 PM, Weide Zhang wrote:
> Hi,
>
>
Hi,
It depends on your use-case.
https://kafka.apache.org/documentation.html#uses
The log retention (size/time) policy is sufficient for normal
messaging-system-like use cases.
Refer Kafka documentation for more details.
Manikumar
On Mon, Aug 4, 2014 at 12:14 PM, anand jain wrote:
> Thanks Mani,
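To make the size/time policies concrete: retention is configured per broker
in server.properties (and can be overridden per topic). The values below are
only examples, not recommendations from the thread:

# delete log segments older than 7 days ...
log.retention.hours=168
# ... or once a partition's log exceeds ~1 GB, whichever comes first
log.retention.bytes=1073741824
# size at which a new log segment file is rolled
log.segment.bytes=536870912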
Hi,
In your case the exceptions thrown from the database operation may not stop
the consumer automatically, hence you may need to catch the exception and
explicitly stop the consumer if you want it to behave that way.
With iter.hasNext() the code becomes
while (iter.hasNext()) {
    message = iter.next().message();
    // catch any database exception here and stop the consumer explicitly
}
Thanks a lot Guozhang and Daniel.
Weide
On Mon, Aug 4, 2014 at 8:27 AM, Guozhang Wang wrote:
> Weide,
>
> Like Daniel said, the rebalance logic is deterministic as round robin, so
> if you have a total number of partitions as n, and each one (master or
> slave) machine also has n threads, then
Weide,
Like Daniel said, the rebalance logic is deterministic as round robin, so
if you have a total number of partitions as n, and each one (master or
slave) machine also has n threads, then all partitions will go to master.
When master fails and restarts, the partitions will automatically go bac
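A toy illustration of that determinism (this is not Kafka's actual rebalance
code, just the sorted dealing it implies): consumer thread ids are sorted and
partitions are handed out in order, so when the master's n threads sort first,
they receive all n partitions.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RebalanceSketch {
    public static void main(String[] args) {
        int n = 4; // partitions 0..3
        List<String> threadIds = new ArrayList<String>();
        for (int i = 0; i < n; i++) threadIds.add("master-" + i);
        for (int i = 0; i < n; i++) threadIds.add("slave-" + i);
        Collections.sort(threadIds); // deterministic order; master-* sorts first here
        for (int p = 0; p < n; p++)
            System.out.println("partition " + p + " -> " + threadIds.get(p));
    }
}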
Hi Daniel
I count once when producing and once when consuming. The timestamp is
calculated once before producing and is attached to the message, so the
consumer will use the same timestamp to count.
Thanks
-Original Message-
From: Daniel Compton [mailto:d...@danielcompton.net]
Se
Hi Guy
In your reconciliation, where was the timestamp coming from? Is it possible
that messages were delivered several times but your calculations only counted
each unique event?
Daniel.
> On 4/08/2014, at 5:35 pm, Guy Doulberg wrote:
>
> Hi
>
> What do you mean by producer ACK value?
>
>
Also, is there any need to check an acknowledgement variable? Any
exception while dealing with the database would make the consumer program
stop, and hence *consumer.commit()* wouldn't have been called, right?
The above question is for a single topic. Now, let's assume there are 5
topics. Fir
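As a sketch of the pattern being discussed (reusing the consumer and iterator
names from the wiki-example sketch earlier in the thread, assuming
auto.commit.enable=false and a hypothetical writeToDB method): commit only
after a successful write, and shut down on failure so the uncommitted message
is redelivered after a restart.

try {
    while (it.hasNext()) {
        byte[] message = it.next().message();
        writeToDB(message);        // hypothetical DB call from the thread
        consumer.commitOffsets();  // 0.8 commit API on ConsumerConnector
    }
} catch (Exception e) {
    // the offset for the failed message was never committed, so it will
    // be consumed again when the program restarts
    consumer.shutdown();
}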
Thanks Guozhang!!
Below is the code for iterating over log messages:
.
.
for (final KafkaStream stream : streams) {
    ConsumerIterator consumerIte = stream.iterator();