Thanks for your support.
I demonstrated the issue with the console producer and consumer for
simplicity, but I have just found it in my application as well.
I will update this thread later with more tests.
Regards.
2014-03-06 18:56 GMT+01:00 Neha Narkhede :
> I've seen this behavior whe
Hi,
I have the following problem:
My Kafka consumer is consuming messages, but the processing of the message
might fail. I do not want to
retry until success, but instead want to quickly consume the next message.
However, at a later time I might still want to reprocess the failed
messages.
So I tho
Almost right: offsets are unique, immutable identifiers for a message within a
topic-partition. Each partition has its own sequence of offsets, but a (topic,
partition, offset) triple uniquely and persistently identifies a particular
message.
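
To make that triple concrete, here is a minimal sketch of a consumer printing the (topic, partition, offset) coordinates of each message it reads, assuming the 0.8-era high-level Java consumer; the ZooKeeper address, group id and topic name are placeholder values.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class OffsetTripleExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder ZooKeeper address
            props.put("group.id", "offset-demo");             // hypothetical consumer group
            ConsumerConnector connector =
                kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            Map<String, Integer> topicCount = new HashMap<String, Integer>();
            topicCount.put("my-topic", 1);                    // hypothetical topic, one stream
            KafkaStream<byte[], byte[]> stream =
                connector.createMessageStreams(topicCount).get("my-topic").get(0);

            // Each message carries its coordinates; the (topic, partition, offset)
            // triple identifies it for as long as the broker retains the segment.
            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> m = it.next();
                System.out.printf("topic=%s partition=%d offset=%d%n",
                                  m.topic(), m.partition(), m.offset());
            }
        }
    }
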
For log retention you have essentially two options:
On 07.03.14 at 11:43, "Martin Kleppmann" wrote:
>Almost right: offsets are unique, immutable identifiers for a message
>within a topic-partition. Each partition has its own sequence of offsets,
>but a (topic, partition, offset) triple uniquely and persistently
>identifies a particular message
Hello everyone,
at Quantifind, we are big users of Kafka and we like it a lot!
In a few use cases, we had to figure out if a queue was growing and how its
consumers were behaving. There are a few command-line tools to try to
figure out what's going on, but it's not always easy to debug and to see
This is really useful! I added it to the ecosystem page:
https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem
-Jay
On Fri, Mar 7, 2014 at 10:49 AM, Pierre Andrews wrote:
> Hello everyone,
>
> at Quantifind, we are big users of Kafka and we like it a lot!
> In a few use cases, we had to f
Great! Thanks!
On Fri, Mar 7, 2014 at 3:59 PM, Jay Kreps wrote:
> This is really useful! I added it to the ecosystem page:
> https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem
>
> -Jay
>
>
> On Fri, Mar 7, 2014 at 10:49 AM, Pierre Andrews wrote:
>
> > Hello everyone,
> >
> > at Quant
We are planning to use Apache Kafka to replace Apache Flume, mostly as a
log transport layer. Please see the attached image, which shows a use case
(and deployment architecture) similar to LinkedIn's (according to
http://sites.computer.org/debull/A12june/pipeline.pdf ). I have the
following questions:
Awesome!!! ;-)
Claude
On Fri, Mar 7, 2014 at 4:03 PM, Pierre Andrews wrote:
> Great! Thanks!
>
>
> On Fri, Mar 7, 2014 at 3:59 PM, Jay Kreps wrote:
>
> > This is really useful! I added it to the ecosystem page:
> > https://cwiki.apache.org/confluence/display/KAFKA/Ecosystem
> >
> > -Jay
> >
>
Claude, we should join forces ;)
On Fri, Mar 7, 2014 at 4:45 PM, Claude Mamo wrote:
> Awesome!!! ;-)
>
> Claude
>
>
> On Fri, Mar 7, 2014 at 4:03 PM, Pierre Andrews wrote:
>
> > Great! Thanks!
> >
> >
> > On Fri, Mar 7, 2014 at 3:59 PM, Jay Kreps wrote:
> >
> > > This is really useful! I add
Agreed, sent you an email.
Claude
On Fri, Mar 7, 2014 at 4:55 PM, Pierre Andrews wrote:
> Claude, we should join forces ;)
>
>
> On Fri, Mar 7, 2014 at 4:45 PM, Claude Mamo wrote:
>
> > Awesome!!! ;-)
> >
> > Claude
> >
> >
> > On Fri, Mar 7, 2014 at 4:03 PM, Pierre Andrews wrote:
> >
> > >
Very nice
> On Mar 7, 2014, at 11:55, Pierre Andrews wrote:
>
> Claude, we should join forces ;)
>
>
>> On Fri, Mar 7, 2014 at 4:45 PM, Claude Mamo wrote:
>>
>> Awesome!!! ;-)
>>
>> Claude
>>
>>
>> On Fri, Mar 7, 2014 at 4:03 PM, Pierre Andrews wrote:
>>
>>> Great! Thanks!
>>>
>>>
>
Hello Bhavesh,
1) If auto.create.topics.enable is turned on and the consumer is subscribing
to a wildcard topic, then producers can send to new topics on the fly, and
those topics will then be picked up by the consumers (see the sketch after
this list).
2) For now we do not have a priority mechanism, but we do have some initial
plans on quotas
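
To illustrate point 1), a minimal sketch of a wildcard subscription with the 0.8-era high-level Java consumer; the ZooKeeper address, group id and the "logs-.*" pattern are made-up values, and note that auto.create.topics.enable is a broker-side setting, not a consumer property.

    import java.util.List;
    import java.util.Properties;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.consumer.Whitelist;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class WildcardConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder ZooKeeper address
            props.put("group.id", "wildcard-demo");           // hypothetical consumer group
            ConsumerConnector connector =
                kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // Subscribe to every topic matching the regex; topics that producers
            // create later (auto.create.topics.enable=true on the brokers) are
            // picked up when the consumer refreshes its topic list.
            List<KafkaStream<byte[], byte[]>> streams =
                connector.createMessageStreamsByFilter(new Whitelist("logs-.*"), 1);

            ConsumerIterator<byte[], byte[]> it = streams.get(0).iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> m = it.next();
                System.out.printf("message from %s-%d at offset %d%n",
                                  m.topic(), m.partition(), m.offset());
            }
        }
    }
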
In addition to what Guozhang said -
1) Since you are looking into Camus, this is probably a question for the
Camus mailing list. I believe it does automatically detect new topics.
2) There is no priority and we intend to solve the traffic spike problems
through quotas. But usually in most cases, i
On 7 Mar 2014, at 14:11, "Maier, Dr. Andreas" wrote:
>> In your case, it sounds like time-based retention with a fairly long
>> retention period is the way to go. You could potentially store the
>> offsets of messages to retry in a separate Kafka topic.
>
> I was also thinking about doing that. H
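
One rough sketch of what "store the offsets of messages to retry in a separate Kafka topic" could look like with the 0.8 Java producer; the "failed-offsets" topic name, broker list and the comma-separated payload format are arbitrary choices for illustration, not anything Kafka prescribes.

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class RetryOffsetRecorder {
        private final Producer<String, String> producer;

        public RetryOffsetRecorder(String brokerList) {
            Properties props = new Properties();
            props.put("metadata.broker.list", brokerList); // e.g. "broker1:9092,broker2:9092"
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            this.producer = new Producer<String, String>(new ProducerConfig(props));
        }

        // Record the coordinates of a message whose processing failed, so a
        // separate job can re-fetch and reprocess it later.
        public void recordFailure(String topic, int partition, long offset) {
            String payload = topic + "," + partition + "," + offset;
            producer.send(new KeyedMessage<String, String>("failed-offsets", payload));
        }

        public void close() {
            producer.close();
        }
    }
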
How can I prevent these errors from occurring?
When I created 9 partitions on 3 instances and 2 replication-factor, I
didn't have any error. But when I created 36 partitions on 12 instances, I
got these errors.
[2014-03-07 18:19:28,208] ERROR Controller 9 epoch 12 initiated state change
of replica 9 for
I've been trying to write a test consumer in Java for a new use of our
Kafka cluster (currently used solely with Storm), however this use needs
to always start from the earliest offset in the topic. From reading
around it looked like setting "autooffset.reset" = "smallest" would do
this, however I
From reading around, it looked like setting "autooffset.reset" = "smallest"
would do this; however, I'm not actually seeing that behavior.
The reason is that a consumer actually consults this config only if it
doesn't find a previous offset stored for its group in ZooKeeper. So, it
will respect th
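
A minimal sketch of that behavior with the 0.8-era high-level Java consumer (where the property is spelled auto.offset.reset; 0.7 used autooffset.reset): the reset policy only kicks in for a group id with no offset stored in ZooKeeper yet, so the sketch fakes a fresh group with a timestamp suffix; the ZooKeeper address and topic name are placeholders.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.MessageAndMetadata;

    public class FromBeginningConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181");   // placeholder ZooKeeper address
            // A group id with no offsets in ZooKeeper yet: only then does the
            // reset policy apply. Reusing an old group id means the stored
            // offset wins and "smallest" is silently ignored.
            props.put("group.id", "fresh-test-group-" + System.currentTimeMillis());
            props.put("auto.offset.reset", "smallest");
            ConsumerConnector connector =
                kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            Map<String, Integer> topicCount = new HashMap<String, Integer>();
            topicCount.put("my-topic", 1);                      // hypothetical topic
            KafkaStream<byte[], byte[]> stream =
                connector.createMessageStreams(topicCount).get("my-topic").get(0);

            ConsumerIterator<byte[], byte[]> it = stream.iterator();
            while (it.hasNext()) {
                MessageAndMetadata<byte[], byte[]> m = it.next();
                System.out.printf("offset %d: %d bytes%n", m.offset(), m.message().length);
            }
        }
    }
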
Starting from a fresh and working deployment, what admin commands or steps
lead you to these errors? This error basically points to an unexpected
state change, which could be a bug. I'm looking for steps to be able to
reproduce the bug.
Thanks,
Neha
On Fri, Mar 7, 2014 at 1:59 PM, Bae, Jae Hyeon
I started from a fresh deployment, but with a working version synced from
the latest trunk, not a release version.
I just executed kafka.admin.TopicCommand with the --create option.
On Fri, Mar 7, 2014 at 5:56 PM, Neha Narkhede wrote:
> Starting from a fresh and working deployment, what admin command
Hi - am I right that for this tool to be effective, consumers must be
using the high-level consumer or otherwise keeping their offsets in
ZooKeeper? Is there any way to track performance without that?
On Fri, Mar 7, 2014 at 3:08 PM, Steve Morin wrote:
> Very nice
>
> > On Mar 7, 2014, at 11
Great work!
In addition to Dan's question, does it work with the Storm kafka-spout,
which uses a separate ZK path?
Sent from my Nexus 4.
On Mar 7, 2014 7:18 PM, "Dan Hoffman" wrote:
> Hi - am I right in that for this tool to be effective, consumers must be
> using the high level consumer or otherwis
hi,
Kafka web console will be included in the next release.
On Mar 8, 2014 2:49 AM, "Pierre Andrews" wrote:
> Hello everyone,
>
> at Quantifind, we are big users of Kafka and we like it a lot!
> In a few use cases, we had to figure out if a queue was growing and how its
> consumers were behavin
Hi,
Does this work with Kafka 0.7.x or does Kafka 0.7.x not expose the info
needed for computing the lag?
Thanks,
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Fri, Mar 7, 2014 at 1:49 PM, Pierre Andrews wrote:
> Hello
Hi,
I need to solve the problem below. I am using Druid realtime as a consumer
for Druid, and they see no issue on their side with what is going on. Why
would Kafka throw such an error? When I produce and consume from the Python
lib called brod, I have no issues. Is the below a Kafka issue or a consumer issu