This is proposed as part of the Client Rewrite project. Wiki is here -
https://cwiki.apache.org/confluence/display/KAFKA/Client+Rewrite#ClientRewrite-ProposedProducerAPI.
Please feel free to leave feedback on the new proposal.
Thanks,
Neha
On Tue, Sep 24, 2013 at 11:14 PM, Aniket Bhatnagar wrote:
I am integrating Kafka in my in-stream processing and have begun using the
provided Java API (we use Scala in our project). I am wondering if anyone
else feels the need for a non-blocking Kafka client in Scala? Producer can
simply return Future[SendResponse] or Future[List[SendResponse]] and
consumer
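A future-returning producer like the one sketched above can be approximated by wrapping the blocking send call in an executor-backed future. A minimal Java sketch, where `blockingSend` is a hypothetical stand-in for the producer's synchronous send, not a Kafka API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncSendSketch {
    // Hypothetical stand-in for the producer's blocking send(...) call.
    static String blockingSend(String message) {
        return "acked:" + message;
    }

    // Wrap the blocking call so callers get a future instead of blocking.
    static CompletableFuture<String> sendAsync(String message, ExecutorService pool) {
        return CompletableFuture.supplyAsync(() -> blockingSend(message), pool);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletableFuture<String> f = sendAsync("hello", pool);
        System.out.println(f.get()); // prints "acked:hello"
        pool.shutdown();
    }
}
```

The same shape maps directly onto Scala's `Future { ... }` with an implicit `ExecutionContext`.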
Are you using the Java producer client?
Thanks,
Jun
On Tue, Sep 24, 2013 at 5:33 PM, Mark wrote:
> Our 0.7.2 Kafka cluster keeps crashing with:
>
> 2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in
> acceptor
> java.io.IOException: Too many open files
>
> The obvious fix is to bump up the number of open files but I'm wondering if
> there is a leak on the Kafka side and/or our application.
It is assumed that clocks are in sync, we use ntp and it mostly works.
-Jay
On Tue, Sep 24, 2013 at 5:12 PM, Tom Amon wrote:
> I've read in the docs and papers that LinkedIn has an auditing system that
> correlates message counts from tiers in their system using a time window of
> 10 minutes.
Our 0.7.2 Kafka cluster keeps crashing with:
2013-09-24 17:21:47,513 - [kafka-acceptor:Acceptor@153] - Error in acceptor
java.io.IOException: Too many open files
The obvious fix is to bump up the number of open files but I'm wondering if
there is a leak on the Kafka side and/or our application.
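One way to tell a descriptor leak from a merely too-low limit is to watch the process's open-descriptor count over time: a leak climbs steadily, a low limit plateaus near the cap. A minimal Java sketch using the JDK's `UnixOperatingSystemMXBean` (returns -1 on non-Unix JVMs):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdCountSketch {
    // Open file descriptor count for this JVM process, or -1 where unsupported.
    static long openFds() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            return ((com.sun.management.UnixOperatingSystemMXBean) os)
                    .getOpenFileDescriptorCount();
        }
        return -1; // non-Unix JVM
    }

    public static void main(String[] args) {
        System.out.println("open fds: " + openFds());
    }
}
```

Sampling this (or `lsof -p <pid>`) before and after load makes a leak visible even with a generous ulimit.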
I've read in the docs and papers that LinkedIn has an auditing system that
correlates message counts from tiers in their system using a time window of
10 minutes. The time stamp on the message is used to determine which window
the message falls into.
My question is how do you account for clock drift?
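The 10-minute bucketing described above amounts to truncating the message timestamp to its window start; counts keyed by that window start can then be compared across tiers. A minimal sketch of the windowing only (illustrative, not LinkedIn's actual audit code):

```java
public class AuditWindowSketch {
    static final long WINDOW_MS = 10 * 60 * 1000L; // 10-minute buckets

    // Start of the window a message timestamp falls into.
    static long windowStart(long timestampMs) {
        return (timestampMs / WINDOW_MS) * WINDOW_MS;
    }

    public static void main(String[] args) {
        // Two timestamps 1 ms apart can land in different windows;
        // clock skew near a boundary shifts a message by one whole window.
        System.out.println(windowStart(599_999L)); // last ms of window 0
        System.out.println(windowStart(600_000L)); // first ms of window 1
    }
}
```

This also makes the drift concern concrete: skew smaller than the 10-minute window only misplaces messages that land within the skew of a boundary.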
filed: https://issues.apache.org/jira/browse/KAFKA-1066
On Tue, Sep 24, 2013 at 12:04 PM, Neha Narkhede wrote:
> This makes sense. Please file a JIRA where we can discuss a patch.
>
> Thanks,
> Neha
>
>
> On Tue, Sep 24, 2013 at 9:00 AM, Jason Rosenberg wrote:
>
> > I'm wondering if a simple change could be to not log full stack traces for
> > simple things like "Connection refused", etc.
It seems to work for me. Here is what I added to my server.config:
kafka.csv.metrics.reporter.enabled=true
kafka.metrics.reporters=kafka.metrics.KafkaCSVMetricsReporter
It does spit out those warnings that I mentioned, although the reason for
that is in fact because we attempt to "verify
Yes, that is correct. We added this and *MinFetchRate recently.
On Tue, Sep 24, 2013 at 9:10 AM, Rajasekar Elango wrote:
> Thanks Neha, Looks like this mbean was added recently. The version we are
> running is from early June and it doesn't have this Mbean.
>
> Thanks,
> Raja.
>
>
> On Mon, Sep 23, 2013 at 9:15 PM, Neha Narkhede wrote:
Thanks Neha, Looks like this mbean was added recently. The version we are
running is from early June and it doesn't have this Mbean.
Thanks,
Raja.
On Mon, Sep 23, 2013 at 9:15 PM, Neha Narkhede wrote:
> On the consumer side, look for
> "kafka.consumer":name="([-.\w]+)-MaxLag",type="ConsumerFetcherManager"
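The pattern quoted above can be used to pull the client id out of a matching MBean name when scanning metrics programmatically. A small sketch (the helper name is hypothetical):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MaxLagMBeanMatch {
    // The name pattern from the consumer metrics: "<clientId>-MaxLag"
    static final Pattern MAX_LAG = Pattern.compile("([-.\\w]+)-MaxLag");

    // Returns the client id for a MaxLag metric name, or null if no match.
    static String clientIdOf(String mbeanName) {
        Matcher m = MAX_LAG.matcher(mbeanName);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(clientIdOf("my-consumer-MaxLag")); // prints "my-consumer"
    }
}
```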
This makes sense. Please file a JIRA where we can discuss a patch.
Thanks,
Neha
On Tue, Sep 24, 2013 at 9:00 AM, Jason Rosenberg wrote:
> I'm wondering if a simple change could be to not log full stack traces for
> simple things like "Connection refused", etc. Seems it would be fine to
> just
I'm wondering if a simple change could be to not log full stack traces for
simple things like "Connection refused", etc. Seems it would be fine to
just log the exception message in such cases.
Also, the log levels could be tuned, such that things logged as ERROR
indicate that all possible retries have been exhausted.
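The suggestion above, message-only logging for routine connectivity failures and full stack traces otherwise, can be sketched as follows (illustrative only, not Kafka's actual logging code):

```java
import java.net.ConnectException;

public class ConciseLogSketch {
    // Routine connectivity errors get a one-line message; anything
    // unexpected keeps its full stack trace.
    static String format(Throwable t) {
        if (t instanceof ConnectException) {
            return t.getClass().getSimpleName() + ": " + t.getMessage();
        }
        // For unexpected errors, a real logger would emit the full
        // stack trace here; elided in this sketch.
        return "full stack trace for " + t.getClass().getName();
    }

    public static void main(String[] args) {
        System.out.println(format(new ConnectException("Connection refused")));
    }
}
```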
Looking at the archive more closely, I understand the confusion now. It
seems that the question got sent twice by me. My apologies. Didn't intend
to spam the mailbox.
Thanks,
Aniket
On 24 September 2013 19:18, Aniket Bhatnagar wrote:
> Agreed that it has been partially discussed in the thread "
Agreed that it has been partially discussed in the thread "[jira] [Updated]
(KAFKA-1046) Added support for Scala 2.10 builds while maintaining
compatibility with 2.8.x" with the discussion being that it's not a good
idea to apply the same path to 0.8-beta1-candidate branch. However, I am
more looki
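For the cross-build question, sbt's standard mechanism is `crossScalaVersions`. An illustrative build.sbt fragment with assumed versions, not Kafka's actual build definition:

```scala
// build.sbt fragment (illustrative only)
scalaVersion := "2.10.2"
crossScalaVersions := Seq("2.8.2", "2.9.2", "2.10.2")
// `sbt +package` then builds the artifact once per listed Scala version.
```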
This was discussed before -
http://mail-archives.apache.org/mod_mbox/kafka-users/201309.mbox/browser
Thanks,
Neha
On Sep 23, 2013 11:43 PM, "Aniket Bhatnagar"
wrote:
> We are looking to adopt Kafka for our in-stream processing use case. One of
> the issues seems to be that we use Scala 2.10.2 how
I understand that, but it seems the property "kafka.metrics.reporters" does
not work, even when using the default CSV reporter.
On Tue, Sep 24, 2013 at 1:46 PM, Joel Koshy wrote:
> These warnings are fine - we should think about how to get rid of them. I
> think the issue is that there's a single bag of p