Could you make sure that you are using the Scala 2.9.2 jar?
Thanks,
Jun
On Tue, Aug 26, 2014 at 9:28 PM, Parin Jogani
wrote:
> Can anyone help?
>
>
>
>
> On Sat, Aug 23, 2014 at 9:38 PM, Parin Jogani
> wrote:
>
> > Kafka:
> >
> >> <dependency>
> >>   <groupId>org.apache.kafka</groupId>
> >>   <artifactId>kafka_2.9.2</artifactId>
> >>   <version>0.8.1.1</version>
> >>   <scope>test</scope>
> >> </dependency>
> >
Can anyone help?
On Sat, Aug 23, 2014 at 9:38 PM, Parin Jogani
wrote:
> Kafka:
>
>> <dependency>
>>   <groupId>org.apache.kafka</groupId>
>>   <artifactId>kafka_2.9.2</artifactId>
>>   <version>0.8.1.1</version>
>>   <scope>test</scope>
>> </dependency>
>
>
> My tests are in Java (JUnit), so I don't know how Scala would make a
> difference.
>
> Hope this helps!
>
> -Parin
>
>
>
> On Sat, Aug 23, 2014 at 7:54 PM,
Hi Guozhang, thanks for kicking this off. I made some comments in the Wiki
(and we can continue the discussion there), but I think this type of
collaborative mailing list discussion and Confluence writeup is a great way
for different discussions about the same thing in different organizations
to coalesce.
Also, Jonathan, to answer your question, the new producer on trunk is
running in prod for some use cases at LinkedIn and can be used with
any 0.8.x version.
-Jay
On Tue, Aug 26, 2014 at 12:38 PM, Jonathan Weeks
wrote:
> I am interested in this very topic as well. Also, can the trunk version of
> the producer be used with an existing 0.8.1.1 broker installation, or does
> one need to wait for 0.8.2 (at least)?
Hello all,
We want to kick off some discussions about error handling and logging
conventions. With a number of great patch contributions to Kafka recently,
it is a good time for us to sit down and think a little bit more about the
coding style guidelines we have (http://kafka.apache.org/coding-guide
When you create the callback, you can pass in the original message.
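A minimal sketch of that pattern (the `Callback` interface below is a local stand-in for `org.apache.kafka.clients.producer.Callback` so the sketch compiles and runs without a broker; class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ResendSketch {
    // Local stand-in for org.apache.kafka.clients.producer.Callback; the real
    // interface has the same onCompletion(metadata, exception) shape.
    interface Callback {
        void onCompletion(Object metadata, Exception exception);
    }

    // Lines whose send failed, kept with their full payload so they can be resent.
    static final List<String> resendQueue = new ArrayList<>();

    // Build one callback per send, closing over the original log line.
    static Callback callbackFor(final String logLine) {
        return new Callback() {
            public void onCompletion(Object metadata, Exception exception) {
                if (exception != null) {
                    resendQueue.add(logLine); // failure: remember the original message
                }
            }
        };
    }

    public static void main(String[] args) {
        Callback cb = callbackFor("2014-08-26 ERROR something broke");
        cb.onCompletion(null, new RuntimeException("broker unavailable")); // simulate a failed send
        System.out.println(resendQueue.size()); // prints 1
    }
}
```

With the real producer, the same per-message callback is passed as the second argument to `producer.send(record, callback)`.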
Thanks,
Jun
On Tue, Aug 26, 2014 at 12:35 PM, Ryan Persaud
wrote:
> Hello,
>
> I'm looking to insert log lines from log files into Kafka, but I'm
> concerned with handling asynchronous send() failures. Specifically, if
> some of the log lines fail to send, I want to be notified of the failure
> so that I can attempt to resend them.
If there are no "dirty" logs then the cleaner does not log anything.
You can try changing the dirty ratio config
(min.cleanable.dirty.ratio) to something smaller than the default
(which is 0.5).
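For example, as a per-topic override (a sketch using the 0.8-era topic tool; the topic name and ZooKeeper address are placeholders):

```shell
# Lower the cleaner threshold for one topic so compaction triggers sooner.
# (min.cleanable.dirty.ratio as a topic-level override; value illustrative)
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-compacted-topic \
  --config min.cleanable.dirty.ratio=0.1
```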
Joel
On Tue, Aug 26, 2014 at 03:56:20PM -0400, Philippe Laflamme wrote:
> Yes, and in order to "force" it to compact regardless of the volume, we've
> set the "segment.ms" configuration key on the topic.
Yes, and in order to "force" it to compact regardless of the volume, we've
set the "segment.ms" configuration key on the topic. According to the
docs[1], that should force a compaction at a certain time interval.
We're seeing the segment rolling, but not the compaction.
[1]http://kafka.apache.org
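For reference, that override would look something like this (a sketch; topic name and ZooKeeper address are placeholders):

```shell
# Roll segments at least hourly so the cleaner has closed segments to work on.
# (segment.ms as a topic-level override; value illustrative)
bin/kafka-topics.sh --zookeeper localhost:2181 --alter \
  --topic my-compacted-topic \
  --config segment.ms=3600000
```

Note that the cleaner never compacts the active segment, so rolling is a precondition for compaction but does not by itself trigger it.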
TLDR: I use one Callback per job I send to Kafka and include that sort
of information by reference in the Callback instance.
Our system is currently moving data from beanstalkd to Kafka due to
historical reasons so we use the callback to either delete or release
the message depending on success. T
I am interested in this very topic as well. Also, can the trunk version of the
producer be used with an existing 0.8.1.1 broker installation, or does one need
to wait for 0.8.2 (at least)?
Thanks,
-Jonathan
On Aug 26, 2014, at 12:35 PM, Ryan Persaud wrote:
> Hello,
>
> I'm looking to insert
Hello,
I'm looking to insert log lines from log files into Kafka, but I'm concerned
with handling asynchronous send() failures. Specifically, if some of the log
lines fail to send, I want to be notified of the failure so that I can attempt
to resend them.
Based on previous threads on the mail
The log cleaner will only wake up and start cleaning when there are
logs that are "dirty" enough to be cleaned. So if a topic-partition does
not get enough traffic to become dirty, the log cleaner will not kick in
on that partition again.
Guozhang
On Tue, Aug 26, 2014 at 9:02 AM, Philipp
I am running on 0.8.1.1 and I thought that the partition reassignment tool
could do this job. I just was not sure if this is the best way to do it.
I will try this out in a staging env first and then perform the same in prod.
Thanks,
marcin
On Mon, Aug 25, 2014 at 7:23 PM, Joe Stein wrote:
> Marcin
Exactly what I'm looking for. Thanks! :)
On 26.08.14 19:08, Gwen Shapira wrote:
I hope this helps:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
"if you have more partitions than you have threads, some threads will
receive data from multiple partitions"
On Tue, Aug
I hope this helps:
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
"if you have more partitions than you have threads, some threads will
receive data from multiple partitions"
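That behavior can be modeled with a small sketch of range-style assignment (illustrative only, not Kafka's actual consumer code): every partition gets exactly one owning thread, so all messages are still read even when partitions outnumber threads.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class AssignmentSketch {
    // Model of range-style assignment: partitions 0..numPartitions-1 are split
    // into contiguous blocks, one block per thread; the first
    // (numPartitions % numThreads) threads each take one extra partition.
    static Map<Integer, List<Integer>> assign(int numPartitions, int numThreads) {
        Map<Integer, List<Integer>> owners = new TreeMap<>();
        int base = numPartitions / numThreads;   // minimum partitions per thread
        int extra = numPartitions % numThreads;  // leftovers go to the first threads
        int next = 0;
        for (int t = 0; t < numThreads; t++) {
            int count = base + (t < extra ? 1 : 0);
            List<Integer> mine = new ArrayList<>();
            for (int i = 0; i < count; i++) {
                mine.add(next++);
            }
            owners.put(t, mine);
        }
        return owners;
    }

    public static void main(String[] args) {
        // 5 partitions, 2 threads: thread 0 owns three partitions, thread 1 owns
        // two, and no partition is left unowned.
        System.out.println(assign(5, 2)); // prints {0=[0, 1, 2], 1=[3, 4]}
    }
}
```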
On Tue, Aug 26, 2014 at 10:00 AM, Vetle Leinonen-Roeim wrote:
> Hi,
>
> As far as I can see, the (otherwise great and very helpful)
> documentation isn't explicit about this, but: given more partitions than
> consumers, will all messages still be read?
Hi,
As far as I can see, the (otherwise great and very helpful)
documentation isn't explicit about this, but: given more partitions than
consumers, will all messages still be read?
I've discussed this with some people, and there is some disagreement, so
a clear answer to this would be greatly appreciated.
Here's the thread dump:
https://gist.github.com/plaflamme/634411b162f56d8f48f6
There's a log-cleaner thread sleeping. Would there be any reason why it's
not writing to its log-cleaner.log file if it's still running?
We are not using compression (unless it's on by default?)
Thanks,
Philippe
On
Hello Philippe,
You can get a thread dump and check if the log cleaner thread is still
alive, or it is blocked.
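For example (a sketch; `BROKER_PID` is a placeholder for the broker's process id, and the grep pattern assumes the cleaner thread name contains "log-cleaner"):

```shell
# Dump all thread stacks from the broker JVM and show the cleaner thread's state.
jstack BROKER_PID | grep -i -A 10 "log-cleaner"
```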
Also, are you using some compression on the messages stored on the server?
Guozhang
On Tue, Aug 26, 2014 at 8:15 AM, Philippe Laflamme
wrote:
> Hi,
>
> We're using compaction on some of our topics.
Hi,
We're using compaction on some of our topics. The log cleaner output showed
that it kicked in when the broker was restarted. But now after several
months of uptime, the log cleaner output is empty. The compacted topics'
segment files don't seem to be cleaned up (compacted) anymore.
If there an