Thanks Jun.
I will try it.
On Sat, Apr 27, 2013 at 12:15 PM, Jun Rao wrote:
> It should work, but may not be well tested.
>
> Thanks,
>
> Jun
>
>
> On Fri, Apr 26, 2013 at 7:41 PM, Helin Xiang wrote:
>
> > Hi,
> >
> > We currently use Kafka 0.7.2.
> >
> > Is it OK to use different whitelists for different consumers in the same
> > consumer group?
That was it. Thanks for the prompt response.
-drew
From: Jun Rao [jun...@gmail.com]
Sent: Friday, April 26, 2013 9:00 AM
To: users@kafka.apache.org
Subject: Re: ProducerThread NPE
Is it possible that you didn't set the topic in ProducerData when calling
producer.send()?
It should work, but may not be well tested.
Thanks,
Jun
On Fri, Apr 26, 2013 at 7:41 PM, Helin Xiang wrote:
> Hi,
>
> We currently use Kafka 0.7.2.
>
> > Is it OK to use different whitelists for different consumers in the same
> > consumer group?
>
> Thanks
>
>
> --
> Best Regards,
>
> Helin Xiang
Hi,
We currently use Kafka 0.7.2.
Is it OK to use different whitelists for different consumers in the same
consumer group?
Thanks
--
Best Regards,
Helin Xiang
The only other thing being written to these disks is log4j (kafka.out), so
technically it is not dedicated to the data logs. The disks are 250GB SATA.
On Fri, Apr 26, 2013 at 6:35 PM, Neha Narkhede wrote:
>- Decreased num.partitions and log.flush.interval on the brokers from
>64/10k to 32/100 in order to lower the average flush time (we were
>previously always hitting the default flush interval since no
>partitions
Hmm, that is a pretty low value for the flush interval, leading to higher disk
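For reference, the settings being discussed are broker-side properties. A hypothetical server.properties fragment matching the 32/100 change described above might look like this (0.7-era property names, unverified; values taken from the message):

```properties
# Hypothetical Kafka 0.7 broker settings for the 32/100 change above
num.partitions=32
# Flush after this many messages; very low values increase disk activity
log.flush.interval=100
```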
Thanks Jun, your suggestion helped me quite a bit.
Since earlier this week I've been able to work out the issues (at least it
seems that way for now). My consumer is now processing messages at roughly
the rate they are being produced, with an acceptable amount of end-to-end
lag. Here is an overview
Thanks for the good answers.
Regards,
Libo
I don't know how Kafka's rollover algorithm is implemented, but this is
common behavior for other logging frameworks. You would need a separate
watcher/scheduled thread to rollover a log file, even if no events were
coming in. Logback (and probably log4j, by the same author) dispenses with
the watc
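The scheduled-rollover idea described above can be sketched in plain Java with a ScheduledExecutorService: a watcher task fires periodically and rolls the file when the interval has elapsed, whether or not any events arrived. This is a self-contained illustration, not Logback's or log4j's actual implementation; all names here are made up.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a watcher thread that rolls a log file on a schedule,
// independent of whether any log events are coming in.
public class RollWatcher {
    // Pure helper: is a roll due, given the last roll time and the interval?
    static boolean rollDue(long lastRollMs, long nowMs, long intervalMs) {
        return nowMs - lastRollMs >= intervalMs;
    }

    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        final long[] lastRoll = { System.currentTimeMillis() };
        // Check every 250 ms; roll when a (demo-sized) 1 s interval elapses.
        ses.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            if (rollDue(lastRoll[0], now, 1000)) {
                lastRoll[0] = now;
                System.out.println("rolling log file");
            }
        }, 0, 250, TimeUnit.MILLISECONDS);
        Thread.sleep(1200); // let the watcher fire at least once
        ses.shutdown();
    }
}
```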
In a nutshell: High Level uses Consumer Groups to handle the tracking of
message offset consumption. SimpleConsumer leaves it all up to you.
The 0.7.x quick start shows examples of both:
http://kafka.apache.org/quickstart.html
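The group semantics can be illustrated with a small, self-contained sketch (plain Java, no Kafka dependency; the round-robin assignment and all names are made up for illustration): each group tracks its own offsets, so two separate groups both see every message, while two members of one group split the partitions between them.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy model of consumer-group semantics (NOT the Kafka API):
// members of one group divide the partitions among themselves;
// independent groups each consume every message.
public class GroupDemo {
    static List<String> consume(List<List<String>> partitions,
                                int memberIndex, int groupSize) {
        List<String> seen = new ArrayList<>();
        // Round-robin partition assignment within the group.
        for (int p = memberIndex; p < partitions.size(); p += groupSize) {
            seen.addAll(partitions.get(p));
        }
        return seen;
    }

    public static void main(String[] args) {
        List<List<String>> partitions = Arrays.asList(
                Arrays.asList("m0", "m1"), Arrays.asList("m2", "m3"));

        // One group, two members: messages are split, not duplicated.
        System.out.println(consume(partitions, 0, 2)); // partition 0 only
        System.out.println(consume(partitions, 1, 2)); // partition 1 only

        // Two groups of one member each: both see every message.
        System.out.println(consume(partitions, 0, 1));
        System.out.println(consume(partitions, 0, 1));
    }
}
```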
On Fri, Apr 26, 2013 at 12:32 PM, Oleg Ruchovets wrote:
> By the way, what does "high level consumer" mean? Are there other types of
> consumers?
By the way, is there a reason why 'log.roll.hours' is not documented on the
apache configuration page: http://kafka.apache.org/configuration.html ?
It's possible to find this setting (and several other undocumented
settings) by looking at the source code. I'm just not sure why the
complete set o
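For anyone looking for it, the setting goes in the broker's server.properties. A hypothetical fragment (the property name is taken from the source tree as described above; the value is only an example):

```properties
# Hypothetical server.properties fragment: roll each log segment daily
log.roll.hours=24
```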
By the way, what does "high level consumer" mean? Are there other types of
consumers?
Thanks
Oleg.
On Fri, Apr 26, 2013 at 6:34 PM, Chris Curtin wrote:
> If using the high level consumer you'll need to run two different Groups to
> have both of your consumers receive every message. I'm assuming you want
> two different types of business logic to process each message? If so you
> need to treat them as separate groups.
I'd start on the Wiki:
https://cwiki.apache.org/confluence/display/KAFKA/Index
And the presentations:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+papers+and+presentations
Also the archives for this list are sometimes informative:
http://mail-archives.apache.org/mod_mbox/kafka-users/
Thank you Chris. This is exactly what I was trying to achieve.
What is the best source for learning Kafka? My question was very basic, and
I didn't find any examples or detailed documentation.
Is there any book or good documentation?
Thanks
Oleg.
On Fri, Apr 26, 2013 at 6:34 PM, Chris Curtin wrote:
If using the high level consumer you'll need to run two different Groups to
have both of your consumers receive every message. I'm assuming you want
two different types of business logic to process each message? If so you
need to treat them as separate groups.
On Fri, Apr 26, 2013 at 11:28 AM, Ju
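Concretely, the two consumers would start from two consumer properties files that differ only in the group name. A hypothetical sketch (Kafka 0.7 used `groupid` and `zk.connect` as the consumer config keys; the group names and ZooKeeper address here are made up):

```properties
# consumer-a.properties -- first group, receives every message
groupid=group-a
zk.connect=localhost:2181

# consumer-b.properties -- second group, also receives every message
groupid=group-b
zk.connect=localhost:2181
```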
Have you looked at #4 in http://kafka.apache.org/faq.html ?
Thanks,
Jun
On Fri, Apr 26, 2013 at 8:19 AM, Oleg Ruchovets wrote:
> Hi.
> I have a simple Kafka producer/consumer application. I have one producer
> and 2 consumers. The consumers have the same code; it is just executed in
> different threads.
Hi.
I have a simple Kafka producer/consumer application. I have one producer
and 2 consumers. The consumers have the same code; it is just executed in
different threads. For some reason, information produced by the producer is
consumed by only ONE CONSUMER. The second consumer didn't consume any
information. M
https://issues.apache.org/jira/browse/KAFKA-881
Thanks.
On Fri, Apr 26, 2013 at 7:40 AM, Jun Rao wrote:
> Yes, for a low-volume topic, time-based rolling can be imprecise. Could
> you file a jira and describe your suggestions there? Ideally, we should set
> firstAppendTime to the file creation time.
Is it possible that you didn't set topic in ProducerData when calling
producer.send()?
Thanks,
Jun
On Fri, Apr 26, 2013 at 7:04 AM, Drew Daugherty <
drew.daughe...@returnpath.com> wrote:
> Hi,
>
> We are using Kafka 0.7.2 and are seeing the following exceptions in the
> ProducerSendThread:
>
>
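For comparison, a send with the topic set explicitly might look like the following untested sketch (class names follow the Kafka 0.7 Java API as I recall them; it requires the 0.7 jars and a running broker, and the ZooKeeper address and topic name are assumptions):

```java
// Untested sketch against the Kafka 0.7 Java API; not verified to compile.
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.javaapi.producer.ProducerData;
import kafka.producer.ProducerConfig;

Properties props = new Properties();
props.put("zk.connect", "localhost:2181"); // assumed ZooKeeper address
props.put("serializer.class", "kafka.serializer.StringEncoder");

Producer<String, String> producer =
        new Producer<String, String>(new ProducerConfig(props));
// Passing the topic here is what avoids the NPE described above.
producer.send(new ProducerData<String, String>("my-topic", "hello"));
producer.close();
```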
Yes, see http://www.apache.org/foundation/mailinglists.html
Thanks,
Jun
On Fri, Apr 26, 2013 at 6:28 AM, Ke Ren wrote:
> Is there any way I can unsubscribe from this mailing list?
>
>
> On Fri, Apr 26, 2013 at 2:11 PM, Francis Dallaire <
> francis.dalla...@ubisoft.com> wrote:
>
> > The lists are managed by ezmlm, so you have to send an email to:
Yes, for a low-volume topic, time-based rolling can be imprecise. Could
you file a jira and describe your suggestions there? Ideally, we should set
firstAppendTime to the file creation time. However, it doesn't seem you can
get the creation time in Java.
Thanks,
Jun
On Thu, Apr 25, 2013 at 11
Hi,
We are using Kafka 0.7.2 and are seeing the following exceptions in the
ProducerSendThread:
2013-04-25 13:00:56,557 [ProducerSendThread--416074535] ERROR
kafka.producer.async.ProducerSendThread - Error in handling batch of 2 events
java.lang.NullPointerException
        at kafka.producer.a
Is there any way I can unsubscribe from this mailing list?
On Fri, Apr 26, 2013 at 2:11 PM, Francis Dallaire <
francis.dalla...@ubisoft.com> wrote:
> The lists are managed by ezmlm, so you have to send an email to:
>
> users-subscr...@kafka.apache.org
>
> You will be added automatically.
>
> Frank
The lists are managed by ezmlm, so you have to send an email to:
users-subscr...@kafka.apache.org
You will be added automatically.
Frank
Yes, securing the content of log messages at rest is important to us, which
favors message encryption.
Thanks for the responses.
Fergal.
On Tue, Apr 23, 2013 at 7:31 PM, Chris Curtin wrote:
> Also keep in mind that anything done at the transport (SSL for example)
> layer won't solve your 'at rest'