We are looking at using Kafka 0.8-beta1 and the high-level consumer.
The Kafka 0.7 consumer supported backoff.increment.ms to avoid repeatedly
polling a broker that has no new data. It appears that this property is no
longer supported in 0.8. What is the reason?
Instead there is fetch.wait.max.ms.
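A minimal sketch of how those settings are wired into the 0.8 high-level
consumer API; the ZooKeeper address, group id, and values below are
illustrative assumptions, not taken from this thread.

import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class FetchWaitConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // assumed ZooKeeper address
        props.put("group.id", "example-group");           // assumed consumer group
        // In 0.8 the broker does the waiting instead of the client backing off:
        // a fetch request is held until at least fetch.min.bytes of data is
        // available or fetch.wait.max.ms has elapsed.
        props.put("fetch.wait.max.ms", "1000");
        props.put("fetch.min.bytes", "1");

        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        // ... create message streams and consume as usual ...
        connector.shutdown();
    }
}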
Hello,
What is purgatory? I believe the following two properties relate to
consumer and producer respectively.
Could someone please explain the significance of these?
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
Thanks,
Priya
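For context, both are broker-side settings in server.properties; roughly, the
purgatory is the broker's pool of delayed fetch/produce requests, and these
intervals control how often, counted in requests, already-satisfied entries
are cleaned out of that pool. A commented sketch with the values from the
question kept as-is:

# server.properties
# Purge satisfied delayed fetch requests from the fetch purgatory every N fetch requests.
fetch.purgatory.purge.interval.requests=100
# Purge satisfied delayed produce requests from the producer purgatory every N produce requests.
producer.purgatory.purge.interval.requests=100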
Hello,
I have been playing with the Kafka producer and consumer. I have created
different topics at different times with different replica and partition
options.
After 7 days of non-use, the list-topics script no longer lists the unused
topic. However, I can still see the log and index files for such topic
partitions.
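For reference, the commands involved would look roughly like this. The flags
below assume the consolidated kafka-topics.sh tool from 0.8.1+; older 0.8
builds split this across separate create/list scripts, and localhost:2181
stands in for the real ZooKeeper connect string.

# Create a topic with explicit replica and partition options (0.8.1+ tooling assumed).
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 2 --partitions 3 --topic test-topic

# List and describe existing topics.
bin/kafka-topics.sh --list --zookeeper localhost:2181
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-topic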
> > s, they'll be in purgatory (delayed) until the max allowed time to
> > respond has been reached, unless it has enough messages to fill the
> > buffer before that. The request will not end up in the purgatory
> > if you're making a blocking request (max wait
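A sketch of the same knobs at the fetch-request level using the 0.8
SimpleConsumer API; the host, port, topic, partition, and numbers are
illustrative assumptions. A request that cannot be satisfied right away is
parked (delayed) on the broker for up to maxWait milliseconds, unless
minBytes of data shows up first.

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class FetchWaitSketch {
    public static void main(String[] args) {
        // Assumed broker host/port, topic, and partition, for illustration only.
        SimpleConsumer consumer =
            new SimpleConsumer("broker1", 9092, 30000, 64 * 1024, "wait-example");
        FetchRequest request = new FetchRequestBuilder()
            .clientId("wait-example")
            .addFetch("my-topic", 0, 0L, 100000) // topic, partition, offset, fetch size
            .maxWait(1000)   // the broker may delay its response up to 1000 ms ...
            .minBytes(4096)  // ... unless at least this many bytes arrive sooner
            .build();
        FetchResponse response = consumer.fetch(request);
        System.out.println("fetch returned, hasError=" + response.hasError());
        consumer.close();
    }
}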
I am trying to send Kafka metrics to a Ganglia server for display, using the
latest download from https://github.com/adambarthelson/kafka-ganglia.
Here's my kafka_metrics.json:
{
  "servers" : [ {
    "port" : "",
    "host" : "ecokaf1",
    "queries" : [ {
      "outputWriters" : [ {
        "@class"
=\"ReplicaManager\",name=\"*\"",
> "attr": [
>   "Count",
>   "OneMinuteRate",
>   "MeanRate",
>   "Value"
> ]
> Unless more recent versions of kafka get rid of the q
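For comparison, a fuller jmxtrans-style query of the same shape; the JMX
port, the Ganglia host/port, and the exact MBean pattern are assumptions
rather than values recovered from the original file.

{
  "servers" : [ {
    "host" : "ecokaf1",
    "port" : "9999",
    "queries" : [ {
      "obj" : "\"kafka.server\":type=\"ReplicaManager\",name=\"*\"",
      "attr" : [ "Count", "OneMinuteRate", "MeanRate", "Value" ],
      "outputWriters" : [ {
        "@class" : "com.googlecode.jmxtrans.model.output.GangliaWriter",
        "settings" : { "host" : "ganglia-host", "port" : 8649 }
      } ]
    } ]
  } ]
}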
Can this unintentional topic creation be avoided by setting
auto.create.topics.enable=false?
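A broker-side sketch of what that would look like; auto.create.topics.enable
is the real 0.8 setting, and the comment just summarizes its documented
effect.

# server.properties
# With auto-creation off, a produce or metadata request for an unknown topic
# returns an error instead of silently creating the topic with the broker's
# num.partitions and default.replication.factor.
auto.create.topics.enable=false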
On Mon, Nov 4, 2013 at 9:40 PM, Jason Rosenberg wrote:
> Ok, so this can happen even if the node has not been placed back into
> rotation at the metadata VIP?
>
>
> On Tue, Nov 5, 2013 at 12:11 AM, Ne
> ReplicaManager.ISRExpandsPerSec.MeanRate
> ReplicaManager.ISRExpandsPerSec.OneMinuteRate
> ReplicaManager.LeaderCount.Value
> ReplicaManager.PartitionCount.Value
>
> In other words, the bean name is included in the metric name when jmxtrans
> sends it out. In our
> > > > > > > n't really care about the amount of satisfied requests or the
> > > > > > > size of the queue.
> > > > > > >
> > > > > > > Producer request
> > > > > > > - When is it added to purgatory (d
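A sketch of the producer-side settings that decide whether a produce request
gets delayed on the broker at all; the broker list, topic, and values are
illustrative assumptions. With request.required.acks set to -1 (or more than
1), the broker holds its response, the delayed request sitting in the
producer purgatory, until enough replicas have the message or
request.timeout.ms is reached; with acks of 0 or 1 there is nothing to wait
for.

import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class ProducerAcksSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "broker1:9092,broker2:9092"); // assumed brokers
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        // -1 = wait for all in-sync replicas; the broker delays the response
        // until they have the message or request.timeout.ms expires.
        props.put("request.required.acks", "-1");
        props.put("request.timeout.ms", "10000");

        Producer<String, String> producer =
            new Producer<String, String>(new ProducerConfig(props));
        producer.send(new KeyedMessage<String, String>("my-topic", "hello"));
        producer.close();
    }
}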
If you have 5 partitions and 3 consumers in one consumer group, the
consumers will balance all 5 partitions such that 2 consumers will get data
from 2 partitions each and 1 consumer will get data from the remaining
partition.
If you have 3 partitions and 5 consumers in one consumer group, 3 consumers
will each get data from one partition and the remaining 2 consumers will not
receive any data.
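A small sketch of the arithmetic behind that range-style assignment, using
the counts from the two examples above; this only illustrates the division,
it is not the consumer's actual rebalance code.

public class AssignmentSketch {
    // Show how nPartitions are split across nConsumers in a range-style assignment.
    static void show(int nPartitions, int nConsumers) {
        int base = nPartitions / nConsumers;
        int extras = nPartitions % nConsumers; // the first 'extras' consumers get one more
        for (int i = 0; i < nConsumers; i++) {
            int owned = base + (i < extras ? 1 : 0);
            System.out.println("consumer-" + i + " owns " + owned + " partition(s)");
        }
    }

    public static void main(String[] args) {
        show(5, 3); // -> 2, 2, 1
        show(3, 5); // -> 1, 1, 1, 0, 0
    }
}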
It doesn't look like the second broker is available. What is the broker list
property for each of the producers?
Best,
Priya Matpadi
> On Nov 11, 2013, at 12:12 AM, ji yan wrote:
>
> Hi Kafka Users
>
> I have a test setup at home with one machine hosting a zookeeper server a
Please specify both brokers in the same order for each producer, as follows:
kafkaBrokerList=<host1>:<port1>,<host2>:<port2>
The reason I suspect the second broker is not functioning as expected is
that broker 0 is the leader for both partitions and is also the only ISR.
Assuming the producer on the second machine can connect to the broker
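In Kafka terms, the property underneath that application-level setting is
the 0.8 producer's metadata.broker.list; the hostnames below are
placeholders, since the real ones are not in the thread.

# Producer configuration (wherever the app builds its ProducerConfig):
# list both brokers so either producer can bootstrap metadata even when one
# broker is down.
metadata.broker.list=host1:9092,host2:9092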
Hello,
Is there any progress on this issue? We also experience a socket leak in the
case of a network outage.
Thanks,
Priya
On Fri, Jan 24, 2014 at 7:30 AM, Jun Rao wrote:
> Thanks for finding this out. We probably should disconnect on any
> exception. Could you file a JIRA and perhaps attach a patch?
>
>
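A generic illustration of the "disconnect on any exception" idea, not
Kafka's actual ReplicaFetcherThread or ZooKeeper SendThread code: whatever
goes wrong while talking to the peer, the socket is closed so the descriptor
cannot leak.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class DisconnectOnAnyException {
    public static void main(String[] args) {
        SocketChannel channel = null;
        try {
            channel = SocketChannel.open(new InetSocketAddress("broker1", 9092)); // assumed peer
            ByteBuffer buffer = ByteBuffer.allocate(4096);
            while (channel.read(buffer) >= 0) {
                buffer.clear();
                // ... process whatever was read ...
            }
        } catch (Throwable t) {
            // Any failure, not just IOException, ends this connection attempt.
            System.err.println("error talking to peer: " + t);
        } finally {
            // Closing here, rather than only inside an IOException handler, is
            // what keeps a non-IO exception from leaking the socket.
            if (channel != null) {
                try { channel.close(); } catch (IOException e) { /* ignored */ }
            }
        }
    }
}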
Furthermore, the problem is not restricted to the ReplicaFetcherThread. The
Kafka consumer server also leaks sockets because the SendThread uses the
same code. See the stack trace below:
2014-01-23 06:48:09,699 INFO [org.apache.zookeeper.ClientCnxn]
(OurKafkaMessageFetcher-blah1-SendThread(pkafka3.our.com:218
Hello,
Do we have a date yet for the 0.8.1 release?
Thanks,
Priya
On Fri, Jan 31, 2014 at 8:48 AM, Neha Narkhede wrote:
> The delete topic functionality is in progress (KAFKA-330). We were hoping
> to release 0.8.1 with that. So it's probably 1-2 weeks away. As for the
> rest of the issues, we pro