Hi,
Oh, I know about Yammer metrics, Ganglia, Graphite, and friends
My main Q was: can I get all Consumer and all Producer stats via Broker's JMX.
I think the answer is no :(
So I'm forced to look at the JMX of individual applications that act as Kafka
Consumers or Producers.
Btw, this is a separate question: we are creating a consumer with properties, and I did not
see a property that clearly indicated it would start at the beginning of a topic.
Is there such a property?
Thanks,
rob
Rob Withers
Staff Analyst/Developer
o: (720) 514-8963
c: (571) 262-1873
On Thu, Jul 25, 2013 at 9:11 AM, Withers, Robert
wrote:
> We are creating a consumer with properties and I did not see a
> property that screamed that it was to start at the beginning of a
> topic. Is there such a property?
In v0.7, set 'autooffset.reset' to 'smallest'.
Jim
- - - - - - - - - -
In 0.8 you can set the property "auto.offset.reset" = "smallest" when
creating your ConsumerConfig; this will override the default value of
"largest".
Take a look at ConsoleConsumer.scala for more examples if need be.
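As a minimal sketch of the configuration Joe describes (assuming the 0.8 high-level consumer API; the ZooKeeper address and group id here are hypothetical):

```scala
import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig}

val props = new Properties()
props.put("zookeeper.connect", "localhost:2181") // assumption: local ZooKeeper
props.put("group.id", "replay-group")            // hypothetical group id
props.put("auto.offset.reset", "smallest")       // start from the earliest available
                                                 // offset instead of the default "largest"
val connector = Consumer.create(new ConsumerConfig(props))
// create message streams from the connector as usual ...
```

Note that "auto.offset.reset" only takes effect when the consumer group has no offset already committed in ZooKeeper; a group with a committed offset resumes from it.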
Joe Stein
Thanks, Jim. I saw that in the 0.8 config as well.
I am trying to write a REST service that dumps all traffic in a given
topic/partition. The issue I seem to be facing now is the blocking API of the
consumerIterator. Is there any way we can ask whether the traffic is drained?
Perhaps a way
Thanks, Joe. I also see the answer to my other question: the KafkaStream
is not on a different thread. I automatically expected it to be, since all
our other uses of the KafkaStream are stuffed in a Runnable. Duh.
thanks,
rob
On Jul 25, 2013, at 11:41 AM, Joe Stein wrote:
> in
Oh boy, is my mind slow today. The tamasic cells woke up but the rajasic ones
stayed asleep, which is rather ironic, if you know what I mean. My only hope
is the sattvic few.
The threading issue is secondary to the blocking API. How can I know the
traffic is drained from a topic/partition?
Hello,
Spring Integration extensions has a new module to support Kafka 0.8
integration. Currently, adapters for producer and the high level consumer
are available.
It is still in development. Here is the current snapshot of this support:
https://github.com/SpringSource/spring-integration-extensi
Hi guys, apologies in advance for the newb question:
I am running a 3-broker setup, and I have a topic configured with 100
partitions in the broker config. But I've noticed that each broker seems to
get 100 partitions, and it looks kind of like this in the
consumer logs: 1-
You set the partition-count to 100 per broker. 3 brokers. 300 partitions total.
Philip
On Thu, Jul 25, 2013 at 11:29 AM, Ian Friedman wrote:
> Hi guys, apologies in advance for the newb question:
>
> I am running a 3 broker setup, and I have a topic configured with 100
> partitions in the broker config
You can set "consumer.timeout.ms" to have a ConsumerTimeoutException thrown
if no message arrives within that time period:

var done = false
val consumerIterator = initConsumer()
while (!done) {
  try {
    val messageAndMetadata = consumerIterator.next() // blocks until a message is available
  } catch {
    case e: ConsumerTimeoutException => done = true // nothing arrived within the timeout
  }
}
Awesome, thanks so much,
rob
On Jul 25, 2013, at 4:35 PM, Florin Trofin wrote:
> You can set the "consumer.timeout.ms" to have a ConsumerTimeoutException
> thrown if the broker doesn't respond within that time period:
>
> var done = false
> val consumerIterator = initConsumer()
>
On the broker side, we have JMX beans for producer/consumer request rate
and time from all clients. Each producer/consumer client also has JMX beans
that track its own request rate and time.
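For completeness, a rough sketch of how one might enumerate such beans with the standard JMX API (the "kafka*" object-name pattern is an assumption — Kafka's bean domains vary by version — and the remote connection URL in the comment is the generic JMX form, not a Kafka-specific one):

```scala
import java.lang.management.ManagementFactory
import javax.management.ObjectName
import scala.collection.JavaConverters._

object JmxBeanDump {
  def main(args: Array[String]): Unit = {
    // For a remote broker you would attach via JMXConnectorFactory using a URL like
    // service:jmx:rmi:///jndi/rmi://<broker-host>:<jmx-port>/jmxrmi.
    // Here we query the local platform MBean server just to show the query API.
    val server = ManagementFactory.getPlatformMBeanServer
    val kafkaBeans = server.queryNames(new ObjectName("kafka*:*"), null).asScala
    kafkaBeans.foreach(println) // empty unless this JVM registers Kafka beans
  }
}
```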
Thanks,
Jun
On Thu, Jul 25, 2013 at 2:35 AM, Otis Gospodnetic <
otis_gospodne...@yahoo.com> wrote:
> Hi,
>
I assume this is 0.7. In 0.7, partitions are per broker, so if you
configure 100 partitions, each broker will have 100 partitions. In 0.8, the
partition count is at the cluster level and won't change when new brokers
are added.
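The arithmetic behind Ian's observation, under the 0.7 per-broker semantics Jun describes:

```scala
object PartitionCount {
  def main(args: Array[String]): Unit = {
    val brokers = 3
    val partitionsPerBroker = 100 // 0.7: the partition setting applies per broker
    println(brokers * partitionsPerBroker) // 300 partitions for the topic overall
  }
}
```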
Thanks,
Jun
On Thu, Jul 25, 2013 at 11:29 AM, Ian Friedman wrote:
> Hi gu