Hi,
I created KIP-175 to make some improvements to the ConsumerGroupCommand
tool.
The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-175%3A+Additional+%27--describe%27+views+for+ConsumerGroupCommand
Your review and feedback are welcome!
Thanks.
--Vahid
Ouch, interesting.
What if the auto offset commit failed by chance? Is there a way to prove it
(something to search for in the logs)?
On Mon, Jul 3, 2017 at 6:29 PM, Tom Bentley wrote:
> Hi Dmitriy,
>
> FTR, https://issues.apache.org/jira/browse/KAFKA-3806 is the issue Damian
> is referring to, but it doesn'
Hi Dmitriy,
FTR, https://issues.apache.org/jira/browse/KAFKA-3806 is the issue Damian
is referring to, but it doesn't quite fit what you describe because you
said your consumer was configured with enable.auto.commit = true, which
should keep committing even if there are no messages being consumed.
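On the earlier question of how to prove whether commits are happening: one way, sketched here with hypothetical topic and group names, is to read back the committed offset through the consumer API:

import java.util.Properties
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // placeholder broker
props.put("group.id", "my-group")                // hypothetical group id
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

val consumer = new KafkaConsumer[String, String](props)
// committed() returns null if nothing is committed, or if the committed
// offset has already expired on the broker.
val committed = consumer.committed(new TopicPartition("my-topic", 0)) // hypothetical topic
println(s"committed: $committed")
consumer.close()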
Hi Dmitriy,
It is possibly related to the broker setting `offsets.retention.minutes` -
this defaults to 24 hours. If an offset hasn't been updated within that
time it will be removed. So if your env was sitting idle for longer than
this period, then rebalanced, you will likely start consuming the topic
again from the position given by `auto.offset.reset`.
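A sketch of the consumer settings involved (values are illustrative, not recommendations): `enable.auto.commit=true` keeps committing on every poll, while `auto.offset.reset` decides where consumption restarts once the broker has expired the committed offset after `offsets.retention.minutes`:

import java.util.Properties

val props = new Properties()
props.put("enable.auto.commit", "true")       // commit automatically on poll()
props.put("auto.commit.interval.ms", "5000")  // the default commit interval
// Used once the committed offset has been removed by the broker
// (offsets.retention.minutes, default 1440 minutes = 24h):
props.put("auto.offset.reset", "latest")      // the default; "earliest" replays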
Hi all,
looking for some explanations. We are running 2 instances of a consumer
(same consumer group) and seeing somewhat weird behavior after 3 days of
inactivity.
Env:
kafka broker 0.10.2.1
consumer java 0.10.2.1 + spring-kafka + enable.auto.commit = true (all
default settings).
Scenario:
1. r
Thanks Damian!
That was it: after raising the number of compaction threads above 1, it
finally continued consuming the stream.
On Fri, Jun 30, 2017 at 7:48 PM, Dmitriy Vsekhvalnov wrote:
> Yeah, can confirm there is only 1 vCPU.
>
> Okay, will try that configuration and get back to you guys.
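The thread does not show exactly which setting was changed; if it was RocksDB's background compaction threads in the Streams app, a sketch of doing that through a RocksDBConfigSetter (class name hypothetical, value illustrative) would look like:

import java.util
import org.apache.kafka.streams.StreamsConfig
import org.apache.kafka.streams.state.RocksDBConfigSetter
import org.rocksdb.Options

// Hypothetical setter raising RocksDB compaction parallelism above 1.
class MoreCompactionThreads extends RocksDBConfigSetter {
  override def setConfig(storeName: String, options: Options,
                         configs: util.Map[String, AnyRef]): Unit =
    options.setMaxBackgroundCompactions(2)
}

// Wired into the streams configuration:
// props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
//           classOf[MoreCompactionThreads])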
I tried benchmarking the Kafka producer with acks=1 on a 5-node cluster.
The total transfer rate is ~950MB/sec, but a single broker's transfer rate
is less than 200MB/sec.
Load Generator:
I've started 6 instances of an HTTP server that write to the broker. Using
the wrk2 HTTP benchmarking tool I was able to send
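For reference, a minimal sketch of a producer configured the same way (broker list is a placeholder); acks=1 waits only for the partition leader, trading durability for throughput compared to acks=all:

import java.util.Properties
import org.apache.kafka.clients.producer.KafkaProducer

val props = new Properties()
props.put("bootstrap.servers", "broker1:9092,broker2:9092") // placeholder list
props.put("acks", "1") // leader-only acknowledgement, as in the benchmark
// acks=all would wait for all in-sync replicas: safer, but slower.
props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")

val producer = new KafkaProducer[Array[Byte], Array[Byte]](props)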
I have implemented org.apache.kafka.common.metrics.MetricsReporter and set it
up using metric.reporters in the server properties. I don’t see all the
metrics I was expecting; for example, I don’t see ‘LeaderElectionRateAndTimeMs’.
There seems to be another reporter you can implement, and the
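For context, a minimal skeleton of that interface (the logging bodies are illustrative). Note that `LeaderElectionRateAndTimeMs` is a broker-side Yammer metric, which would explain why it does not flow through `metric.reporters`:

import java.util
import scala.collection.JavaConverters._
import org.apache.kafka.common.metrics.{KafkaMetric, MetricsReporter}

class LoggingReporter extends MetricsReporter {
  override def configure(configs: util.Map[String, _]): Unit = ()
  override def init(metrics: util.List[KafkaMetric]): Unit =
    metrics.asScala.foreach(m => println(s"initial: ${m.metricName}"))
  override def metricChange(metric: KafkaMetric): Unit =
    println(s"changed: ${metric.metricName}")
  override def metricRemoval(metric: KafkaMetric): Unit =
    println(s"removed: ${metric.metricName}")
  override def close(): Unit = ()
}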
That exception is gone. Thanks for the suggestion.
I followed the example from
https://github.com/confluentinc/examples/blob/3.2.x/kafka-streams/src/main/scala/io/confluent/examples/streams/algebird/CMSStore.scala#L258
Regards.
On Mon, Jul 3, 2017 at 3:23 PM, Damian Guy wrote:
> Remove th
Remove the `logChange` from `flush` and do it when you write to the store,
i.e., in the BFStore `+` function.
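A sketch of what that could look like (the `bf`, `changeLogger`, and `logChange` names are assumptions modeled on the CMSStore example linked later in the thread, not the actual BFStore internals):

// Inside BFStore: write and log in one step, instead of logging in flush().
def +(item: T): Unit = {
  bf = bf + item                   // hypothetical in-memory Bloom filter update
  changeLogger.logChange(name, bf) // hypothetical logger for the changelog topic
}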
On Mon, 3 Jul 2017 at 10:43, Debasish Ghosh
wrote:
> Ok, so I made the following change. Is this the change that you suggested?
>
> // remove commit from process(). So process now looks as
Ok, so I made the following change. Is this the change that you suggested?
// remove commit from process(). So process now looks as follows:
override def process(dummy: String, record: String): Unit =
  LogParseUtil.parseLine(record) match {
    case Success(r) => {
      bfStore + r.host
      bfStore.f
`commit` is called by streams; you can see it in your stack trace above:

org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:280)

`commit` will subsequently call `flush` on any stores. At this point,
though, there will be no `RecordContext` as there are no records being
processed.
The only place where I am doing a commit is from Processor.process(). Here
it is:
class WeblogProcessor extends AbstractProcessor[String, String] {

  private var bfStore: BFStore[String] = _

  override def init(context: ProcessorContext): Unit = {
    super.init(context)
    this.context.schedu
Hi,
It is because you are calling `context.timestamp` during `commit`. At this
point there is no `RecordContext` associated with the `ProcessorContext`,
hence the null pointer. The `RecordContext` is only set when streams is
processing a record. You probably want to log the change when you write to
the store.
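One way around it, sketched under the assumption that the timestamp is only needed for the changelog record (class name hypothetical): capture the timestamp while a record is being processed and reuse the saved value later, rather than calling `context.timestamp()` from `flush`:

import org.apache.kafka.streams.processor.AbstractProcessor

class TimestampCapturingProcessor extends AbstractProcessor[String, String] {
  // Captured while a record is in flight, when a RecordContext exists.
  private var lastTimestamp: Long = 0L

  override def process(key: String, value: String): Unit = {
    lastTimestamp = context().timestamp() // safe: a record is being processed
    // ... update the store; use lastTimestamp later instead of
    // calling context.timestamp() from flush(), where it would NPE.
  }
}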
Hello everyone,
I'm interested in the maximum transfer rate of a Kafka broker. There
are a fair number of performance figures in terms of messages per second,
but I'm more interested in MB/s with rather large messages, let's say
500 kB as an example.
I tried testing by writing with the apache kafka p
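A minimal sketch of that kind of measurement (broker, topic, and record count are placeholders): send N messages of 500 kB and compute MB/s from wall-clock time:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092") // placeholder
props.put("acks", "1")
props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer")

val producer = new KafkaProducer[Array[Byte], Array[Byte]](props)
val payload = new Array[Byte](500 * 1024) // 500 kB message body
val n = 10000                             // placeholder record count

val start = System.nanoTime()
for (_ <- 1 to n)
  producer.send(new ProducerRecord[Array[Byte], Array[Byte]]("bench-topic", payload))
producer.flush() // block until everything has actually been sent
val secs = (System.nanoTime() - start) / 1e9
println(f"${n * payload.length / (1024.0 * 1024.0) / secs}%.1f MB/s")
producer.close()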