+1 (non-binding)
Built .tar.gz, created a cluster from it and ran a basic end-to-end test:
performed a rolling restart while console-producer and console-consumer ran
at around 20K messages/sec. No errors or data loss.
Ran unit and integration tests successfully 3 out of 5 times. Encountered some
Thanks for reporting this, Sam. Could you check and confirm whether this issue
is fixed in trunk? If not, we should file a JIRA.
Guozhang
On Wed, Jun 20, 2018 at 6:41 PM, Sam Lendle wrote:
> It looks like there is indeed a bug in kafka-streams 1.1.0. I think what
> was happening was the time spent p
It looks like there is indeed a bug in kafka-streams 1.1.0. I think what was
happening was the time spent processing each record in ns was being added to
the total metric instead of incrementing by 1 for each record. Looks like the
implementation has been changed in trunk. I don't see any commit
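The suspected bug described above can be sketched in a few lines. This is a hypothetical illustration, not the actual kafka-streams code: the class and method names are invented, and the latency values are made up.

```python
# Hypothetical sketch of the suspected bug: a "total" (cumulative count)
# metric should increase by 1 per record, but the buggy version adds the
# per-record processing latency in nanoseconds instead.

class ProcessTotalMetric:
    """Cumulative count of records processed (what a KIP-187-style
    total metric is meant to report)."""

    def __init__(self):
        self.total = 0

    def record_buggy(self, latency_ns):
        # Bug: the latency leaks into the count, inflating "total" wildly.
        self.total += latency_ns

    def record_fixed(self, latency_ns):
        # Fix: each processed record contributes exactly 1.
        self.total += 1


buggy, fixed = ProcessTotalMetric(), ProcessTotalMetric()
for latency_ns in (120_000, 95_000, 210_000):  # three records processed
    buggy.record_buggy(latency_ns)
    fixed.record_fixed(latency_ns)

print(buggy.total)  # 425000 -- nanoseconds masquerading as a count
print(fixed.total)  # 3 -- one per record, consistent with the rate metric
```

An inflated total like this would also explain why the totals and rates looked inconsistent: the rate metric counts records while the buggy total accumulates time.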
I’m trying to use the total metrics introduced in KIP-187
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-187+-+Add+cumulative+count+metric+for+all+Kafka+rate+metrics)
For some metrics, the total and rates are not consistent. In particular, for
stream-processor-node-metrics, I’m seeing
Hello Kafka users, developers and client-developers,
This is the first candidate for release of Apache Kafka 2.0.0.
This is a major version release of Apache Kafka. It includes 40 new KIPs
and
several critical bug fixes. Please see the 2.0.0 release plan for more
details:
https://cwiki.apach
Makes sense, thank you Mani. I appreciate your time and effort.
Thanks
Arunkumar Pichaimuthu, PMP
On Wednesday, June 20, 2018, 1:18:24 AM CDT, Manikumar
wrote:
These metrics are meter-type metrics, which track a count, a mean rate, and 1-,
5-, and 15-minute moving averages.
You maybe observi
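As a rough illustration of what such a meter tracks, here is a toy Python sketch of a cumulative count, a mean rate, and an exponentially weighted moving-average rate. The names, the 5-second tick, and the decay formula are illustrative assumptions in the style of Yammer-like meters, not Kafka's actual implementation.

```python
import math


class Meter:
    """Toy meter: cumulative count, overall mean rate, and one EWMA rate
    (the 1-, 5-, 15-minute figures are the same idea with different
    window lengths). Illustrative only, not Kafka's implementation."""

    TICK = 5.0  # seconds between EWMA updates (assumed)

    def __init__(self, window_minutes):
        self.count = 0          # cumulative events, never resets
        self.uncounted = 0      # events since the last tick
        self.elapsed = 0.0      # seconds since the meter started
        # Decay factor so older intervals fade over the chosen window.
        self.alpha = 1 - math.exp(-self.TICK / (window_minutes * 60.0))
        self.rate = None        # EWMA rate in events/sec

    def mark(self, n=1):
        self.count += n
        self.uncounted += n

    def tick(self):
        """Call every TICK seconds to fold new events into the EWMA."""
        self.elapsed += self.TICK
        instant = self.uncounted / self.TICK
        self.uncounted = 0
        if self.rate is None:
            self.rate = instant
        else:
            self.rate += self.alpha * (instant - self.rate)

    def mean_rate(self):
        return self.count / self.elapsed if self.elapsed else 0.0


# A steady 10 events/sec for one minute: count, mean rate, and EWMA agree.
m = Meter(window_minutes=1)
for _ in range(12):
    m.mark(50)
    m.tick()
print(m.count, m.mean_rate(), m.rate)  # 600 10.0 10.0
```

With a steady workload the count, mean rate, and moving averages line up; after a burst, the short-window averages react faster than the mean rate, which is often why the numbers look briefly inconsistent.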
Thanks, I'll do some additional digging. Not sure if it's relevant, but for
the record, numerical offset-based resetting works like a charm.
2018-06-19 23:42 GMT+02:00 Emmett Butler :
> Thanks! I ask because I think it's possible that only having a single log
> segment (as your partition does) ham
Hello All,
I am setting maxTickMessages = 1 in my Kafka consumer group while consuming
records from the topic, but it is giving me 2 records. I do not understand why
it gives me 1 extra record. Whenever I increase the number, it gives me one
extra record.
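One possible explanation (an assumption, not confirmed from the client library's source): fetch responses hand back whole record batches, so a per-poll message cap can overshoot by a few records unless the client trims the surplus itself. A generic Python sketch of that client-side trimming, with made-up record names:

```python
# Hedged sketch, not any client's actual internals: the broker returns
# whole batches, so a "max messages" cap must be enforced client-side.

def consume_up_to(batches, max_messages):
    """Deliver at most max_messages records even if a fetched batch
    carries more; the leftovers would be buffered for the next poll."""
    delivered = []
    for batch in batches:
        for record in batch:
            if len(delivered) == max_messages:
                return delivered
            delivered.append(record)
    return delivered


batches = [["r0", "r1"], ["r2"]]  # broker hands back whole batches
print(consume_up_to(batches, 1))  # ['r0'] -- the extra record is held back
```

A client that skips this trimming step would surface one extra record per poll, which matches the behavior described above.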
Please reply if anyone
I didn't get much further. When I run with the 1.1.0 release version the
stacktrace looks slightly different, but still a very similar NPE, after
the same amount of time.
One observation is that I use a few different processors, and it seems
random which one gets caught in the stack trace.
I've pu
Hi,
I have set up a cluster with 3 nodes on RHEL 7 VMs.
All went well till the point that I wanted to test the setup. I created an
initial_topic and produced some messages to that topic.
When I try to consume using the console consumer, I do not get any
messages.
Now, if I alter the __consumer