Solved this problem! Because of the Spring Boot dependency in the parent POM,
it pulls in lots of components...
On 2018/12/18 at 10:54 AM, big data wrote:
> When I remove/comment out the parent dependency in Module A's pom.xml,
> it seems OK, and the streaming Kafka artifact depends on
> kafka_2.11:0.8.2.1.
>
> The Modula
Also, I have read through that issue and KIP-360 to the extent my knowledge
allows, and I don't understand why I get this error constantly when
exactly-once is enabled. The KIP says:
> Idempotent/transactional semantics depend on the broker retaining state
for each active producer id (e.g. epoch and
Hello 王美功,
I am using 2.1.0. And I think you nailed it on the head, because my
application is low-throughput and I am seeing UNKNOWN_PRODUCER_ID all the
time with exactly-once enabled. I've googled this before but couldn't
identify the cause. Thank you!
Setting retry.backoff.ms to 5 brought the
Which version are you using? This bug
(https://issues.apache.org/jira/browse/KAFKA-7190) may increase the latency
of your application; try reducing retry.backoff.ms, whose default value is
100 ms.
王美功
Original Message
From: Dmitry Minkovsky dminkov...@gmail.com
To: users us...@kafka.apache.org
Sent: 2018/12
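The suggested fix is a one-line producer override; a minimal sketch as a config fragment (5 ms is the value later confirmed in the thread, 100 ms is the documented default):

```properties
# Lower the producer's retry backoff to reduce the extra latency
# introduced by KAFKA-7190 under exactly-once semantics.
retry.backoff.ms=5
# (documented default: retry.backoff.ms=100)
```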
I have a process that spans several Kafka Streams applications. With the
streams commit interval and producer linger both set to 5 ms, when
exactly-once delivery is disabled, this process takes ~250 ms. With
exactly-once enabled, the same process takes anywhere from 800-1200 ms.
In Enabling Exactly-O
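The setup described above maps to a small set of Kafka Streams configuration overrides; a sketch assuming the standard StreamsConfig property names, with the values quoted in the message:

```properties
# Settings from the message above (Kafka Streams 2.1-era names)
processing.guarantee=exactly_once   # toggling this is what changes ~250 ms to 800-1200 ms
commit.interval.ms=5
# producer-prefixed override passed through by Streams to its internal producers
producer.linger.ms=5
```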
I see the list processor managed to smash my beautifully formatted HTML
message. For that reason I'm re-sending the sample code snippet in
plain-text mode...
Here's a sample kafka streams processor:
KStream input =
    builder
        .stream(
            inpu
Hello,
I'm trying to use Kafka Streams to aggregate some time-series data using
1-second tumbling time windows. The data is ordered approximately by
timestamp, with some "jitter" which I'm limiting at the input with a custom
TimestampExtractor that moves events into the future if they come in to
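A custom TimestampExtractor itself depends on the Kafka Streams API, but the clamping logic it would apply can be sketched standalone. The class name, method name, and the 1000 ms jitter bound below are all assumptions for illustration, not the poster's actual code:

```java
public class JitterClamp {
    // Assumed bound on tolerated out-of-order "jitter" (not stated in the thread).
    static final long MAX_JITTER_MS = 1000;

    /**
     * Returns the event timestamp, but never more than MAX_JITTER_MS behind
     * the highest timestamp seen so far: events that arrive too late are
     * moved forward ("into the future") so that 1-second tumbling windows
     * can still close promptly.
     */
    static long clamp(long eventTs, long maxSeenTs) {
        long floor = maxSeenTs - MAX_JITTER_MS;
        return Math.max(eventTs, floor);
    }

    public static void main(String[] args) {
        long maxSeen = 10_000;
        System.out.println(clamp(9_500, maxSeen)); // within jitter: unchanged -> 9500
        System.out.println(clamp(8_000, maxSeen)); // too late: clamped forward -> 9000
    }
}
```

Inside a real TimestampExtractor, `maxSeenTs` would be tracked across records and the clamped value returned from `extract(...)`.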
What is your offset reset point (earliest/latest)? Just because you can
read it by changing the consumer group doesn't mean you are reading the
latest data; you may be reading from the beginning.
On Wed, 19 Dec 2018 at 13:54, Karim Lamouri wrote:
> Hi,
>
> One of our brokers went down and when it came b
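One way to separate the two cases the reply raises is to read the topic while ignoring any committed offsets; a sketch using the poster's own placeholder server and topic names:

```shell
# Read the topic from the start, without a group, bypassing committed offsets.
# If this prints data but the grouped consumer prints nothing, the group is
# likely positioned at (or past) the log end rather than the topic being empty.
bin/kafka-console-consumer --bootstrap-server server --topic ASSETS --from-beginning
```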
Hi,
One of our brokers went down, and when it came back up the consumer of one
topic couldn't read using the original consumer group.
This doesn’t output anything:
bin/kafka-console-consumer --bootstrap-server server --topic ASSETS
--group group_name
However, if I change the name of the group it
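To see where the stuck group actually stands, the standard consumer-groups tool can describe it; a sketch reusing the same placeholder names from the command above:

```shell
# Show committed offset, log-end offset, and lag per partition for the group.
bin/kafka-consumer-groups --bootstrap-server server --describe --group group_name
```

If the committed offsets equal the log-end offsets, the group has simply consumed everything, which would explain why a fresh group name (starting from its reset point) sees data.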