hen the kafka streams process is not restarted. Is this
expected behavior or maybe a bug?
Thanks,
Peter Levart
I see the list processor managed to smash my beautifully formatted HTML
message. For that reason I'm re-sending the sample code snippet in plain
text mode...
Here's a sample kafka streams processor:
KStream input =
builder
.stream(
inpu
to use in your code. More specifically:
Suppressed.untilWindowCloses(BufferConfig.unbounded())
--
and see if this issue still exists?
Guozhang
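[Editor's note] As a minimal sketch of what suppress(Suppressed.untilWindowCloses(...)) is meant to guarantee, the plain-Java model below (my own illustration, not the Kafka Streams implementation or API; the class name and window math are assumptions) buffers per-window aggregates and emits each window's value exactly once, when observed stream time passes the window end plus the grace period:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical model of "final results only" suppression semantics:
// buffer aggregates per window, release a window only once it has closed.
public class FinalResultBuffer {
    private final long windowSize;
    private final long grace;
    // window start -> latest aggregate for that window (ascending by start)
    private final TreeMap<Long, Long> buffer = new TreeMap<>();
    private long streamTime = Long.MIN_VALUE;

    public FinalResultBuffer(long windowSize, long grace) {
        this.windowSize = windowSize;
        this.grace = grace;
    }

    /** Accept one record; return the final results of any windows now closed. */
    public List<Map.Entry<Long, Long>> add(long timestamp, long value) {
        streamTime = Math.max(streamTime, timestamp);
        long windowStart = timestamp - (timestamp % windowSize);
        buffer.merge(windowStart, value, Long::sum);
        List<Map.Entry<Long, Long>> closed = new ArrayList<>();
        // flush every window whose end + grace has passed current stream time
        while (!buffer.isEmpty()
                && buffer.firstKey() + windowSize + grace <= streamTime) {
            closed.add(buffer.pollFirstEntry());
        }
        return closed;
    }
}
```

Under this model a window's result can never be emitted twice or before the window closes, which is exactly the property the thread is probing for.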
On Wed, Dec 19, 2018 at 1:50 PM Peter Levart wrote:
I see the list processor managed to smash my beautifully formatted HTML
message. For tha
t, the suppress(untilWindowCloses)
suppresses non-final results?
Regards, Peter
On 12/21/18 10:48 AM, Peter Levart wrote:
Hello Guozhang,
Thank you for looking into this problem.
I noticed that I have been using an internal class constructor and
later discovered the right API to create the Str
ultValueSerde
);
// register store
builder.addStateStore(windowStoreBuilder);
Here's the GroupByKeyWindowedTransformer implementation:
/**
* @author Peter Levart
*/
public class GroupByKeyWindowedTransformer<K, V, VR> implements
Transformer<K, V, KeyValue<Windowed<K>, VR>> {
public static TransformerSupplie
On 12/21/18 3:16 PM, Peter Levart wrote:
I also see some results that are actual non-final window aggregations
that precede the final aggregations. These non-final results are never
emitted out of order (for example, no such non-final result would ever
come after the final result for a
Hi,
It looks like a JVM bug. If I were you, the first thing I'd do is upgrade
the JDK to the latest JDK8u192. You're using JDK8u92, which is quite old
(2+ years)...
Regards, Peter
On 12/27/18 3:53 AM, wenxing zheng wrote:
Dear all,
We got a coredump with the following info last night, on this e
/27/18 11:29 AM, wenxing zheng wrote:
Thanks to Peter.
We did a lot of tests today, and found that the issue will happen after
enabling G1GC. If we go with default settings, everything looks fine.
On Thu, Dec 27, 2018 at 4:49 PM Peter Levart wrote:
Hi,
It looks like a JVM bug. If I were you
Hi,
On 12/26/18 10:53 AM, MCG wrote:
I'm not talking about orderliness, but about the same partition in the same
consumer group being consumed by multiple consumers. I use kafka-consumer-groups.sh
and org.apache.kafka.clients.admin.AdminClient to validate the results. Because
the same consumer gro
Hi Matthias,
Just a couple of questions about that...
On 12/27/18 9:57 PM, Matthias J. Sax wrote:
All data is backed up in the Kafka cluster. Data that is stored locally is
basically a cache, and Kafka Streams will recreate the local data if you
lose it.
Thus, I am not sure how the KTable data
hus,
the full state can be recreated without any data loss.
-Matthias
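[Editor's note] The recreate-from-the-cluster point can be sketched with a toy changelog-replay model (my own illustration, not Kafka Streams internals; the class name and tombstone convention are assumptions, though Kafka compacted topics do use null values as delete markers):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of state restoration: rebuild the local store by replaying the
// changelog from the beginning; a null value acts as a tombstone (delete).
// The changelog in the cluster is the source of truth, so losing local
// state costs only the time needed to replay it.
public class ChangelogRestore {
    public static Map<String, String> restore(List<Map.Entry<String, String>> changelog) {
        Map<String, String> store = new HashMap<>();
        for (Map.Entry<String, String> rec : changelog) {
            if (rec.getValue() == null) {
                store.remove(rec.getKey());   // tombstone: drop the key
            } else {
                store.put(rec.getKey(), rec.getValue());
            }
        }
        return store;
    }
}
```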
Hi,
I suggest the following:
If you attach a new Processor/Transformer/ValueTransformer to your
topology using a corresponding supplier, you need to make sure that the
supplier returns a new instance each time get() is called. If you return
the same object, a single Processor/Transformer/Valu
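[Editor's note] The supplier rule above can be sketched in plain Java (illustrative types, not the real Kafka Streams TransformerSupplier interface):

```java
import java.util.function.Supplier;

// Sketch of the rule: a supplier for a stateful transformer must hand out a
// NEW instance per get() call, because the runtime creates one transformer
// per task. Names here are illustrative, not Kafka Streams interfaces.
public class SupplierDemo {
    static class CountingTransformer {
        long count = 0;
        long transform(long v) { return ++count; }
    }

    // WRONG: every task would share (and corrupt) the same mutable state.
    static Supplier<CountingTransformer> sharedSupplier() {
        CountingTransformer shared = new CountingTransformer();
        return () -> shared;
    }

    // RIGHT: each get() yields an independent instance.
    static Supplier<CountingTransformer> freshSupplier() {
        return CountingTransformer::new;
    }
}
```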
nts consumed by the processor that belong to windows that have
already been flushed.
Regards, Peter
Thanks for the report,
-John
On Wed, Dec 26, 2018 at 3:21 AM Peter Levart wrote:
Hi John,
On 1/8/19 12:45 PM, Peter Levart wrote:
I looked at your custom transformer, and it looks almost correct to me.
The only flaw seems to be that it only looks for closed windows for the
key currently being processed, which means that if you have key "A"
buffered, but don
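[Editor's note] The flaw John points out can be illustrated with a plain-Java model (names and window math are my own, not the code under review): when stream time advances, closed windows must be flushed for ALL buffered keys, not only the key of the current record, otherwise a key that stops receiving records never emits its final result.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical flusher that scans every buffered key on each record.
public class AllKeysFlusher {
    private final long windowSize;
    // key -> (window start -> aggregate)
    private final Map<String, Map<Long, Long>> buffer = new HashMap<>();
    private long streamTime = Long.MIN_VALUE;

    public AllKeysFlusher(long windowSize) { this.windowSize = windowSize; }

    /** Accept one record; return "key@windowStart=value" for each closed window. */
    public List<String> add(String key, long ts, long value) {
        streamTime = Math.max(streamTime, ts);
        long start = ts - (ts % windowSize);
        buffer.computeIfAbsent(key, k -> new HashMap<>()).merge(start, value, Long::sum);
        List<String> finals = new ArrayList<>();
        // scan every key, not just `key`
        buffer.forEach((k, wins) ->
            wins.entrySet().removeIf(e -> {
                if (e.getKey() + windowSize <= streamTime) {
                    finals.add(k + "@" + e.getKey() + "=" + e.getValue());
                    return true;   // window closed: emit and drop
                }
                return false;
            }));
        buffer.values().removeIf(Map::isEmpty);
        return finals;
    }
}
```

Here a record for key "B" still releases key "A"'s closed window, which is the behavior the review asks for.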
Hi Aruna,
On 1/10/19 8:19 AM, aruna ramachandran wrote:
I am using keyed partitions with 1000 partitions, so I need to create 1000
consumers, because consumer groups and rebalancing do not work in the case
of manually assigned consumers. Is there any alternative for the above
problem?
processed
records. You usually retain them for enough time so you don't lose them
before processing them + some safety time...
Regards, Peter
Sven
Gesendet: Donnerstag, 10. Januar 2019 um 08:35 Uhr
Von: "Peter Levart"
An: users@kafka.apache.org, "aruna ramachandran"
And what do you mean by " remove the G1GC"? How did you do that?
Thanks!
-Original Message-
From: wenxing zheng [mailto:wenxing.zh...@gmail.com]
Sent: 28 grudnia 2018 04:05
To: Peter Levart
Cc: users@kafka.apache.org
Subject: Re: SIGSEGV (0xb) on TransactionCoordinator
Hi Pet
Hello!
I'm trying to understand the behavior of Kafka Streams consumers with
regards to max.task.idle.ms configuration parameter (default 0). The
documentation says:
max.task.idle.ms (importance: medium): Maximum amount of time a stream task
will stay idle when not all of its partition buffers cont
basis?
Thanks,
Peter
task for all partitions.
The task will stay in the "enforced processing state" until all
partitions deliver data again (at the same time).
If a second partition becomes empty, no additional delay is applied.
Hope this answers your question.
-Matthias
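[Editor's note] Matthias's description can be condensed into a small decision model (my own sketch, not Kafka Streams internals; the class and method names are assumptions):

```java
// Model of the max.task.idle.ms rule: a task with some empty partition
// buffers waits up to maxTaskIdleMs before "enforced processing"; once
// every buffer has data again, normal processing resumes and the timer resets.
public class IdleDecision {
    private final long maxTaskIdleMs;
    private long idleSince = -1;   // -1: not currently idling

    public IdleDecision(long maxTaskIdleMs) { this.maxTaskIdleMs = maxTaskIdleMs; }

    /** @return true if the task may process now */
    public boolean canProcess(boolean allBuffersNonEmpty, long nowMs) {
        if (allBuffersNonEmpty) {
            idleSince = -1;        // data on every partition: process normally
            return true;
        }
        if (idleSince < 0) idleSince = nowMs;              // start idling
        return nowMs - idleSince >= maxTaskIdleMs;         // enforced processing
    }
}
```

Note that once enforced processing has begun, another partition going empty does not restart the timer, matching "no additional delay is applied".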
On 1/15/19 2:07 AM, Peter Le
n doing so, you might actually discover the cause of the
bug yourself!
I hope this helps, and thanks for your help,
-John
On Sat, Jan 12, 2019 at 5:45 AM Peter Levart
wrote:
Hi John,
Thank you very much for explaining how WindowStore works. I have some
more questions...
On 1/10/19 5:33 PM, Jo
o attach a callback, like "foreach" downstream of
the suppression, you would see duplicates in the case of a crash. Callbacks
are a general "hole" in EOS, which I have some ideas to close, but that's a
separate topic.
There may still be something else going on, but I'm tryi
Hi John,
Haven't been able to reinstate the demo yet, but I have been re-reading
the following scenario of yours
On 1/24/19 11:48 PM, Peter Levart wrote:
Hi John,
On 1/24/19 3:18 PM, John Roesler wrote:
The reason is that, upon restart, the suppression buffer can only
"reme
Hello,
I'm using Java KafkaConsumer and I'm wondering what is the guarantee of
consuming messages when enable.auto.commit is set to true.
What is the earliest time the offsets of messages returned from last
poll() may be committed? Immediately after returning from poll() or upon
next call to
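[Editor's note] With enable.auto.commit=true, the consumer commits asynchronously from inside poll() (and on close()) once auto.commit.interval.ms has elapsed, so offsets of records returned by one poll() are committed no earlier than a later poll(). A toy model of that timing (my own sketch, not the client's code; names are assumptions):

```java
// Simplified auto-commit timing: commits happen inside a later poll(),
// never in the poll() that returned the records.
public class AutoCommitModel {
    private final long intervalMs;
    private long lastCommitMs;
    private long position = 0;   // next offset to hand out
    private long committed = 0;  // last committed offset

    public AutoCommitModel(long intervalMs, long startMs) {
        this.intervalMs = intervalMs;
        this.lastCommitMs = startMs;
    }

    /** Simulate poll(): maybe commit previous progress, then return n records. */
    public long poll(long nowMs, long n) {
        if (nowMs - lastCommitMs >= intervalMs) {
            committed = position;      // commit BEFORE handing out new records
            lastCommitMs = nowMs;
        }
        position += n;
        return position;
    }

    public long committed() { return committed; }
}
```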
Hi, Joe
I think I observed a similar lockup to the one you describe in the 3rd
variant. The controller broker was partially stuck but other brokers still
regarded it as the controller. Unfortunately the broker was restarted by an
impatient admin before I had a chance to investigate. The symptoms were as follo
Hi Pushkar,
On 7/25/19 10:51 AM, Pushkar Deole wrote:
Hi All,
I am new to Kafka and still getting myself acquainted with the product. I
have a basic question around using Kafka. I want to store in a Kafka topic,
a string value against some keys while a HashMap value against some of the
keys. F
Hi Hu,
On 10/2/19 8:54 PM, Xiyuan Hu wrote:
Hi All,
I'm doing smoke testing with my Kafka Streams app (v2.1.0). I noticed
the following behaviors:
1) Outbound throughput of the changelog topic could go up to 70 MB/s while
the inbound traffic is around 10 MB/s.
2) When traffic is bumpy, either due to producer/consu
On 4/11/20 2:45 PM, Alex Craig wrote:
Yep, max poll interval is 2147483647 and session timeout is 120000 (2
minutes). I don't have anything set for heartbeat.interval.ms, so it must
be using the default. (3 seconds I think?) Hmm, is it possible the
heartbeat might not happen if the client ap