Matthias,
That makes a lot of sense, so Streams never creates time itself; it is
always derived from the record timestamps passed into it. Thus, in theory,
unless I force a change somewhere in a flow, the flow will stay as I
started it.
The confusing part is around joins, since 'stream time' is kinda
Could KAFKA-5172 cause similar observations?
Guozhang
On Thu, May 4, 2017 at 1:30 AM, Damian Guy wrote:
> It is odd, as the person who originally reported the problem has verified
> that it is fixed.
>
> On Thu, 4 May 2017 at 08:31 Guozhang Wang wrote:
>
> > Ara,
> >
> > That is a bit weird, I
Done.
Guozhang
On Fri, May 5, 2017 at 11:44 AM, BigData dev
wrote:
> Hi,
> Could you please provide permission for creating KIP in Kafka.
>
> username: bharatv
> email: bigdatadev...@gmail.com
>
--
-- Guozhang
After a while the instance started running.
2017-05-05 22:40:26.806 INFO 85 --- [ StreamThread-4]
o.a.k.s.p.internals.StreamThread : stream-thread [StreamThread-4]
Committing task StreamTask 1_62
(this is literally the next message)
2017-05-05 23:13:27.820 INFO 85 --- [ StreamThread-4]
o
Warning, long message
*Problem*: Initializing a Kafka Stream is taking a long time. Currently
at the 40-minute mark.
*Setup*:
2 co-partition topics with 100 partitions.
First topic contains a lot of messages, on the order of hundreds of millions
Second topic is a KTable and contains ~30k records
Thanks for confirming, Vahid. Unfortunately, that is not what I am seeing!
I see a consumer actively receiving messages from a topic, but its group
name doesn't show up when I run a list.
I will see if I can reproduce this using a test that I ran today after the
offsets expire. Perhaps I can pause
Hi Subhash,
Yes, it should be listed again in the output, but it should get fresh
offsets (with `--describe` for example), since the old offsets were
removed once it became inactive.
--Vahid
From: Subhash Sriram
To: users@kafka.apache.org
Date: 05/05/2017 02:38 PM
Subject:R
Have you tried increasing num.replica.fetchers on your brokers?
On Fri, May 5, 2017 at 12:59 PM, Archie wrote:
> Is there anyway I can speed up the rate at which the slave replicas will
> fetch data from master?
>
> I am using bin/kafka-producer-perf-test.sh to test the throughput of my
> produc
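For reference, `num.replica.fetchers` is a broker-side setting that controls how many threads each broker uses to fetch from a given source broker. A sketch of raising it in `server.properties` (the value 4 is just an illustrative starting point, not a recommendation):

```properties
# server.properties -- more fetcher threads per source broker
# increases the parallelism of follower replication
num.replica.fetchers=4
```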
Hi Vahid,
Thank you very much for your reply! I appreciate the clarification.
Unfortunately, I didn't really try the command until today. That being
said, I created a couple of new groups and consumed from a test topic
today, and they did show up in the list. Maybe I can see what happens with
them
Hi Subhash,
The broker config that affects group offset retention is
"offsets.retention.minutes" (which defaults to 1 day).
If the group is inactive (i.e. has no consumer consuming from it) for this
long, its offsets will be removed from the internal offsets topic and it
will not be listed in the
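As a sketch, the retention window described above can be widened in `server.properties` (the 7-day value here is just an example):

```properties
# server.properties -- keep committed group offsets for 7 days
# instead of the default of 1 day (1440 minutes)
offsets.retention.minutes=10080
```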
Hi Shimi,
I’ve noticed in our benchmarks that on AWS environments with high network
latency, the network socket buffers often need adjusting. Any chance you could
add the following to your Streams configuration to raise the default socket
buffer sizes to a higher value (at least 1MB) and let us
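A sketch of what that might look like in the Streams configuration (Streams passes these settings through to its internal consumers and producers; 1MB here is just the suggested floor):

```properties
# streams.properties -- larger TCP socket buffers for
# high-latency network links (values in bytes)
send.buffer.bytes=1048576
receive.buffer.bytes=1048576
```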
Hey everyone,
I am a little bit confused about how the kafka-consumer-groups.sh/
ConsumerGroupCommand works, and was hoping someone could shed some light on
this for me.
We are running Kafka 0.10.1.0, and using the new Consumer API with the
Confluent.Kafka C# library (v0.9.5) that uses librdkafka
Is there any way I can speed up the rate at which the slave replicas will
fetch data from master?
I am using bin/kafka-producer-perf-test.sh to test the throughput of my
producer. And I have set a client quota of 50 MBps. Now without any
replicas I am getting throughput ~ 50MBps but when replicatio
Hi,
Could you please provide permission for creating KIP in Kafka.
username: bharatv
email: bigdatadev...@gmail.com
Not sure if I can follow (and what type of join you have in mind).
I assume we are still talking about windowed KStream-KStream joins.
For this scenario, and an inner join, the computation is correct as
implemented. Only left/outer joins have a single flaw: they might
compute a "false" result a
That part of time tracking is a little tricky.
Streams internally maintains "stream time" -- this models the progress of
your application over all input partitions and topics, and is based on
the timestamps returned by the timestamp extractor. Thus, if the timestamp
extractor returns event time, "stream t
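As a toy illustration (plain Java, not actual Kafka internals), "stream time" can be thought of as the maximum extracted timestamp observed so far: it only moves forward when new records carry larger timestamps, and it never advances on its own.

```java
// Toy model of how "stream time" advances: it is the maximum
// timestamp returned by the timestamp extractor so far, so it
// never moves backwards and never advances without new records.
public class StreamTimeModel {
    private long streamTime = Long.MIN_VALUE;

    // Called once per input record with its extracted timestamp;
    // returns the stream time after observing the record.
    public long observe(long recordTimestamp) {
        streamTime = Math.max(streamTime, recordTimestamp);
        return streamTime;
    }

    public static void main(String[] args) {
        StreamTimeModel model = new StreamTimeModel();
        System.out.println(model.observe(10)); // 10
        System.out.println(model.observe(5));  // still 10: a late record
        System.out.println(model.observe(20)); // 20
    }
}
```

This matches the intuition above: a flow only progresses when records carrying new timestamps arrive.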
That does help, actually; I never thought about a custom value object to
hold the Count/Sum variables. Thank you!
For the time semantics here is where I got hung up, copied from kafka
streams documentation:
Finally, whenever a Kafka Streams application writes records to Kafka,
it will also assign
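A minimal plain-Java sketch of the custom value object idea discussed above (independent of the Streams API, and the class name `CountSum` is just illustrative): an aggregate that carries both the count and the sum so an average can be derived at read time.

```java
// Plain-Java sketch of a value object holding Count/Sum so an
// average can be computed from a single aggregate value.
public class CountSum {
    private long count = 0;
    private double sum = 0.0;

    // Fold one measurement into the aggregate (roughly what a
    // Streams aggregator callback would do per record).
    public CountSum add(double value) {
        count++;
        sum += value;
        return this;
    }

    public double average() {
        return count == 0 ? 0.0 : sum / count;
    }

    public static void main(String[] args) {
        CountSum agg = new CountSum().add(2.0).add(4.0).add(6.0);
        System.out.println(agg.average()); // 4.0
    }
}
```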
Hi Matthias
We have been thinking about this problem recently and thought, wouldn't it be
nice if a join could be configured to be '1 time', within the retention period
of the join window. So if a join has occurred already on a particular key,
further ones will be ignored for the remainder of t
Hi all!
We are running Kafka in a 3 node setup with Kafka and Zookeeper on each node.
The topics have 1 partition and 2 replicas, like:
Topic:someTopic  PartitionCount:1  ReplicationFactor:2  Configs:retention.ms=60
    Topic: someTopic  Partition: 0  Leader: 2  Replicas: 2,0