Hi Shimi,
Just to clarify, your scenario is that 1) you shut down the single-instance
app cleanly, and 2) you restart the app as a single instance, and that restart
is taking hours, right?
Did you use any in-memory stores (i.e. not RocksDB) in your topology?
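For example, with the 0.10.x Stores builder, a store created roughly like the
sketch below (store name and serdes are just placeholders) is purely in-memory,
whereas .persistent() gives the RocksDB-backed default:

  // sketch only; uses org.apache.kafka.streams.state.Stores and
  // org.apache.kafka.common.serialization.Serdes
  Stores.create("my-store")
        .withKeys(Serdes.String())
        .withValues(Serdes.Long())
        .inMemory()      // not backed by RocksDB
        .build();
  // use .persistent() instead of .inMemory() for the RocksDB-backed store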
Guozhang
On Tue, May 16, 2017 at 1:08 AM, Eno Thereska wrote:
Hi Shimi,
Could we start a new email thread on the slow booting to separate it from the
initial thread (call it "slow boot" or something)? Thank you. Also, could you
provide the logs for the booting part if possible, together with your streams
config.
Thanks
Eno
> On 15 May 2017, at 20:49, Shimi Kiviti wrote:
I do run the clients with 0.10.2.1 and it takes hours.
What I don't understand is why it takes hours to boot on a server that
already has all the data in RocksDB. Is that related to the amount of data in
RocksDB (the changelog topics) or to the data in the source topic the process
reads from?
On Mon, 15
Hello Shimi,
Could you try upgrading your clients to 0.10.2.1 (note you do not need to
upgrade your servers if they are already on 0.10.1, since newer Streams
clients can talk directly to older-versioned brokers since 0.10.1+) and try
it out again? I have a few optimizations to reduce rebalance latency
I tried all these configurations and now, as with version 0.10.1.1, I see a
very slow startup.
I decreased the cluster to a single server, which was running without any
problem for a few hours. Now, each time I restart this process it gets into
a rebalancing state for several hours.
That means that every time
Yeah we’ve seen cases where the session timeout might also need increasing.
Could you try upping it to something like 60000ms and let us know how it goes:
>> streamsProps.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);
Thanks
Eno
> On May 6, 2017, at 8:35 AM, Shimi Kiviti wrote:
>
> Thanks Eno,
Thanks Eno,
I already set the receive buffer size to 1MB.
I will also try the producer side.
What about the session timeout and heartbeat timeout? Do you think they should
be increased?
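Something like this is what I have in mind (the values are placeholders only):

  streamsProps.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, 60000);     // placeholder value
  streamsProps.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 20000);  // placeholder, kept well below the session timeout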
Thanks,
Shimi
On Sat, 6 May 2017 at 0:21 Eno Thereska wrote:
> Hi Shimi,
>
> I’ve noticed with our benchmarks that on AW
Hi Shimi,
I’ve noticed with our benchmarks that on AWS environments with high network
latency the network socket buffers often need adjusting. Any chance you could
add the following to your streams configuration to change the default socket
buffer size to a higher value (at least 1MB) and let us know how it goes?
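For example, roughly along these lines (a sketch, not the exact lines; the
send-buffer setting is an extra assumption):

  streamsProps.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, 1024 * 1024);  // receive.buffer.bytes = 1 MB
  streamsProps.put(ConsumerConfig.SEND_BUFFER_CONFIG, 1024 * 1024);     // send.buffer.bytes = 1 MB (assumption)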
Thanks Eno,
We still see problems on our side.
When we run kafka-streams 0.10.1.1 the problem eventually goes away, but
with 0.10.2.1 it does not.
We see a lot of the rebalancing messages I wrote about before.
On at least one kafka-streams node we see disconnection messages like the
following. These messages
Hi Shimi,
0.10.2.1 contains a number of fixes that should make the out-of-the-box
experience better, including resiliency under broker failures and better
exception handling. If you ever get back to it, and if the problem happens
again, please do send us the logs and we'll happily have a look.
Thanks
Eno
Hi Eno,
I am afraid I played too much with the configuration to make this a
productive investigation :(
This is a QA environment which includes 2 kafka instances and 3 zookeeper
instances in AWS. There are only 3 partitions for this topic.
The Kafka brokers and kafka-streams are version 0.10.1.1.
Our kafka-
Hi Shimi,
Could you provide more info on your setup? How many kafka-streams processes do
you have, and how many partitions are they consuming from? If you have more
processes than partitions, some of the processes will be idle and won’t do
anything.
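As a quick sanity check, the partition count can also be read programmatically;
a rough sketch (broker address and topic name are placeholders):

  // uses org.apache.kafka.clients.consumer.* and ByteArrayDeserializer
  Properties props = new Properties();
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");
  props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
  props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, ByteArrayDeserializer.class);
  try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
      System.out.println("partitions: " + consumer.partitionsFor("my-topic").size());
  }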
Eno
> On Apr 30, 2017, at 5:58 PM, Shimi Kiviti wrote:
Hi Everyone,
I have a problem and I hope one of you can help me figure it out.
One of our kafka-streams processes stopped processing messages.
When I turn on debug logging I see lots of messages like these:
2017-04-30 15:42:20,228 [StreamThread-1] DEBUG o.a.k.c.c.i.Fetcher: Sending
fetch for partitions