Hi Damian,
The rest of the logs were INFO messages about offset being committed.
Anyway, the problem is resolved for now, after we increased
max.poll.interval.ms.
For anyone else who is facing a similar problem, please refer to this thread.
https://groups.google.com/forum/#!topic/confluent-platf
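For reference, a minimal sketch of where that setting lives in a Streams app's
configuration, assuming it is passed through the Streams properties; the
application id, broker address, and the value itself are placeholders, not
what we actually used:

    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class StreamsProps {
        public static Properties build() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");     // placeholder
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");    // placeholder
            // Allow more time between poll() calls before the consumer is considered
            // failed and a rebalance is triggered; 10 minutes here is illustrative.
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "600000");
            return props;
        }
    }

Depending on the Streams version, consumer-level settings can also be
namespaced with StreamsConfig.consumerPrefix(...).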
Hi Mahendra,
Are you able to share the complete logs? It is pretty hard to tell what is
happening just from a few snippets of information.
Thanks,
Damian
On Wed, 22 Mar 2017 at 12:16 Mahendra Kariya
wrote:
> To test Kafka streams on 0.10.2.0, we setup a new Kafka cluster with the
> latest vers
To test Kafka Streams on 0.10.2.0, we set up a new Kafka cluster with the
latest version and used MirrorMaker to replicate the data from the
0.10.0.0 Kafka cluster. We pointed our streaming app to the newly created
Kafka cluster.
We have 5 nodes, each running the streaming app with 10 threads. In
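A rough sketch of the relevant part of that setup on each instance, in case
it helps anyone reproduce it; the broker address, application id, and topic
name are placeholders, and the real processing logic is elided:

    import java.util.Properties;

    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStreamBuilder;

    public class MigratedStreamsApp {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");       // placeholder
            // Point the app at the new 0.10.2 cluster that MirrorMaker is feeding.
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "new-cluster:9092");  // placeholder
            // 10 stream threads per instance; with 5 instances that is 50 threads in total.
            props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, "10");

            KStreamBuilder builder = new KStreamBuilder();
            builder.stream("input-topic");   // placeholder topic; real topology elided
            new KafkaStreams(builder, props).start();
        }
    }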
Thanks for the heads up Guozhang!
The problem is that our brokers are on 0.10.0.x, so we will have to upgrade them.
On Sat, Mar 18, 2017 at 12:30 AM, Guozhang Wang wrote:
> Hi Mahendra,
>
> Just a kind reminder that upgrading Streams to 0.10.2 does not necessarily
> require you to upgrade brokers to
Hi Mahendra,
Just a kind reminder that upgrading Streams to 0.10.2 does not necessarily
require you to upgrade brokers to 0.10.2 as well, since we have added a new
feature in 0.10.2 that allows newer-versioned clients (producer, consumer,
streams) to talk to older-versioned brokers, and for Stream
We are planning to migrate to the newer version of Kafka. But that's a few
weeks away.
We will try setting the socket config and see how it turns out.
Thanks a lot for your response!
On Mon, Mar 13, 2017 at 3:21 PM, Eno Thereska
wrote:
> Thanks,
>
> A couple of things:
> - I’d recommend movi
Thanks,
A couple of things:
- I’d recommend moving to 0.10.2 (latest release) if you can since several
improvements were made in the last two releases that make rebalancing and
performance better.
- When running in environments with large latency, on AWS at least (haven’t
tried Google cloud), o
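The snippet above is cut off, but since the reply elsewhere in the thread
mentions trying "the socket config": for high-latency links the usual knobs
are the client socket buffer sizes. A sketch of what that looks like; the
exact property and the 1 MB value are my assumption, not necessarily what
Eno recommended here:

    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class SocketBufferSettings {
        // Larger TCP buffers help keep the connection full when broker round trips are slow.
        // The 1 MB values below are illustrative only.
        public static void apply(Properties streamsProps) {
            streamsProps.put(ConsumerConfig.RECEIVE_BUFFER_CONFIG, "1048576");   // receive.buffer.bytes
            streamsProps.put(ProducerConfig.SEND_BUFFER_CONFIG, "1048576");      // send.buffer.bytes
        }
    }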
Hi Eno,
Please find my answers inline.
> We are in the process of documenting capacity planning for streams, stay
> tuned.
This would be great! Looking forward to it.
> Could you send some more info on your problem? What Kafka version are you
> using?
We are using Kafka 0.10.0.0.
> Are the
> you looking at?
> Thanks
> Eno
> On Mar 13, 2017, at 12:37 AM, Mahendra Kariya
> wrote:
>
> Hey All,
>
> Are there some guidelines / documentation around capacity planning for
> Kafka streams?
>
> We have a Streams application which consumes messages from a topic wit
Hey All,
Are there some guidelines / documentation around capacity planning for
Kafka Streams?
We have a Streams application which consumes messages from a topic with 400
partitions. At peak time, there are around 20K messages coming into that
topic per second. The Streams app consumes these
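For a rough sense of scale, assuming messages are spread evenly across the
partitions:

    20,000 msgs/sec / 400 partitions ≈ 50 msgs/sec per partition

so per-partition throughput is modest; the sizing question is mostly about
per-message processing cost, state, and how many threads/instances are needed
to host roughly one task per partition.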
per MB cost is lower.
Thanks,
Jun
On Tue, May 14, 2013 at 11:31 PM, anand nalya wrote:
> Hi,
>
> We are capacity planning for kafka deployment (Replication factor 3) in
> production environment, the producer is producing data at 1.5Gbps. Total
> number of producers will be aroun
Hi,
We are doing capacity planning for a Kafka deployment (replication factor 3)
in a production environment; the producers are producing data at 1.5 Gbps.
The total number of producers will be around 500 and there will be 100
consumers. How many cores would be required to support them? And are there
any known
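For later readers, a back-of-envelope for the bandwidth side, assuming the
1.5 Gbps is the aggregate rate across all producers (core count will depend
on compression and message size, which are not given here):

    producer ingest into the cluster:           ~1.5 Gbps
    follower replication traffic (RF 3):        ~2 x 1.5 = 3.0 Gbps
    total write bandwidth across the brokers:   ~4.5 Gbps
    consumer egress:                            ~1.5 Gbps per full read of the stream

That bounds the network and disk throughput the brokers must sustain,
independent of CPU.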