Dear community,
I'd like to add to my topology a stateful operator - backed by a Store - that
needs to save some computation A.
I'd like to implement it so that it can store, under the same key, a list of
values, appending A as events come in. Something similar, e.g. in Apache
Flink, can be achieved by
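A minimal sketch of the Flink approach presumably being alluded to - keyed
ListState - assuming String events on a keyed stream:

import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ListState;
import org.apache.flink.api.common.state.ListStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

// Keyed ListState keeps one independently scoped list per key; add()
// appends without rewriting the whole list.
public class AppendingFunction extends RichFlatMapFunction<String, String> {
    private transient ListState<String> events;

    @Override
    public void open(Configuration parameters) {
        events = getRuntimeContext().getListState(
                new ListStateDescriptor<>("events", String.class));
    }

    @Override
    public void flatMap(String value, Collector<String> out) throws Exception {
        events.add(value); // append the new event under the current key
        out.collect(value);
    }
}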
Hi All,
I am working on a use case where multiple producers will be publishing to
the same topic.
For this I need to know: what is the MAX_LIMIT on the number of producers
writing to the same topic?
Also, is there a way to create a pool of producers so that these instances
can be shared?
Thanks
Pul
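On the pooling question: the KafkaProducer javadoc notes the producer is
thread-safe and that sharing a single instance across threads is generally
faster than having multiple instances. A minimal sketch of sharing one
instance; the broker address and String serializers are assumptions:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SharedProducer {
    // One process-wide producer instance shared by all sending threads.
    private static final KafkaProducer<String, String> PRODUCER = create();

    private static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    public static void send(String topic, String key, String value) {
        PRODUCER.send(new ProducerRecord<>(topic, key, value));
    }
}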
Hello Guozhang,
I understand.
Thank you very much for the answer.
Best Regards,
Bruno
On lun, 2018-07-23 at 16:35 -0700, Guozhang Wang wrote:
> I see.
>
> In that case, one workaround would be to query the state store
> directly
> after you know that no more updates would be applied to that
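Querying a store directly is done through the Streams interactive-queries
API; a minimal sketch, assuming a running KafkaStreams instance and a
key-value store named "my-store" with String keys and Long values:

import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

public class StoreReader {
    // Reads the current value for a key from a local state store of a
    // running KafkaStreams instance; store name and types are assumptions.
    public static Long currentValue(KafkaStreams streams, String key) {
        ReadOnlyKeyValueStore<String, Long> store =
                streams.store("my-store", QueryableStoreTypes.keyValueStore());
        return store.get(key);
    }
}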
Hello Kafka users, developers and client-developers,
This is the fourth candidate for release of Apache Kafka 2.0.0.
This is a major version release of Apache Kafka. It includes 40 new KIPs
and several critical bug fixes. Please see the 2.0.0 release plan for more
details:
https://cwiki.apac
Not really associated with Sarama.
But your issue sounds pretty much the same as one I faced some time ago and
fixed, here it is: https://github.com/Shopify/sarama/issues/885
Try using msg.BlockTimestamp instead of msg.Timestamp and see if it helps.
On Tue, Jul 24, 2018 at 3:26 AM Craig Ching wrote:
> H
Hello All,
I am using Kafka client version 1.0.2. The Kafka producer does not time out
if it is unable to connect to the brokers; I only see a WARN message from
the cluster. What I am looking for is for the producer to throw an
exception. How can this be achieved?
2018-07-24 10:21:59.616 WARN 10280 --- [pool
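A common way to make such failures surface as exceptions - a sketch under
assumed settings, not a confirmed fix for this setup - is to cap
max.block.ms and block on the Future returned by send():

import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FailFastProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // assumed
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class.getName());
        // Cap how long send() may block waiting for metadata (default 60s).
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Blocking on the Future rethrows delivery failures as exceptions.
            producer.send(new ProducerRecord<>("my-topic", "key", "value"))
                    .get(10, TimeUnit.SECONDS);
        }
    }
}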
Hi Kafka users email distribution list,
We have a six-member broker pool. Traffic amongst the nodes is drastically
uneven - at the extremes we have one node transmitting and receiving 35% of
all traffic in and out of the pool, and another node transmitting only 1%
of total traffic. We have attempt
Hey, thanks for that Dmitriy! I'll have a look.
On Tue, Jul 24, 2018 at 11:18 AM Dmitriy Vsekhvalnov
wrote:
> Not really associated with Sarama.
>
> But your issue sounds pretty much the same as one I faced some time ago and fixed,
> here it is: https://github.com/Shopify/sarama/issues/885
>
> Try using
+1 (non-binding)
Built from source and ran quickstart successfully with both Java 8 and
Java 9 on Ubuntu.
Thanks Rajini!
--Vahid
From: Rajini Sivaram
To: dev , Users ,
kafka-clients
Date: 07/24/2018 08:33 AM
Subject: [VOTE] 2.0.0 RC3
Hello Kafka users, developers and cli
+1
Checked signatures
Ran test suite which passed.
On Tue, Jul 24, 2018 at 8:28 AM Rajini Sivaram
wrote:
> Hello Kafka users, developers and client-developers,
>
>
> This is the fourth candidate for release of Apache Kafka 2.0.0.
>
>
> This is a major version release of Apache Kafka. It includ
I missed your reply, I'm using 1.5.0. Will be upgrading to 1.5.1 soon.
On Tue, Jul 10, 2018, 10:15 PM Jeff Zhang wrote:
> Which flink version do you use ?
>
>
> Garrett Barton wrote on Wed, Jul 11, 2018 at 1:09 AM:
>
> > Hey all,
> > I am running flink in batch mode on yarn with independent jobs creating
> > t
Hello Andrea,
I do not fully understand what `nth-id-before-emission` means here, but
I can think of a couple of options here:
1) Just use a key-value store, with the value encoding the list of events
for that key. Whenever a new event for the key comes in, you retrieve the
current list for tha
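A minimal sketch of this first option, assuming String events and a store
named "events-store" already registered with the topology (a real
implementation also needs a serde for the List value):

import java.util.ArrayList;
import java.util.List;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

// The store value is the whole list of events seen so far for the key.
class AppendTransformer
        implements Transformer<String, String, KeyValue<String, List<String>>> {
    private KeyValueStore<String, List<String>> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        store = (KeyValueStore<String, List<String>>)
                context.getStateStore("events-store");
    }

    @Override
    public KeyValue<String, List<String>> transform(String key, String value) {
        List<String> current = store.get(key);   // read current list
        if (current == null) {
            current = new ArrayList<>();
        }
        current.add(value);                       // append new event
        store.put(key, current);                  // write back
        return KeyValue.pair(key, current);
    }

    @Override
    public void close() {}
}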
Hello Siva,
I'd suggest you upgrade to Kafka 2.0 once it is released (it should be out
soon, probably this week), as it includes a critical performance
optimization for windowed aggregation operations.
Note that even if your brokers are on older versions, newer-versioned clients
like Streams can still ta
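For context, the kind of windowed aggregation that optimization targets
looks roughly like this; the topic name is an assumption, and
TimeWindows.of(millis) is the 2.0-era API:

import java.util.concurrent.TimeUnit;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.kstream.Windowed;

public class WindowedCount {
    public static KTable<Windowed<String>, Long> build(StreamsBuilder builder) {
        // Count events per key in tumbling 5-minute windows.
        return builder.<String, String>stream("input-topic")
                .groupByKey()
                .windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5)))
                .count();
    }
}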
Hi,
One possible reason is that the record file had been deleted by the Kafka
server.
By default, you can check the files in /tmp/kafka-logs/${topic-name}, where
the messages are stored.
Or you can get the exact file path from server.properties.
>-Original Message-
>From: Rag [mailto:raghav