I guess you have 3 options:
1) if there is data for a different key, check if there is data in the
store that you want to flush
2) register an event-time punctuation (maybe for each key? or for a
range of keys?) and check on a regular basis if there is anything that
you want to forward (rough sketch below)
3) similar
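For option (2), an untested sketch of what such a punctuation could look
like with the Processor API; the store name "buffer-store", the 30-second
interval, and the String types are placeholders, and the "is this entry
ready to be forwarded?" check is application specific:

import java.time.Duration;

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.processor.PunctuationType;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.KeyValueStore;

public class BufferAndForwardProcessor extends AbstractProcessor<String, String> {

    private KeyValueStore<String, String> store;

    @SuppressWarnings("unchecked")
    @Override
    public void init(final ProcessorContext context) {
        super.init(context);
        // the store must be created and attached to this processor in the topology
        store = (KeyValueStore<String, String>) context.getStateStore("buffer-store");

        // option (2): event-time punctuation that scans the store on a regular basis
        context.schedule(Duration.ofSeconds(30), PunctuationType.STREAM_TIME, timestamp -> {
            try (final KeyValueIterator<String, String> iter = store.all()) {
                while (iter.hasNext()) {
                    final KeyValue<String, String> entry = iter.next();
                    // whether an entry should be forwarded yet is application specific
                    context.forward(entry.key, entry.value);
                    store.delete(entry.key);
                }
            }
        });
    }

    @Override
    public void process(final String key, final String value) {
        // option (1) would go here instead: when data for a different key arrives,
        // check the store for anything that should be flushed before buffering
        store.put(key, value);
    }
}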
Sounds about right.
On commit, both stores would be flushed to local disk and the producer
would be flushed to ensure all writes to the changelog topics are done.
Only afterwards, the input topic offsets would be committed.
Because flushing happens one after the other for the stores (and in no
par
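In other words, on commit the order boils down to something like this
(purely illustrative sketch, not the actual Streams code; all names are
made up):

import java.util.Collection;
import java.util.Map;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.streams.processor.StateStore;

class CommitOrderSketch {
    private final Collection<StateStore> stores;      // the task's state stores
    private final Producer<byte[], byte[]> producer;  // writes to the changelog topics
    private final Consumer<byte[], byte[]> consumer;  // reads the input topics

    CommitOrderSketch(final Collection<StateStore> stores,
                      final Producer<byte[], byte[]> producer,
                      final Consumer<byte[], byte[]> consumer) {
        this.stores = stores;
        this.producer = producer;
        this.consumer = consumer;
    }

    void commit(final Map<TopicPartition, OffsetAndMetadata> inputOffsets) {
        // 1) flush each store to local disk, one after the other
        for (final StateStore store : stores) {
            store.flush();
        }
        // 2) flush the producer so all writes to the changelog topics are done
        producer.flush();
        // 3) only afterwards commit the input topic offsets
        consumer.commitSync(inputOffsets);
    }
}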
+1. I ran the broker, producer, consumer, etc.
best,
Colin
On Tue, Oct 22, 2019, at 13:32, Guozhang Wang wrote:
> +1. I've run the quick start and unit tests.
>
>
> Guozhang
>
> On Tue, Oct 22, 2019 at 12:57 PM David Arthur wrote:
>
> > Thanks, Jonathon and Jason. I've updated the release n
Hi List,
I'm incredibly new to the Kafka world and am trying to diagnose an issue we
have with ZooKeeperRequestLatencyMs hovering at ~1 sec on one of our Kafka
servers.
In a 6-node setup, all but one of the nodes have ZooKeeperRequestLatencyMs
down around 500ms.
The underlying hosts don't loo
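Aside, in case it helps to compare the brokers: the latency histogram should
be readable per host over JMX. Assuming JMX is enabled on the brokers (e.g.
JMX_PORT=9999) and the usual Yammer histogram attribute names, a minimal
reader would look roughly like this (treat the MBean coordinates and
attribute names as assumptions):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ZkLatencyCheck {
    public static void main(final String[] args) throws Exception {
        final String host = args.length > 0 ? args[0] : "localhost"; // broker to inspect
        final String url = "service:jmx:rmi:///jndi/rmi://" + host + ":9999/jmxrmi";
        try (JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(url))) {
            final MBeanServerConnection mbs = connector.getMBeanServerConnection();
            final ObjectName latency = new ObjectName(
                    "kafka.server:type=ZooKeeperClientMetrics,name=ZooKeeperRequestLatencyMs");
            System.out.println("Mean           : " + mbs.getAttribute(latency, "Mean"));
            System.out.println("99th percentile: " + mbs.getAttribute(latency, "99thPercentile"));
            System.out.println("Max            : " + mbs.getAttribute(latency, "Max"));
        }
    }
}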
+1. I've run the quick start and unit tests.
Guozhang
On Tue, Oct 22, 2019 at 12:57 PM David Arthur wrote:
> Thanks, Jonathon and Jason. I've updated the release notes along with the
> signature and checksums. KAFKA-9053 was also missing.
>
> On Tue, Oct 22, 2019 at 3:47 PM Jason Gustafson
>
Thanks, Jonathon and Jason. I've updated the release notes along with the
signature and checksums. KAFKA-9053 was also missing.
On Tue, Oct 22, 2019 at 3:47 PM Jason Gustafson wrote:
> +1
>
> I ran the basic quickstart on the 2.12 artifact and verified
> signatures/checksums.
>
> I also looked o
+1
I ran the basic quickstart on the 2.12 artifact and verified
signatures/checksums.
I also looked over the release notes. I see that KAFKA-8950 is included, so
maybe they just need to be refreshed.
Thanks for running the release!
-Jason
On Fri, Oct 18, 2019 at 5:23 AM David Arthur wrote:
>
I'm less familiar with that part of the code but that sounds correct to me.
Your default request timeout is 300 seconds though, is that right? Seems
like it should be large enough in most scenarios. Did you see any network
outages around that time?
On Wed, Oct 16, 2019 at 10:30 AM Xiyuan Hu wrote:
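For context: if that 300-second value refers to the client's
request.timeout.ms, the corresponding config would look roughly like the
snippet below. The producer side is only an assumption here; the thread
does not say which client is involved.

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

public class TimeoutConfigSketch {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 300_000);  // 300 seconds per request
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 420_000); // at least request.timeout.ms + linger.ms
        // serializers etc. omitted
    }
}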
Did you take transaction markers into account?
Each time a transaction is committed or aborted, a commit or abort
marker is written that occupies one offset.
-Matthias
On 10/17/19 8:38 AM, Ludwig Schmid wrote:
> Hello,
>
> in an application I use a producer and a consumer. The consumer polls d
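To make the offset arithmetic concrete, a small hypothetical example; the
topic name, the transactional.id, and the assumption that no other writer
interleaves are all illustrative:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MarkerOffsetDemo {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "marker-demo");     // illustrative id
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("demo-topic", "k", "v1")); // offset N
            producer.send(new ProducerRecord<>("demo-topic", "k", "v2")); // offset N+1
            producer.commitTransaction(); // the commit marker occupies offset N+2
            // so the log end offset advances by 3, although only 2 records were sent
        }
    }
}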
Thanks for the details!
Seems Guozhang did a PR to improve the behavior:
https://github.com/apache/kafka/pull/7573
(I saw that you reviewed the PR already, just following up to close the
loop for others and make them aware of the change.)
-Matthias
On 10/15/19 1:17 PM, Javier Holguera wrote:
>
Hi there,
Around 3 AM we faced these errors in a Kafka client when trying to produce:
org.apache.kafka.common.KafkaException:
org.apache.kafka.common.errors.InvalidPidMappingException: The producer
attempted to use a producer id which is not currently assigned to its
transactional id
at
org.a
Everything has an impact. You cannot keep churning loads of messages under
the same operating conditions and expect nothing to change.
You now have to find out (via load testing) an optimum operating condition
(e.g. partition count, batch.size, etc.) for your producer/consumer to work
correctly. Remember that
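For example, the producer-side knobs typically involved in that exercise
look like the following; the values are arbitrary starting points, not
recommendations, and the right numbers have to come out of your own load
tests:

import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerTuningSketch {
    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);      // bytes per partition batch
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);              // wait up to 10 ms to fill batches
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");    // shrink batches on the wire
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 64L * 1024 * 1024); // total buffering
        // consumer side: max.poll.records, fetch.min.bytes, etc. are tuned the same way
    }
}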
I wanted to understand whether the broker will become unstable with a large
number of consumers, or whether consumers will face issues such as increasing lag?
On Mon, Oct 21, 2019 at 6:55 PM Shyam P wrote:
> What are you trying to do here? What's your objective?
>
> On Sat, Oct 19, 2019 at 8:45 PM Hrishikesh M
Hi Sophie,
Thank you for your elaborate response.
On Mon, Oct 14, 2019 at 10:13 PM Sophie Blee-Goldman
wrote:
> Honestly I can't say whether 256 partitions is enough to trigger the
> performance issues
> in 2.3.0 but I'd definitely recommend upgrading as soon as you can, just in
> case. On a
>