Thanks, Colin, for making a new RC for KAFKA-8564.
+1 (non-binding)
I checked the signatures and ran the quickstart on the 2.12 binary.
On Mon, Jun 24, 2019 at 6:03 AM Gwen Shapira wrote:
>
> +1 (binding)
> Verified signatures, verified a good build on Jenkins, built from
> sources anyway and ran the quickstart o
Hi Kafka Streams user,
I have this usage of Kafka Streams and it works well that sets retention time
in KTable, both in the internal topics and RocksDB local states.
final KStream eventStream = builder
    .stream("events",
        Consumed.with(Serdes.Integer(), Ser
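The archive preview cuts the snippet off; a plausible completion, assuming the
truncated value serde is Serdes.String(), might look like this:

    // hypothetical completion of the truncated snippet above;
    // Serdes.String() as the value serde is an assumption
    final KStream<Integer, String> eventStream = builder
        .stream("events",
            Consumed.with(Serdes.Integer(), Serdes.String()));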
Hi Guozhang,
An update on this. I've tested your hotfix as follows:
* Branched from apache/kafka:2.2 and applied your hotfix to it. I couldn’t
build your forked repo, but the official repo was working. Version 2.4 was
resulting in test errors, so I branched off 2.2, which is the official version
Hey, this is a very apt question.
GroupByKey isn't a great example because it doesn't actually change
the key, so all the aggregation results are computed over records from
the same partition. But let's say you do a groupBy or a map (or any
operation that can change the key), followed by an aggregat
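For example, a minimal sketch of a key-changing groupBy followed by an
aggregation (the "orders" topic and the region-extracting selector are
hypothetical):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Grouped;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    StreamsBuilder builder = new StreamsBuilder();
    KStream<Integer, String> orders = builder
        .stream("orders", Consumed.with(Serdes.Integer(), Serdes.String()));

    // groupBy changes the key, so Streams inserts a repartition topic here:
    // records are shuffled so that all records sharing the same new key land
    // on the same partition before the aggregation runs.
    KTable<String, Long> countsByRegion = orders
        .groupBy((orderId, region) -> region,
                 Grouped.with(Serdes.String(), Serdes.String()))
        .count();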
Hey Sendoh,
I think you just overlooked the javadoc in your search, which says:
> @deprecated since 2.1. Use {@link Materialized#withRetention(Duration)} or
> directly configure the retention in a store supplier and use {@link
> Materialized#as(WindowBytesStoreSupplier)}.
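In practice the replacement looks roughly like this (a sketch assuming Kafka
Streams 2.1+; "event-counts" is a hypothetical store name, and eventStream is
a KStream<Integer, String> as in your snippet):

    import java.time.Duration;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.kstream.KTable;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.TimeWindows;
    import org.apache.kafka.streams.kstream.Windowed;
    import org.apache.kafka.streams.state.WindowStore;

    // instead of the deprecated retention setting, configure retention
    // on the materialized window store:
    KTable<Windowed<Integer>, Long> counts = eventStream
        .groupByKey()
        .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
        .count(Materialized.<Integer, Long, WindowStore<Bytes, byte[]>>as("event-counts")
            .withRetention(Duration.ofHours(1)));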
Sorry for the confusi
Hi there,
Recently we rolling-upgraded our internal brokers from version 0.11.0.1 to
version 2.1.1 in different envs, with the following definition of JMX metrics
for Introscope Wily:
JMX|kafka.server|Fetch:queue-size,\
JMX|kafka.server|Fetch|*:byte-rate,\
JMX|kafka.server|Fetch|*:throttle-time,\
JM
Hello! I’m interested in trying to get my Kafka Consumer to keep eating
records. However, after a short period of time, it stops incrementing. How do
you usually get this to work? Below is a short configuration that I use for my
KafkaConsumer. Any help would be greatly appreciated.
hostname =
Hey Kevin,
Could you give more context on what `keep eating records` and
`stops incrementing` mean? In a typical use case, you should call `poll()` in
a while loop, and if you stop seeing new records, it could be either that your
consumer is not working correctly or that your input volume is not fe
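For reference, a minimal sketch of such a poll loop (broker address, group id,
and topic name are placeholders):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    Properties props = new Properties();
    props.put("bootstrap.servers", "localhost:9092");  // placeholder
    props.put("group.id", "example-group");            // placeholder
    props.put("key.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer",
        "org.apache.kafka.common.serialization.StringDeserializer");

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
        consumer.subscribe(Collections.singletonList("events")); // placeholder topic
        while (true) {
            // keep polling; an empty batch just means no new input arrived
            ConsumerRecords<String, String> records =
                consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                    record.offset(), record.key(), record.value());
            }
        }
    }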
Is there any way to copy consumer offsets while moving data from one cluster
to another cluster?
I think it is not possible if you're using MirrorMaker. Confluent Platform has
a product called Replicator that does what you want.
Cheers
Barbosa
Sent from Yahoo Mail on Android
On Mon, Jun 24, 2019 at 12:34, Mohit
Kumar wrote: Is there any way to copy
consumer offsets while mo
Mohit, PR-6295 and KIP-382 introduce MirrorMaker 2.0 (MM2), which was designed
to support this operation.
In a nutshell, MM2 maintains a sparse offset sync stream while replicating
records between clusters. The offset syncs are used to translate consumer
offsets for periodic cross-cluster checkpoints. Le
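For illustration, once MM2 lands, offset translation might look roughly like
this (a sketch assuming the RemoteClusterUtils helper proposed in KIP-382;
the cluster alias, bootstrap address, and group id are hypothetical):

    import java.time.Duration;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.connect.mirror.RemoteClusterUtils;

    Map<String, Object> props = new HashMap<>();
    props.put("bootstrap.servers", "target-cluster:9092"); // hypothetical target cluster

    // translate the group's offsets from the "source" cluster alias into
    // offsets that are valid on the target cluster
    Map<TopicPartition, OffsetAndMetadata> translated =
        RemoteClusterUtils.translateOffsets(
            props, "source", "my-consumer-group", Duration.ofSeconds(30));

    // committing `translated` on the target cluster lets the group resume
    // from the equivalent position there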
Hi all,
This vote passes with 6 +1 votes (3 of which are binding) and no 0 or -1 votes.
Thanks to everyone who voted.
+1 votes
PMC Members:
* Ismael Juma
* Guozhang Wang
* Gwen Shapira
Community:
* Kamal Chandraprakash
* Jakub Scholz
* Mickael Maison
0 votes
* No votes
-1 votes
* No votes
John,
Thanks for the nice explanation. When the repartitioning happens, does the
window get associated with the new partition, i.e., does a message with a new
timestamp now have to appear on the repartition topic for the window to
expire? It is possible that there is a new stream of messages coming