Hi
I am new to this mailing list, so I'm not sure if this is the right place to
send this. Please let me know if it's not.
I believe I found a bug in the TopologyTestDriver.
I have a topology that aggregates on a KTable. This is a generic method I
created to build this topology on different topics I
Hi
I am trying to understand why a KTable to KTable left join is being called
twice when I receive a message on the right table.
Here is my Topology:
Serde authorSerde = ...
Serde> bSetSerde = ...
Serde> apSetSerde = ...
KTable authorTable = builder.table(AUTHOR_TOPIC,
Consumed.with(Serdes.String(
oin(B)", internally KafkaStreams also creates B.rightJoin(A), and
sets sendOldValues on A. Later, by calling tmp.leftJoin(C), it could be
setting sendOldValues on B, or something like that...
Are there any known issues with cascading leftJoins?
Thanks
Murilo
On Tue, 29 Jan 2019 at 14:10, Muri
lized. To be able to compute the correct result for this case, we
> need to send the old and new join result downstream to allow the
> downstream join to compute the correct result. It's storage/computation
> trade-off.
>
> Does this answer your question?
>
>
> -Matthi
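A minimal sketch of the kind of cascading KTable left-join topology discussed above. The topic names, value types, and join logic are assumptions for illustration only; the point is the shape `A.leftJoin(B).leftJoin(C)`, where the intermediate join result feeds a second join.

```java
// Sketch only: cascading KTable-KTable left joins.
// Topic names, types, and serdes are placeholders.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;

public class CascadingLeftJoins {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        Consumed<String, String> consumed =
                Consumed.with(Serdes.String(), Serdes.String());

        KTable<String, String> a = builder.table("topic-a", consumed);
        KTable<String, String> b = builder.table("topic-b", consumed);
        KTable<String, String> c = builder.table("topic-c", consumed);

        // tmp = A.leftJoin(B); the downstream join on tmp is what makes
        // Streams forward old and new join results, as explained above.
        KTable<String, String> tmp =
                a.leftJoin(b, (va, vb) -> va + "|" + vb);
        KTable<String, String> result =
                tmp.leftJoin(c, (vt, vc) -> vt + "|" + vc);

        result.toStream().to("output-topic");
    }
}
```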
Hi
After some research, I've come across a few discussions, and they all tell me
that Kafka Streams requires the topics to be created before starting the
application.
Nevertheless, I'd like my application to keep retrying if a topic does not
exist.
I've seen this thread:
https://groups.google.com/forum/
> > > > In the meantime, you can try to work around it with the
> StateListener.
> > > > When Streams has a successful start-up, you'll see it transition from
> > > > REBALANCING to RUNNING, so if you see it transition to
> > PENDING_SHUTDOWN,
>
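The StateListener workaround mentioned in that reply can be sketched as follows. The reaction shown (logging and scheduling a retry) is an assumption, not part of the original thread; only the state transitions named above come from it.

```java
// Sketch of the StateListener workaround: watch client-state transitions
// and react when startup fails (e.g. because a source topic is missing).
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.Topology;

public class StartupWatcher {
    public static KafkaStreams start(Topology topology, Properties props) {
        KafkaStreams streams = new KafkaStreams(topology, props);
        streams.setStateListener((newState, oldState) -> {
            if (oldState == KafkaStreams.State.REBALANCING
                    && newState == KafkaStreams.State.RUNNING) {
                // Successful start-up, per the transition described above
                System.out.println("Streams started successfully");
            } else if (newState == KafkaStreams.State.PENDING_SHUTDOWN
                    || newState == KafkaStreams.State.ERROR) {
                // Hypothetical retry hook: close and re-create the
                // KafkaStreams instance after a delay
                System.err.println("Startup failed; schedule a retry");
            }
        });
        streams.start();
        return streams;
    }
}
```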
Hi
I have a complex KafkaStreams topology, where I have a bunch of KTables
that I regroup (rekeying) and aggregate so I can join them.
I've noticed that the "-repartition" topics created by the groupBy
operations have a very long retention by default (Long.MAX_VALUE).
I'm a bit concerned about the
to actively delete data from
> those topics after the records are processed. Hence, those topics won't
> grow unbounded but are "truncated" on a regular basis.
>
>
> -Matthias
>
> On 8/21/19 11:38 AM, Murilo Tavares wrote:
> > Hi
> > I have a comple
ache.org/confluence/display/KAFKA/KIP-107%3A+Add+deleteRecordsBefore%28%29+API+in+AdminClient
>
> There is `AdminClient#deleteRecords(...)` API to do so.
>
>
> -Matthias
>
> On 8/21/19 9:09 PM, Murilo Tavares wrote:
> > Thanks Matthias for the prompt response.
> >
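The `AdminClient#deleteRecords(...)` API referenced above can be used roughly like this. The bootstrap server, topic name, partition, and offset are placeholders; records with offsets below the given one are deleted from that partition.

```java
// Sketch of truncating a topic partition with Admin#deleteRecords.
// Connection details and offsets are placeholders.
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class TruncateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            TopicPartition tp = new TopicPartition("some-repartition-topic", 0);
            // Delete every record with offset < 1000 in partition 0
            admin.deleteRecords(Map.of(tp, RecordsToDelete.beforeOffset(1000L)))
                 .all().get();
        }
    }
}
```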
Hi everyone
In light of the discussions about order guarantee in Kafka, I am struggling
to understand how that affects KafkaStreams internal *KafkaProducer*.
In the official documentation, this section (
https://docs.confluent.io/current/streams/concepts.html#out-of-order-handling)
enumerates
2 cau
> go with default configs.
>
>
> -Matthias
>
>
> On 12/2/19 12:02 PM, Murilo Tavares wrote:
> > Hi everyone
> > In light of the discussions about order guarantee in Kafka, I am
> struggling
> > to understand how that affects KafkaStreams internal *Kaf
Hi
I have a KafkaStreams application that's pretty simple, and acts as a
repartitioner... It reads from input topics and sends to output topics,
based on an input-to-output topics map. It has a custom Repartitioner that
will be responsible for assigning new partitions for the data in the output
topic
standby tasks: []
On Wed, 5 Feb 2020 at 13:34, Murilo Tavares wrote:
> Hi
> I have a KafkaStreams application that's pretty simple, and acts as a
> repartitioner... It reads from input topics and sends to output topics,
> based on an input-to-output topics map. It has a custom R
to have fixed the NPEs.
Thanks
Murilo
On Wed, Feb 5, 2020 at 1:34 PM Murilo Tavares wrote:
> Hi
> I have a KafkaStreams application that's pretty simple, and acts as a
> repartitioner... It reads from input topics and sends to output topics,
> based on an input-to-output top
ology. It seems like a bug
> to me.
>
> Boyang
>
> On Thu, Feb 6, 2020 at 8:17 AM Murilo Tavares wrote:
>
> > Answering my own question, obviously this is a stateless application, so
> > there’s no reset needed. My bad.
> > But the NPE does seem to be linked to
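The repartitioner described in this thread can be sketched as below. The topic map, serdes, and the partitioning rule (key hash) are assumptions; the original application's custom Repartitioner logic is not shown in the thread.

```java
// Sketch: read each input topic and write to its mapped output topic,
// assigning partitions with a custom StreamPartitioner.
import java.util.Map;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.StreamPartitioner;

public class Repartitioner {
    public static void build(StreamsBuilder builder,
                             Map<String, String> inputToOutput) {
        // Hypothetical rule: route by key hash; replace with whatever
        // partition assignment your data actually needs.
        StreamPartitioner<String, String> partitioner =
                (topic, key, value, numPartitions) ->
                        Math.abs(key.hashCode()) % numPartitions;

        inputToOutput.forEach((input, output) ->
                builder.stream(input,
                               Consumed.with(Serdes.String(), Serdes.String()))
                       .to(output,
                           Produced.with(Serdes.String(), Serdes.String())
                                   .withStreamPartitioner(partitioner)));
    }
}
```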
Hi
I am currently doing a simple KTable groupby().aggregate() in KafkaStreams.
In the groupBy I do need to select a new key, but I know for sure that the
new key would still fall in the same partition. Because of this, I believe
the repartition would not be necessary, but my question is: is it poss
s.apache.org/jira/browse/KAFKA-4835
>
>
> Guozhang
>
> On Fri, Feb 28, 2020 at 7:01 AM Murilo Tavares
> wrote:
>
> > Hi
> > I am currently doing a simple KTable groupby().aggregate() in
> KafkaStreams.
> > In the groupBy I do need to select a new key, but I
of that ticket
> >> so I cannot say for sure. I saw the last update is on Jan/2019 so
> >> maybe it's a bit loose now.. If you want to pick it up and revive
> >> the KIP completion feel free to do so :)
> >>
> >>
> >> Guozhang
> >>
> >>
> >
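The `KTable groupBy().aggregate()` pattern from this thread looks roughly like the sketch below. Types, topic names, the re-keying function, and the aggregators are assumptions; note that today the `groupBy` always creates a "-repartition" topic, which is what the KAFKA-4835 discussion above is about.

```java
// Sketch of a KTable groupBy().aggregate() that selects a new key.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class RekeyAggregate {
    public static KTable<String, Long> build(StreamsBuilder builder) {
        KTable<String, String> table = builder.table(
                "input-topic", Consumed.with(Serdes.String(), Serdes.String()));

        return table
                // Hypothetical re-keying: take the key prefix, which (per
                // the question) would land in the same partition.
                .groupBy((k, v) -> KeyValue.pair(k.split(":")[0], v),
                         Grouped.with(Serdes.String(), Serdes.String()))
                .aggregate(
                        () -> 0L,                // initializer
                        (k, v, agg) -> agg + 1L, // adder
                        (k, v, agg) -> agg - 1L, // subtractor
                        Materialized.with(Serdes.String(), Serdes.Long()));
    }
}
```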
Hi
I'm really excited about the new release for KafkaStreams.
I've been watching the new CoGroup feature, and now that this is out, I'm
trying to play around with it.
I wonder what would be the best way to do a
KTable.leftJoin(otherTable).leftJoin(yetAnotherTable)...
Taking the Customer example in
tream records*/)" and then it can be included in the
> co-group.
>
> Guozhang
>
>
> On Thu, Apr 16, 2020 at 1:23 PM Murilo Tavares
> wrote:
>
> > Hi
> > I'm really excited about the new release for KafkaStreams.
> > I've been watching the n
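The cogroup alternative suggested in this thread, in place of chaining `KTable.leftJoin(...)` calls, can be sketched as below. Topic names, value types, and the string-concatenating aggregators are assumptions for illustration.

```java
// Sketch of the cogroup API: aggregate several grouped streams into a
// single KTable instead of chaining left joins.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;

public class CoGroupExample {
    public static KTable<String, String> build(StreamsBuilder builder) {
        Consumed<String, String> consumed =
                Consumed.with(Serdes.String(), Serdes.String());
        Grouped<String, String> grouped =
                Grouped.with(Serdes.String(), Serdes.String());

        KGroupedStream<String, String> customers =
                builder.stream("customers", consumed).groupByKey(grouped);
        KGroupedStream<String, String> orders =
                builder.stream("orders", consumed).groupByKey(grouped);

        // Each cogroup() call contributes its own aggregator into the
        // shared aggregate value.
        return customers
                .cogroup((key, value, agg) -> agg + ";customer=" + value)
                .cogroup(orders, (key, value, agg) -> agg + ";order=" + value)
                .aggregate(() -> "",
                           Materialized.with(Serdes.String(), Serdes.String()));
    }
}
```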
Hi Dumitru
The TopologyTestDriver you are using was designed to unit test your
topology, and will not work with the stack you run locally.
That said, if you want to test your topology, you first need to create the
fake input topic by calling “topologyDriver.createInputTopic()” (assuming
you are usi
lizer())
> > > val input = records.map { case (k, v) => new TestRecord(k, v) }
> > > topicInput.pipeRecordList(input.asJava)
> > >
> > > What could be the explanation?
> > > Thank you,
> > > Nicu
> > >
> > > On Tue, 21 Ap
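The `topologyDriver.createInputTopic()` flow recommended above can be sketched as follows. The topology here is a trivial placeholder built just so the driver has something to run; only the driver/input-topic usage reflects the advice in the thread.

```java
// Sketch: drive a topology with TopologyTestDriver by first creating the
// test input topic, then piping records into it.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.TestInputTopic;
import org.apache.kafka.streams.TestOutputTopic;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.kstream.Consumed;

public class TopologyDriverExample {
    public static void main(String[] args) {
        // Placeholder topology: uppercase every value
        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input", Consumed.with(Serdes.String(), Serdes.String()))
               .mapValues(v -> v.toUpperCase())
               .to("output");

        try (TopologyTestDriver driver =
                     new TopologyTestDriver(builder.build(), new Properties())) {
            TestInputTopic<String, String> in = driver.createInputTopic(
                    "input", new StringSerializer(), new StringSerializer());
            TestOutputTopic<String, String> out = driver.createOutputTopic(
                    "output", new StringDeserializer(), new StringDeserializer());

            in.pipeInput("k", "hello");
            // out.readValue() should now yield the uppercased value
        }
    }
}
```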
Hi
I wonder how the KafkaProducer Metadata version works.
I have some KafkaProducers in Production that started failing and will not
recover after a topic was recreated (deleted and created again). Those are
shared producers, so I stopped publication of that topic before the
procedure, but I believ
Hi
I got the same behaviour yesterday while trying to upgrade my KafkaStreams
app from 2.4.1 to 2.7.0. Our brokers are on 2.2.1.
Looking at KAFKA-9752 it mentions the cause being two other tickets:
https://issues.apache.org/jira/browse/KAFKA-7610
https://issues.apache.org/jira/browse/KAFKA-9232
A
: null) is unavailable
or invalid due to cause: null.isDisconnected: true. Rediscovery will be
attempted.
Followed by
Discovered group coordinator broker-1:9092 (id: 2147483646 rack: null)
On Fri, 26 Feb 2021 at 09:59, Murilo Tavares wrote:
> Hi
> I got the same behaviour yesterday while tr
Hi
I have Kafka brokers running on version 2.4.1, with a KafkaStreams app on
2.4.0.
I'm trying to upgrade my KafkaStreams to v2.7.0, but I got my instances
stuck on startup.
In my understanding, I don't need any special procedure to upgrade from
KStreams 2.4.0 to 2.7.0, right?
The following error
root cause of this warning is not clear from the information you
> gave. Did you maybe reset the application but not wipe out the local
> state stores?
>
> Best,
> Bruno
>
> On 12.03.21 19:11, Murilo Tavares wrote:
> > Hi
> > I have Kafka brokers running on ver
cation? Did it
> work?
>
> Best,
> Bruno
>
> On 15.03.21 14:26, Murilo Tavares wrote:
> > Hi Bruno
> > Thanks for your response.
> > No, I did not reset the application prior to upgrading. That was simply
> > upgrading KafkaStreams from 2.4 to 2.7.
> >
>
5 Mar 2021 at 10:20, Bruno Cadonna
wrote:
> Hi Murilo,
>
> A couple of questions:
>
> 1. What do you mean exactly with clean up?
> 2. Do you have a cleanup policy specified on the changelog topics?
>
> Best,
> Bruno
>
> On 15.03.21 15:03, Murilo Tavares wrote:
wn.
>
> I am wondering why you get an out-of-range exception after upgrading
> without clean up, though.
>
> A solution would be to clean up before upgrading in your large
> environment. I do not know if this is a viable solution for you.
>
> Best,
> Bruno
>
>
Hi. Looking for some insights here.
We use Kafka at a large scale, and have lots of microservices using Kafka
for all sorts of things.
Our biggest challenge nowadays is to track which topics are used and which
are not.
I have considered looking at consumer groups to identify which applications
cons
they should use consumer
> groups as well IIUC.
>
> Boyang
>
> On Thu, Oct 7, 2021 at 11:55 AM Murilo Tavares
> wrote:
>
> > Hi. Looking for some insights here.
> > We use Kafka at a large scale, and have lots of microservices using Kafka
> > for all sorts of thing
e to restrict data reads
>
> https://docs.confluent.io/platform/current/kafka/authorization.html#operations
> or I'm missing some context here.
>
> On Thu, Oct 7, 2021 at 12:44 PM Murilo Tavares
> wrote:
>
> > Hi Boyang
> > Thanks for your response.
> > Yes, I'
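The consumer-group approach floated in this thread can be sketched with the Admin API: list every group, then collect the topics each group holds committed offsets for. Any topic missing from the resulting set has no offset-committing consumer (this won't catch consumers that never commit offsets).

```java
// Sketch: find which topics have committed consumer-group offsets.
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

public class ConsumedTopics {
    public static Set<String> consumedTopics(Properties props) throws Exception {
        Set<String> topics = new HashSet<>();
        try (Admin admin = Admin.create(props)) {
            for (ConsumerGroupListing g :
                    admin.listConsumerGroups().all().get()) {
                admin.listConsumerGroupOffsets(g.groupId())
                     .partitionsToOffsetAndMetadata().get()
                     .keySet()
                     .forEach(tp -> topics.add(tp.topic()));
            }
        }
        return topics;
    }
}
```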
Hi
I have a large KafkaStreams topology, and for a while I have failed to
upgrade it from version 2.4.1 to 2.7.0, and this time to version 2.8.1.
(it keeps getting stuck in a rebalance loop)
I was able to revert it from v2.7.0 back to 2.4.1 in the past, but now I
can't rollback my code, as I get the following e
Hi
I have a large, stateful KafkaStreams application that is in a never-ending
rebalance loop.
I can see that task restorations take a long time (circa 30-45 min), and
after that I see this error.
This is followed by tasks being suspended, and the instance re-joining the
group and a new rebalan
l instances, before restarting with 2.4.1.
>
> -Matthias
>
> On 10/13/21 7:34 AM, Murilo Tavares wrote:
> > Hi
> > I have a large KafkaStreams topology, and for a while I have failed to
> > upgrade it from version 2.4.1 to 2.7.0, and this time to version 2.8.1.
> > (k
lease if it is not yet on that version?
>
> On Wed, Oct 13, 2021 at 8:02 AM Murilo Tavares
> wrote:
>
> > Hi
> > Hi
> > I have a large, stateful, KafkaStreams application that is on a never
> > ending rebalance loop.
> > I can see that Task restorations take a long
What about Kafka-Connect?
Anyone has checked if any of the Confluent KafkaConnect docker images embed
log4j v2?
Thanks
On Mon, 13 Dec 2021 at 21:39, Luke Chen wrote:
> Hi all,
>
> Here's the comments for CVE-2021-44228 vulnerability *from SLF4J project*.
> REF: http://slf4j.org/log4shell.html
>
Hi
In order to get rid of some non-log4j vulnerabilities, I upgraded my
KafkaConnect docker image from 6.0.2 to 6.0.5.
It looks to me 6.0.5 (launched 5 days ago) is broken.
I get the following error:
[Errno 13] Permission denied: '/etc/kafka/connect-log4j.properties'
Command [/usr/local/bin/dub tem
Got the same error on 6.2.2
-- Forwarded message -
From: Murilo Tavares
Date: Wed, 15 Dec 2021 at 09:51
Subject: Issue on confluentinc/cp-kafka-connect:6.0.5
To:
Hi
In order to get rid of some non-log4j vulnerabilities, I upgraded my
KafkaConnect docker image from 6.0.2 to
Hi
I have a KafkaStreams application with a reasonably complex, stateful
topology.
By monitoring it, we can say for sure that it is bound by write I/O.
This has become way worse after we upgraded KafkaStreams from 2.4 to 2.8.
(even though we disabled warm-up replicas by setting
"acceptable.reco
Also worth mentioning the Kafka community has released this official
announcement:
https://kafka.apache.org/cve-list
On Fri, 7 Jan 2022 at 09:28, Roger Kasinsky
wrote:
> Hi Franziska,
>
> When upgrading to Log4J 2.x.x, take extra care not to upgrade to a 2.x.x
> version that has a more recent s
Hi
I have a KafkaStreams application that is too heavyweight, with 9
sub-topologies.
I am trying to disable some unneeded part of the topology that is
completely independent of the rest of the topology. Since my state stores
have fixed, predictable names, I compared the topologies and I believe it