Hi Roger,
maybe I wasn't clear enough. I'm not using Kafka myself; I'm a customer of the
MicroStrategy Platform, and MicroStrategy uses Kafka. Here is the problem: an old
Log4j 1.2 is delivered with Kafka.
https://www.apache.org/dyn/closer.cgi?path=/kafka/3.0.0/kafka_2.13-3.0.0.tgz
Log4j 2.x isn’t a drop-in replacement for 1.x. It isn’t a difficult change,
but somebody does need to go through all the source code and do the work.
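As a rough sketch of the kind of change involved (placeholder class, not taken
from the Kafka code base), the two versions live in different packages and
obtain loggers differently, and the configuration format changes as well
(log4j.properties vs. log4j2.xml):

// Log4j 1.x style, as used today:
//   import org.apache.log4j.Logger;
//   private static final Logger LOG = Logger.getLogger(Demo.class);

// Log4j 2.x equivalent -- different package and factory class:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Demo {
    private static final Logger LOG = LogManager.getLogger(Demo.class);

    public static void main(String[] args) {
        LOG.info("logger obtained via LogManager");
    }
}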
-Dave
From: Brosy, Franziska
Date: Monday, January 10, 2022 at 3:16 AM
To: users@kafka.apache.org
Subject: [EXTERNAL] Re: Log4j 1.2
Hi Roger,
Well. Hopefully there is someone who is able and willing to do that work.
I'm so sorry that I can't help.
Best regards
Franziska
-Original Message-
From: Tauzell, Dave
Sent: Monday, January 10, 2022 14:30
To: users@kafka.apache.org
Subject: Re: Log4j 1.2
There are two KIPs already related to this effort:
KIP-653
https://cwiki.apache.org/confluence/display/KAFKA/KIP-653%3A+Upgrade+log4j+to+log4j2
KIP-676
https://cwiki.apache.org/confluence/display/KAFKA/KIP-676%3A+Respect+logging+hierarchy
I believe the work is in progress; feel free to reach out.
Thanks. Those KIPs show that there is a fair amount of work for this.
From: Israel Ekpo
Date: Monday, January 10, 2022 at 9:32 AM
To: users@kafka.apache.org
Subject: [EXTERNAL] Re: Log4j 1.2
Hello
We are consuming two topics (A and B) and joining them, but I have noticed
that no matter what I do, topic A gets consumed first in a batch and then
topic B. Increasing *num.stream.threads* only makes topic A process a lot of
records faster. Topic B has a lot more messages than topic A.
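For reference, here is a minimal sketch of the kind of topology I mean (topic
names, serdes, thread count and the join window are placeholders, not our real
code):

import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class JoinSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "join-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // More threads only add parallel tasks; which record a task picks next
        // is still driven by record timestamps.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> a = builder.stream("topic-A");
        KStream<String, String> b = builder.stream("topic-B");

        // Windowed join of A and B on matching keys.
        a.join(b, (va, vb) -> va + "|" + vb, JoinWindows.of(Duration.ofMinutes(5)))
         .to("joined-output");

        new KafkaStreams(builder.build(), props).start();
    }
}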
Hi Miguel,
I checked your code and it seems fine to me, so I would not suspect
anything wrong with your logic. The next thing I'd suggest is to check whether
you have many cases where the same key gets deleted and then re-inserted (you
can add some logging at the `put` and `delete` calls).
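A minimal sketch of what that logging could look like, assuming the put/delete
calls are made against a KeyValueStore inside a custom processor (the store
name and types are placeholders):

import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;

public class AuditingProcessor implements Processor<String, String, String, String> {
    private KeyValueStore<String, String> store;

    @Override
    public void init(ProcessorContext<String, String> context) {
        // "my-store" is a placeholder; use the store name registered in your topology.
        store = context.getStateStore("my-store");
    }

    @Override
    public void process(Record<String, String> record) {
        if (record.value() == null) {
            System.out.println("DELETE key=" + record.key());
            store.delete(record.key());
        } else {
            System.out.println("PUT key=" + record.key());
            store.put(record.key(), record.value());
        }
    }
}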
Guozhang
Hi Miguel,
I suspect it's due to the timestamps in your topic A, which are earlier
than those in topic B. Note that Kafka Streams tries to synchronize joined
topics by processing records with smaller timestamps first, and hence if topic
A's messages have smaller timestamps, they will be selected over topic B's.
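One way to verify that is to log the timestamps Streams actually sees. A
minimal sketch (the class name is a placeholder; it would be registered via the
default.timestamp.extractor config, i.e.
StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class LoggingTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
        // Log where each timestamp comes from, then return the record's own
        // timestamp, as the default extractor does.
        System.out.printf("topic=%s partition=%d offset=%d timestamp=%d%n",
                record.topic(), record.partition(), record.offset(), record.timestamp());
        return record.timestamp();
    }
}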