Hi,
I have a 20-broker Kafka cluster and there are about 50 topics to consume.
Between creating a consumer for each topic and creating a single consumer for
all 50 topics, what are the pros and cons?
What would be the suggested way if I enable auto commit every 10 seconds?
Kafka client version is 0
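For what it's worth: a single consumer subscribed to all 50 topics needs only one group membership and one poll loop, but all topics then share the same rebalances, and one slow topic can delay processing of the others; one consumer per topic isolates lag and failures at the cost of roughly 50x the connections and threads. With auto commit, the interval is set on the consumer; a minimal config sketch (broker address and group id are illustrative):

```properties
# consumer.properties (sketch; values are illustrative)
bootstrap.servers=broker1:9092
group.id=my-app
enable.auto.commit=true
auto.commit.interval.ms=10000
```

Either way, a single consumer can subscribe to many topics at once by passing a collection of topic names (or a pattern) to `consumer.subscribe(...)`.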
Hi
I want to benchmark Kafka, configured such that a message that has been acked
by the broker to the producer is guaranteed to have been persisted to disk. I
changed the broker settings:
log.flush.interval.messages=1
log.flush.interval.ms=0
(Is this the proper way to do it?)
The impact is ve
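Those two settings do force a flush after every message, but "acked implies persisted" also depends on what the producer waits for. A sketch of the related settings (values are illustrative), assuming a replication factor of 3:

```properties
# broker (server.properties): force fsync on every message
log.flush.interval.messages=1
log.flush.interval.ms=0

# producer: wait for acknowledgement from the full in-sync replica set
acks=all

# topic/broker: fail the write rather than accept it under-replicated
min.insync.replicas=2
```

Note that many deployments rely on `acks=all` plus replication for durability instead of per-message fsync, which is typically much cheaper.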
Hello Sachin,
I just read John's / Bill's comments on that ticket (I was not on KAFKA-9533
before, so it was kind of new to me), and I think that besides John's
rationale, which I agree with since for KStream returning a null value with a
non-null key could still have a valid meaning, the behavior has
Hi Sachin,
I am afraid I cannot follow your point.
You can still use a filter if you do not want to emit records
downstream, without triggering any repartitioning.
Best,
Bruno
On Tue, Feb 25, 2020 at 6:43 PM Sachin Mittal wrote:
>
> Hi,
> This is really getting interesting.
> Now if we don't want a
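Bruno's suggestion in code form, as a minimal sketch (topic names and broker address are illustrative): `filter()` only drops records and cannot change the key, so it never marks the stream for repartitioning.

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class FilterExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> stream = builder.stream("input-topic");

        // filter() drops records without emitting anything downstream for them;
        // since it cannot change the key, no repartition topic is created.
        stream.filter((key, value) -> value != null)
              .to("output-topic");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "filter-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }
}
```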
Are you aware of KIP-557?
https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams
It seems it would address your use case.
-Matthias
On 2/25/20 6:45 PM, Adam Rinehart wrote:
> Bruno and Guozhang,
Bruno and Guozhang,
Thank you for the replies. Between the two of you, I think I know how to code
what I wanted. I'm going with
stream.flatTransform(...).groupByKey().aggregate()
because of an additional requirement that I hadn't stated in the original
message: I was planning on using a punctuate
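A minimal sketch of the chain Adam describes (topic names, serdes, and the concatenating aggregator are illustrative; the punctuator part is omitted since the message is cut off). The key point is that `flatTransform` may return an empty collection, which is how a record is suppressed without ever sending a null downstream:

```java
import java.util.List;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class FlatTransformExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("input-topic")
            // Emit zero records to drop, one (or more) to forward.
            .flatTransform(() -> new Transformer<String, String,
                    Iterable<KeyValue<String, String>>>() {
                public void init(ProcessorContext context) { }
                public Iterable<KeyValue<String, String>> transform(String key, String value) {
                    return value == null ? List.of() : List.of(KeyValue.pair(key, value));
                }
                public void close() { }
            })
            .groupByKey()
            // Toy aggregator: concatenate values per key.
            .aggregate(() -> "", (key, value, agg) -> agg + value)
            .toStream()
            .to("output-topic");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "flat-transform-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        new KafkaStreams(builder.build(), props).start();
    }
}
```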
Hi,
This is really getting interesting.
Now if we don't want a record to be emitted downstream, the only way we can
do that is via transform (or flatTransform).
Since we are now reverting the fix for null records in transformValues and
changing the docs instead, doesn't this add a bit of confusion for users?
Conf
Hello Guozhang and Adam,
Regarding Guozhang's proposal please see recent discussions about
`transformValues()` and returning `null` from the transformer:
https://issues.apache.org/jira/browse/KAFKA-9533?focusedCommentId=17044602&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
You can use the kafka-console-consumer that comes with your Kafka
deployment, or you can install kafkacat (which I found simpler to
use):
brew install kafkacat
kafkacat -b your.broker.com:yourPORT -t yourtopic -c <max-messages>
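Either tool's output can simply be redirected to a file (`... > out.csv`), but the raw values are only valid CSV if they contain no commas, quotes, or newlines. If they might, a small escaping helper is needed; a sketch in plain Java (class and method names are hypothetical), whose input could come from any consumer loop:

```java
public class CsvFormat {
    // Quote a field if it contains a comma, quote, or newline (RFC 4180 style).
    static String escape(String field) {
        if (field == null) return "";
        if (field.contains(",") || field.contains("\"") || field.contains("\n")) {
            return "\"" + field.replace("\"", "\"\"") + "\"";
        }
        return field;
    }

    // One CSV line per record: key,value
    static String toCsvLine(String key, String value) {
        return escape(key) + "," + escape(value);
    }

    public static void main(String[] args) {
        System.out.println(toCsvLine("id-1", "hello, world"));
        // prints: id-1,"hello, world"
    }
}
```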
On Tue, Feb 25, 2020 at 9:03 AM Doaa K. Amin wrote:
>
> Hello,
> I'm
Hello,
I'm new to Kafka and I'd like to write data from Kafka to a CSV file on a Mac.
Please advise.
Thank you & kindest regards,
Doaa.
Hi,
I just configured SSL on 3 brokers.
Here is my configuration (I have replaced the real hostnames with a dummy
hostname):
inter.broker.listener.name=CLIENT
listeners=CLIENT://dummyhost.mycom.com:9092,SSL://dummyhost.mycom.com:9093
advertised.listeners=CLIENT://dummyhost.mycom.com:9092
#security.inter.broker
Hello experts,
I am facing errors while starting a broker with advertised.listeners.
Can someone send me a sample configuration where all the settings below are
shown with dummy host names, along with SSL?
listeners
advertised.listeners
inter.broker.listener.name
Thanks,
Sunil.
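Not a substitute for a real answer, but a sketch of a broker config covering those three settings with dummy host names (paths and passwords are placeholders). One common pitfall: a custom listener name like CLIENT must also be declared in `listener.security.protocol.map`:

```properties
# server.properties (sketch; host names, paths, and passwords are placeholders)
listener.security.protocol.map=CLIENT:PLAINTEXT,SSL:SSL
listeners=CLIENT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=CLIENT://broker1.example.com:9092,SSL://broker1.example.com:9093
inter.broker.listener.name=CLIENT

# keystore/truststore for the SSL listener
ssl.keystore.location=/var/private/ssl/broker1.keystore.jks
ssl.keystore.password=changeit
ssl.truststore.location=/var/private/ssl/broker1.truststore.jks
ssl.truststore.password=changeit
```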