Hi,
When I run consumer-perf-test from the Kafka bin directory to measure
performance, the output file shows only the headers.
It doesn't print their values.
Any thoughts on this?
When I run producer-perf-test, it prints detailed information like:
Number of records read:
Number of records
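For reference, a minimal invocation of the consumer perf tool looks roughly
like the following (broker address, topic name, and message count are
placeholders, not taken from the report above):

  bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 \
      --topic test-topic --messages 100000

If the output file still contains only the header row, it typically means the
consumer fetched no data at all, so confirming that the topic actually holds
messages is a reasonable first check.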
Hi Dean,
Here's my producer config, but it might work better if you post yours for
us to look at. As you can see from the port number, the configured bootstrap
server is Kafka, not ZooKeeper.
bootstrap.servers=kafka.service.consul:9092
batch.size=10
linger.ms=1
compression.type=none
client.id=
Hi everyone,
I was wondering if someone would be able to share an example of the producer
and consumer config files they use when they run MirrorMaker. I've tried
running it but kept getting the error message:
[2020-02-24 02:15:41,812] WARN [Producer clientId=mirror_maker_producer]
Connection to node
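For what it's worth, a minimal sketch of the two config files the classic
MirrorMaker tool expects (host names and group id below are placeholders):

  # consumer.properties, passed via --consumer.config
  bootstrap.servers=source-kafka:9092
  group.id=mirror-maker-group

  # producer.properties, passed via --producer.config
  bootstrap.servers=target-kafka:9092

The truncated WARN above suggests a connection problem, so checking that both
bootstrap.servers values resolve and are reachable from the MirrorMaker host
would be a sensible first step.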
Hi,
I have a topic with the following config.
cleanup.policy = compact,delete
segment.bytes = 52428800 (~52 MB)
min.compaction.lag.ms = 1800000 (30 min)
delete.retention.ms = 86400000 (1 day)
retention.ms = 259200000 (3 days)
Ideally I would want the old records > 3 days to be deleted without prod
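As a side note, these settings can be inspected and adjusted with the stock
tooling; a sketch, with topic name and ZooKeeper address as placeholders:

  bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
      --entity-name my-topic --describe
  bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
      --entity-name my-topic --alter --add-config retention.ms=259200000

With cleanup.policy=compact,delete both compaction and time-based deletion
apply, and deletion only ever removes closed (rolled) segments, so the
segment.bytes value above also influences when records older than
retention.ms actually disappear.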
I tried to create a Kafka Streams application, but there was a bug in my
code. I believe there was a deadlock in my application, and whenever I
tried to run an application instance with the same
StreamsConfig.APPLICATION_ID_CONFIG it could not start. I had to create
another instance with differe
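For context, the id in question is the application.id set when building the
Streams configuration; a minimal sketch (the app id and topic names are
made-up placeholders):

  import java.util.Properties;
  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.StreamsConfig;

  Properties props = new Properties();
  // application.id doubles as the consumer group id and the state directory name
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

  StreamsBuilder builder = new StreamsBuilder();
  builder.stream("input-topic").to("output-topic");

  KafkaStreams streams = new KafkaStreams(builder.build(), props);
  streams.start();

Because the id is shared by the consumer group and the local state directory,
a stuck instance under the same id can keep a fresh one from starting
cleanly, which may be why switching to a different id worked.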
I think a multi-way stream-stream join, beyond the table-table join, would
be good to add. About joining with different keys, we have the foreign-key join
for KTables only at the moment (KIP-213), and maybe we can follow that
direction as well.
Also in your case, if you can manage to transform your stream-
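For reference, the KIP-213 foreign-key join mentioned above looks roughly
like this (the Order/Customer/EnrichedOrder types, topic names, and the
getCustomerId accessor are made-up for illustration; requires Kafka 2.4+):

  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.kstream.KTable;

  StreamsBuilder builder = new StreamsBuilder();
  KTable<String, Order> orders = builder.table("orders");          // keyed by order id
  KTable<String, Customer> customers = builder.table("customers"); // keyed by customer id

  // Each order is joined to its customer via the foreign key extracted from the order value.
  KTable<String, EnrichedOrder> enriched = orders.join(
      customers,
      order -> order.getCustomerId(),                  // foreign-key extractor
      (order, customer) -> new EnrichedOrder(order, customer));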
Hello Team,
I am using kafka_2.11-1.1.0 in my prod environment. I just wanted to ask
whether any performance-related issue or glitch is possible on the leap year
day, Feb 29 2020, and what precautions we could take to avoid such an issue.
Regards,
Iqbal
Hello
I'm Michael
I am writing a thesis evaluating whether Kafka is the right fit for our new
project. I wanted to ask if I can use the Kafka logo in my diploma thesis?
If so, what guidelines do you have for how it should look? I look forward to
your answer. Regards, Michael
There are a lot of resources on the internet suggesting how to do that. I
would advise you to look over some. A simple search should lead you to them.
On Sun, Feb 23, 2020 at 10:14 PM Pradhan V wrote:
> Thank you, Sachin.
>
> Yes, I use Java based Producer/Consumer and came to know that the log
Thank you, Sachin.
Yes, I use Java-based Producers/Consumers and came to know that the log4j
properties file name can be provided via the Java runtime option
-Dlog4j.configuration=log4propertiesFile.
Any pointers on whether or how it can be done programmatically would be
greatly helpful.
Regard
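One way to do it programmatically, assuming log4j 1.x is the SLF4J backend on
your application's classpath, is to adjust the relevant logger before
creating the producer/consumer; a sketch:

  import org.apache.log4j.Level;
  import org.apache.log4j.Logger;

  // Change the log level for all Kafka client classes at runtime.
  Logger.getLogger("org.apache.kafka").setLevel(Level.WARN);
  // Or target a narrower package, e.g. only the producer internals.
  Logger.getLogger("org.apache.kafka.clients.producer").setLevel(Level.DEBUG);

If a different SLF4J backend (e.g. logback) is on the classpath, the same
idea applies but through that backend's own API.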
Hi Liam,
Thanks a lot for the valuable information; this really helps with my
assessment.
Thanks and Regards,
Naveen
On Thu, Feb 20, 2020, 8:38 PM Liam Clarke wrote:
> The specs of your broker machines look fine for your use case. But you'll
> need to run 3
> ZK nodes at least so that ZK can mai
I use log4j properties. I think there must be a way to pass log4j settings
programmatically as well.
If you are using Java-based producers/consumers, then you can set log4j
properties and make them available on your classpath before starting those
applications.
On Sun, Feb 23, 2020 at 9:54 PM Prad
Hi,
KIP-150 is indeed cool, and I suppose it will be released as part of 2.5.
I can see some use cases of the new API where one can avoid multiple
aggregations.
Along the same lines, I believe we could introduce two more APIs:
1. to join two streams having different keys. This would help in trying
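For reference, a rough sketch of the KIP-150 cogroup API that avoids the
multiple-aggregation pattern (topic names and the SessionState aggregate type
are made-up; needs Kafka Streams 2.5+):

  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.kstream.KGroupedStream;
  import org.apache.kafka.streams.kstream.KTable;

  StreamsBuilder builder = new StreamsBuilder();
  KGroupedStream<String, String> clicks = builder.<String, String>stream("clicks").groupByKey();
  KGroupedStream<String, String> views  = builder.<String, String>stream("views").groupByKey();

  // One shared aggregate is updated from both input streams, instead of
  // aggregating each stream separately and joining the results afterwards.
  KTable<String, SessionState> combined = clicks
      .<SessionState>cogroup((key, click, agg) -> agg.addClick(click))
      .cogroup(views, (key, view, agg) -> agg.addView(view))
      .aggregate(SessionState::new);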
Any help regarding this, please?
On Fri 21 Feb, 2020, 5:30 PM Pradhan V, wrote:
> Hi,
>
> How can the log level of the Kafka clients (Producers/Consumers) be set?
>
> Is it by using a log4j.properties file? Or, is there a way it can be
> programmatically set, as well?
>
> What are the possible o
Hi,
All this makes perfect sense now, and I could not be clearer on how
Kafka and Streams handle time.
So if we use event-time semantics (with or without a custom timestamp
extractor), getting out-of-order records is something to expect, and one's
stream topology design should take care of it.
Righ
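As an aside, a minimal sketch of what a custom timestamp extractor looks like
(the MyEvent type and its accessor are made-up; the interface is
org.apache.kafka.streams.processor.TimestampExtractor):

  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.streams.processor.TimestampExtractor;

  public class EventTimeExtractor implements TimestampExtractor {
      @Override
      public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
          // Hypothetical payload type that carries its own event time.
          if (record.value() instanceof MyEvent) {
              return ((MyEvent) record.value()).getEventTimeMs();
          }
          // Fall back to the timestamp stored in the record itself.
          return record.timestamp();
      }
  }

It gets wired in via the default.timestamp.extractor config
(StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG).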
>> This really helped to understand that grace period takes care of
>> out of order records rather than late arriving records.
Well, the grace period defines if (or when) an out-of-order record is
considered late. Of course, per the definition of "late",
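To make that concrete, a hedged sketch of where the grace period is declared
on a windowed aggregation (topic name and durations are illustrative only):

  import java.time.Duration;
  import org.apache.kafka.streams.StreamsBuilder;
  import org.apache.kafka.streams.kstream.TimeWindows;

  StreamsBuilder builder = new StreamsBuilder();
  builder.stream("events")
         .groupByKey()
         // 10-minute windows: out-of-order records arriving up to 2 minutes after the
         // window end still update it; anything after that is late and is dropped.
         .windowedBy(TimeWindows.of(Duration.ofMinutes(10)).grace(Duration.ofMinutes(2)))
         .count();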