Hi,
We are implementing exactly-once in our application with the help of
KafkaConsumer's offsetsForTimes API.
Our Kafka producer is using transactional semantics, as in the pseudo steps below.
It is enabled using: producerProps.put("transactional.id", "h3");
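A minimal sketch of those pseudo steps with the kafka-clients API (the bootstrap address, topic name, and serializers are assumptions, not from the original post):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalSend {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Setting transactional.id enables idempotence and transactions; "h3" is from the original post.
        producerProps.put("transactional.id", "h3");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.initTransactions();      // register the transactional.id with the coordinator
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("my-topic", "key", "value")); // topic name is an assumption
                producer.commitTransaction(); // atomically commit every send in this transaction
            } catch (Exception e) {
                producer.abortTransaction();  // roll back all sends on failure
                throw e;
            }
        }
    }
}
```

Consumers then read only committed records when configured with isolation.level=read_committed.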
Hi,
I have some questions about Kafka concepts;
thanks in advance for your answers:
1. When Kafka deletes records from a partition (because they reached the
retention time or the max size), I want it to send me a notification saying
"I deleted offset 22 of topic a, partition 2".
Can I do this in Kafka?
The Apache Kafka community is pleased to announce the release for Apache
Kafka 2.6.0
* TLSv1.3 has been enabled by default for Java 11 or newer.
* Significant performance improvements, especially when the broker has
large numbers of partitions
* Smooth scaling out of Kafka Streams applications
* K
Hello,
I have a Kafka cluster with 3 brokers (v2.3.0) and each broker has 2 disks
attached. I added a new topic (heavyweight) and was surprised that even if
the topic has 15 partitions, those weren't distributed evenly on the disks.
Thus I got one disk that's almost empty and the other almost full.
Thanks for driving the release, Randall. Congratulations to all the
contributors! :)
Ismael
On Thu, Aug 6, 2020, 7:21 AM Randall Hauch wrote:
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 2.6.0
>
> * TLSv1.3 has been enabled by default for Java 11 or newer.
>
Thanks for driving the release, Randall!
Congratulations to everybody involved - awesome work!
On Thu, Aug 6, 2020 at 5:21 PM Randall Hauch wrote:
> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 2.6.0
>
> * TLSv1.3 has been enabled by default for Java 11 or newer.
Hi Ben,
The documentation for the configs (broker, producer etc) used to function
as links as well as anchors, which made the url fragments more
discoverable, because you could click on the link and then copy+paste the
browser URL:
batch.size
What seems to have happened with the new layout
Plus one to Tom's request - the ability to easily generate links to
specific config options is extremely valuable.
On Thu, Aug 6, 2020 at 10:09 AM Tom Bentley wrote:
> Hi Ben,
>
> The documentation for the configs (broker, producer etc) used to function
> as links as well as anchors, which made
Kafka distributes a topic's partitions evenly across disks, so in your case
every disk should hold roughly the same number of the topic's partitions.
It is the producer's job to spread data evenly across the topic's partitions
via the partition key.
How is your partition key produced: is it auto-generated, or does the
producer send a key along with each message?
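The key-to-partition mapping mentioned above can be illustrated with a simplified stand-in for Kafka's default partitioner (the real one applies murmur2 to the serialized key bytes; the hashCode-and-modulo below is an assumption for illustration only):

```java
public class KeyPartitioner {
    // Simplified stand-in for the default partitioner: records with the same
    // non-null key always land in the same partition.
    static int partitionFor(String key, int numPartitions) {
        // Masking the sign bit keeps the result non-negative before the modulo.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int numPartitions = 15; // the topic in this thread has 15 partitions
        // Same key always maps to the same partition, so a skewed key
        // distribution produces skewed partitions:
        System.out.println(partitionFor("device-42", numPartitions));
        System.out.println(partitionFor("device-42", numPartitions));
    }
}
```

Note that key skew explains uneven *partition sizes*; uneven *partition counts per disk* (as in this thread) is instead decided by the broker when it assigns new partitions to log.dirs.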
On
Yeah, but it doesn't do that. My "older" disks have ~70 partitions, the
newer ones ~5 partitions. That's why I'm asking what went wrong.
On Thu, Aug 6, 2020 at 8:35 PM wrote:
> Kafka distributes a topic's partitions evenly across disks, so in your case
> every disk should hold roughly the same number of the topic's partitions.
What do you mean by "older" disk?
On 8/6/20, 12:05 PM, "Péter Nagykátai" wrote:
Yeah, but it doesn't do that. My "older" disks have ~70 partitions, the
newer ones ~5 partitions. That's why I'm asking what went wrong.
On Thu, Aug 6, 2020 at 8:35 PM wrote:
> Kafka
I initially started with one data disk (mounted solely to hold Kafka data)
and recently added a new one.
On Thu, Aug 6, 2020 at 10:13 PM wrote:
> What do you mean older disk ?
>
> On 8/6/20, 12:05 PM, "Péter Nagykátai" wrote:
>
>
>
> Yeah, but it doesn't do that. My "older
Hi Peter,
AFAIK, everything depends on:
1) how you have configured your topic:
a) the number of partitions (here I understand you have 15 partitions)
b) the partition replication configuration (each partition necessarily has a
leader, which is primarily responsible for holding the data and for serving
reads and writes)
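Both of those settings are fixed when the topic is created; a sketch with the kafka-clients Admin API (the broker address and the replication factor of 2 are assumptions):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        try (Admin admin = Admin.create(props)) {
            // 15 partitions as in this thread; replication factor 2 is an assumption.
            NewTopic topic = new NewTopic("heavyweight", 15, (short) 2);
            // Block until the brokers have created the topic.
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
```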
Hello,
I upgraded Kafka from 0.10 to 2.5.0, and I also upgraded Logstash from 2.4
to 7.5.
When I had Kafka 0.10 and Logstash 2.4, the messages used to forward
without any problems. But after the upgrade I'm getting errors in both the
Logstash and Kafka logs, so I would like to know what is the compatible
Are you getting errors at the Kafka broker, or while producing/consuming
messages? Can you please provide more detail on how you upgraded and what
errors you are getting? It all depends on how you upgraded.
On 8/6/20, 4:13 PM, "Satish Kumar" wrote:
Hello,
I upgraded kafka f