These are generated HTML files. As with all documentation, the source of
truth lies in `apache/kafka`. In this case you'd need to look here:
https://github.com/apache/kafka/blob/a39fcac95c82133ac6d9116216ae819d0bf9a6bd/storage/src/main/java/org/apache/kafka/server/log/remote/st
At 13:58, Josep Prat <josep.p...@aiven.io.invalid> wrote:
Hi there,
Documentation is in both repositories (https://github.com/apache/kafka-site
and https://github.com/apache/kafka). To submit a PR, you need to fork the
repo, make the changes and submit the PR. You can start by submitting a PR
changing the necessary files under
https://github.com/apache
tiered storage (only
time I’ve really understood how Kafka segments work and looked closely), Paul
From: Matthias J. Sax
Date: Tuesday, 25 February 2025 at 1:16 pm
To: users@kafka.apache.org
Subject: Re: Documentation and meaning of configuration 'retention.bytes'
technically correct" (i.e.,
engineering / nerd language) and "regular English", i.e., how normal
people speak.
In regular English one would say, "I limit the size to 1GB", even if 1GB
is not a strict limit (never larger than 1GB), but is technically a lower
bound.
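A back-of-envelope sketch of why retention.bytes behaves as a lower bound: deletion happens at whole-segment granularity, and the active segment is never eligible, so a partition can exceed the configured value by up to roughly one segment. This is a hypothetical model in plain Java, not Kafka code; the method name is made up for illustration.

```java
// Hypothetical model (not Kafka code): retention.bytes is enforced at
// segment granularity, so the partition can exceed it by up to roughly
// one full segment before the log cleaner deletes anything.
public class RetentionSketch {
    // Worst-case partition size before any segment becomes eligible for deletion.
    static long worstCaseSize(long retentionBytes, long segmentBytes) {
        // Closed segments totalling just under retentionBytes are all retained,
        // and the active segment (never deleted) can grow to segmentBytes.
        return retentionBytes + segmentBytes;
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024L * 1024L;
        // retention.bytes = 1 GB with the default 1 GB segment.bytes:
        System.out.println(worstCaseSize(gb, gb)); // roughly 2 GB, not a hard 1 GB cap
    }
}
```

So with the default 1 GB segment size, "limit the size to 1GB" really means "start deleting once 1GB of closed segments exists".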
I would appr
Hi,
I encountered a misunderstanding and I would like you to explain it to me
or if possible change the documentation.
The Kafka docs describes 'retention.bytes' configuration as:
This configuration controls the maximum size a partition (which consists of
log segments) can grow to befo
Hi Edgar,
> Is this the correct documentation on how to contribute code changes?
>
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest
Yes, it is.
For the KAFKA-15513 <https://issues.apache.org/jira/browse/KAFKA-15513>,
so
Thank you for quick response!
Is this the correct documentation on how to contribute code changes?
https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest
Also I would like to ask you about another issue that I'm interested in -
mention it.
I've created KAFKA-16848 <https://issues.apache.org/jira/browse/KAFKA-16848>.
Welcome to open a PR to fix it. :)
Thanks.
Luke
On Tue, May 28, 2024 at 2:13 PM Zubel, Edgar
wrote:
Hello,
I would like to report a mistake in the Kafka 3.7 Documentation -> 6.10 KRaft
-> ZooKeeper to KRaft Migration -> Reverting to ZooKeeper mode During the
Migration.
While migrating my Kafka + ZooKeeper cluster to KRaft and testing rollbacks at
different migration stages, I hav
Hello Team,
From
https://cwiki.apache.org/confluence/display/KAFKA/KIP-900%3A+KRaft+kafka-storage.sh+API+additions+to+support+SCRAM+for+Kafka+Brokers
I understand that SCRAM authentication is available for KRaft Kafka clusters.
However, the official documentation only refers to ZooKeeper
Hi Luke
Sure, I will create a ticket after creating a JIRA account.
Cheers.
From: Luke Chen
Date: Wednesday, 21 February 2024 at 8:59 pm
To: users@kafka.apache.org
Subject: Re: Possible bug on Kafka documentation
Thanks.
Luke
On Wed, Feb 21, 2024 at 4:23 PM Federico Weisse
wrote:
In documentation from version 3.1 to version 3.4, it looks like the retries
explanation has a bug related to max.in.flight.request.per.connection related
parameter and possible message reordering.
https://kafka.apache.org/31/documentation.html#producerconfigs_retries
https://kafka.apache.org/32
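The reordering caveat behind that docs bug can be sketched with the producer settings involved: with retries enabled and more than one in-flight request per connection, a failed batch can be retried after a later batch has already succeeded. The config keys below are the real producer config names; the values are only illustrative, and the class is a sketch, not a recommended template.

```java
import java.util.Properties;

// Sketch of the settings behind the retries / reordering caveat: with
// retries > 0 and more than one in-flight request per connection, a failed
// batch may be retried after a later batch succeeds, reordering messages.
public class ReorderingConfig {
    static Properties safeOrderingProps() {
        Properties props = new Properties();
        props.put("retries", "2147483647");
        // Either cap in-flight requests at 1 to preserve ordering...
        props.put("max.in.flight.requests.per.connection", "1");
        // ...or, on newer clients, enable idempotence instead, which preserves
        // ordering with up to 5 in-flight requests:
        // props.put("enable.idempotence", "true");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(safeOrderingProps().getProperty("max.in.flight.requests.per.connection"));
    }
}
```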
Hi Team,
Any help on this query ?
From: Kaushik Srinivas (Nokia)
Sent: Tuesday, August 22, 2023 10:26 AM
To: users@kafka.apache.org
Subject: Need more clarity in documentation for upgrade/downgrade procedures
and limitations across releases
Hi Team,
Referring to the upgrade documentation for apache kafka.
https://kafka.apache.org/34/documentation.html#upgrade_3_4_0
There is confusion with respect to the statements below from the above-linked
section of the Apache docs.
"If you are upgrading from a version prior to 2.1.x, please se
I am interested in contributing documentation to a project but is this a
good project for machine learning applications?
I see what you mean, that's pretty ugly. Can you file a bug report at
https://issues.apache.org/jira/issues/ so we can
track and follow up on this?
Thanks!
On Mon, Jul 12, 2021 at 7:35 AM Rasto Janotka
wrote:
Hi,
I am using your documentation; it is written very nicely and cleanly, but there
is a small "bug in the format" in the CSS/HTML, and some sections
(table/code) are quite hard to read.
see:
https://kafka.apache.org/10/documentation/streams/developer-guide/config-streams
Hello Team,
I would like to upgrade my Kafka environment from 2.0.0 to 2.8.0. While going
through the documentation at
https://kafka.apache.org/28/documentation.html#upgrade I found that no
instructions have been shared for upgrading to 2.8.0; only the changes are
present. I can see instructions till
ign or not).
Kind regards,
Tom
On Wed, Aug 12, 2020 at 5:59 PM John Roesler wrote:
Hello Ahmed,
Thanks for this feedback. I can see what you mean.
I know that there is a redesign currently in progress for
the site, but I'm not sure if the API/Config documentation
is planned as part of that effort. Here's the PR to re-
design the home page:
https://github.com/apache/
Dear Kafka team,
Kindly note that Kafka documentation navigation is pretty hard
on the eyes and exhausting. Once I'm in a section or reading configuration, I
can't tell what section I'm currently looking at or under what category. This
is very annoying and very ha
Hello,
The latest docs (https://kafka.apache.org/documentation/#security_overview)
give the following command in section 7.2.1 "[t]o generate certificate
signing requests":
keytool -keystore server.keystore.jks -alias localhost -validity {validity}
-genkey -keyalg RSA -destkeystoret
Hi Jacob,
The Kafka code base is huge and the documentation is also very broad, so it is
always likely that you will notice discrepancies between the current
implementation of a specific version of Kafka or its ecosystem components
and the reference documentation.
If you notice such
Hello Apache Kafka team,
comparing the 2.4.1 code state of KafkaProducer with the documentation, I
noticed the following difference:
the "send(record, callback)" method internally catches ApiExceptions and
sets them into the Future object.
The callback object then handles these exceptions.
But
ow it to be attached.
> It could very well be a mistake. Highlight it and send it to
> d...@kafka.apache.org and if it is, we'll make a ticket and address it.
>
> On Thu, Feb 6, 2020 at 10:53 AM Fares Oueslati wrote:
Hello,
While going through the official docs
https://kafka.apache.org/documentation/#messageformat
If I'm not wrong, I believe there is a mismatch between description of a
segment and the diagram illustrating the concept.
I pointed out the issue in the attached screenshot.
Didn't r
If it's about generics, see
https://stackoverflow.com/questions/5297978/calling-static-generic-methods
On 12/28/19 8:50 AM, Guozhang Wang wrote:
Hello Aurel,
Maybe this helps:
https://kafka.apache.org/24/documentation/streams/developer-guide/dsl-api.html
Guozhang
On Fri, Dec 27, 2019 at 8:50 AM Aurel Sandu wrote:
Hi all of you,
I am reading the following code :
..
KTable<String, Long> wordCounts = textLines
.flatMapValues(textLine ->
Arrays.asList(textLine.toLowerCase().split("\\W+")))
.groupBy((key, word) -> word)
.count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store"));
..
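For intuition, the same flatMap -> groupBy(word) -> count pipeline can be expressed over an in-memory list with plain java.util.stream. This is only a model of the transformation, not Kafka Streams code (the real topology runs continuously over a topic and materializes a state store):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Plain-Java model of the Streams word-count pipeline above: split each line
// into lowercase words, group by the word itself, and count occurrences.
public class WordCountSketch {
    static Map<String, Long> wordCounts(List<String> textLines) {
        return textLines.stream()
                .flatMap(line -> Arrays.stream(line.toLowerCase().split("\\W+")))
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(wordCounts(List.of("Hello Kafka", "hello streams")));
    }
}
```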
There are some old docs translating kafka.apache.org to Chinese, but they are on
old versions and out of date, e.g.:
http://cwiki.apachecn.org/pages/viewpage.action?pageId=2885670
https://www.bookstack.cn/books/apache-kafka-documentation-cn
Strictly speaking no one can prevent a
Hello:
I am a developer from China. I have recently read the design of Kafka at
http://kafka.apache.org/documentation/.
I would like to translate part of it. Could you tell me whether it
is allowed to translate the content of your official website
into Chinese, please?
Tom,
I think there is a documentation error on the `findSessions` function,
where the last parameter should really be "the start timestamp of the
latest session to search for".
So back to your example, if you want to find "any sessions that has an
overlap of [T2, T4)" y
Hi,
I found a mismatch between the documentation in
the org.apache.kafka.common.serialization.Deserializer and the
implementation in KafkaConsumer.
Deserializer documentation says: "serialized bytes; may be null;
implementations are recommended to handle null by returning a value or null
r
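The contract quoted above can be illustrated without any Kafka dependency: a deserializer should tolerate a null byte array (e.g. a tombstone record) rather than throw. The class below is a hypothetical sketch of that contract, not Kafka's own StringDeserializer.

```java
import java.nio.charset.StandardCharsets;

// Minimal illustration (no Kafka dependency) of the documented contract:
// a deserializer should handle a null byte array (e.g. a tombstone record)
// by returning null or a sentinel value, rather than throwing.
public class NullSafeStringDeserializer {
    static String deserialize(byte[] data) {
        if (data == null) {
            return null; // tombstone / absent value: pass null through
        }
        return new String(data, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(deserialize(null));
        System.out.println(deserialize("hi".getBytes(StandardCharsets.UTF_8)));
    }
}
```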
ect the session to be returned
since, according to the documentation that call fetches "any sessions with the
matching key and the sessions end is >= earliestSessionEndTime and the sessions
start is <= latestSessionStartTime" and obviously T5 >= T2 and T0 <= T4. From
the b
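The fetch condition quoted above boils down to an interval-overlap predicate, which can be written out directly. This is a sketch of the documented condition only, with made-up parameter names mirroring the docs; it is not the actual store implementation.

```java
// Sketch of the documented fetch condition for findSessions(key,
// earliestSessionEndTime, latestSessionStartTime): a session [start, end]
// matches when end >= earliestSessionEndTime and start <= latestSessionStartTime,
// i.e. the session overlaps the searched interval.
public class SessionOverlap {
    static boolean matches(long sessionStart, long sessionEnd,
                           long earliestSessionEndTime, long latestSessionStartTime) {
        return sessionEnd >= earliestSessionEndTime
                && sessionStart <= latestSessionStartTime;
    }

    public static void main(String[] args) {
        // A session [0, 5] searched with earliestEnd = 2, latestStart = 4:
        System.out.println(matches(0, 5, 2, 4)); // true: 5 >= 2 and 0 <= 4
    }
}
```

This makes the confusion in the thread concrete: if the last parameter is really "the start timestamp of the latest session", a session like [T0, T5] does satisfy T5 >= T2 and T0 <= T4 and should be returned.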
4. I meant to say pattern matching. You can catch a match on the structure
(int, string, string) without explicitly setting the case class in a case
switch statement.
https://alvinalexander.com/scala/how-to-use-pattern-matching-scala-match-case-expressions
6. I’ve been using partial classes in
I'm not a scala expert and haven't touched it for 18 months, but with
respect to Mr. Singh, I'd like to clarify or question a few of his
points.
1. Statelessness is a tool; not an end in itself but a means to an
end. As someone on HackerNews says, "control your state space or die",
but the same gu
Not necessarily for Kafka, but you can definitely google "Java vs. Scala" and
find a variety of reasons. I did a study for a client, and ultimately here are
the major reasons I found:
1. Functional programming language, which lends itself to stateless systems
2. Better / easier to use stream pro
Hello,
Is anyone aware of any links or websites where I can find information, case
studies, etc. as to why Scala was the best choice for Kafka's design? I hope this
is not too much of a "naive" question, since I have had a very humble
introduction to Scala.
I understand that Scala is considered where distri
Thanks a lot!
On 12/18/17 12:46 PM, Dmitry Minkovsky wrote:
You're welcome. Another one I found today
https://docs.confluent.io/current/streams/developer-guide/dsl-api.html
> groupedStream.windowedBy(TimeUnit.MINUTES.toMillis(5))
should be
> groupedStream.windowedBy(TimeWindows.of(TimeUnit.MINUTES.toMillis(5)))
in two spots.
On Mon, Dec 18, 2017 at 2:2
Thanks for reporting this!
We will fix it.
-Matthias
On 12/17/17 7:05 PM, Philippe Derome wrote:
I agree with Dmitry's first comment, it really looks like the paragraph he
points to under "Table" was pasted without edit from the one previously
that pertained to "KStream".
On Sun, Dec 17, 2017 at 5:31 PM, Dmitry Minkovsky
wrote:
> On https://docs.confluent.io/current/streams/developer-guide/
Also the javadoc here:
https://github.com/apache/kafka/blob/e5daa40e316261e8e6cb8866ad9a4eedcf17f919/streams/src/main/java/org/apache/kafka/streams/StreamsBuilder.java#L184-L185
Shouldn't it refer to the `Consumed`, given that it is provided in this
overload?
Sorry, I would post this to JIRA, but
On https://docs.confluent.io/current/streams/developer-guide/dsl-api.html
for version 4.0.0:
Under "Table", currently:
> In the case of a KStream, the local KStream instance of every application
instance will be populated with data from only a subset of the partitions
of the input topic. Collecti
I have also added you to contributor list so you can assign to yourself now.
Guozhang
On Sun, Dec 3, 2017 at 8:38 AM, Waleed Fateem
wrote:
Hello,
I created a JIRA (KAFKA-6301) for a minor change to the documentation but
it doesn't seem like I can assign the ticket to myself. Can someone help me
out?
I'm also trying to commit and push the change to the Kafka repository but
I'm getting the following error:
remote
I've been working with Kafka broker listeners, and I'm curious: is there
any documentation that explains what each of them applies to? Such as
CLIENT, PLAINTEXT, SASL/SSL, etc. I see the encryption part of the
documentation, but is it just inferred what these listeners apply to?
Th
See:
https://github.com/apache/kafka/blob/1.0/clients/src/main/java/org/apache/kafka/clients/admin/AdminClient.java
On Thu, Nov 2, 2017 at 5:51 AM, diane wrote:
Hi
I was trying to look at the documentation for the AdminClient API, but
the link from the page
https://kafka.apache.org/documentation/#adminapi
in the sentence "For more information about the AdminClient APIs, see
the javadoc."
points to the URL:
https://kafka.apache.org/100/javadoc/inde
it’s owned by the first one. Per my understanding, ZooKeeper node
ownership is different from ACLs and from how Kafka brokers authorize each
other’s replication operations (by creating ACL nodes). And that’s why I
understand the documentation recommends having the same SPN across all brokers
to connect to
MG>confusion between JAAS-security terminology and Kafka-SASL terminology?
From: Stephane Maarek
Sent: Sunday, February 19, 2017 7:28 PM
To: users@kafka.apache.org
Subject: Security Documentation contradiction / misleading ?
Hi,
I’m wondering if the offic
Hi,
I’m wondering if the official Kafka documentation is misleading. Here (
https://kafka.apache.org/documentation/#security_sasl_brokernotes) you can
read:
1. Client section is used to authenticate a SASL connection with
zookeeper. It also allows the brokers to set SASL ACL on zookeeper
Should the number of app instances and ZooKeeper servers be the same?
I understand the requirement of 2F+1 to tolerate F failures, but this is to
tolerate failures of the ZooKeeper instances themselves. What about the number
of app instances? For example, say I have 3 ZooKeeper servers and I have 2
0.10.1.0 to 0.10.1.1 in
https://kafka.apache.org/documentation/#upgrade
Guozhang
On Tue, Jan 10, 2017 at 8:49 AM, Jeff Klukas wrote:
I'm starting to look at upgrading to 0.10.1.1, but looks like the docs have
not been updated since 0.10.1.0.
Are there any plans to update the docs to explicitly discuss how to upgrade
from 0.10.1.0 -> 0.10.1.1, and 0.10.0.X -> 0.10.1.1?
e new
consumer (and those docs are autogenerated), I'm pretty sure it's already
correct. If you search for fetch.wait.max.ms under the
https://kafka.apache.org/documentation#oldconsumerconfigs section you
should find it there.
-Ewen
On Mon, Nov 21, 2016 at 5:26 AM, Vincent Dautremont &l
Hi,
I just want to raise a flag concerning an error in the documentation.
It says:
> *fetch.max.wait.ms*
> The maximum amount of time the server will block before answering the
> fetch request if there isn't sufficient data to immediately satisfy the
> requirement given b
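The setting under discussion interacts with fetch.min.bytes: the broker holds a fetch request until either enough data is available or the wait time elapses. The config keys below are the real consumer config names; the values are illustrative only, and the class is a sketch rather than a recommended configuration.

```java
import java.util.Properties;

// Sketch of the two interacting consumer settings: the broker answers a fetch
// once fetch.min.bytes of data is available, or once fetch.max.wait.ms has
// elapsed, whichever comes first.
public class FetchWaitConfig {
    static Properties fetchProps() {
        Properties props = new Properties();
        props.put("fetch.min.bytes", "1024");   // wait for at least 1 KB of data...
        props.put("fetch.max.wait.ms", "500");  // ...but never block longer than 500 ms
        return props;
    }

    public static void main(String[] args) {
        System.out.println(fetchProps().getProperty("fetch.max.wait.ms"));
    }
}
```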
[mailto:dana.pow...@gmail.com]
Sent: Tuesday, March 29, 2016 9:06 AM
To: users@kafka.apache.org
Subject: Re: Documentation
I also found the documentation difficult to parse when it came time to
implement group APIs. I ended up just reading the client source code and trying
api calls until it made sense
Thanks for the details Dana!
I think this sort of thing could be worked into the new "Protocol Guide"
documentation: http://kafka.apache.org/protocol.html
On Tue, Mar 29, 2016 at 11:25 AM, Gwen Shapira wrote:
> Awesome summary, Dana. I'd like to fit this into our docs, but I
error code, at which point the assignment dance begins from
scratch: consumers needs to send the sync request, leaders need to create
an assignment, etc.
Gwen
On Tue, Mar 29, 2016 at 9:05 AM, Dana Powers wrote:
I also found the documentation difficult to parse when it came time to
implement group APIs. I ended up just reading the client source code and
trying api calls until it made sense.
My general description from off the top of my head:
(1) have all consumers submit a shared protocol_name string
Does anyone have better documentation around the group membership APIs?
The information about the APIs is great at the beginning but gets progressively
sparse towards the end.
I am not finding enough information about the values of the request fields to
join / sync the group.
Can anyone
Hi,
I noticed that in Section 1.3 of the documentation, in "Step 3: Create a topic",
the example still uses bin/kafka-topics.sh even though the deprecation notes of
0.9.0.0 state the following:
"Altering topic configuration from the kafka-topics.sh script
(kafka.admin.TopicCo
Hi there,
I want to implement the Offset Commit/Fetch API functionality in our in-house
.NET client.
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommit/FetchAPI
It seems the documentation is incomplete and clearly not of the
You can find protocol documentation here (including a list of api key #s):
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
-Dana
On Sun, Jan 31, 2016 at 5:46 PM, Heath Ivie wrote:
> To piggy back , where can I find the api key values?
>
> Sent fro
Hi Folks,
I am working through the protocols to build a C# REST API.
I am seeing inconsistencies in the way the document says it works with how it
actually works, specifically the fetch messages.
Could someone point me to the current documentation?
Thanks Heath
Hmm. Then the documentation needs to be fixed.
At this site (Apache Kafka) it is asking consumers to use the kafka-clients jar.
There is no mention of the old vs. new consumer API.
On Thursday, October 8, 2015 1:27 PM, Ewen Cheslack-Postava
wrote:
ConsumerConnector is part of the old
hi!
where can I find a quickstart doc for kafka-client java api version 0.8.2 ?
The documentation at http://kafka.apache.org/documentation.html does not seem
to sync with the 0.8.2 API in the kafka-clients artifact. Specifically, I
cannot find the class ConsumerConnector that is referenced here
That's exactly right. We've been talking about this internally at LinkedIn, and
how to solve it. I think the best option would be to have the broker throw an
error on offset commits until there are enough brokers to fulfill the
configured RF.
We've seen this several times now when bootstrapping
Hi,
My kafka cluster has a __consumer_offsets topic with 50 partitions (the default
for offsets.topic.num.partitions) but with a replication factor of just 1 (the
default for offsets.topic.replication.factor should be 3).
From the docs http://kafka.apache.org/documentation.html:
offsets.topic.
The Kafka documentation here (
http://kafka.apache.org/081/documentation.html#topic-config) mentions the
following as an example:
> bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic
> --partitions 1 --replication-factor 1 --config max.message.bytes=64000
> --con
n for every topic.
Is there some documentation on what has to be done in order to ensure
topic existence? Right now I'm using the command line tool shipped with the
kafka binary, but I would prefer to be able to do this without jvm
requirement.
Hi,
I have been going through http://kafka.apache.org/documentation.html and
read the following about providing a custom partitioner:
- provides software load balancing through an optionally user-specified
Partitioner -
The routing decision is influenced by the kafka.producer.Partitioner.
inte
You can use the DumpLogSegments tool to see if a log segment is indeed
corrupted.
Thanks,
Jun
On Mon, Jan 12, 2015 at 2:04 PM, Bhavesh Mistry
wrote:
Hi,
I think you could just email user@?
There was no attached image.
I think Jun committed something about this:
https://issues.apache.org/jira/browse/KAFKA-1481?focusedCommentId=14272057&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14272057
Otis
Hi Kafka Team,
I am trying to find out about Kafka internals and how a message can be corrupted
or lost on the broker side.
I have referred to the following documentation for monitoring:
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Internals
http://kafka.apache.org/documentation.html#monitoring
I am
app
I set this down to 1 minute and haven’t seen any ill effects but it is good to
note that the shorter you get *could* cause some issues and extra overhead. I
agree this could probably be a little more clear in the documentation.
-
Andrew Jorgensen
@ajorgensen
On December 5, 2014 at 1:34:00
On 5 December 2014 at 18:32, Yury Ruchin wrote:
Hello,
I've come across a (seemingly) strange situation when my Kafka producer
gave so uneven distribution across partitions. I found that I used null key
to produce messages, guided by the following clause in the documentation:
"If the key is null, then a random broker partition
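The behaviour described in this thread (the old producer sticking to one random partition for null keys until the next metadata refresh, which makes short-window distributions look uneven) can be modelled in plain Java. This is a hypothetical model for intuition only, not Kafka's partitioner code; the class and method names are made up, and the ten-minute default refresh interval is the one discussed in the thread.

```java
import java.util.Random;

// Model (not Kafka code) of the old producer's null-key behaviour: it picks a
// random partition and sticks to it until the next topic metadata refresh
// (topic.metadata.refresh.interval.ms, 10 minutes by default), which is why
// the distribution looks uneven over short time windows.
public class StickyNullKeyPartitioner {
    private final Random random;
    private final long refreshIntervalMs;
    private boolean initialized = false;
    private long lastRefreshMs;
    private int currentPartition;

    StickyNullKeyPartitioner(long seed, long refreshIntervalMs) {
        this.random = new Random(seed);
        this.refreshIntervalMs = refreshIntervalMs;
    }

    int partitionForNullKey(int numPartitions, long nowMs) {
        if (!initialized || nowMs - lastRefreshMs >= refreshIntervalMs) {
            currentPartition = random.nextInt(numPartitions); // re-pick on refresh
            lastRefreshMs = nowMs;
            initialized = true;
        }
        return currentPartition; // same partition for every null-key send in between
    }

    public static void main(String[] args) {
        StickyNullKeyPartitioner p = new StickyNullKeyPartitioner(42L, 600_000L);
        int first = p.partitionForNullKey(8, 0L);
        // Every send within the refresh interval lands on the same partition:
        System.out.println(first == p.partitionForNullKey(8, 10_000L)); // true
    }
}
```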
Hi
I was reading the protocol documentation on the wiki page:
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol
But it seems that the description is not complete. For example, there is no
information on the types of the different fields (ApiKey, ApiVersion, ClientId
Thanks Daniel for the findings, please feel free to update the wiki.
Guozhang
On Tue, Jun 17, 2014 at 9:56 PM, Daniel Compton
wrote:
Hi
I was following the instructions for Kafka mirroring and had two
suggestions for improving the documentation at
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330:
1. Move "Note that the --zkconnect argument should point to the source
cluster's ZooKeeper...”