Hi Brilly,
Do I understand correctly that you want your consumers to consume only from a
given partition or partitions, based on a preconfigured value?
You need to copy the mechanism that the default partitioner (if this is used
on the producer side) uses to determine the partition.
(
https://github.com/apache/kafk
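A minimal sketch of that mechanism (assuming string keys; for non-null keys
the default partitioner hashes the serialized key with murmur2 and takes it
modulo the partition count):

    import java.nio.charset.StandardCharsets;
    import org.apache.kafka.common.utils.Utils;

    public class PartitionForKey {
        // Mirrors the default partitioner's choice for non-null keys:
        // murmur2 of the serialized key, modulo the partition count.
        static int partitionFor(String key, int numPartitions) {
            byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
            return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        }

        public static void main(String[] args) {
            // "customer-42" and 12 partitions are hypothetical values.
            System.out.println(partitionFor("customer-42", 12));
        }
    }

A consumer can then read just the computed partition with assign() instead of
subscribe(), e.g. consumer.assign(Collections.singletonList(new
TopicPartition("orders", partition))), which bypasses group rebalancing.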
Hi Per,
Unfortunately you cannot extract this information from the client. Even if
you implement your own PartitionAssignor to supply this information to all
the consumers, KafkaConsumer has its ConsumerCoordinator implementation
hard-wired, so you cannot extract that info. Here there is room for
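What the client does expose is its own share of the assignment, via a
ConsumerRebalanceListener (or consumer.assignment() after a poll). A minimal
sketch, assuming a newer client with poll(Duration) and a hypothetical
'orders' topic:

    import java.time.Duration;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class OwnAssignment {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical
            props.put("group.id", "demo-group");              // hypothetical
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("orders"),
                    new ConsumerRebalanceListener() {
                        public void onPartitionsRevoked(Collection<TopicPartition> tps) {
                            System.out.println("revoked: " + tps);
                        }
                        public void onPartitionsAssigned(Collection<TopicPartition> tps) {
                            // Only this consumer's share, not the whole group's.
                            System.out.println("assigned: " + tps);
                        }
                    });
                consumer.poll(Duration.ofSeconds(5)); // joins the group, fires the callback
            }
        }
    }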
Hi Adrien,
Every log.flush.offset.checkpoint.interval.ms we write out the current
recovery point for all logs to a text file in the log directory, to avoid
recovering the whole log on startup.
And every log.flush.start.offset.checkpoint.interval.ms we write out the
current log start offset for all
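For illustration, the recovery-point-offset-checkpoint file in a log
directory is a small text file (a sketch; the exact layout is an internal
detail, and the topic name here is hypothetical): a version number, an entry
count, then one 'topic partition offset' line per log:

    0
    2
    my-topic 0 42
    my-topic 1 0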
Hi Waleed,
Generally, extra work is necessary only when the client uses a different
message format version than the one used in the broker's log. In that case
the broker has to convert between those formats.
In case of 0.8 and 0.9 there is no difference in the message format: both
use version 0.
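For reference, the broker setting behind this is log.message.format.version.
A hedged example (hypothetical values): if most consumers still run an old
client, pinning the on-disk format to the old version avoids the broker
down-converting on every fetch, at the cost of newer message features:

    # server.properties (hypothetical values)
    inter.broker.protocol.version=0.10.2
    log.message.format.version=0.9.0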
Best regards,
t means the old consumed data? For example, I have offset 34700; is it to
> avoid re-exposing
>
> records 34000~34699 to the consumer after a crash?
>
>
> From: Andras Beni
> Sent: Tuesday, 27 February 2018 06:16:41
You might want to send this to users-unsubscr...@kafka.apache.org .
On Tue, Feb 27, 2018 at 7:49 PM, Yuejie Chen wrote:
>
>
ncept 😊
>
>
> Best regards,
>
>
> Adrien
>
>
> From: Andras Beni
> Sent: Tuesday, 27 February 2018 15:41:04
> To: users@kafka.apache.org
> Subject: Re: difference between 2 options
>
> 1) We write out one recovery point per log
Hi Andrew,
It seems the throughput of the new cluster is lower than that of the old
cluster, and for this reason MirrorMaker cannot send messages fast enough
(i.e., they expire). I recommend comparing the two configurations.
For the hanging MirrorMaker instances, I think looking at stack dumps would
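A quick way to capture those, assuming a JDK on the host (the PID is
hypothetical):

    # Dump all thread stacks of the MirrorMaker JVM to a file.
    jstack 12345 > mirror-maker-stacks.txt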
Hi Johnny,
As you already mentioned, it depends on the group.id which broker will be
the group coordinator.
You can change the group.id to modify which __consumer_offsets partition the
group will belong to, and thus change which broker will manage the group. You
can check which partition a group.id is assigned
a hot spot on one or the other broker...
>
> Thanks,
>
> Johnny Luo
>
> On 20/3/18, 10:03 pm, "Andras Beni" wrote:
>
> Hi Johnny,
>
> As you already mentioned, it depends on the group.id which broker
> will be
> the group coordinator.
> You can
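A sketch of how that check works (assuming the default
offsets.topic.num.partitions of 50; verify the broker's actual value):

    public class CoordinatorPartition {
        public static void main(String[] args) {
            String groupId = "my-consumer-group"; // hypothetical
            int offsetsTopicPartitions = 50;      // broker default
            // The & 0x7fffffff mask mirrors Kafka's Utils.abs.
            int partition = (groupId.hashCode() & 0x7fffffff) % offsetsTopicPartitions;
            System.out.println(groupId + " -> __consumer_offsets-" + partition);
        }
    }

The leader of that __consumer_offsets partition is the broker that
coordinates the group.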
Hi Emmett,
The ListOffsets API tells you about the log segments belonging to (the given
partitions of) a topic.
I think it is easiest to explain how it behaves with an example.
I have a topic called 'test2' with three partitions (0..2). I produced 2665
messages to its partition 0. I set up the topic so that it rol
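From the client side, the consumer's beginningOffsets and endOffsets calls
are backed by the same ListOffsets API. A minimal sketch (hypothetical
bootstrap address; topic name from the example above):

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;

    public class PartitionOffsets {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // hypothetical
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
                TopicPartition tp = new TopicPartition("test2", 0);
                Map<TopicPartition, Long> first =
                    consumer.beginningOffsets(Collections.singletonList(tp));
                Map<TopicPartition, Long> last =
                    consumer.endOffsets(Collections.singletonList(tp));
                System.out.println("earliest=" + first + " latest=" + last);
            }
        }
    }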
Hi Karthick,
You probably want to add this line to your log4j.properties:
log4j.logger.org.apache.kafka=INFO
This will remove all DEBUG lines where the logger name starts with
org.apache.kafka.
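For context, a minimal log4j.properties along those lines might look like
this (the appender setup is a hypothetical sketch; only the last line is the
override from above):

    log4j.rootLogger=DEBUG, stdout
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%d %p %c: %m%n
    # Keep everything else at DEBUG but quiet the Kafka clients:
    log4j.logger.org.apache.kafka=INFO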
HTH,
Andras
On Fri, May 11, 2018 at 9:28 AM, Karthick Kumar
wrote:
> Hi,
>
> I'm using tomcat node as
+1 (non-binding)
Built .tar.gz, created a cluster from it and ran a basic end-to-end test:
performed a rolling restart while console-producer and console-consumer ran
at around 20K messages/sec. No errors or data loss.
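For anyone reproducing a similar smoke test, the console tools of this era
can be driven like the following (topic name and host are hypothetical):

    bin/kafka-console-producer.sh --broker-list localhost:9092 --topic smoke
    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
        --topic smoke --from-beginning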
Ran unit and integration tests successfully 3 out of 5 times. Encountered
some
Congratulations, Manikumar!
Srinivas Reddy wrote (on Fri, Oct 12, 2018 at 3:00):
> Congratulations Mani. Well deserved 👍
>
> -
> Srinivas
>
> - Typed on tiny keys. pls ignore typos.{mobile app}
>
> On Fri 12 Oct, 2018, 01:39 Jason Gustafson, wrote:
>
> > Hi all,
> >
> > The PMC for Apache
+1 (non-binding)
Verified signatures and checksums of release artifacts
Performed quickstart steps on RC artifacts (both Scala 2.11 and 2.12) and
one built from tag 2.1.0-rc0
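For reference, the signature and checksum steps look roughly like this
(artifact names are hypothetical for the RC under vote):

    gpg --verify kafka_2.12-2.1.0.tgz.asc kafka_2.12-2.1.0.tgz
    sha512sum kafka_2.12-2.1.0.tgz   # compare against the published .sha512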
Andras
On Wed, Oct 24, 2018 at 10:17 AM Dong Lin wrote:
> Hello Kafka users, developers and client-developers,
>
> This
+1 (non-binding)
Verified signatures and checksums of release artifacts
Performed quickstart steps on RC artifacts (both Scala 2.11 and 2.12)
Andras
On Tue, Nov 13, 2018 at 10:51 AM Eno Thereska
wrote:
> Built code and ran tests. Getting a single integration test failure:
>
> kafka.log.LogClea